Global Motion Detection and Censoring in High-Density Diffuse Optical Tomography
Motion-induced artifacts can significantly corrupt optical neuroimaging, as in most neuroimaging modalities. For high-density diffuse optical tomography (HD-DOT) with hundreds to thousands of source-detector pair measurements, motion detection methods are underdeveloped relative to both functional magnetic resonance imaging (fMRI) and standard functional near-infrared spectroscopy (fNIRS). This limitation restricts the application of HD-DOT in many interesting subject populations (e.g., bedside monitoring and children). Here, we evaluate a new motion detection method for multichannel optical imaging systems that leverages spatial patterns across channels. Specifically, we introduce a global variance of temporal derivatives (GVTD) metric as a motion detection index. We show that GVTD strongly correlates with external measures of motion and has high sensitivity and specificity to instructed motion - with area under the receiver operator characteristic curve of AUC = 0.88, calculated based on five different types of instructed motion. Additionally, we show that applying GVTD-based motion censoring on both task and resting state HD-DOT data with natural head motion results in an improved spatial similarity to fMRI mapping for the same respective protocols (task or rest). We then compare the GVTD similarity scores with several commonly used motion correction methods described in the fNIRS literature, including correlation-based signal improvement (CBSI), temporal derivative distribution repair (TDDR), wavelet filtering, and targeted principal component analysis (tPCA). We find that GVTD motion censoring outperforms other methods and results in spatial maps more similar to matched fMRI data.
Introduction
High-density diffuse optical tomography (HD-DOT) has tremendous potential to be a surrogate for functional magnetic resonance imaging (fMRI) [1][2][3][4][5][6]. However, methods for detecting and suppressing motion artifacts in HD-DOT data are relatively underdeveloped, which limits its application to many important clinical populations. While fMRI has become a gold standard for cognitive neuroimaging, it is contraindicated in subjects with metal implants and cannot be used in many clinical settings or in studies seeking more naturalistic imaging environments. In contrast, fNIRS-based methods are portable, suitable for naturalistic imaging, and not contraindicated in subjects with electronic or metal implants [7][8][9][10][11][12][13][14][15]. However, sparse fNIRS imaging arrays yield poor resolution and low image quality. HD-DOT provides improved image resolution and depth profiling, particularly when used with anatomical head models [16][17][18]. Nevertheless, as in both fMRI and fNIRS, detection, classification, and removal of motion-induced artifacts remains a challenge for HD-DOT.
Multiple fMRI studies have documented the spurious effects of motion artifacts in blood oxygen level-dependent (BOLD) fMRI despite the use of common motion suppression methods [19][20][21][22][23][24]. Motion-induced changes in T2*-weighted fMRI signals are shared across brain voxels and hence generate spatially structured artifacts. Such artifacts alter functional connectivity by decreasing long-distance correlations and increasing short-distance correlations [19,[22][23][24]. Accordingly, two simple data quality indices, frame-wise displacement (FD) and root mean squared (RMS) signal change over sequential frames (DVARS), are commonly used in fMRI data processing pipelines to identify and exclude data segments (motion censoring or scrubbing) from behaviorally relevant fMRI measures [19,25,26].
In HD-DOT, as in fMRI, the effects of head motion are global across the field of view (FOV) and impact a majority of measurements or voxels. Whereas in fMRI head movements shift the position of the brain in space and modulate the BOLD signal [27,28], in HD-DOT head motion induces a torque on the fibers in the optical imaging array that, in turn, modulates the location (Fig. 1B center), angle, or both location and angle of optode-scalp coupling (Fig. 1B right). Thus, motion induces artifacts in the optical signals that can appear as brief transient spikes or baseline shifts. These artifacts propagate from measurement space to voxel space during image reconstruction and corrupt the neuroimaging results.
Numerous strategies for managing motion-induced artifacts have been described in the fNIRS literature. However, a consensus on how best to correct for motion artifacts has not emerged [29][30][31]. Extant motion correction methods in fNIRS largely involve two steps: first, motion detection, and second, signal correction [32][33][34][35][36][37]. The fNIRS literature has largely focused on correcting motion artifacts in individual source-detector pair measurements, and much less attention has been paid to multichannel or full-array assessments. Moreover, most fNIRS studies have not assessed the efficacy of denoising methods through comparison against fMRI.
We address these limitations by conducting a comprehensive evaluation of motion artifact removal methods for HD-DOT data, including independent measures of motion (accelerometry) and comparisons against gold-standard matched fMRI datasets. We introduce a novel index of motion, the global variance of the temporal derivatives (GVTD), and show that it strongly correlates with directly transduced measures of motion and outperforms two commonly used temporal motion detection indices in fNIRS based on single-channel changes in signal amplitude. We then optimize the use of GVTD-based motion detection in the HD-DOT processing pipeline by measuring the artifact-to-background ratio in in vivo resting state datasets collected with different HD-DOT devices in adults and infants. Using GVTD as a quantitative index, we show that it predicts the quality of task-based brain response maps (where quality is defined as the voxel-wise similarity between the HD-DOT images and matched fMRI data). Finally, we investigate the efficacy of GVTD-based motion detection and censoring on exemplar task and resting state HD-DOT datasets. These analyses demonstrate that GVTD censoring outperforms current fNIRS motion correction methods.
The global variance of the temporal derivatives (GVTD)
GVTD indexes global instantaneous change in the optical time-courses. For each time point, GVTD is computed as the RMS of the temporal derivatives across a set of measurements (Eq. 1). In this paper, the first nearest neighbor measurements (nn1) with a source-detector (SD) distance of 13 mm (10 mm for infants) were chosen, as they are more sensitive to changes in the fiber-scalp coupling and relatively insensitive to brain dynamics in comparison to longer distance measures [38].
The simple analytic formula for GVTD is

$$\mathrm{GVTD}(t) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_{i,t} - y_{i,t-1}\right)^{2}}, \quad t = 2, \dots, T \quad (\text{Eq. 1})$$

where $\mathrm{GVTD} \in \mathbb{R}^{T-1}$ is the GVTD vector, $y \in \mathbb{R}^{N \times T}$ is the optical density change or molar HbO2 at spatial coordinate $i$, $t$ indexes the time points, $N$ is the number of coordinates, and $T$ is the number of time-points.
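A minimal sketch of this computation in Python/NumPy, assuming `y` is a measurements-by-time array; the function and variable names are illustrative, not taken from NeuroDOT:

```python
import numpy as np

def gvtd(y: np.ndarray) -> np.ndarray:
    """Root-mean-square of the temporal derivative across measurements.

    y : array of shape (n_measurements, n_timepoints), e.g., nn1 optical
        density changes. Returns a vector of length n_timepoints - 1.
    """
    dy = np.diff(y, axis=1)            # backward differences along time
    return np.sqrt(np.mean(dy ** 2, axis=0))
```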
Motion censoring in HD-DOT data
Motion censoring (scrubbing) excludes the time-points (blocks) exceeding the GVTD noise threshold from further analysis of resting state and task data [39,40]. Details concerning the noise threshold criterion are explained in §3.3. This proposed HD-DOT censoring strategy follows a similar practice that has yielded statistical improvements in both resting state and task fMRI data [19,[41][42][43].
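As a rough illustration, assuming the GVTD vector has been padded to align with the time axis, censoring amounts to masking columns of the data matrix; all names here are hypothetical:

```python
import numpy as np

def censor(y: np.ndarray, g: np.ndarray, threshold: float):
    """Drop time-points whose GVTD exceeds the noise threshold.

    y : (n_measurements, n_timepoints) data
    g : GVTD value per time-point (padded to full length)
    Returns the retained columns of y and the boolean keep-mask.
    """
    keep = g < threshold
    return y[:, keep], keep
```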
Other comparison motion correction methods
Correlation-based signal improvement (CBSI)
CBSI motion correction is based on the assumption that oxygenated and deoxygenated hemoglobin signals are negatively correlated under all circumstances. In the presence of motion artifacts, the correlation between these two signals becomes more positive. CBSI corrects the oxyhemoglobin concentrations by subtracting the deoxyhemoglobin signal scaled to match the variance of the oxygenated signal. This process removes the positively correlated content shared by the two signals while taking their different amplitudes into account. The corrected deoxyhemoglobin is then calculated by multiplying the corrected oxyhemoglobin by the negative inverse of the same scaling factor between the original signals [35]. In this paper, we performed this motion correction method after spectroscopy on the down-sampled 1 Hz data (Fig. S1).
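A sketch of the per-channel CBSI computation following the formulation in Cui et al. [35]; variable names are illustrative:

```python
import numpy as np

def cbsi(hbo: np.ndarray, hbr: np.ndarray):
    """hbo, hbr : 1-D HbO/HbR time-courses for a single channel."""
    alpha = np.std(hbo) / np.std(hbr)      # amplitude scaling factor
    hbo_corr = 0.5 * (hbo - alpha * hbr)   # remove positively correlated content
    hbr_corr = -hbo_corr / alpha           # enforce anti-correlation
    return hbo_corr, hbr_corr
```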
Targeted principal component analysis (tPCA)
Principal component analysis (PCA) projects an arbitrary set of signals onto orthogonal principal components. The principal components accounting for the largest variance (presumed to reflect motion) are then removed, and the signal is reconstructed from the remaining components. Targeted PCA (tPCA) applies PCA only to temporal epochs of the data that are identified to contain motion artifacts, which reduces the risk of eliminating physiological content in the motion-free epochs of the signal [44]. Hence, this method requires a prior step of motion detection in the temporal domain, conventionally performed by setting a threshold on signal amplitudes or windowed signal amplitude changes. In this paper, we used the Homer function "hmrMotionArtifactByChannel" to detect noisy timepoints and "hmrMotionCorrectPCA" to perform PCA, and set the parameters of this algorithm in a similar range to the original study [44]: tMotion = 0.5, tMask = 2, STDEVthresh = 20, AMPthresh = 0.5, nSV = 0.97 (Tables S1 and S2, Fig. S1).
Wavelet filtering
Wavelet-based motion correction is based on a discrete wavelet transformation of single-channel measurements. This method assumes that the distribution of the wavelet coefficients of a motion-free signal should follow a Gaussian distribution; motion artifacts are therefore detected based on deviations from that distribution. By setting an outlier detection threshold, the coefficients associated with motion artifacts are excluded, and the clean signal is reconstructed from the remaining wavelet coefficients [36]. We used the "hmrMotionCorrectWavelet" function, setting the interquartile parameter to 1.5, as suggested in the original paper [36] (Tables S1 and S2, Fig. S1).
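A rough sketch of IQR-based wavelet despiking in this spirit, using PyWavelets; the published method differs in detail (it models the coefficient distribution explicitly), and the function and parameter names here are illustrative:

```python
import numpy as np
import pywt

def wavelet_despike(signal, wavelet="db2", iqr_factor=1.5):
    """Zero out detail coefficients flagged as outliers, then reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet)
    cleaned = [coeffs[0]]                       # keep the approximation band
    for c in coeffs[1:]:
        q1, q3 = np.percentile(c, [25, 75])
        lo = q1 - iqr_factor * (q3 - q1)
        hi = q3 + iqr_factor * (q3 - q1)
        cleaned.append(np.where((c < lo) | (c > hi), 0.0, c))
    return pywt.waverec(cleaned, wavelet)[: len(signal)]
```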
Kurtosis-based wavelet filtering (kbWF)
The kurtosis-based wavelet filtering (kbWF) method refines wavelet filtering by setting the threshold based on the kurtosis of the coefficient distributions [37]. The "hmrMotionCorrectKurtosisWavelet" function was used with the kurtosis threshold parameter set to 3.3, as recommended in the original paper [37] (Tables S1 and S2, Fig. S1).
Hybrid (Spline + Savitzky-Golay)
The spline and Savitzky-Golay hybrid method is a three-step algorithm that aims to identify and correct different types of motion artifacts [34]. First, single-channel measurements are passed through a Sobel filter to identify time-points exceeding a threshold of 1.5 times the interquartile interval of the signal gradient. Second, the method performs a spline interpolation on the epochs containing motion to remove baseline shifts and slow spikes. Steps 1 and 2 were introduced in a previous fNIRS motion removal method, commonly known as the motion artifact removal algorithm (MARA) [32]. The hybrid method then applies a Savitzky-Golay smoothing filter to remove the remaining fast spikes. We used the "hmrMotionCorrectSplineSG" function defined in the original paper with its default parameters, setting p = .99 and FrameSize_sec = 1.5 [34] (Tables S1 and S2, Fig. S1).
Temporal derivative distribution repair (TDDR)
Temporal derivative distribution repair (TDDR) is also a three-step algorithm that aims to automatically identify and correct motion artifacts at the single-channel level. First, TDDR computes the temporal derivative of the signal and initializes a vector of observation weights. Second, it iteratively estimates robust observation weights and applies them to the centered temporal derivative to produce a corrected derivative. Finally, it integrates the corrected temporal derivative to yield the corrected signal [33].
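A simplified sketch of this idea using a Tukey biweight, assuming a 1-D channel time-course; the published algorithm iterates the weights to convergence and treats the low-frequency component separately, so this is illustrative rather than a faithful reimplementation:

```python
import numpy as np

def tddr_sketch(signal: np.ndarray, n_iter: int = 50) -> np.ndarray:
    d = np.diff(signal)                   # temporal derivative
    w = np.ones_like(d)                   # initial observation weights
    mu = 0.0
    for _ in range(n_iter):
        mu = np.sum(w * d) / np.sum(w)    # weighted mean of the derivative
        resid = d - mu
        sigma = 1.4826 * np.median(np.abs(resid))   # robust scale (MAD)
        if sigma == 0:
            break
        r = resid / (4.685 * sigma)
        w = np.where(np.abs(r) < 1, (1 - r ** 2) ** 2, 0.0)  # Tukey biweight
    corrected = np.cumsum(w * (d - mu))   # integrate the corrected derivative
    return np.concatenate([[0.0], corrected]) + signal[0]
```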
Independent measurement of head motion
A motion sensor (3-Space USB/RS232; Yost Labs, Portsmouth, Ohio) was attached to the top strap of the HD-DOT cap in a subset of the data acquired with instructed motion (more details in §2.6.2). This sensor includes a triaxial inertial measurement unit (IMU) comprising a gyroscope, an accelerometer, and a compass sensor (Fig. S2). Onboard electronics compute and report, in real time, the quaternion-based orientation relative to an absolute reference. We synchronized the outputs of the motion sensor with our HD-DOT data acquisition system using audio pulses at the start and end of the data streams. The motion sensor data were down-sampled from 200 Hz to 1 Hz to match the final sampling rate of the HD-DOT data. Then, the motion sensor and HD-DOT signals were aligned by delaying the earlier signal according to the lag with the maximum cross-correlation value.
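An illustrative lag estimate via cross-correlation, assuming both signals are already at 1 Hz; the function name and sign convention are choices made here, not prescribed by the study:

```python
import numpy as np

def align_lag(a: np.ndarray, b: np.ndarray) -> int:
    """Lag (in samples) by which events in `a` occur later than in `b`.

    A positive return value means `b` should be delayed by that many
    samples to align with `a`.
    """
    a0 = a - a.mean()
    b0 = b - b.mean()
    xc = np.correlate(a0, b0, mode="full")
    return int(np.argmax(xc)) - (len(b0) - 1)
```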
Angular rotation
The angular rotation time-course, $\omega(t)$, was defined as the norm of the temporal derivatives of the head orientation in terms of Euler angles ($\phi$ roll, $\theta$ pitch, and $\psi$ yaw), measured by the motion sensor. This index was defined in a manner similar to that of GVTD to facilitate comparisons between GVTD and motion sensor outputs (Eq. 2):

$$\omega(t) = \sqrt{\left(\phi_{t} - \phi_{t-1}\right)^{2} + \left(\theta_{t} - \theta_{t-1}\right)^{2} + \left(\psi_{t} - \psi_{t-1}\right)^{2}}, \quad t = 2, \dots, T \quad (\text{Eq. 2})$$

In this notation, $t$ indexes the time points and $T$ is the number of time-points.
Artifact-to-background ratio (ABR)
To quantify the magnitude of the motion artifacts, we defined the artifact-to-background ratio (ABR) as the mean GVTD of all time-points above the noise threshold (defined in §3.3) divided by the mean GVTD of all time-points below the noise threshold (Eq. 3):

$$\mathrm{ABR} = \frac{\frac{1}{m}\sum_{t:\, g_{t} > g_{\mathrm{thresh}}} g_{t}}{\frac{1}{n}\sum_{t:\, g_{t} \le g_{\mathrm{thresh}}} g_{t}} \quad (\text{Eq. 3})$$

In this formula, $g_{t}$ is the GVTD value at time index $t$, $g_{\mathrm{thresh}}$ is the threshold value, $n$ is the number of time-points below the threshold, and $m$ is the number of time-points above the threshold.
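The corresponding computation is short; a sketch with illustrative names:

```python
import numpy as np

def abr(g: np.ndarray, threshold: float) -> float:
    """Artifact-to-background ratio (Eq. 3) of a GVTD time-course."""
    above = g[g > threshold]
    below = g[g <= threshold]
    return above.mean() / below.mean()
```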
Datasets and general data processing
Datasets and their objectives
Dataset 1: For validation, we collected an fMRI dataset in which adult subjects (n = 8) were scanned both in the resting state and during a hearing words (HW) task. This dataset served as ground truth. Dataset 2: As a positive control, healthy adults (n = 12) in this HD-DOT dataset performed instructed motion during the same HW task used in fMRI. Dataset 3: In this HD-DOT dataset, adult subjects (n = 14) performed the same HW task without instructed motion. Dataset 4: In this HD-DOT dataset, healthy adults (n = 8) were scanned while awake in a task-free (resting) state. Dataset 5: In this HD-DOT dataset, healthy term infants (n = 11) were imaged in the resting state (awake or asleep); this is a previously published dataset [3]. Demographic information and the objective of each dataset are reported in Table 1. All aspects of these studies were approved by the Human Research Protection Office of the Washington University School of Medicine. All adult participants in the previous and new datasets were right-handed, native English speakers, and reported no history of neurological or psychiatric disorders. Adults were recruited from the Washington University campus and the surrounding community (IRB 201101896, IRB 201609028). All full-term infants were recruited from the Newborn Nursery at Barnes-Jewish Hospital in St Louis, Missouri, within the first 48 hours of life (IRB 201101813). All subjects (or their guardians) gave informed consent and were compensated for their participation in accordance with institutional and national guidelines.
HD-DOT systems, image reconstruction, and spectroscopy
All adult HD-DOT datasets (datasets 2, 3, and 4) were collected using a previously described continuous-wave HD-DOT system comprising 96 sources (LEDs, at both 750 and 850 nm) and 92 detectors (coupled to avalanche photodiodes, APDs) [1]. Acquisition in infants was performed at the bedside using a previously reported portable continuous-wave HD-DOT system with an optode array consisting of 32 sources (LEDs, at both 750 and 850 nm) and 34 detectors [3]. More detailed descriptions of the imaging systems are given in the corresponding references. Light modeling was computed using the standard MNI atlas-based absorption model; details can be found in [16]. Volumetric movies of relative changes in absorption at 750 nm and 850 nm were reconstructed after inverting the sensitivity matrix using Tikhonov regularization and spatially variant regularization [1]. Relative changes in hemoglobin concentration were obtained via a spectral decomposition of the absorption data, as previously described [1, 3].
Functional MRI (fMRI) system and imaging
All fMRI data were collected on a research-dedicated Siemens 3.0T Magnetom Prisma system (Siemens Medical Solutions, Erlangen, Germany) with an iPAT-compatible 20-channel head coil. Blood oxygenation level-dependent (BOLD) sensitized fMRI data (TR = 1230 ms, TE = 33 ms, voxel resolution = (2.4 mm)³, FA = 63 degrees, multi-band factor of 4) were acquired for all subjects in dataset 1, for both resting state functional connectivity MRI (3 runs, each 10 min) and HW task BOLD (1 run, 3.5 min).
Paradigms
Hearing words: Subjects were seated for HD-DOT or supine for fMRI and instructed to fixate on a white crosshair against a gray background while listening to words. The HW task was administered as a block design. Each trial consisted of 15 seconds of hearing words followed by 15 seconds of silence. Each run included multiple trials, n = 10 for dataset 2, and n = 6 for datasets 1 and 3. The total number of acquired runs per session was 7 (dataset 2) or 1 (datasets 1 and 3).
Instructed motion: The instructed motion was performed by subjects during the HW task (dataset 2), with 15% of the trials including instructed motion. Participants viewed a screen with a crosshair and were instructed to perform a specific motion type when the crosshair color changed.
Movements were performed for about 2 seconds every 3-5 seconds over a 15-second word presentation section. Subjects were monitored in real-time using a digital camera to ensure that they were engaged in the assigned tasks. Specific motions included (i) head turn to the left and back to center (roll, Fig. 1A left), (ii) head nod up and back to center (pitch, Fig. 1A center), (iii) shifting body position, (iv) taking deep breaths, and (v) raising eyebrows. Head twist (yaw, Fig. 1A right) motion was avoided to prevent cap displacement.
Resting state: Resting state data in adults (datasets 1 and 4) were collected over 10 min runs while subjects were seated for HD-DOT or supine for fMRI, visually fixating on a white crosshair against a gray background. Subjects were asked to stay awake and still during data acquisition. The number of runs per session was 3 (dataset 1) or 1 (dataset 4). Resting state HD-DOT in infants was acquired at the bedside (dataset 5) within the first 24-48 hours of life during natural (unmedicated) sleep or quiet rest [3].
Data processing
HD-DOT pre-processing
All HD-DOT data were processed using the NeuroDOT toolbox following the flowchart in Fig. S1 [1, 45,46]. HD-DOT light measurement data were converted to log-ratio (using the temporal mean of a given SD-pair measurement as the relative baseline for that measurement). Noisy measurements were empirically defined as those with greater than 7.5% temporal standard deviation in the least noisy (lowest mean GVTD) 60 seconds of each run [17], and were excluded from further processing. Then the data were high-pass filtered (0.02 Hz cut-off for task-based datasets, 0.009 Hz for resting state datasets) to remove low-frequency drift. To serve as an estimate of the global superficial signal, we computed the average of all remaining first nearest neighbor measurements (13 mm SD-pair separation in the adult system and 10 mm SD-pair separation in the infant system). This global signal estimate was regressed from all measurements [38]. After low-pass filtering (0.5 Hz cut-off for task-based data sets, 0.08 Hz for resting-state data sets), the time-courses were down-sampled from 10 Hz to 1 Hz and then used for image reconstruction. The efficacy of GVTD was evaluated at four stages of the HD-DOT processing pipeline, as indicated in Fig. S1 (green boxes) on 10 Hz sampled data. All other motion correction methods except CBSI were also performed on the 10 Hz sampled optical density signals (immediately after the log-ratio step) (Fig. S1).
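Illustrative versions of two of these steps, assuming `raw` holds light levels as a measurements-by-time array and `nn1_rows` indexes the first nearest-neighbor measurements; all names are hypothetical rather than NeuroDOT functions:

```python
import numpy as np

def to_optical_density(raw: np.ndarray) -> np.ndarray:
    """Log-ratio against each measurement's temporal mean."""
    return -np.log(raw / raw.mean(axis=1, keepdims=True))

def superficial_regression(y: np.ndarray, nn1_rows) -> np.ndarray:
    """Regress the mean nn1 signal (superficial estimate) from all rows."""
    g = y[nn1_rows].mean(axis=0)
    beta = (y @ g) / (g @ g)          # least-squares fit per measurement
    return y - np.outer(beta, g)
```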
fMRI pre-processing
fMRI pre-processing was performed using in-house 4dfp tools [47]: 1. correction for systematic slice-dependent time shifts; 2. elimination of odd-even slice intensity differences due to interleaved acquisition; 3. rigid-body realignment for head motion within and across runs; 4. normalization of signal intensity to a mode value of 1000. Signal intensity normalization enables identification of artifact by evaluation of the signal temporal derivative. Atlas transformation was computed by composition of affine transforms derived from a sequence of coregistrations of the fMRI volumes via the T2-weighted and MP-RAGE structural scans. Head motion correction and atlas transformation were applied in a single resampling step that generated volumetric time series in (3 mm)³ atlas space. Data underwent spatial smoothing (6 mm full width at half maximum in each cardinal direction) and temporal band-pass filtering (0.02-0.5 Hz for the HW task and 0.009-0.08 Hz for resting state). Nuisance regressors included six rigid-body values derived from head motion correction, white matter and CSF signals, and the mean whole-brain signal. Motion artifacts were reduced through DVARS-based motion scrubbing using session-specific thresholding, expressible as $\mathrm{thresh} = \widetilde{\mathrm{DVARS}} + 2.5\,\sigma_{\mathrm{left}}$ (see Eq. 5 below) [48]. The fraction of censored frames was 21% ± 12%.
HW task response mapping in datasets 1, 2, and 3
An objective of acquiring HW task data was to evaluate GVTD as an index of HD-DOT data quality (dataset 2). To this end, 70 trials of HW (15 sec of hearing words (On), 15 sec of silence (Off)) were acquired in each session; 10 trials included instructed motion, and the remaining 60 trials (ordinary trials) did not. The reconstructed voxel-wise data represent the changes in hemoglobin concentration (Δ[HbO2]) in units of μmol/L [49]. The quantitative response magnitude was then calculated with a standard general linear model (GLM). The design matrix was constructed by convolving the experimental design with a canonical HRF using a two-gamma function fitted to in vivo HD-DOT data, as described in [50]. Extracted hemodynamic response estimates for each subject were then combined in a simple group-level fixed effects analysis [51]. Fixed effects analysis was adopted because we expect the variance in our dataset to be most strongly driven by scan-to-scan variability rather than by subject-to-subject differences.
Seed-based correlation analysis of functional connectivity in datasets 1 and 4
Seed regions were 5 mm radius spheres centered on coordinates used in our previous study [1]. Five seeds representing the auditory (AUD), visual (VIS), somatomotor (MOT), dorsal attention network (DAN), and frontoparietal network (FPN) networks were selected within the HD-DOT FOV. Correlation maps were generated by calculating the Pearson correlation between the timeseries of each seed region with all other voxels in the FOV. Correlation maps in individuals were Fisher's z-transformed and averaged across subjects.
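A compact sketch of this computation, assuming `data` is voxels-by-time and `seed_ts` is the mean time-course within a seed; names are illustrative:

```python
import numpy as np

def seed_map(data: np.ndarray, seed_ts: np.ndarray) -> np.ndarray:
    """Fisher z-transformed Pearson correlation of a seed with all voxels."""
    d = data - data.mean(axis=1, keepdims=True)
    s = seed_ts - seed_ts.mean()
    r = (d @ s) / (np.linalg.norm(d, axis=1) * np.linalg.norm(s))
    return np.arctanh(np.clip(r, -0.999999, 0.999999))
```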
Effect of motion artifacts on HD-DOT data
We investigated the effects of various types of movements on HD-DOT data using instructed motion. During the HW task, subjects performed five different types of instructed motion, including large movements (head rotation) and small movements (raising eyebrows) (§2.6.2). One way to track the effect of motion is to spatially display the measurement pair channels (Fig. 2B). For example, for all the second nearest neighbor (nn2) pairs, we can mark sources and detectors with very high standard deviations over time during instances of instructed roll rotation (pink circles) and eyebrow motion (blue circles) (Fig. 2B, see §2.7.1). Alternatively, one can analyze an SD-pair measurement (pair highlighted by large circles in Fig. 2B) by comparing its time-course during runs without instructed motion ("ordinary") and with different levels of instructed motion, i.e., low eyebrow motion or gross roll rotation (Fig. 2C). The difference in signal quality between the clean and corrupted responses is evident after block averaging (Fig. 2D).
We assessed the effects of different motion artifacts on the measurements by calculating the number of measurements with excessive noise for each type of motion artifact across all subjects. The HD-DOT array contains n = 1500 total measurements per wavelength within nn1 (~13 mm), nn2 (~30 mm), nn3 (~39 mm), and nn4 (~47 mm) separations. All five motion types affected multiple SD channels distributed across the FOV; specifically, 51 ± 8% of the channels for gross body movement and 39 ± 4% for small eyebrow movement. Based on these observations, we concluded that each type of motion generates global effects. Therefore, we adopted GVTD as a global index of motion, taking into account optical signals over the full FOV.
GVTD and its correlation with the head angular rotation
The global effect of motion artifacts in HD-DOT can be visualized as a matrix where each row is a measurement signal and the columns index time (Fig. 3A). This type of visualization is similar to fMRI "gray plots" [19,52,53]. Inspection of Fig. 3A reinforces the notion that the effects of head motion in HD-DOT are global. The GVTD time-course is computed in four steps. First, starting from the matrix of 850 nm nn1 optical density changes (Fig. 3A), the matrix of the backward differentiation of the selected time-courses is calculated (Fig. 3B). Then, from the matrix of the squares of the backward differences (Fig. 3C), GVTD is defined as the square root of the mean across the selected measurement array (Fig. 3D). This sequence of steps progressively increases the sensitivity and specificity of the measure to motion (Fig. 3A-D). To evaluate the sensitivity of GVTD to motion, we concurrently recorded accelerometry as an independent measure in a subset of our instructed motion dataset (Fig. 3E-H). The graded quantitative motion capture of the accelerometer provided insight into the sensitivity and specificity of GVTD to motion. To facilitate comparisons between the accelerometer and GVTD, the angular rotation was calculated based on the final head orientation time-course (§2.3, Fig. 3I).
We evaluated the efficacy of GVTD and angular rotation for motion detection in different scenarios. First, we compared these two motion indices for a gross and a small artifact and found that GVTD shows a higher-amplitude spike than the angular rotation for small artifacts such as eyebrow motion (Fig. 4A, B). To quantify these comparisons, we first calculated the Pearson correlation between GVTD and angular rotation, $\omega(t)$, for all runs containing instructed motion. The correlations were averaged over the six subjects who had concurrent HD-DOT and motion sensor data for all runs in the session (Fig. 4C). These correlations were greatest for head rotations (r = 0.86 ± 0.06 for roll and pitch) and lowest for eyebrow motion (r = 0.46 ± 0.2). This difference most likely reflects the transducer characteristics of the motion sensor and the fact that it is not sensitive to small muscle movements when attached to the top of the HD-DOT cap.
To evaluate the sensitivity of GVTD to motion, we leveraged the ground truth built into our instructed motion paradigm. Experimental receiver operator characteristic (ROC) curves for GVTD and angular rotation were created for a binary classification of clean and noisy time-points by sweeping the detection threshold (Fig. 4D). We defined ground truth for motion as the time-points during which the subjects performed instructed movements. We also plotted these ROC curves for two common temporal motion detection methods in fNIRS, i.e., absolute single-channel signal amplitudes and windowed amplitude changes, for all motion types and all 850 nm nn1 measurements (Fig. S3), and compared the mean of these ROC curves against GVTD and angular rotation (Fig. 4D). For all motion types, GVTD showed better or similar performance (AUC) compared to angular rotation, absolute signal amplitude, and windowed amplitude change (Table 3).
Motion index    GVTD           Angular rotation   Signal amplitude   Windowed amplitude change
AUC             0.88 ± 0.07    0.77 ± 0.08        0.6 ± 0.04         0.76 ± 0.04

Table 3: The area under the curve (AUC) of the experimental receiver operator characteristic (ROC) curves for GVTD and angular rotation (based on the motion sensor outputs), and the mean AUC of the ROC curves for the absolute signal amplitude and windowed amplitude changes, using instructed motion as ground truth in dataset 2.
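A minimal sketch of how such an experimental ROC curve and its AUC can be computed by sweeping a threshold over any motion index against the instructed-motion ground truth; `index` and `truth` are illustrative names:

```python
import numpy as np

def roc_auc(index: np.ndarray, truth: np.ndarray, n_steps: int = 200) -> float:
    """index : motion index per time-point (e.g., GVTD);
    truth : boolean array, True where instructed motion occurred."""
    thresholds = np.linspace(index.min(), index.max(), n_steps)
    tpr, fpr = [], []
    for th in thresholds[::-1]:               # sweep high -> low threshold
        detected = index > th
        tpr.append((detected & truth).sum() / truth.sum())
        fpr.append((detected & ~truth).sum() / (~truth).sum())
    return float(np.trapz(tpr, fpr))          # area under the ROC curve
```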
We used the instructed motion protocol to examine the relation between GVTD and angular rotation for all runs with instructed motion (Fig. 4E). Low vs. high motion time-points (black vs. red in Fig. 4E) were determined based on the ground truth of the instructed motion protocol (high motion defined as time-points when the subject performed instructed motion). When motion was low (black dots), GVTD and angular rotation were not correlated (r = 0.05 ± 0.05), but when motion was high (red dots), GVTD and angular rotation were highly correlated (r = 0.8 ± 0.1). The same log-log scatter plots for absolute signal amplitudes (Fig. 4F) and the windowed amplitude changes (Fig. 4G) show much lower correlations with angular rotation (0.2 and 0.1, respectively) compared to GVTD (0.7).
In summary, these results show that GVTD can be used as an alternative or in conjunction with motion sensors in detecting noisy time-points of data.
Motion detection strategy using GVTD
To censor data using the GVTD time-course, we developed an outlier detection strategy that separates good data from motion artifacts. We assume that the detected signal, $y(t)$, is a linear combination of the true physiological signal, $x(t)$, and noise, $n(t)$:

$$y(t) = x(t) + n(t) \quad (\text{Eq. 4})$$
We followed the fMRI approaches for DVARS and FD and developed a data-distribution-driven strategy for determining the motion criterion. In fMRI, the frame-to-frame signal change is approximately normally distributed [54]; accordingly, the DVARS distribution is right-skewed [55]. Therefore, we investigated the skew of the GVTD distribution as a potential index of head motion artifact in HD-DOT. We evaluated the GVTD distribution for HD-DOT data from a still Styrofoam phantom, a low motion trial, and a high motion trial. The phantom GVTD histogram peaked at a relatively small value (mode = 4 × 10⁻⁵) and exhibited a small rightward skew (Fig. 5A). In the low motion human data, GVTD values had a higher mode and proportionately smaller skew (Fig. 5B). In data with instructed motion (high motion), the GVTD distribution was strongly skewed to the right (Fig. 5C). These results suggest that the skew provides a basis for censoring HD-DOT data.
Thus, we defined a noise threshold, $g_{\mathrm{thresh}}$, as the GVTD distribution mode ($\tilde{g}$) plus a constant ($c$) times the standard deviation computed on the left (low) side of the mode ($\sigma_{\mathrm{left}}$); the right tail of the GVTD distribution corresponds to motion artifacts (Eq. 5):

$$g_{\mathrm{thresh}} = \tilde{g} + c\,\sigma_{\mathrm{left}}, \qquad \sigma_{\mathrm{left}} = \sqrt{\frac{1}{N_{\mathrm{left}}}\sum_{g_{t} < \tilde{g}}\left(g_{t} - \tilde{g}\right)^{2}} \quad (\text{Eq. 5})$$

where $\tilde{g}$ is the histogram mode and $N_{\mathrm{left}}$ is the number of GVTD time-points less than $\tilde{g}$. The value of $c$ controls the trade-off between the exclusion of artifact vs. data loss.
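A sketch of this criterion, estimating the mode from a histogram; the bin count and function names are arbitrary choices, not prescribed by the method:

```python
import numpy as np

def gvtd_threshold(g: np.ndarray, c: float = 4.0, bins: int = 200) -> float:
    """Noise threshold (Eq. 5): mode plus c times the left-side std."""
    counts, edges = np.histogram(g, bins=bins)
    k = int(np.argmax(counts))
    mode = 0.5 * (edges[k] + edges[k + 1])    # bin-center estimate of the mode
    left = g[g < mode]
    sigma_left = np.sqrt(np.mean((left - mode) ** 2))
    return mode + c * sigma_left
```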
Determining the best stage for performing GVTD-based motion detection and censoring
GVTD is a generic measure that can be applied to any data in the form of channels (or voxels) by time. Therefore, we needed to determine where in the processing pipeline GVTD should be computed. We evaluated four potential locations (green boxes in Fig. S1). To evaluate GVTD's ability to separate noise from signal, we used the artifact-to-background ratio (ABR), defined as the mean of the GVTD values above a noise threshold divided by the mean of the GVTD values below the threshold. Specifically, GVTD was calculated: (a) on SD-pair log-mean optical densities ("after log-mean"; unit = optical density change per second, ΔOD/s); (b) after temporal filtering but before superficial signal regression (SSR) ("after filtering no SSR"; unit = ΔOD/s); (c) after both temporal filtering and SSR ("after filtering with SSR"; unit = ΔOD/s); and (d) on reconstructed image voxels ("after reconstruction"; unit = molar HbO2/s). These results were compared based on their mean ABR in two different datasets with natural motion to determine the most effective GVTD strategy. GVTD time-courses, GVTD histograms, and their associated gray plots were calculated at these four stages for resting state data collected with two HD-DOT systems (an example run from the adult HD-DOT data is shown in Fig. 6A-C). The ABR index (Eq. 3) was calculated using the motion threshold defined as $g_{\mathrm{thresh}} = \tilde{g} + 4\sigma_{\mathrm{left}}$ (Eq. 5). Results showed that ABR was consistently highest after both filtering and superficial signal regression but before image reconstruction in both datasets 4 and 5 (Fig. 6D).
Indexing data quality with GVTD in task HD-DOT data by comparison against fMRI
Dataset 2 was used to evaluate the ability of GVTD to index HD-DOT data quality. HD-DOT responses to hearing words were compared to the group-mean fMRI response to the same task, which was independently acquired in a separate experiment and treated as a "gold standard". We rank-ordered ordinary HD-DOT trials for each subject according to their mean GVTD value; for each subject, the ten lowest and ten highest GVTD ordinary trials were defined as "low motion" and "medium motion", respectively. The instructed motion trials were defined as "high motion". Responses were extracted from a fixed ROI defined as P < 0.05 in the fMRI dataset (Fig. 8A, 3rd column). Time-courses were computed for each of the three HD-DOT conditions and correlated with the fMRI response (Fig. 7B-D). This correlation progressively decreased from 0.97 for low motion, to 0.86 for medium motion, to 0.78 for instructed motion (Fig. 7E). Medium motion responses (Fig. 7C) were comparable to fMRI, but with a smaller peak value and higher mean squared error (0.08). Trials that GVTD identified as low motion (Fig. 7B) generated the cleanest maps with the lowest mean squared error (0.06). Accordingly, the GLM-derived beta values were greater in the low as compared to high motion trials in most subjects (Fig. 7G).
A cautionary point regarding GLM-derived beta values is raised by the instructed motion trials, which generated the highest mean squared error (0.12) as well as the greatest apparent response modulations and, hence, the greatest GLM-derived beta values (Fig. 7H). These response time-courses were the least similar to those obtained by fMRI (Fig. 7F) and were accompanied by voxel-wise activations outside of the auditory cortex. Thus, the apparently strong HD-DOT responses in the instructed motion condition are attributable to motion artifact, as detected by GVTD (Fig. 7E). We conclude that the results shown in Fig. 7 demonstrate that GVTD effectively indexes HD-DOT data quality.
Additional results derived from the HW response analysis show progressively lower similarity of the HW responses to the fMRI results in association with greater GVTD values (Fig. 7E, F). The relationship between low motion and medium motion data within each session shows that responses are systematically greater in low motion as opposed to medium motion trials (true in 15 out of 17 sessions). The responses are comparably compromised by spontaneous motion in medium motion trials (as indexed by greater GVTD) and spuriously higher in instructed motion trials with the highest GVTD scores (Fig. 7G, H).
Comparison between motion removal methods applied to HW task HD-DOT data
To compare the performance of different motion removal methods on HD-DOT data, we used dataset 3, acquired in older subjects (n = 13; 42 ± 19 years old) performing the hearing words task (no instructed motion). Dataset 3 included a wide range of motion contamination levels. The details of the various motion removal methods used in this analysis are explained in §2.1.2. Responses were evaluated in terms of statistical significance at the voxel and ROI levels as well as time-course similarity with fMRI.
Without motion removal, the group-level t-statistic map contained several spurious activations that are not present in the fMRI results (Fig. 8A, B). Moreover, the expected superior temporal cortex response did not achieve statistical significance at P < 0.05. In this analysis, the GVTD threshold was computed as $g_{\mathrm{thresh}} = \tilde{g} + 3\sigma_{\mathrm{left}}$ (Eq. 5). Exemplary low motion and high motion blocks are illustrated in Supplementary Fig. S4. This threshold excluded all blocks in 6 subjects, leaving 7 subjects contributing to the final result illustrated in Fig. 8C. Results obtained with TDDR, tPCA, CBSI, kbWF, hybrid (spline + Savitzky-Golay), and wavelet filtering are illustrated in Fig. 8D-I. GVTD censoring, TDDR, and CBSI recovered bilateral superior temporal cortex activations in thresholded t-statistic maps (P < 0.05). tPCA and hybrid methods recovered a unilateral right hemisphere activation. However, no statistically significant (P < 0.05) responses were obtained with the remaining methods (wavelet and kbWF).
We quantified the performance of the results shown in Fig. 8B-I using two metrics: 1. similarity score, defined as the voxel-wise Pearson correlation between the non-thresholded maps and the fMRI gold standard map, and 2. mean t-value in the auditory ROI defined as P < 0.05 in the fMRI t-map (Fig. 8A, 3rd column). The spatial similarity to fMRI was greatest for the GVTD-censored map, followed by the TDDR, tPCA, hybrid, not-corrected, CBSI, wavelet, and kbWF maps (Fig. 8J). The mean ROI t-value was greatest for the GVTD-censored maps, followed by the TDDR, CBSI, hybrid, not-corrected, tPCA, kbWF, and wavelet corrections (Fig. 8K). As noted above in §3.5, artifacts can spuriously increase apparent response magnitudes and, hence, GLM-derived t-values. This observation underscores the value of comparing HD-DOT results to those of fMRI.
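The similarity score reduces to a masked correlation; a sketch with illustrative names, assuming both maps are sampled on the same voxel grid:

```python
import numpy as np

def similarity(dot_map: np.ndarray, fmri_map: np.ndarray,
               fov_mask: np.ndarray) -> float:
    """Voxel-wise Pearson correlation over the shared field of view."""
    a = dot_map[fov_mask]
    b = fmri_map[fov_mask]
    return float(np.corrcoef(a, b)[0, 1])
```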
Comparison between motion removal methods applied to resting state HD-DOT data
We compared the performance of different HD-DOT motion removal methods in application to resting state HD-DOT data using dataset 4 (n = 8 adults). Seed-based functional connectivity (FC) was computed using the 5 seed ROIs (§2.7.4, Fig. 9 top row). In parallel with §3.6, we quantified the performance of each correction method using two metrics: 1. similarity score, defined as the spatial similarity between the HD-DOT and fMRI FC maps; and 2. mean FC (Fisher z-transformed correlation) in functionally connected ROIs identified in the fMRI data. The spatial similarity was computed as the Fisher z-transformed Pearson spatial correlation between non-thresholded maps, evaluated over the HD-DOT FOV (white area illustrated in the top row of Fig. 9). Mean FC was evaluated in the colored ROIs illustrated in Fig. 9A. Thus, this measure reflected simple homotopic FC in primary cortical areas as well as ipsilateral FC in the higher-order networks (DAN and FPN). The GVTD threshold was computed as $g_{\mathrm{thresh}} = \tilde{g} + 10\sigma_{\mathrm{left}}$ (Eq. 5). This lenient threshold minimized data loss. On the basis of preliminary testing, GVTD censoring was extended to retain only epochs of at least 30 seconds' duration.
The results obtained with the various correction methods are shown in Fig. 9B-I. The most extensive HD-DOT FC maps were obtained in uncorrected data (Fig. 9B). However, these maps were not the most spatially similar to the fMRI gold standard dataset. Rather, GVTD censoring (Fig. 9C) yielded the HD-DOT FC maps most similar to fMRI (Fig. 9J). Of all methods, GVTD censoring yielded the greatest FC in the evaluation ROIs, followed by the wavelet, CBSI, TDDR, tPCA, kbWF, and hybrid corrections (Fig. 9K). As with the HW task responses, strong FC in the evaluation ROIs does not necessarily indicate good data quality, especially when accompanied by spurious effects outside of the network identified on the basis of fMRI (e.g., as seen in the no-correction, wavelet, and CBSI maps). On the other hand, some methods may overcorrect, leading to falsely weak correlations (TDDR, tPCA, kbWF, and hybrid methods).
A general summary of the novel strategies and findings
We developed a novel motion detection method suitable for high-density optical imaging arrays, inspired by the DVARS in fMRI [41]. Specifically, we defined the global measure of variance in the temporal derivative across measurement channels (GVTD) and developed a method for denoising structured artifacts in HD-DOT. We found that GVTD successfully indexes motion artifacts in HD-DOT and has higher sensitivity and specificity (evaluated using AUC of the ROC curve against the ground truth of instructed motion) for motion detection compared to an accelerometer motion sensor and to single-channel motion detection methods commonly used in fNIRS (absolute signal amplitudes and windowed amplitude changes).
While there are a number of papers evaluating motion removal methods for standard fNIRS [32-37, 44, 56-59], the literature on motion removal strategies for HD-DOT is limited. Previous studies lack HD-DOT datasets, comparisons to gold standard data (fMRI) for image quality validation, or both, and most are restricted to single-channel motion detection. In this paper, we introduce a novel approach for evaluating the efficacy of motion removal methods in HD-DOT by comparison against matched fMRI datasets.
We show that the mean GVTD score is correlated with the similarity of the HD-DOT task images to those of fMRI. Thus, the mean GVTD score can be used to classify datasets as either clean or noisy (Fig. 7). We also show that applying GVTD censoring to both task and resting state HD-DOT datasets outperforms other fNIRS-based motion correction methods and makes HD-DOT maps more similar to those of fMRI. Together, high-density imaging arrays, anatomical atlasing, and GVTD motion censoring all make HD-DOT data more comparable to fMRI and further the use of HD-DOT as a surrogate for fMRI.
Optimizing the implementation of GVTD in the HD-DOT processing pipeline
We optimized the use of GVTD motion detection in HD-DOT by testing it at different steps of the processing pipeline using the artifact-to-background ratio (ABR). In fMRI, DVARS has only been evaluated before and after filtering [53]. In contrast, in HD-DOT, we can compute GVTD in either measurement space or image space (after image reconstruction). Our results show that the ABR was highest in measurement space, prior to image reconstruction and after filtering the high-frequency content of the data. It was also statistically better when computed after SSR, a common fNIRS and DOT processing step (in datasets 4 and 5). Therefore, based on our ABR analysis, we recommend computing GVTD after filtering the measurements but prior to image reconstruction.
An important decision with GVTD is determining the censoring threshold. Since the baseline GVTD value differed across people, similar to findings with DVARS in fMRI [48], we evaluated a noise detection strategy based on the GVTD distribution (histogram) specific to each subject. The differences in the baseline GVTD distribution are possibly due to variable physiological signal levels as well as respiratory patterns, heart rate, facial muscle activity, restlessness, tremor, etc. [19,53]. Therefore, we developed an outlier detection strategy individualized for each subject's data that semi-automates the noise threshold determination and takes subject differences into account. Specifically, we set the threshold using the GVTD distribution mode ($\tilde{g}$) plus a constant ($c$) times the standard deviation of the left (lower) side of the mode of the GVTD distribution. For practical implementation, we recommend that the threshold be greater than the standard deviation of the baseline signal.
Evaluation and validation of denoising through comparisons to fMRI
Most fNIRS studies measure the efficacy of motion removal techniques based on the recovery of a synthetic HRF [29][30][31]34], or, in the case of real data, based on the variance across subjects or datasets [31]. However, since HD-DOT aims to create images comparable to fMRI, throughout this paper we have used an fMRI dataset with the same task and resting state paradigms as our HD-DOT datasets as a gold standard for evaluating the efficacy of different motion removal methods. The comparisons were based on the voxel-wise Pearson correlation between the spatial HD-DOT maps and the fMRI maps. We find that, for both task and resting state functional connectivity, comparison to fMRI enables identification of false negatives, false positives, and localization errors (Figs. 8, 9), all of which would be difficult to determine without a target image.
In vivo imaging enables a much stronger evaluation than in silico simulations; fMRI data contains real image features, including the spatial extent, signal magnitude, distribution of spatial frequencies, and time-courses.
Using the fMRI comparisons, we rank-ordered several motion removal methods in both task and resting state data. The general pattern was that motion censoring using GVTD worked best, with the nearest contenders being CBSI and TDDR, followed by targeted PCA, in both task and rest data. TDDR and tPCA both suppressed the mean t-value in the auditory ROI and the FC in the evaluation ROIs, which may indicate overcorrection, i.e., removal of true signal. Wavelet filtering ranked second after GVTD censoring in resting state data, both in terms of similarity with fMRI and mean FC in the resting state networks, in contrast to its lower performance in the task data.
On the variable performance of motion correction methods in the fNIRS literature
A striking aspect of the fNIRS literature is the variable performance of motion correction methods across different studies [29-31, 33, 34, 37]. One possible reason for the variability between the studies could be the different levels of motion present in each study. This variability has also been evaluated in a recent fNIRS study [31]. To address this topic, we performed a supplementary analysis of the low motion, medium motion, and high motion HW task data in dataset 2 (Fig. 7).
We evaluated the performance of different motion correction methods on different levels of motion artifacts in these three categories (Figs. S5, S6).
This analysis shows that, in the low motion group, all methods can preserve bilateral auditory cortex HW responses. In the medium and instructed motion groups, GVTD, TDDR, CBSI, and tPCA again outperformed other methods by recovering either a unilateral or bilateral HW activation with no obvious false positives in the P < 0.05 thresholded maps (Fig. S7). Note that, in the high motion data (instructed motion group), none of the motion correction techniques fully recovered bilateral auditory responses (present in fMRI). However, GVTD was able to distinguish between clean vs. motion-corrupted data (Fig. 7). We hypothesize that GVTD can provide a means of rank-ordering data based on quantitative motion estimation (as suggested in Fig. 7), something that is normally done subjectively prior to applying motion correction methods. Thus, GVTD may be useful also in denoising sparse fNIRS data. This notion could be tested by evaluating the efficacy of GVTD in sparse fNIRS arrays or by subsampling the HD-DOT imaging array.
GVTD focuses on motion detection, followed by simple censoring. GVTD could be used as an alternative to either absolute signal amplitudes or windowed amplitude changes included in the Homer2 code package [60]. Further, GVTD could be used in conjunction with motion correction methods such as spline interpolation (MARA) [32], Kalman filtering [56,61], PCA [56], tPCA [44], Hybrid methods [34], or any method that depends on motion detection in the temporal domain. However, we note that, in the results presented here, GVTD-based censoring alone provided better image quality than any of the alternative motion correction procedures.
Strengths and limitations of the GVTD-based motion censoring
When tested in HD-DOT, the most promising results were obtained using GVTD-based motion censoring. A likely reason for GVTD efficacy is that it leverages the effect of small artifacts across many measurements. The simplicity of GVTD censoring guarantees that the signal is neither oversmoothed nor overcorrected.
As described here, GVTD is used as a binary classifier to censor the time-points marked as noisy. However, it could also be used with non-binary weights assigned to time-points based on their GVTD values, to soften the impact of the threshold choice. For example, time-points with GVTD values closer to the GVTD distribution mode could be assigned higher weights than ones further from the mode [62].
Another important challenge in scrubbing data is the tradeoff between losing signal and removing noise [63]. For the motion criterion, one can ensure that sufficient data remain after censoring by tuning $c$ (Eq. 5). Another approach would be to use GVTD to determine the usable data yielded by a run and then adjust the data collection, either collecting more data within the session or adding sessions to the study. Such active data quality approaches are currently being pioneered in fMRI with runtime assessment of motion [64][65][66].
Summarizing the consensus regarding the top-performing denoising strategies in the fNIRS literature
Among the fNIRS-based methods that worked best for HD-DOT, besides GVTD, CBSI performed well in both task and resting state data. CBSI does not require tuning of parameters but has been less recommended in the literature [33,34] because it relies on the assumption of a negative correlation between HbO and HbR. It is therefore limited to populations in which the normal anti-correlation between HbO and HbR can be assumed [35].
The TDDR method performed well in the task HD-DOT data and fairly well in the resting state analysis. TDDR, like CBSI, does not require tuning of parameters. However, one disadvantage of TDDR is that it relies on the derivative of single measurements and, thus, is less sensitive to small motion artifacts such as eyebrow motion. Moreover, TDDR only performs an efficient motion correction on the low-frequency content of the data, because the higher frequencies inflate the variance of the temporal derivative distribution and create bias in the distribution of estimates [33]. However, we showed that the noise content is still present in the data after band-pass filtering (see post-filtering gray plots in Fig. 6C showing residual artifact during motion).
Targeted PCA also yielded HD-DOT maps similar to those of fMRI, but with decreased response magnitudes in both task and resting state data. tPCA removes a fixed proportion of variance through removal of the largest principal components; hence, as observed here, it is prone to overcorrection [33,44].
Wavelet filtering, despite a poor performance in task data, showed good performance in resting state HD-DOT data. However, this method is computationally expensive. On average, for both HW and rest HD-DOT runs, wavelet filtering ran ten times slower than other motion correction or censoring methods. The kbWF method, while faster than the full wavelet approach, did not perform well in either task or rest HD-DOT data.
Conclusion
We developed GVTD, a novel motion detection metric, and optimized its use in the HD-DOT pre-processing pipeline. GVTD can be used alone or in combination with other motion correction methods to increase the quality of data obtained with multichannel optical imaging systems. We evaluated GVTD using several independent HD-DOT datasets, including an instructed motion protocol, accelerometer motion measures, and a matched fMRI dataset serving as ground truth. Although GVTD-based censoring removes data, the resulting HD-DOT maps were the most similar to those of fMRI, outperforming alternative motion correction methods previously described in the fNIRS literature.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Eupatilin Inhibits Gastric Cancer Cell Growth by Blocking STAT3-Mediated VEGF Expression
Purpose: Eupatilin is an antioxidative flavone and a phytopharmaceutical derived from Artemisia asiatica. It has been reported to possess anti-tumor activity in some types of cancer, including gastric cancer. Eupatilin may modulate the angiogenesis pathway, which is part of the anti-inflammatory effect demonstrated in gastric mucosal injury models. Here we investigated the anti-tumor effects of eupatilin on gastric cancer cells and elucidated the potential underlying mechanism whereby eupatilin suppresses angiogenesis and tumor growth.
Materials and Methods: The impact of eupatilin on the expression of angiogenesis pathway proteins was assessed using western blots in MKN45 cells. Using a chromatin immunoprecipitation (ChIP) assay, we tested whether eupatilin affects the recruitment of signal transducer and activator of transcription 3 (STAT3), aryl hydrocarbon receptor nuclear translocator (ARNT), and hypoxia-inducible factor-1α (HIF-1α) to the human VEGF promoter. To investigate the effect of eupatilin on vasculogenesis, tube formation assays were conducted using human umbilical vein endothelial cells (HUVECs). The effect of eupatilin on tumor suppression in mouse xenografts was also assessed.
Results: Eupatilin significantly reduced VEGF, ARNT and STAT3 expression, prominently under hypoxic conditions. The recruitment of STAT3, ARNT and HIF-1α to the VEGF promoter was inhibited by eupatilin treatment. HUVECs produced much foreshortened and severely broken tubes with eupatilin treatment. In addition, eupatilin effectively reduced tumor growth in a mouse xenograft model.
Conclusions: Our results indicate that eupatilin inhibits angiogenesis in gastric cancer cells by blocking STAT3 and VEGF expression, suggesting its therapeutic potential in the treatment of gastric cancer.
Introduction
It has been well demonstrated that neovascularization, or angiogenesis, is required for successful tumor growth and metastasis. (1) In addition, vascular endothelial growth factor (VEGF) is known to be one of the most important and well-characterized inducers of angiogenesis. (2)(3)(4) VEGF expression and angiogenesis can be induced as a consequence of microenvironmental alterations, particularly hypoxia, (4) or genetic aberrations, (5,6) including the activation of oncogenic kinases. (7,8) Hypoxia-inducible factor (HIF) is a transcription factor that is stabilized under reduced oxygen tension and plays a key role in the cellular response to hypoxia. HIF is a heterodimer consisting of two subunits, oxygen-sensitive HIF-α and constitutively expressed HIF-β [also known as aryl hydrocarbon receptor nuclear translocator (ARNT), the heterodimeric partner of the aryl hydrocarbon receptor (AHR)]. (9) Upon hypoxia, HIF-1α heterodimerizes with the constitutively expressed HIF-1β subunit, and together they bind to DNA to increase the transcription of target genes including VEGF, erythropoietin, transferrin, endothelin 1, inducible nitric oxide synthase, and insulin-like growth factor II. (10)(11)(12) Constitutive activation of protein kinases is highly prevalent in a wide range of cancers, and their role in VEGF induction and angiogenesis has been well documented. (7,8) Although diverse kinases transduce signals through multiple routes, signal transducer and activator of transcription 3 (STAT3) constitutes a convergence point of many signaling pathways (13,14) and transmits signals to the nucleus, where it binds to specific DNA promoter sequences and thereby regulates gene expression. (15) STAT proteins participate in tumorigenesis through up-regulation of genes encoding apoptosis inhibitors [myeloid cell leukemia sequence 1 (MCL1), BCL2-like 1 (BCL2L1)] and cell-cycle regulators (cyclin D1/D2, MYC). While searching for an antiangiogenic agent that would inhibit HIF-1 activity, we identified a novel pharmacologic activity of eupatilin. Eupatilin, a phytopharmaceutical derived from Artemisia asiatica, has been reported to possess antioxidative and cytoprotective functions in various models of gastric mucosal damage. (18)(19)(20) We found that eupatilin inhibits HIF-1 activity in vitro. Eupatilin completely blocks HIF-1α expression at the post-transcriptional level and consequently inhibits the transcription factor activity of HIF-1 in cancer cells cultured under hypoxic conditions.
In this study, we demonstrated that eupatilin inhibits STAT3 activation in hypoxia-stimulated cancer cells, that the transcriptional activation of the VEGF promoter is mediated by active STAT3, and that active STAT3 interacts with HIF-1 and increases HIF-1 accumulation in hypoxic cells.
Cell culture and hypoxic condition
The human gastric cancer cell line MKN45 was maintained in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% heat-inactivated fetal bovine serum (FBS) and 1% antibiotics. As shown in Fig. 1, we found that total STAT3 expression was increased under hypoxic conditions. Interestingly, active STAT3, the phosphorylated form, was more markedly reduced by eupatilin treatment in hypoxia than in normoxia.
STAT3 interacts with HIF-1α and eupatilin inhibits STAT3 recruitment to the VEGF promoter
Our data showed that STAT3 protein expression was greater under hypoxic conditions compared to normoxia, suggesting an important role of STAT3 in regulating VEGF expression in hypoxia (Fig. 1). We thus hypothesized that VEGF expression is cooperatively regulated by HIF-1α and ARNT, as well as by STAT3. To test this hypothesis, we first investigated possible interactions between HIF-1α, STAT3, ARNT and VEGF using co-immunoprecipitation assays. MKN45 cells grown under either normoxic or hypoxic conditions were lysed and immunoprecipitated with an anti-HIF-1α antibody, followed by Western blotting with anti-STAT3, ARNT, or VEGF antibodies. We found that STAT3 and VEGF were co-precipitated with HIF-1α in both normoxic and hypoxic cells (Fig. 2A). Interestingly, for reasons yet to be explained, ARNT was co-precipitated with HIF-1α only under hypoxic conditions.
Fig. 1. Eupatilin inhibits the expression of the angiogenesis gene products HIF-1α, ARNT, STAT3 and VEGF. MKN45 cells were treated with the indicated concentrations of eupatilin before being cultured for 6 hr under normoxic (20% O2 v/v) or hypoxic (1% O2 v/v) conditions. Expression levels of HIF-1α, ARNT, STAT3 and phospho-STAT3 were analyzed by immunoblotting. β-actin was used as a loading control. Proteins were visualized by enhanced chemiluminescence. HIF-1α = hypoxia-inducible factor-1α; ARNT = aryl hydrocarbon receptor nuclear translocator; STAT3 = signal transducer and activator of transcription 3; VEGF = vascular endothelial growth factor.
To investigate whether STAT3, HIF-1α, and ARNT might be recruited to the VEGF promoter, and whether eupatilin might inhibit the interaction between these angiogenic proteins and the VEGF promoter, we performed ChIP assays on chromatin samples from normoxic and hypoxic cells with eupatilin treatment. As expected, a slight increase in the interaction of HIF-1α with the VEGF promoter was observed under hypoxia (Fig. 2B).
Eupatilin directly decreases HUVEC capillary tube formation
In light of the role of eupatilin in suppressing the angiogenic pathway as suggested above, we next investigated the effect of eupatilin on vascular endothelial cells under hypoxic conditions. In vitro angiogenesis assays were conducted using HUVECs. During angiogenesis, endothelial cells must break through and traverse their basement membrane to form new blood vessels. Hypoxia can stimulate endothelial cell invasion and tube formation. Eupatilin was administered to HUVECs seeded on Matrigel beds (10 mg/ml) and incubated for 16 hr under hypoxic conditions. Eupatilin strongly inhibited the hypoxia-stimulated capillary network formation. With increasing doses of eupatilin, vasculogenesis was significantly inhibited, as evidenced by the production of considerably foreshortened and severely broken tubes (Fig. 3). Tumors in eupatilin-treated mice were significantly smaller than those in vehicle-treated mice (Fig. 4A). The changes in tumor size were measured and plotted as average tumor size versus time (data not shown). When ex vivo tumor weight was measured upon sacrifice, there was a significant difference in tumor weight between the control (vehicle only) and EPT groups (Fig. 4B). These results indicated that eupatilin effectively inhibited tumor growth in a xenograft tumor model.
Discussion
Angiogenesis is essential for the growth and metastasis of solid tumors, and the inhibition of angiogenesis is emerging as a promising strategy for cancer treatment. Increasing evidence has indicated that STAT3 activation is necessary for the malignant phenotype of many tumors. (25) Some previous studies have emphasized that STAT3 is a critical requirement for HIF-1α expression and that HIF-1α expression is blocked by STAT3 inhibitors. (24) In this study, we found that eupatilin inhibited STAT3 expression and markedly suppressed the activation of STAT3. Furthermore, we identified an interaction between HIF-1α and STAT3 at the VEGF promoter region using co-immunoprecipitation and ChIP assays, suggesting that both HIF-1α and STAT3 serve as transcription factors binding to the VEGF promoter. Indeed, increased interaction of HIF-1α with the VEGF promoter was observed under hypoxia, as expected (Fig. 2B). However, for reasons yet to be determined, the binding | 2014-10-01T00:00:00.000Z | 2011-03-01T00:00:00.000 | {
"year": 2011,
"sha1": "1999740e443af36d6392cea6b867956ca50f2760",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc3204482?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "f34f9c18c33f45ed40d74ffd118f55d68fdd4a51",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
255570006 | pes2o/s2orc | v3-fos-license | A Privacy Preserving Method with a Random Orthogonal Matrix for ConvMixer Models
In this paper, a privacy-preserving image classification method is proposed for use with ConvMixer models. To protect the visual information of test images, a test image is divided into blocks, and then every block is encrypted by using a random orthogonal matrix. Moreover, a ConvMixer model trained with plain images is transformed with the random orthogonal matrix used for encrypting test images, on the basis of the embedding structure of ConvMixer. The proposed method allows us not only to maintain the same classification accuracy as ConvMixer models used without privacy protection but also to enhance robustness against various attacks compared with conventional privacy-preserving learning.
Introduction
Deep learning has been deployed in many applications, including security-critical ones. Generally, data contains sensitive information such as personal information, so privacy-preserving methods for deep learning have become an urgent problem [1]. To achieve privacy-preserving learning, various methods have been proposed. One of them is Federated Learning (FL) [2], which is a type of distributed learning. FL allows us to train a model over multiple participants without directly sharing their raw data. However, FL has not so far considered the protection of test data in cloud environments. In this paper, we propose a novel method for protecting visual information on test images.
To protect visual information on plain images in untrusted cloud environments, many learnable encryption methods have been studied so far [3]-[13]. Learnable encryption generally has to satisfy three requirements: (a) having a high accuracy that is almost the same as that of plain models, (b) being robust enough against various attacks, and (c) easily updating a secret key. However, most existing methods [3]-[11] degrade the accuracy of models due to the use of encrypted images and, moreover, need to retrain models to update the key. In contrast, the similarity between block-wise encryption and the architecture of isotropic networks has been pointed out to enable us to perfectly satisfy the requirements that the existing methods cannot [12][13]. Information on embeddings in isotropic networks such as the vision transformer [14] and ConvMixer [15] is encrypted by random matrices generated with secret keys for privacy-preserving learning. However, the conventional methods [12][13] use simple permutation matrices for image and model encryption, so encrypted images are not robust enough against various attacks. Accordingly, we propose the use of a novel random matrix, called a random orthogonal matrix, generated by using Gram-Schmidt orthonormalization. The proposed method allows us to enhance the visual protection of images while maintaining the same accuracy as that of plain models and the easy update of a secret key.
ConvMixer
Before discussing the proposed method, we briefly summarize ConvMixer and its properties. ConvMixer is mainly used for image classification tasks and is known for its high classification performance [15]. The structure of ConvMixer is inspired by the Vision Transformer (ViT) [14]. ViT consists of two embedding processes (patch embedding and position embedding) and a Transformer structure. In contrast, ConvMixer consists of a patch embedding and a CNN structure. Figure 2 shows the structure of ConvMixer, which consists of two main components: patch embedding and ConvMixer layers. In this paper, we focus on patch embedding. In patch embedding, an input image x ∈ R^(h×w×c) of height h, width w, and number of channels c is divided into patches of size p × p. Each patch is then flattened into a vector v ∈ R^(cp²) and multiplied by a learnable filter E ∈ R^(d×cp²), which linearly transforms it into a d-dimensional vector by taking the product

z = Ev.  (1)

In previous studies [12][13], it is known that the privacy of test images can be protected by transforming the filter E with a secret key. In this paper, we propose a method that achieves stronger privacy preservation of test images by using random orthogonal matrices.
Proposed Method
The proposed method aims to protect visual information on test images. To achieve this aim, we encrypt test images and transform the model by using a random orthogonal matrix M. The framework is summarized as below.
• A third party (trusted) generates random numbers with a secret key (seed), and prepares a random orthogonal matrix M from the random numbers together with its inverse M^(-1).
• The third party trains a ConvMixer model f with plain images. The trained model is transformed into an encrypted model f̂ by using M^(-1).
• The third party provides the random orthogonal matrix M to a client (trusted) and the model f̂ to a provider (untrusted).
• The client transforms a test image x into an encrypted image x̂ by using M. After that, the client sends x̂ to the provider.
• The provider inputs x̂ into model f̂, and sends back a prediction result to the client.
Even if the provider is not trusted, the client gives neither the visual information of test images nor the matrix M used for image encryption to the provider. Thus, the client can receive prediction results while preserving the privacy of test images.
Test Image Encryption
A test image x ∈ R^(h×w×c) is transformed into an encrypted image x̂ ∈ R^(h×w×c) as below.
1. Divide x into blocks with a size of p × p such that B = {B_1, ..., B_n}, where p × p is the same as the patch size used in the ConvMixer model, and n is (h × w)/p².
2. Flatten each block B_i into a vector v_i ∈ R^(cp²) and encrypt it as

v̂_i = M v_i.  (2)

3. Reshape each encrypted vector v̂_i back into a p × p × c block and reassemble the blocks to form x̂.
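A minimal Python sketch of this block-wise encryption follows; it assumes the flattening order of each block matches the one used by the model's patch embedding, and all names (e.g., encrypt_image) are ours rather than the paper's:

```python
import numpy as np

def encrypt_image(x, M, p):
    """Encrypt an image x of shape (h, w, c) block by block with an
    orthogonal matrix M of size (c*p*p, c*p*p), following Eq. (2)."""
    h, w, c = x.shape
    x_hat = np.empty((h, w, c))
    for i in range(0, h, p):
        for j in range(0, w, p):
            v = x[i:i + p, j:j + p, :].reshape(-1)       # flatten block B_i
            x_hat[i:i + p, j:j + p, :] = (M @ v).reshape(p, p, c)
    return x_hat
```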
Model Encryption
To avoid the performance degradation caused by the encryption of test images, the filter E in Eq. (1) is transformed by using M^(-1) as

Ê = E M^(-1).  (3)

When replacing E and v with Ê and v̂, respectively, the vector z in Eq. (1) is reduced to

ẑ = Ê v̂.  (4)

Thus, by substituting Eqs. (2) and (3) into Eq. (4),

ẑ = E M^(-1) M v = E v = z.  (5)
From Eq. (5), the encrypted model f̂ allows us to obtain the same performance as that of the model trained with plain images, even under the use of encrypted images.
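The identity in Eq. (5) is easy to verify numerically. The following sketch uses a random matrix in place of a trained filter, so all values are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 256, 48                                  # embedding dim, c*p*p = 3*4*4
E = rng.normal(size=(d, k))                     # stands in for the trained filter
M, _ = np.linalg.qr(rng.normal(size=(k, k)))    # a random orthogonal matrix
E_hat = E @ M.T                                 # Eq. (3); M^{-1} equals M.T here

v = rng.normal(size=k)                          # a flattened image block
z = E @ v                                       # plain embedding, Eq. (1)
z_hat = E_hat @ (M @ v)                         # embedding of the encrypted block
assert np.allclose(z, z_hat)                    # Eq. (5): outputs are identical
```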
Generation of Random Orthogonal Matrices
A random orthogonal matrix M can be generated by using the Gram-Schmidt orthonormalization. The procedure for generating M with a size of k × k is given as follows.
1. Generate a real matrix A with a size of k × k by using a random number generator with a seed.
3. Compute a random orthogonal matrix M from A by using the Gram-Schmidt orthogonalization.
In this framework, any regular matrix can be used as A for image encryption. Several conventional methods for privacy-preserving image classification use permutation matrices of pixel values, in which most elements are zero, e.g.,

P = ( 0 1 0 ; 0 0 1 ; 1 0 0 ).

In contrast, the proposed random orthogonal matrices contain no zero-valued elements. The use of such matrices allows us not only to more strongly protect visual information on plain images but also to enhance robustness against various attacks, while maintaining the same performance as that of models trained with plain images. In addition, M^(-1) can easily be calculated as the transpose of M.
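As a sketch, the generation procedure can be implemented as follows; the Gram-Schmidt loop is one of several equivalent ways to orthonormalize, and the function name is ours:

```python
import numpy as np

def random_orthogonal(k, seed):
    """k x k random orthogonal matrix via Gram-Schmidt orthonormalization
    of a seeded random real matrix (steps 1 and 3 above)."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(k, k))              # random real matrix (regular w.p. 1)
    M = np.empty_like(A)
    for i in range(k):
        v = A[i].copy()
        for j in range(i):
            v -= (M[j] @ A[i]) * M[j]        # remove projections on earlier rows
        M[i] = v / np.linalg.norm(v)         # normalize
    return M

M = random_orthogonal(48, seed=42)           # 48 = c*p*p for 4x4 RGB patches
assert np.allclose(M @ M.T, np.eye(48))      # orthogonal: M^{-1} = M.T
```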
Experiment Results
To verify the effectiveness of the proposed method, we ran a number of experiments on the CIFAR-10 dataset.
Setup
We used the CIFAR-10 dataset, which consists of 60,000 color images (dimensions of 32 × 32 × 3) with 10 classes (6,000 images per class), where 50,000 images are for training and 10,000 for testing. ConvMixer was trained and tested on the CIFAR-10 dataset. In the model setting, we set the patch size to 4, the number of channels after patch embedding to 256, the kernel size of the depth-wise convolution to 7, and the number of ConvMixer layers to 8. Models were trained for 200 epochs with the Adam optimizer, where the learning rate was 0.001. We also used a random orthogonal matrix with a size of 48 × 48 for the encryption of test images and models. Figure 3 shows an example of images encrypted with a conventional encryption method [12][13], in which pixel shuffling and negative-positive transformation are carried out for image encryption, and an example of images encrypted with the proposed method, where the images had a size of h × w × c = 512 × 512 × 3, and the block sizes used for encryption were p = 8 and p = 16. When using an orthogonal matrix for encryption, the transformed pixel values are real values, so the images in Fig. 3(b) are displayed after normalizing the pixel values to the range of [0, 1]. From the figures, selecting a larger block size left less visual information. The use of random orthogonal matrices was also demonstrated to have a stronger visual protection performance than that of the conventional method. In addition to visual protection, encrypted images have to be robust enough against various attacks that aim to restore visual information from encrypted images. We have already confirmed that images encrypted with the proposed method are more robust against attacks, including jigsaw puzzle solver attacks [16]. In particular, unlike ViT, ConvMixer models do not have position embedding, so the position of patches cannot be changed. Therefore, privacy-preserving ConvMixer needs a stronger encryption method than ViT.
Classification Performance
We evaluated the classification performance of the proposed method as shown in Table 1, where "plain" and "encrypted" indicate plain test images and encrypted test images, respectively, and "plain model" and "encrypted model" denote models trained with plain images and encrypted models, respectively. Table 1 shows the classification results for each combination. From the table, the proposed method (the combined use of encrypted models and encrypted images) had the same classification accuracy as the baseline without privacy protection (plain models and plain images). Accordingly, the proposed method can not only protect the visual information of test images but also classify encrypted images without any degradation of classification accuracy.
Conclusion
In this paper, we proposed a novel method for protecting visual information on test images under the use of ConvMixer models. The proposed method allows us to use a random orthogonal matrix for image encryption, and it was demonstrated not only to enhance the visual protection of images but also to maintain the same accuracy as that of models trained with plain images. | 2023-01-11T06:42:26.892Z | 2023-01-10T00:00:00.000 | {
"year": 2023,
"sha1": "e29f5b7c55dc655414d66d23308fd7ac3710399a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e29f5b7c55dc655414d66d23308fd7ac3710399a",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
237720349 | pes2o/s2orc | v3-fos-license | Modelling height-diameter relationships in complex tropical rain forest ecosystems using deep learning algorithm
Modelling tree height-diameter relationships in complex tropical rain forest ecosystems remains a challenge because of their multi-species, multi-layer, and indeterminate-age composition. Effective modelling of such complex systems requires innovative techniques to improve the prediction of tree heights for use in aboveground biomass estimation. Therefore, in this study, deep learning algorithm (DLA) models based on artificial intelligence were trained for predicting tree heights in a tropical rain forest of Nigeria. The data consisted of 1,736 individual trees representing 116 species, measured from 52 sample plots of 0.25 ha. A K-means clustering was used to classify the species into three groups based on height-diameter ratios. The DLA models were trained for each species-group, with diameter at breast height, quadratic mean diameter and number of trees per ha used as input variables. Predictions by the DLA models were compared with those of models developed by nonlinear least squares (NLS) and nonlinear mixed-effects (NLME) using different evaluation statistics and an equivalence test. In addition, the heights predicted by the models were used to estimate aboveground biomass. The results showed that the DLA models with 100 neurons in 6 hidden layers, 100 neurons in 9 hidden layers and 100 neurons in 7 hidden layers for groups 1, 2, and 3, respectively, outperformed the NLS and NLME models. The root mean square error for the DLA models ranged from 1.939 to 3.887 m. The results also showed that using heights predicted by the DLA models for aboveground biomass estimation brought about a more than 30% reduction in error relative to NLS and NLME. Consequently, minimal errors were introduced into aboveground biomass estimation compared to those of the classical methods.
Introduction
Tree height (h) and diameter (d) are important variables that are frequently measured in forest inventories for the determination of volume, biomass, and basal area (Gomez-Garcia et al. 2014; West 2015), and they are used for forest stand structure analysis. They provide information on the competitive status of a tree within a stand (West 2015), and their ratio is used as a stability index, i.e., the tree slenderness coefficient (Sharma and Parton 2007; Zhang et al. 2020). Equally important, height and diameter measurements are used for assessing site productivity (West 2015). In fact, height-diameter allometry is regarded as a fundamental component of forest growth and yield models (Gomez-Garcia et al. 2014; Bravo et al. 2019).
The ease by which diameter and height are measured varies, with the former being easier to measure and at low cost (Ferraz-Filho et al. 2018). On the other hand, measurement of tree height is costly, often difficult and time-consuming (Özçelik et al. 2018; Ciceu et al. 2020; Magnussen et al. 2020), especially in complex forest ecosystems with closed canopies (Larjavaara and Muller-Landau 2013), and as such, foresters find it more acceptable to estimate this variable (Temesgen et al. 2014). To do this, a few heights are measured and an appropriate height-diameter (h-d) function is then used to estimate other tree heights for which diameters have been measured (Kalbi et al. 2018). Modelling tree height-diameter relationships in even-aged, single-layer and monospecific or conspecific stands is straightforward and less variable compared with complex tropical mixed forest ecosystems, characterised by multi-species, multi-layers, and indeterminate age composition (Temesgen et al. 2014).
A good example of a complex forest ecosystem is the tropical rain forest biome, regarded as one of the world's major vegetation types and the most diverse terrestrial ecosystem (Turner 2001). It serves as habitat for more fauna and flora species than other biomes (Turner 2001). Studies have shown that in Nigerian rainforests there are more than 4,600 identified plant species (Sarumi et al. 1996), and a majority are locally endemic (Richards 1996). Turner (2001) also suggested that some tropical rain forests may have over 100 tree species with ≥ 10 cm diameter at breast height (1.3 m aboveground) on one hectare. Thus, the complex species and structural composition within a small area makes it difficult to develop models for estimating some dendrometric variables, e.g., tree height (Akindele and LeMay 2006; Bravo et al. 2019).
However, attempts have been made to develop height-diameter (h-d) models for tropical forest ecosystems using different approaches. For example, Fang and Bailey (1998) developed h-d models for all species combined in a tropical forest in Hainan, China. Feldpausch et al. (2011) developed regional h-d allometry models for tropical forest ecosystems using the ordinary least squares technique. A similar approach was used by Ogana (2019) to fit h-d models in tropical mixed forests in Nigeria. However, procedures that do not take into consideration species-specific variability may not give precise predictions of height (Temesgen et al. 2014). Another alternative that has been frequently used involves identifying major tree species, arranging the species into groups if there are many major species, and using ordinary least squares (OLS) or mixed-effects modelling techniques to develop models for the groups. Temesgen et al. (2014) used this methodology to develop h-d relationships for major tree species in tropical forests in Northeast China. Kearsley et al. (2017) also used a similar procedure for tropical forests in the Congo basin. This approach seems appropriate and logical; however, when aboveground biomass estimates of a tropical mixed forest are the objective, the issue of major species selection may be irrelevant. Since tropical biomass equations like those developed by Chave et al. (2014) and Fayolle et al. (2018) require tree height as one of the input variables, it is therefore important to develop h-d models that account for the complex nature of tropical forest ecosystems. In Nigeria, Chenge (2021) classified all the sampled species in the Omo biosphere into groups and fitted both ordinary least squares (OLS) and nonlinear mixed-effects (NLME) models to the group data.
A more recent approach that could be used to address the problem of modelling h-d relationships in a complex forest ecosystem is artificial neural networks (ANNs). ANNs are a subfield of artificial intelligence (AI) whose functionality mimics that of the human brain (Strobl and Forte 2007). ANNs have been consistently used in forestry with significant success for modelling tree height (Özçelik et al. 2013; Vieira et al. 2018; Bayat et al. 2020; Ercanli 2020a; Hamidi et al. 2021), tree taper (Nunes and Görgene 2016), site productivity (Aertsen et al. 2010), tree biomass and volume (Miguel et al. 2016; Özçelik et al. 2017), basal area increment (Ashraf et al. 2013), and mortality and regeneration (Hamidi et al. 2021). These researchers reported reasonable predictions of tree dendrometric variables with ANNs compared with ordinary least squares and mixed-effects models. However, most of the studies have been limited to conspecific stands or stands with a few tree species. In addition to ANNs, the deep learning algorithm (DLA) is another form of AI that has been recently introduced. DLA models are multi-layered ANNs with at least three hidden layers and hundreds to thousands of neurons (Ercanli 2020a). They represent a structure closer in complexity to the human brain than that of shallower ANNs. Recent studies by Ercanli (2020a, b) showed that the DLA gave better predictions of tree height in an even-aged pure pine stand compared to ANNs, mixed-effects and ordinary least squares models.
Application of DLA models in the complex tropical forests of Africa, including Nigeria, has apparently not been documented. Yet accurate prediction of dendrometric variables such as total tree height is necessary for quantifying the aboveground biomass (AGB) of the region. When tree heights are accurately estimated for complex tropical forests, minimal errors will be introduced into the estimation of AGB. Therefore, the objectives of this study were to: (1) develop DLA models for a tropical rain forest of Nigeria; (2) compare the predictions from the DLA with those of h-d models developed with classical methods; and (3) evaluate the models based on aboveground biomass estimations.
Data
The data used for this study were collected in Cross River State of Nigeria during a REDD+ research project funded by the African Forest Forum (AFF) in collaboration with the Swiss Agency for Development and Cooperation (SDC). Additional inventory data from research in the Ekuri Forest Reserve in the same state were also included. The data comprise the diameter and total height of 1,736 individual trees representing 116 species, measured from 52 sample plots of 0.25 ha. The number of individual trees (n) per species ranged from 1 to 378. Of this number, only 12 species had n ≥ 30. Because of the multiple tree species composition, it was not possible to develop species-specific height functions. Therefore, a cluster analysis was carried out.
A K-means clustering (MacQueen 1967) was used to classify the species into groups based on height-diameter ratios; this ensures high intra-class and low inter-class similarity. The Hartigan-Wong algorithm (Hartigan and Wong 1979, cited in Kassambara 2017) was used. The algorithm minimizes the total intra-cluster variation, defined as the sum of squared Euclidean distances between the height-diameter ratios of the species and the corresponding cluster mean:

TWSS = Σ_{k=1}^{K} W(C_k) = Σ_{k=1}^{K} Σ_{x_i ∈ C_k} (x_i − μ_k)²,  (1)

where TWSS is the total within sum of squares, W represents the within-cluster variation, C_k is the individual cluster (group), x_i represents the height-diameter ratio of a species belonging to the cluster C_k, and μ_k is the mean value of the height-diameter ratios assigned to the cluster C_k. The cluster (Maechler et al. 2019) and factoextra (Kassambara and Mundt 2020) packages, both implemented in R (R Core Team 2020), were used in the analysis. The 116 tree species were classified into three groups: group 1 had 68 species, and groups 2 and 3 had 25 and 23 species, respectively (see Appendix Tables S1, S2, and S3).
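As an illustrative sketch of this grouping step (in Python rather than the R packages used in the study; the species names and ratio values below are hypothetical):

```python
import numpy as np
from sklearn.cluster import KMeans

# One mean height-diameter ratio per species (hypothetical values).
species = ["sp_a", "sp_b", "sp_c", "sp_d", "sp_e"]
hd_ratio = np.array([0.62, 0.95, 1.30, 0.58, 1.22]).reshape(-1, 1)

# R's kmeans() defaults to Hartigan-Wong; scikit-learn uses Lloyd's
# algorithm, which minimizes the same objective, TWSS in Eq. (1).
km = KMeans(n_clusters=3, n_init=25, random_state=1).fit(hd_ratio)
groups = dict(zip(species, km.labels_))
print(groups, km.inertia_)  # inertia_ is the minimized TWSS
```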
Descriptive statistics of the tree variables: diameter (d, cm), total tree height (h, m) and height-diameter ratio (h-d r); computed stand variables: quadratic mean diameter (Dg, cm), basal area per ha (G, m² ha⁻¹), basal area per ha of larger trees (BAL, m² ha⁻¹) and number of trees per ha (N, trees ha⁻¹); and computed diversity indices: dominance, evenness, Simpson and Shannon indices of the data by species-group are shown in Table 1. The species-group data were randomly split into training (85%) and validation (15%) sets. Diameter histograms (pooled data) and scatter plots by species-group are presented in Fig. 1a and b, respectively.
Modelling the height-diameter (h-d) relationships
Two sets of h-d models were developed for each species-group (SG) using data from the tropical rain forest ecosystems of Nigeria: those based on classical methods, i.e., nonlinear least squares (NLS) and nonlinear mixed-effects (NLME), and those based on artificial intelligence (AI), i.e., the deep learning algorithm (DLA).
Models based on classical methods: NLS and NLME
Several nonlinear single-predictor height-diameter functions have been used to describe tree height and diameter relationships in both even-aged and uneven-aged stands. To select the base model for the complex tropical forests, 18 single-predictor h-d models were initially evaluated. The models include: Curtis (1967), Meyer (1940), Chapman-Richards (Richards 1959), Michailoff (1943), Michaelis-Menten (Michaelis and Menten 1913), Korf (Lundqvist 1957), Näslund (1937), Power (Stoffels and van Soest 1953), modified power (Ogana and Gorgoso-Varela 2020), Prodan (Strand 1959), Gompertz (1825), Logistic (Pearl and Reed 1920), Ratkowsky (1990), Schnute (1981), Wykoff (Wykoff et al. 1982), modified Hossfeld IV, Weibull (Yang et al. 1978), and Burkhart (Burkhart and Strub 1974). Nonlinear least squares (NLS) was used to fit the models in R (R Core Team 2020), and the models were evaluated and ranked based on five indices. Preliminary results showed that Meyer had the minimum rank sum (see Appendix Table S4). Thus, the model,

E(h) = 1.3 + b_0 (1 − exp(−b_1 d)),  (2)

was selected and expanded. The Meyer model (Eq. 2) was expanded with the inclusion of stand variables and biodiversity indices. Stand variables (Dg, G, BAL and N) and biodiversity indices (dominance, evenness, Shannon and Simpson) in Table 1 were all evaluated first. However, only the inclusion of the quadratic mean diameter (Dg) and number of trees per ha (N) in a linear combination as a replacement for the asymptotic parameter b_0 improved the models significantly. The generalised model is expressed as Eq. (3):

E(h) = 1.3 + (a_0 + a_1 Dg + a_2 N)(1 − exp(−b_1 d)),  (3)

where E(h) and d represent expected total tree height (m) and diameter at breast height (cm), respectively; Dg is the quadratic mean diameter (cm); N is the number of trees per ha (trees ha⁻¹); and a_0, a_1, a_2, b_1 are model parameters. Equations (2) and (3) were both fitted with NLS and NLME to the individual species-group data. The NLS has only fixed-effects parameters, which explain the trend in tree height common to the overall stand (Ercanli 2020a). Contrary to the NLS, NLME has both fixed- and random-effects parameters. The fixed-effects parameters play a similar role to those of NLS; the random-effects parameters explain the variation in h-d relationships across the plots.
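For illustration, Eq. (3) can be fitted by nonlinear least squares as in the following Python sketch (the study used R's NLS machinery; the data values here are placeholders):

```python
import numpy as np
from scipy.optimize import curve_fit

def gen_meyer(X, a0, a1, a2, b1):
    """Generalized Meyer model, Eq. (3)."""
    d, Dg, N = X
    return 1.3 + (a0 + a1 * Dg + a2 * N) * (1.0 - np.exp(-b1 * d))

# Tree-level d and plot-level Dg, N, paired with measured heights h.
d  = np.array([12.0, 25.0, 40.0, 60.0, 18.0])
Dg = np.array([22.0, 22.0, 30.0, 30.0, 22.0])
N  = np.array([480., 480., 350., 350., 480.])
h  = np.array([10.5, 18.2, 26.0, 31.5, 14.1])

popt, pcov = curve_fit(gen_meyer, (d, Dg, N), h,
                       p0=[20.0, 0.1, 0.0, 0.05], maxfev=10000)
```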
The NLME model is represented in the general form (Pinheiro and Bates 2013) as

h_ij = f(φ_i, V_ij) + ε_ij, i = 1, ..., m; j = 1, ..., n_i, with φ_i = A_i λ + B_i b_i,  (4)

where m represents the number of grouping factors (one grouping factor was used in this study [plot]); n_i represents the number of observations in the ith plot; h_ij is the height of tree j on plot i; V_ij is a covariate vector; f represents the nonlinear function [Eqs. (2) and (3)]; φ_i is an r × 1 vector, r being the number of model parameters; λ is a vector of the fixed parameters: p × 1 (p is the number of fixed parameters); b_i is a vector of the random parameters: q × 1 (q is the number of random parameters) (Corral-Rivas et al. 2019); and A_i (r × p) and B_i (r × q) are the dimensional matrices for the fixed and random effects for plot i (Corral-Rivas et al. 2019). The plot effects are presumed to have a common multivariate normal distribution with zero mean and variance-covariance matrix var(b_i) given as D for all values of i (Mehtätalo et al. 2015). The ε_ij represents random error with zero mean and constant variance var(ε_ij) = σ². A power-type variance function was used to account for heteroscedasticity in the residuals: var(ε_ij) = σ² d_ij^(2δ), where δ is the power parameter to be estimated. The maximum likelihood method, through the 'nlme' function in R (R Core Team 2020), was used to estimate the parameters of the NLME models.
Deep learning algorithm (DLA)
The deep learning algorithm (DLA) is a multi-layer artificial neural network (ANN) with at least three hidden layers and hundreds to thousands of neurons, and it gives a better representation of complex systems such as tropical forest ecosystems (Ercanli 2020a). The DLA requires sophisticated graphical processing units; thus, this study utilised the h2o.deeplearning function of the h2o package (LeDell et al. 2020) implemented in R (R Core Team 2020) to train the models. The h2o.deeplearning function provides multi-layer feedforward neural networks with well-supervised training procedures to predict an output variable from input variable(s). In training the DLA models, diameter at breast height (d, cm), quadratic mean diameter (Dg, cm) and number of trees per ha (N, trees ha⁻¹) were used as input variables, while tree height (h, m) was the output variable. The input variables were the same independent variables used for the classical methods (NLS and NLME). The DLA was trained for each species-group.
Several factors influence the convergence of the DLA, e.g., the number of hidden layers, the number of neurons in the hidden layers, the activation function, the distribution type, epochs, epsilon and rho. The adaptive learning rate algorithm called ADADELTA (Zeiler 2012, cited in Ercanli 2020b) was used to ensure fast convergence of the DLA. ADADELTA provides both momentum training and learning rate annealing. The rho parameter controls the rate of ADADELTA, while epsilon describes the strength of the learning rate during training. Default values of 0.999 and 1 × 10⁻⁸ for rho and epsilon, respectively, were used to train the DLA models. A default value of 1,000 was also used for the epochs; a similar value was used in Ercanli (2020a, 2020b). The Gaussian distribution was selected among the other distributions (e.g., Bernoulli, Huber, Poisson, Multinomial, and Laplace) in the h2o.deeplearning function as the training distribution because it is a continuous distribution. The number of hidden layers evaluated in this study ranged from 3 to 10; we did not consider more than 10 hidden layers because too complex a network makes it difficult to achieve convergence. For each hidden layer, 10 to 100 neurons with an increment of 10 per step were used. Of the three activation functions of the h2o.deeplearning function, the rectifier function was the most suitable for the data set. The activation function describes the nonlinear trends in the tropical data set (Ercanli 2020b). The best DLA models were selected for each species-group.
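For reference, the same training setup can be expressed with h2o's Python API; this is a sketch under assumed data layout, and the file name and column names are hypothetical:

```python
import h2o
from h2o.estimators import H2ODeepLearningEstimator

h2o.init()
frame = h2o.import_file("species_group1.csv")    # columns: d, Dg, N, h
train, valid = frame.split_frame(ratios=[0.85], seed=1)

dla = H2ODeepLearningEstimator(
    hidden=[100] * 6,          # e.g., 100 neurons in 6 hidden layers (SG1)
    activation="rectifier",
    distribution="gaussian",
    epochs=1000,
    adaptive_rate=True,        # ADADELTA
    rho=0.999,
    epsilon=1e-8,
)
dla.train(x=["d", "Dg", "N"], y="h",
          training_frame=train, validation_frame=valid)
print(dla.rmse(valid=True))
```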
Model evaluation and equivalence test
The quality of model predictions was evaluated based on comparisons of the root mean square error (RMSE), mean relative error (MRE), mean absolute percentage error (MAPE), critical error (E_crit) and Bayesian information criterion (BIC). The smaller the RMSE, MRE, MAPE, E_crit and BIC statistics, the better the model.
where RSS is the residual sum of squares; n is the number of observations; p is the number of parameters; h̄ is the average tree height; h_i is the observed tree height; ĥ_i is the predicted height by the model; z is the standard normal deviate (≈ 1.96 at a probability level of α = 0.05); and χ²_crit was obtained for α = 0.05. In addition, relative rank (Poudel and Cao 2013) was used to determine the relative position of each model based on the evaluation statistics. It is expressed as

R_i = 1 + (m − 1)(S_i − S_min)/(S_max − S_min),  (11)

where R_i is the relative rank of model i (i = 1, 2, ..., m); m is the number of models evaluated; S_i is the evaluation statistic value of model i; and S_max and S_min are the maximum and minimum values, respectively, of S_i. The relative rank is a real number with 1 as the best. For each model, the relative ranks were summed across the five statistics (RMSE, MRE, MAPE, E_crit and BIC). Thus, the relative rank sum was used to identify the best model for estimating tree height in complex tropical rain forest ecosystems. The equivalence test of Robinson et al. (2005) was used to further assess height predictions by the classical methods (NLS and NLME) and by the DLA using the validation dataset (15% of the data). In this test, the size of the region of dissimilarity between the observed tree heights and predicted heights is an important factor for deciding on the acceptability of the model/method. The test begins with the null hypothesis (H_0) of a significant difference between the observed and predicted values. Thus, a rejection of the H_0 implies acceptance of the prediction of tree heights by the model.
The equivalence test was performed by regressing the predicted heights (Y, predictions by NLS, NLME and DLA) on the observed heights (X) and estimating the intercept (b_0) and slope (b_1) of this relation (Ercanli 2020b). Confidence intervals (CIs) for b_0 and b_1 were calculated using the two one-sided test (TOST) (Robinson et al. 2005). TOST tests the equality of the slope (b_1) to 1 ± 10% and the equality of the intercept (b_0) to ȳ ± 10% (Ercanli 2020b). We used the nonparametric bootstrap technique described by Robinson et al. (2005) to obtain the CIs for the parameters. The number of bootstrap replicates was set at 1,000, as recommended and recently used by Ercanli (2020b). The equivalence test procedures for observed (X) and predicted (Y) heights were carried out using the "equivalence" package (Robinson 2016) implemented in R (R Core Team 2020).
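Both the ranking of Eq. (11) and the core of this bootstrap check are easy to sketch in Python; the following is a simplified mirror of the logic only (the study used the R equivalence package, which implements the exact procedure), and all names and data values are ours:

```python
import numpy as np

def relative_ranks(stats):
    """Relative rank of Eq. (11): the model with the smallest statistic
    receives rank 1, the largest receives rank m."""
    s = np.asarray(stats, dtype=float)
    return 1.0 + (len(s) - 1) * (s - s.min()) / (s.max() - s.min())

def bootstrap_tost(obs, pred, n_boot=1000, margin=0.10, seed=1):
    """Bootstrap CIs for the intercept/slope of pred ~ obs, checked
    against the equivalence regions ybar +/- 10% and 1 +/- 10%."""
    rng = np.random.default_rng(seed)
    n = len(obs)
    b0s, b1s = np.empty(n_boot), np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                  # resample trees
        b1s[b], b0s[b] = np.polyfit(obs[idx], pred[idx], 1)
    lo0, hi0 = np.percentile(b0s, [2.5, 97.5])
    lo1, hi1 = np.percentile(b1s, [2.5, 97.5])
    ybar = np.mean(obs)
    intercept_ok = lo0 >= ybar * (1 - margin) and hi0 <= ybar * (1 + margin)
    slope_ok = lo1 >= 1 - margin and hi1 <= 1 + margin
    return intercept_ok, slope_ok                    # True = H0 rejected
```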
Aboveground biomass estimations
A useful application of h-d models is the estimation of aboveground biomass (AGB). Different studies have shown that allometric models for estimating AGB perform better when information on tree height is incorporated (Chave et al. 2014; Popkin 2015; Kearsley et al. 2017; Fayolle et al. 2018). Thus, both the observed tree heights and the heights predicted by the DLA and classical methods were used to estimate the AGB of the forests. The generalised pantropical AGB model (Eq. 12) (Chave et al. 2014) was used:
AGB_est = 0.0673 × (ρ d² h)^0.976,  (12)

where AGB_est represents the estimated aboveground biomass (kg); d is diameter at breast height (cm); h is tree height (m); and ρ is wood density (g cm⁻³). The wood density for each species was extracted from the global wood density database (Zanne et al. 2009). For unidentified species, an average of 0.5 g cm⁻³ was used. A similar average was used by Ogana and Ogana (2019) in the same region. Reyes et al. (1992) also used 0.5 g cm⁻³ for the wood density of tropical African species. The global wood density database and the AGB model (Eq. 12) have been implemented in the BIOMASS package (Rejou-Mechain et al. 2017). They were accessed with the "wdData" and "computeAGB" functions of the BIOMASS package in R. The AGB obtained this way is in megagrams (Mg), the conventional unit of AGB (Chave et al. 2014).
The observed AGB was calculated by substituting the wood density and the measured diameters and heights into Eq. (12). The predicted AGB was obtained from the wood density, the measured diameters and the heights predicted by the classical methods (NLS and NLME) and the DLA. Root mean square error (RMSE), critical error (E_crit) and mean relative error (MRE) were used to assess the adequacy of the models for estimating AGB. A plot of relative error (i.e., predicted AGB minus observed AGB, divided by the observed AGB, in %) was also used to illustrate the bias in the predicted AGB.
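As a sketch, the AGB computation of Eq. (12) amounts to the following (the study used the BIOMASS package in R; the data values here are placeholders):

```python
import numpy as np

def agb_kg(d, h, rho):
    """Pantropical AGB model of Eq. (12) (Chave et al. 2014):
    d in cm, h in m, rho in g cm^-3; returns kg."""
    return 0.0673 * (rho * d**2 * h) ** 0.976

d   = np.array([25.0, 40.0])          # measured diameters (cm)
h   = np.array([18.2, 26.0])          # heights predicted by the DLA (m)
rho = np.array([0.50, 0.62])          # wood densities (g cm^-3)
agb_mg = agb_kg(d, h, rho) / 1000.0   # convert kg to Mg
print(agb_mg.sum())
```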
Height-diameter (h-d) models
The estimated parameters of Eqs. (2) and (3) fitted with NLS for the species groups (i.e., SG1, SG2 and SG3) are presented in Tables 2, 3 and 4. Also in the tables are the parameter estimates and variance components of the fitted nonlinear mixed-effects (NLME) models, expressed as Eqs. (13) and (14), and of the best DLA models. For the SG1 data, the parameters of the models fitted by NLS and NLME had low standard errors and were significantly different from zero (p < 0.05), except for Eq. (14), where a_0 was not significant (Table 2). Similarly, for the SG2 data, parameters a_1 and a_2 were not significant in Eqs. (13) and (14) (Table 3). However, all parameters in the models were significant for the SG3 data set.
The results from the evaluation statistics (RMSE, MRE, MAPE, E_crit and BIC) showed that the DLA models outperformed the models fitted by NLS and NLME for the three species-groups (Tables 2, 3 and 4). The DLA models had the smallest statistics and the lowest relative ranks (i.e., 1.00) across the five indices for the species groups. The optimal numbers of hidden layers and neurons for the DLA models were: 100 neurons in six hidden layers for SG1, 100 neurons in nine hidden layers for SG2, and 100 neurons in seven hidden layers for SG3. In these DLA models, the input variables were diameter, quadratic mean diameter and number of trees per ha. Thus, based on the relative rank sum, the order of ranking was: DLA models > NLME models > NLS models.
The graphical relationships between the observed (x-axis) and predicted (y-axis) tree heights by the best three models, compared with the 1:1 line for each species-group, are shown in Fig. 2. As seen in the graph, the DLA models (100 neurons in six hidden layers for SG1, 100 neurons in nine hidden layers for SG2 and 100 neurons in seven hidden layers for SG3) produced a more organised cluster of measured and predicted values along the main diagonal (i.e., the 1:1 line) compared with those of NLS and NLME. Furthermore, the graphs of residuals against predicted tree heights by the models did not show any meaningful heteroscedasticity across the three species groups (Fig. 3).
The results from the equivalence test using the validation data showed that, for all models developed by NLS, NLME and DLA, the null hypothesis (H_0) of dissimilarity for the intercept (b_0) parameters was rejected, with the bootstrap intercept (b_0) lying inside the equivalence region (ȳ ± 10%) (Table 5). In the case of the null hypothesis of dissimilarity for the slope parameters (b_1), it was rejected only for the DLA models (100 neurons in six hidden layers for SG1, 100 neurons in nine hidden layers for SG2 and 100 neurons in seven hidden layers for SG3), for which the bootstrap slope (b_1) lies within the equivalence region (1 ± 10%). The predicted bootstrap (b_1) limits for the NLS and NLME models were not rejected for the three species groups. Since a rejection of the H_0 implies acceptance of the prediction of tree heights, the DLA models were selected for the tropical rain forest ecosystems.
Aboveground biomass estimation
Aboveground biomass (Mg) estimations using the tree heights predicted by the NLS, NLME and DLA models were assessed by the root mean square error (RMSE), the mean relative error (MRE) and the critical error (E_crit) (Table 6). The results show that using the tree heights predicted by the DLA in the AGB model (Eq. 12) yielded the smallest RMSE (0.1931 Mg), MRE (0.0353) and critical error (0.4511 Mg) values. It brought about a more than 30% reduction in these indices relative to NLS and NLME. The graph of relative error (%) also shows that minimal error was introduced into the estimation of AGB using the heights predicted by the DLA compared with those of the NLS and NLME models (Fig. 4). The DLA produced a near-perfect smooth spline regression with little tendency toward overestimation and underestimation of aboveground biomass, whilst those of NLS and NLME were more irregular.
Discussion
This research developed models for predicting tree heights in the complex rain forest ecosystems of Nigeria using classical methods (nonlinear least squares and nonlinear mixed-effects) and a robust AI technique, i.e., a deep learning algorithm (DLA), with a view to improving aboveground biomass estimation. The DLA models produced the smallest evaluation statistics and, as such, were more suitable for predicting tree heights in complex tropical rain forests. A parallel observation was reported by Ercanli (2020a), who applied the DLA technique to predict tree heights of even-aged pure Anatolian Crimean pine in Turkey. The author found the DLA model with 100 neurons in 9 hidden layers to be the best for predicting tree heights compared with nonlinear regression and nonlinear mixed-effects models. Similarly, Ercanli (2020b) observed that a DLA model with 100 neurons in 8 hidden layers produced the best height predictions in even-aged pure Turkish pine. In the case of the complex tropical rain forest ecosystems, the DLA with 100 neurons in six hidden layers was more accurate for predicting tree heights in SG1. Species group 1 contains more than 60 tree species. For SG2 (25 tree species) and SG3 (23 tree species), 100 neurons in nine hidden layers and 100 neurons in seven hidden layers, respectively, produced the best predictions of tree height. The DLA models trained for the tropical rain forests resulted in more than 20% and 50% reductions in the RMSE and BIC values relative to the NLS and NLME models across the species groups. As a rule of thumb, a minimum ΔBIC ≤ 2 is required for two models to be similar (Gorgoso-Varela et al. 2019). In addition, Temesgen et al. (2014) noted that the extension of a model is only necessary if the difference in RMSE is > 5%. Besides the evaluation statistics, only for the DLA models was the null hypothesis (H_0) of dissimilarity for the intercept (b_0) and slope (b_1) parameters rejected. The performance of the DLA models in predicting tree heights could be attributed to the complex network of neurons with different numbers of hidden layers. The DLA models are multi-layered ANNs with at least 3 hidden layers and hundreds to thousands of neurons (Ercanli 2020a). This is the first attempt to apply DLA techniques to model height-diameter relationships in complex tropical rain forests. Although Hamidi et al. (2021) used two ANNs, i.e., multilayer perceptron (MLP) and radial basis function (RBF) networks, to model the height-diameter relationship and other dendrometric variables in the complex Hyrcanian forests of northern Iran, fewer species compositions exist there compared to those of tropical rain forests. Moreover, the MLP and RBF contain fewer networks than those of DLA models. Ercanli (2020a) also reported better performance with DLA models compared with ANNs in pure pine stands. Bayat et al. (2020) used ANNs and an adaptive neuro-fuzzy inference system (ANFIS) to provide better estimation of tree heights in uneven-aged, mixed stands in Iran compared with regression analysis. A similar observation was reported by Vieira et al. (2018) for eucalyptus species. Özçelik et al. (2013) also showed that the use of ANNs improved height prediction of Crimean juniper.
Table 3. Species-group 2: Information on parameters of models, root mean square error (RMSE), mean relative error (MRE), mean absolute percentage error (MAPE), critical error (E_crit), Bayesian information criterion (BIC) and relative rank sum (∑R). SE = standard error; values in parentheses are relative ranks.
The ANN model resulted in a 20% reduction in RMSE compared to 13% by NLME. In addition, they noted that using ANNs is more advantageous than NLME because no height measurements are required for their application. In contrast, prior information is needed for mixed-effects model calibration. Saudi et al. (2016) also asserted that random parameters in NLME may not be applicable for most prediction purposes unless calibration data are readily available. Data availability remains a limiting factor in complex tropical rain forests.
One important limitation of artificial intelligence is model transferability to other users (Hamidi et al. 2021). To ensure efficient transferability, the R syntax files of the DLA models are provided for the three species groups in downloadable links via Google Drive (SG1: https://drive.google.com/file/d/1faIwy3ndBBCm39GNpxxKG2wXY_UqiT0E/view?usp=sharing; SG2: https://drive.google.com/file/d/13p9yW36_73M6U0PY42cxqWwFKNd5MwOU/view?usp=sharing; SG3: https://drive.google.com/file/d/1-bgIOsP8o25_HL-d6m2GpxNZ5tNMKwh5/view?usp=sharing). A step-by-step guide for loading the R syntax files of the DLA models in R for tree height prediction can be found in the appendix of Ercanli (2020b). This ensures accessibility so that forest practitioners can use the predicted heights to estimate other dendrometric variables like tree biomass and volume. Estimation of the aboveground biomass of forest ecosystems is relevant, especially in the context of climate change. Accurate tree height predictions are required to improve AGB estimation (Kearsley et al. 2017). Using the tree heights predicted by the DLA in the AGB equation resulted in a 30% reduction in the root mean square error, mean relative error and critical error. This implies that the amount of error introduced into the estimation of aboveground biomass is small. In contrast, the errors produced by NLS and NLME in predicting tree heights of complex tropical rain forests are carried into the AGB estimations. Because tree diameters and wood density are fixed variables, i.e., the same for the DLA, NLS and NLME, tree heights are the only source of variability. Several studies (Chave et al. 2014; Popkin 2015; Kearsley et al. 2017; Fayolle et al. 2018) have supported the use of local height-diameter models in generalised pan-tropical AGB models to minimise error in biomass estimations. Kearsley et al. (2017) quantified the size of the error from using heights predicted by pan-tropical height-diameter models for aboveground biomass estimation in the central Congo Basin. They reported a significant overestimation of tree heights, which resulted in a significant overestimation of AGB.
Fig. 2. Relationship between observed (x-axis) and predicted (y-axis) tree height by the three best models for each species group (SG1, SG2 and SG3).
Besides the estimation of aboveground biomass, tree height predictions by DLA models could be applied to quantify the volumes of important timber species of the region. Volume equations developed for these species in the tropical rain forest by Akindele and LeMay (2006) require information on tree height as input variables. The predicted height by DLA models will improve the accuracy of estimated tree volumes, which could be scaled up to stand level.
Conclusions
The complexity of tropical rain forest ecosystems requires innovative techniques to improve the prediction of important dendrometric variables such as tree heights for aboveground biomass estimation. This study has shown the relevance of artificial intelligence (e.g., deep learning algorithm [DLA]) in addressing the problem of modelling tree height in complex tropical rain forest ecosystems. The DLA models outperformed other classical modelling techniques (nonlinear least square and nonlinear mixedeffects) in predicting tree heights in these ecosystems, consequently, minimizing the amount of error in aboveground biomass estimation. The input variables for the DLA models included diameter at breast height quadratic mean diameter and number of trees per ha. To facilitate the application of the DLA models by other users, a link is provided where the models can be downloaded and reused for tree height prediction. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
Fig. 4. Relative error (%) in the predicted AGB from the five h-d models; the background and black lines represent data-point density and a spline regression, respectively. | 2021-09-27T20:55:35.372Z | 2021-07-24T00:00:00.000 | {
"year": 2021,
"sha1": "edd6c45f7465019e4b7e60389daeabfba8d635f7",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11676-021-01373-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "3b67d0f0a1f5dda77e23bee3bd9eb8d211c6e904",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
235480255 | pes2o/s2orc | v3-fos-license | Measuring the Cultural Competence of Latinx Domestic Violence Service Organizations
Domestic violence (DV) represents a significant public health concern in the United States, including among Latinx populations. Despite the negative consequences associated with experiencing DV, research has shown that Latinx DV survivors may be less likely than others to utilize important services. One potential barrier is cultural competence (CC) in the provision of services specific to Latinx survivors among DV organizations. Thus, a beneficial addition to the field of DV service provision for such survivors is a better understanding and measurement of CC for this unique population. The exploratory, cross-sectional study herein presents the development and evaluation of a novel instrument for measuring the CC of DV organizations. Exploratory factor analysis was used on a purposive sample of 76 organizations in North Carolina who completed a comprehensive survey on their characteristics, practices, norms, and values. Psychometric results found best support for a 29-item, 4-factor bifactor model with both a general CC factor as well as three sub-factors. The general scale was named "General Cultural Competence," while the three sub-scales were named "Organizational Values and Procedures," "Latinx Knowledge and Inclusion," and "Latinx DV Knowledge." The final measure also demonstrated convergent validity with key organizational characteristics. Overall, higher CC scores were associated with organizations having more DV services in Spanish, a higher percentage of staff attending CC training, a higher percentage of staff attending Latinx service provision training, a medium or greater presence in the Latinx community, and a moderate or stronger relationship with the Latinx community. The development of this measure is particularly useful in addressing knowledge gaps regarding the measurement of CC for Latinx DV services. Implications have importance for both the measurement of organizational CC and the scope of the measure's associations with organizational, provider, and client outcomes.
Introduction
The phenomenon of physical violence, psychological aggression, sexual violence, and stalking perpetrated by an intimate partner represents a significant public health concern in the United States (U.S.) (Smith et al., 2018). These acts are often collectively referred to as domestic violence (DV) among practitioners and service providers in the field (Serrata et al., 2020). Findings from a recent national survey estimate that over one in three women and about one in three men in the United States may experience lifetime DV perpetrated against them by a current or former intimate partner (Smith et al., 2018). These numbers are even more worrisome given that DV is associated with numerous deleterious short- and long-term consequences. In addition to immediate needs related to safety, research has found that DV victimization can lead to physical and mental health problems (Bacchus et al., 2018; Campbell, 2002; Devries et al., 2013; Lagdon et al., 2014) and economic/housing instability (Adams et al., 2012; Pavao et al., 2007), among other concerns. Notably, compared to their male counterparts, female survivors tend to suffer more serious consequences and are more likely to seek DV-related services (Ansara & Hindin, 2010; D'Inverno et al., 2019).
Organizations focused on supporting DV survivors provide an array of services including crisis services, legal and medical advocacy, individual and group counseling, shelter, and others (Macy et al., 2009, 2013, 2018). Historically, DV services were developed and provided using a culturally neutral service delivery approach (Bent-Goodley, 2001, 2005; Lehrner & Allen, 2009). However, researchers and practitioners have been increasingly vocal about the importance of integrating cultural competence (CC) in DV service provision, given the significant role of race, ethnicity, and culture in understanding and addressing DV (Bent-Goodley, 2005; White et al., 2019). Although organizations across the U.S. are providing DV services to survivors from a variety of different cultural backgrounds, there is limited understanding of how culture may influence service provision and whether such organizations demonstrate CC.
Domestic Violence and Latinx Survivors
One prominent population within the United States that constitutes a unique cultural group is that of people with family roots in Latin/Hispanic America who primarily speak the Spanish language. Varyingly referred to as Hispanic, Latino, Latina, or other names, and collectively referred to herein as Latinx people, this population is an important and growing group of U.S. residents that requires targeted research attention vis-à-vis DV victimization.
A recent systematic review found that DV is likely common among such people in the U.S., especially women, with DV prevalence rates among Latinx women ranging from 4% to 80% (Gonzalez et al., 2020). Despite the wide prevalence range, reflective of the methodological heterogeneity of studies, these findings overall suggest such women face similar or higher rates of DV compared to their White counterparts (Smith et al., 2017). Compounding these DV experiences, research has found that Latinx survivors often experience significant levels of polyvictimization and revictimization (Cuevas et al., 2010, 2012). For example, a national study examining interpersonal victimization among Latinx women found that among those who had experienced one form of victimization, approximately two-thirds reported experiencing more than one incident of interpersonal violence (Cuevas et al., 2012). Moreover, research has also found that the effects of DV may be unique among the Latinx population. Emerging research has found that Latinx women may be disproportionately impacted by physical and mental health outcomes resulting from DV, including persistent health problems, pain, difficulty sleeping, perceived poor health, depression, posttraumatic stress disorder, and anxiety (Bonomi et al., 2009; Cuevas et al., 2010; DiCorcia et al., 2016; Kelly, 2010; Stockman et al., 2015). Also, Latinx women have been found to be at a higher risk of intimate partner homicide compared to White women (Sabina & Swatt, 2015). Altogether, there is good evidence to believe that Latinx women in the United States constitute a particularly vulnerable population affected by DV.
Despite this increased vulnerability, findings have shown that Latinx survivors may be less likely than others to utilize important DV services (Ahmed & McCaw, 2010; Satyen et al., 2018). Among Latinx survivors, those who only speak Spanish and those with no or limited documentation report lower levels of formal help-seeking and service use (Ahmed & McCaw, 2010; Zadnik et al., 2014). Latinx survivors' underutilization of services has been theoretically and empirically connected to a multitude of help-seeking barriers (O'Neal & Beckman, 2017; Rizo & Macy, 2011). Although some of these barriers are common across many survivor groups, others are likely either unique or more pronounced for survivors from racial/ethnic groups that have been marginalized (Rizo & Macy, 2011; Robinson et al., 2020). Research suggests that such survivors, broadly, may experience culturally based barriers to DV service receipt related to language, social isolation, and gender norms (O'Neal & Beckman, 2017; Parson et al., 2016; Postmus et al., 2014; Reina et al., 2014; Rizo & Macy, 2011). Such DV survivors may also face disproportionate socioeconomic barriers related to educational attainment, poverty, and distribution of resources (O'Neal & Beckman, 2017; Reina & Lohman, 2015; Vidales, 2010). Latinx survivors, specifically, may also experience barriers related to anti-immigrant and anti-Latinx policies, beliefs, and practices, such as fear of deportation and discriminatory treatment (O'Neal & Beckman, 2017; Parson et al., 2016; Postmus et al., 2014; Reina & Lohman, 2015; Rizo & Macy, 2011).
Overall, the lack of culturally competent services and negative prior help-seeking experiences are identified as barriers to Latinx survivors' DV-related help-seeking (Flicker et al., 2011; Rizo & Macy, 2011). In particular, research has emphasized the importance of culture in Latinx survivors' DV experiences as well as their experiences seeking and receiving services (O'Neal & Beckman, 2017; Postmus et al., 2014; Serrata et al., 2020).
Cultural Competence and DV Services for Latinx Survivors
In response to growing research on the unique experiences and needs of Latinx DV survivors, both researchers and practitioners are calling for more culturally competent services to increase access, help-seeking, and service engagement (Alvarez & Fedock, 2018; Alvarez et al., 2016; Robinson et al., 2020). DV organizations and service providers are being urged to develop a nuanced understanding of Latinx culture and identity to better understand the needs of these survivors (Serrata et al., 2020; Silva-Martínez & Murty, 2011). Recommendations include accounting for cultural barriers and incorporating cultural factors into services and service delivery (Parra-Cardona et al., 2013; Reina et al., 2014; Serrata et al., 2020). Culturally competent and affirming practices highlighted in the literature include hiring Latinx and Spanish-speaking staff, encouraging English-speaking staff to learn key phrases in Spanish, ensuring resources and materials are available in Spanish, engaging in culturally specific outreach to increase awareness, and promoting cultural traditions, among others (O'Neal & Beckman, 2017; Parson et al., 2016; Serrata et al., 2020). Such practices have been found to enhance Latinx survivors' well-being over and above trauma-informed practices (Serrata et al., 2020).
Given that culturally competent practice requires organizational support and infrastructure (Balcazar et al., 2009; Sharifi et al., 2019), it is necessary to understand the CC of organizations providing DV services to Latinx survivors. Organizational CC is generally concerned with an organization's values, policies and procedures, planning and evaluation, communication, human resources, community and client engagement, services, and organizational resources (Harper et al., 2006; Zeitlin Schudrich, 2014). Limited research has examined the CC of DV organizations and practices, particularly as this relates to serving Latinx survivors (Lucero et al., 2020). One challenge to the advancement of such research is the lack of tailored instruments for measuring the CC of organizations providing DV services to Latinx survivors. Despite the existence of general organizational CC instruments, these instruments have undergone relatively little psychometric testing (Guerrero & Andrews, 2011), and a review of the literature did not identify any that had been tested with DV organizations. Further, growing research highlights the importance of tailoring such instruments to specific client groups given that organizational CC can vary by culture, race, and ethnicity (Siegel et al., 2011).
An instrument specifically developed to assess the CC of organizations providing DV services to Latinx survivors could benefit the field in multiple ways. Organizations providing DV services to Latinx survivors could use such an instrument to monitor and improve the CC of their organization, service delivery approaches, and specific services. Researchers could also use the instrument to examine the CC of organizations providing such services nationally, as well as the malleable factors associated with enhancing organizational CC. A better understanding of the factors associated with organizational CC among organizations serving Latinx survivors could inform the development of interventions aimed at increasing the cultural appropriateness of such organizations.
Current Study
To advance research and practice focused on understanding and enhancing the CC of DV service provision for Latinx survivors, the current study presents the development and preliminary evaluation of an instrument for measuring the CC of organizations providing DV services for Latinx survivors. Thus, this exploratory study sought to address knowledge gaps regarding the measurement of CC among such organizations. A well-validated measure is critical to not only understanding the CC of organizations providing DV services, but also to enhancing the CC of such organizations (Zeitlin Schudrich, 2014). Therefore, the overall goal of this exploratory, cross-sectional study was to develop a psychometrically valid measure of organizational CC using exploratory factor analysis (EFA) to facilitate the measurement of DV service provision for Latinx survivors among organizations in North Carolina in the United States. The study featured the following aims: (a) to evaluate the factorial validity of a scale for use in Latinx DV service provision, and (b) to evaluate the construct validity of the scale relative to organizational characteristics. Thus, this study sought to both establish the measure's validity and then understand how it might differentiate among organizations.
Sample
The sample comprised organizations that identified as either (a) a DV-specific organization or (b) a Latinx organization that served clients presenting with DV-related concerns. All organizations were located in North Carolina, the 9th most populous state in the United States, with almost 1,000,000 Latinx residents. These organizations were participants in a statewide study of DV service provision for Latinx survivors. The overall study aimed to better understand DV service provision for Latinx survivors, including (a) service gaps, (b) program needs, and (c) challenges experienced in providing culturally competent services, in order to inform trainings and technical assistance, policy, and funding. The study was conducted by a research team at the University of North Carolina at Chapel Hill (UNC-Chapel Hill) in collaboration primarily with the North Carolina Coalition Against Domestic Violence (NCCADV), a key state-level DV organization in North Carolina. All research procedures were approved by UNC-Chapel Hill.
The study's sampling frame was constructed in two phases. First, the UNC-Chapel Hill team worked with the NCCADV to compile a full list of DV organizations within the state. Second, to identify organizations that primarily provide culturally specific services to Latinx populations, the UNC-Chapel Hill team searched online and emailed individual organizations to confirm their service provision information. In total, 99 organizations were contacted. Participating organizations were eligible for one of three $100 gift cards.
Measures
Data for analysis of the organizations' characteristics and practices were collected via a purposive, study-specific survey. The survey featured approximately 260 open-and closed-ended questions in total over six broad domains related to (a) community characteristics, (b) organization characteristics, (c) service delivery, (d) organizational CC, (e) barriers to service, and (f) respondent characteristics. Development of the survey was conducted according to best practices in measurement development (DeVellis, 2012). Specific steps included (a) conceptualization of key constructs, (b) development of an initial item pool, (c) determination of formatting, (d) initial expert review, (e) pilot testing, and (f) optimization and finalization. Initial development of items was determined by the research team's expertise and past work related to DV service provision, narrative reviews of literature and existing measures, and consultation with the NCCADV. Experts involved in review of the survey included staff at the NCCADV as well as other selected North Carolina DV service providers.
There were 32 questions in the survey related specifically to organizational CC of DV service provision for Latinx survivors. These questions were both adapted from external sources and developed internally by the research team. External sources that inspired the items included (a) the NCCADV's internal LGBTQ DV assessment instrument, (b) the Cultural Competence Self-Assessment Questionnaire (Mason, 1995), (c) the Cultural Competence Assessment Instrument (Balcazar et al., 2009), and (d) the Cultural Competence Assessment Scale (Siegel et al., 2011). The final 32 items covered a broad array of topics related to organization (a) characteristics, (b) practices, (c) norms, and (d) values. Questions were primarily Latinx-specific (n = 30, 93.8%; e.g., "Our organization prepares new staff to work with Latinx DV survivors"), with some additional generalized items (n = 2, 6.3%; e.g., "Our organization staff routinely discuss barriers to working across cultures"). Within the larger survey, these CC questions were demarcated within a box and preceded by a prompt asking respondents to "Please answer the following by marking the answer box that best reflects your level of agreement with each statement." Response options comprised "strongly disagree" (1), "disagree" (2), "neither agree/disagree" (3), "agree" (4), and "strongly agree" (5). As presented in the survey, the 32 questions had a collective Flesch reading ease of 32.2, indicative of "difficult" readability (Flesch, 1948), a level appropriate to college graduates such as those working in the sample of organizations.
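For reference, the Flesch reading ease statistic cited here is a simple function of average sentence length and average syllables per word (Flesch, 1948); a minimal sketch, assuming word, sentence, and syllable counts are already available:

```python
# Flesch reading ease from total word, sentence, and syllable counts
# (Flesch, 1948); 32.2 falls in the 30.0-50.0 "difficult" band noted above.
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
```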
One respondent, typically the organization's Executive Director, answered the survey on behalf of their entire organization. Respondents were offered the option to complete the survey electronically via Qualtrics (Qualtrics, Provo, UT), by paper form delivered and returned via mail, or by telephone with the assistance of a trained research assistant. The entire survey took approximately 60-75 minutes to complete. Data collection occurred from 2015 to 2016.
Analysis
EFA was chosen as the primary analytic approach: a psychometric data reduction method that explores variability among correlated observed items (i.e., the survey questions) in order to specify a parsimonious underlying latent variable. The analytic plan included five sequential phases. All non-EFA analyses were conducted using Stata 16.1 (StataCorp, College Station, TX) and all EFA-specific analyses were conducted in Mplus 7.3 (Muthén & Muthén, Los Angeles, CA). A statistical significance level of p < .05 (two-sided) was used throughout.
First, select organization characteristics were summarized using appropriate univariate statistics (e.g., frequency [n], proportion [%], mean [M], standard deviation [SD]) to describe (a) the nature of the sample and (b) targets for subsequent construct validity analyses. There were 15 total characteristics, with three characteristics for each post hoc determined domain of (a) service delivery and location, (b) staff numbers and characteristics, (c) staff training, (d) client profile, and (e) Latinx outreach.
Second, preliminary diagnostic tests were conducted as (a) omnibus tests of all 32 CC items jointly and (b) individual tests of each CC item. The primary goal of such tests was to reduce, if possible, the starting item pool to a more parsimonious set. A secondary goal was to better understand item characteristics and the hypothesized potential latent structure of the items. Omnibus tests focused primarily on analysis of communalities (h²), the total amount of item variance explained by the hypothesized CC latent variable. An h² ≥ 0.70 criterion was set for inclusion in further analyses because research has consistently shown that high communalities are vital to acceptable fit and factor recovery when conducting EFA with small samples such as the one herein (de Winter et al., 2009; Mundfrom et al., 2005; Preacher & MacCallum, 2002). An additional omnibus check was Bartlett's test of sphericity, with a statistically significant χ² value sought. Kaiser-Meyer-Olkin (KMO) tests of sampling adequacy were specified both at an omnibus level and for each item, with KMO ≥ 0.80 considered "meritorious" and desirable. Individual items' observation-level missingness was also calculated.
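For concreteness, the following minimal Python sketch shows how these omnibus diagnostics could be reproduced with the factor_analyzer package. This is only an illustrative approximation: the study itself used Stata 16.1 and Mplus 7.3, and factor_analyzer operates on Pearson rather than polychoric correlations. The DataFrame `items`, holding one column per CC item, is hypothetical.

```python
# Sketch of the omnibus item diagnostics (communalities, Bartlett's test, KMO).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

def omnibus_diagnostics(items: pd.DataFrame, h2_min: float = 0.70) -> dict:
    complete = items.dropna()                      # listwise deletion for the tests

    # Bartlett's test of sphericity: a significant chi-square suggests the
    # correlation matrix differs from identity, i.e., items share variance.
    chi2, p = calculate_bartlett_sphericity(complete)

    # Kaiser-Meyer-Olkin sampling adequacy, per item and overall
    # (KMO >= 0.80 treated as "meritorious" in the text).
    kmo_items, kmo_overall = calculate_kmo(complete)

    # Communalities from a single-factor extraction; items with h2 < 0.70
    # are flagged for removal, following the criterion described above.
    fa = FactorAnalyzer(n_factors=1, rotation=None, method='principal')
    fa.fit(complete)
    h2 = pd.Series(fa.get_communalities(), index=items.columns)

    return {'bartlett_chi2': chi2, 'bartlett_p': p,
            'kmo_overall': kmo_overall,
            'kmo_items': pd.Series(kmo_items, index=items.columns),
            'low_h2_items': h2[h2 < h2_min].index.tolist()}
```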
Third, factorial validity was tested via EFA on the total sample of 76 observations using a strategy that compared competing solutions varying in (a) dimensionality and (b) factor number, essentially comprising sensitivity analyses of the factorial validity. Given the desire to explore a range of model solutions, no pre-EFA tests (e.g., Horn's parallel analysis) were conducted to determine an exact number of factors to be extracted. The approaches included (a) unidimensional, (b) multidimensional, and (c) bifactor models to provide a comprehensive exploration of the potential underlying latent structure of the hypothesized CC variable. These models can be visualized conceptually in Figure 1. The bifactor models, in particular, represented a novel approach within DV measurement. These models, which posit a general latent factor alongside distinct subfactors, have heretofore been underused in violence psychometric research but have become a powerful choice for EFA in other fields (Bryan & Harris, 2019; Gracia et al., 2020; Mancini et al., 2019). Overall, the EFA analyses prioritized keeping the number of factors low due to (a) substantive concerns regarding applicability in real-world settings of DV service provision and (b) methodological concerns regarding model parsimony with small samples.
Each approach used principal axis factoring with an oblique geomin rotation using Mplus' weighted least squares estimator, as appropriate for the ordinal nature of the items. Within each approach, models were assessed for (a) overall model fit and (b) individual item appropriateness using a priori specified criteria. Model fit was compared using a set of four estimates comprising (a) the root mean square error of approximation (RMSEA; point estimate and 90% confidence interval [CI] ≤ 0.08 = adequate, ≤ 0.06 = good), (b) the comparative fit index (CFI; ≥ 0.90 = adequate, ≥ 0.95 = good), (c) the Tucker-Lewis index (TLI; ≥ 0.90 = adequate, ≥ 0.95 = good), and (d) the standardized root mean square residual (SRMR; ≤ 0.08 = acceptable, ≤ 0.06 = good). All indices and criteria were chosen based on a review of expert recommendations (Browne & Cudeck, 1993; Hu & Bentler, 1999; West et al., 2012). Each model's χ² statistic was reported for intermodel comparison but was not used as a criterion for final model selection. Individual items were assessed based on their factor loadings (λ), with a rule that an item must feature λ ≥ 0.50 on at least one factor. Items not meeting this criterion were deleted iteratively, starting with the smallest maximum λ.
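The iterative item-screening rule (drop the item with the smallest maximum loading until every item loads at least 0.50 on some factor) could be sketched as follows. Mplus' weighted least squares estimator and geomin rotation are not available in factor_analyzer, so minres extraction with an oblique oblimin rotation is used here as a rough stand-in; the `items` DataFrame is again hypothetical.

```python
# Rough stand-in for the EFA item-screening loop described above.
import pandas as pd
from factor_analyzer import FactorAnalyzer

def screen_items(items: pd.DataFrame, n_factors: int = 4,
                 min_loading: float = 0.50) -> pd.DataFrame:
    kept = items.copy()
    while True:
        fa = FactorAnalyzer(n_factors=n_factors, rotation='oblimin',
                            method='minres')
        fa.fit(kept.dropna())
        loadings = pd.DataFrame(fa.loadings_, index=kept.columns)
        max_abs = loadings.abs().max(axis=1)
        if max_abs.min() >= min_loading:
            return loadings          # every item loads >= 0.50 on some factor
        # Delete items iteratively, starting with the smallest maximum loading.
        kept = kept.drop(columns=[max_abs.idxmin()])
```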
Fourth, following selection of a final model solution, total scores were created and described. Aggregate scores were created by calculating the unweighted mean of all items for each factor. Internal consistency reliability estimates in the form of Cronbach's α were calculated, with a target reliability level of α ≥ 0.70 (equivalent to "acceptable" or greater) specified based on recommendations (Nunnally, 1978). Next, total mean interitem correlations (r) were calculated, with estimates sought that were (a) positive, (b) approximately medium (r ≥ 0.30), and (c) significant. Identical criteria were then used to evaluate interscale Spearman's rank-order correlations among all scales. Finally, Flesch reading ease scores were calculated to determine whether the final scales corresponded to levels considered approachable for individuals with either "college" (50.0-30.0) or "college graduate" (30.0-10.0) educational levels, as appropriate for the sample (Flesch, 1948).
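A minimal sketch of these scale descriptives (unweighted mean scores, Cronbach's α, and the mean interitem correlation), assuming a hypothetical DataFrame holding the items belonging to one scale:

```python
# Scale descriptives: mean score, Cronbach's alpha, mean interitem r.
import numpy as np
import pandas as pd

def scale_summary(items: pd.DataFrame) -> dict:
    X = items.dropna()
    k = X.shape[1]

    # Cronbach's alpha from item variances and total-score variance.
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_var / total_var)

    # Mean interitem correlation: average of the off-diagonal (upper-triangle)
    # entries of the item correlation matrix.
    r = X.corr().to_numpy()
    mean_r = r[np.triu_indices(k, k=1)].mean()

    return {'score_mean': X.mean(axis=1).mean(),
            'alpha': alpha, 'mean_interitem_r': mean_r}
```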
Fifth, construct validity was established using (a) Spearman's rank-order correlations for continuous organization characteristics and (b) point-biserial correlations for categorical characteristics. Characteristics to be included were a priori determined to be all 15 used to summarize the sample of organizations (see above). Convergent validity would be determined with (a) consistently positive, (b) approximately medium (r ≥ 0.30), and (c) significant correlations across factors. Divergent validity would be determined by (a) low (r < 0.30) and (b) nonsignificant correlations.
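These construct validity checks could be sketched as below, using SciPy's spearmanr and pointbiserialr. The inputs (a Series of scale scores and a DataFrame of organization characteristics) are hypothetical, and binary characteristics are detected simply by counting unique values.

```python
# Convergent/divergent validity correlations for one scale.
import pandas as pd
from scipy.stats import pointbiserialr, spearmanr

def validity_correlations(scores: pd.Series,
                          characteristics: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for name, values in characteristics.items():
        paired = pd.concat([scores, values], axis=1).dropna()
        if values.dropna().nunique() == 2:        # binary -> point-biserial
            r, p = pointbiserialr(paired.iloc[:, 1], paired.iloc[:, 0])
        else:                                     # continuous -> Spearman
            r, p = spearmanr(paired.iloc[:, 0], paired.iloc[:, 1])
        # Convergent support: positive, roughly medium (r >= 0.30), p < .05.
        rows.append({'characteristic': name, 'r': r, 'p': p,
                     'convergent': (r >= 0.30) and (p < 0.05)})
    return pd.DataFrame(rows)
```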
Organization Characteristics
Of the 99 organizations contacted, a total of 82 participated in the survey in some form. Among those, two exclusion criteria were applied to remove participants that either (a) reported not serving clients with DV/sexual assault (SA) issues in the previous year (n = 3) or (b) did not answer any of the CC items (n = 3). The final analytic sample was 76, a final response rate of 76.8% of the 99 organizations recruited.
Respondents (Table 1) indicated that two-thirds of the organizations were dual DV/SA organizations (n = 50; 66%), with the others being standalone DV organizations (n = 14; 18%) or culturally specific Latinx organizations (n = 12; 16%). Organizations were small, with a mean of 10.0 full-time staff (SD = 9.8) and a mean of 6.4 part-time staff (SD = 7.1). Over two-thirds (n = 51; 67%) had at least one Spanish-speaking Latinx staff member. Although the majority of staff (M = 79%; SD = 33.6) had attended DV training of some type, only 39% (SD = 32.2) had attended any Latinx service provision training. The mean number of clients served in the previous year (i.e., 2014) was 850.5 (SD = 1,333.6), an indication, alongside the relatively high proportion of multicounty service (n = 31; 41%), that the organizations had a generally wide scope of operation. The proportion of Latinx clients was unsurprisingly high given North Carolina's burgeoning population of Latinx residents, with on average 25% (SD = 28.9) and 24% (SD = 29.9) of all clients being Latinx or primarily Spanish speakers, respectively. Half of the organizations reported a medium-to-high presence (n = 37; 50%) and approximately two-thirds reported a moderate-to-strong relationship (n = 47; 65%) with their Latinx community.
Factorial Validity
Item diagnostics. The omnibus test of the 32-item set revealed that two items should be dropped due to low communalities. The first item related to organizations' use of a "written cultural competence plan" for serving Latinx DV survivors (h² = 0.60), while the second assessed whether organizations' Boards included "representative(s) from the Latinx community" (h² = 0.60). After removing these two items, the remaining set of k = 30 demonstrated good communalities (mean h² = 0.83). Additionally, Bartlett's test rejected the null hypothesis that the correlation matrix is equal to an identity matrix, indicating that the observed items were likely indicators of an underlying latent construct (p < .001). The overall 30-item KMO value was acceptable at 0.87, and individual item KMO values ranged from 0.75 to 0.95, with only three being less than 0.80. Of the 30 items, fourteen (46.7%) had no missing values, eight (26.7%) had one, and four (13.3%) each had two or three missing values. No missing data were imputed in subsequent analyses.
Measure Summary
The final chosen model was the 4-factor bifactor model (Table 3). Although this solution featured slightly worse fit compared with the 5-factor bifactor model, it was chosen due to parsimony and face validity of the resultant three subscales. This model's suboptimal RMSEA values were not seen as a major limitation given the exploratory nature of the work. The study team named the general scale for this solution "General Cultural Competence" (GCC), while the three subscales were named "Organizational Values and Procedures" (OVP), "Latinx Knowledge and Inclusion" (LKI), and "Latinx DV Knowledge" (LDK).
The GCC general scale had a total mean score of 3.52 (SD = 0.68), with an internal consistency α = 0.96 and a mean interitem correlation of r = 0.44. The GCC's individual items are presented in Table 3; per the table's notes, items were adapted from the Cultural Competence Self-Assessment Questionnaire (Mason, 1995), the Cultural Competence Assessment Instrument (Balcazar et al., 2009), and the Cultural Competence Assessment Scale (Siegel et al., 2011).
Construct Validity
The general scale and subscales all demonstrated construct validity vis-à-vis their associations with organizations' characteristics. In total, nine characteristics had ≥ 1 positive and significant correlation with ≥ 1 scale, totaling 24 such correlations out of 60 possible (40.0%; 0.23 ≤ r ≤ 0.47). The GCC scale was significantly associated with seven of the 15 characteristics, while the three OVP, LKI, and LDK subscales had five, eight, and four significant correlations, respectively (not shown). Table 4 organizes the 11 characteristics with the most consistent (≥ 3 of 4 scales) relationships into post hoc determined convergent and divergent domains. Overall, higher CC scores were associated with organizations having (a) more DV services in Spanish, (b) a higher percentage of staff attending CC training, (c) a higher percentage of staff attending Latinx service provision training, (d) a medium or greater presence in the Latinx community, and (e) a moderate or stronger relationship with the Latinx community. Six characteristics were not significantly associated with any of the four scales (range: −0.14 to 0.20). Overall, higher CC was not associated with (a) serving more than one county, (b) serving only rural locations, (c) having more full-time staff, (d) having more part-time staff, (e) having a higher percentage of staff attend general DV training, or (f) having more total clients.
Discussion
This exploratory, cross-sectional study used EFA to develop a psychometrically valid measure of CC for Latinx DV service provision using data on 76 organizations in North Carolina in the United States. Taking inspiration from psychometric research on other measures that has brought attention to the utility of comprehensive testing of multiple competing factorial structures (Ebesutani et al., 2012;Mancini et al., 2019;Reise et al., 2010), the analytic approach compared unidimensional, multidimensional, and bifactor EFA approaches across seven individual models. Results demonstrated substantive and methodological preference for a 29-item, 4-factor bifactor EFA model with both a general CC factor/scale as well as three subfactors/scales. In addition to addressing a knowledge gap regarding the measurement of CC for Latinx DV service provision, the current study contributes to the limited psychometric testing of instruments for measuring organizational CC (Zeitlin Schudrich, 2014). Implications from the findings of this work have importance for both (a) the measurement of organizational CC and (b) the scope of the measure's associations with organizational, provider, and client outcomes.
Measurement Structure
At a broad level, this study demonstrated that it is possible to validly measure CC among DV service providers serving Latinx survivors, seemingly the first examination of its kind into this important consideration for DV service delivery. What remains inconclusive, however, is exactly how that CC should be measured, given findings pointing to a bifactor solution with two potential overarching measurement structures. This uncertainty could be ascribed to the study's CC measure and items or, potentially, to deeper uncertainty regarding the exact nature of the CC latent construct itself. These dual possibilities should be viewed as a strength of the current examination, congruent with the exploratory nature of the work herein, which a priori outlined multiple approaches as a sensitivity analysis.
Each measurement approach/structure has appeal and drawbacks. A general appeal of a unidimensional CC measure is the simplicity of scoring. Also, as seen in Table 3, there are potentially meaningful questions included in the holistic CC measure that are not in the subscales. Some extant research has found support for a unidimensional conceptualization and measurement of organizational CC. For example, Zeitlin Schudrich (2014) examined the psychometric properties of an organizational CC measure using confirmatory factor analysis among child welfare agencies/providers, with results pointing to a unidimensional (i.e., 1-factor) measurement structure that included items similar to those in the measure featured herein. Specifically, Zeitlin Schudrich's (2014) final CC measure contained six items: (a) recruitment, hiring, and retention practices; (b) representativeness of committees and councils; (c) presence of CC in monitoring and evaluation; (d) translation and interpretation; (e) appropriateness of materials; and (f) appropriateness of food. Although these items are largely congruent with the items in the measure herein, and the bifactor solution suggests a possible 1-factor measure of organizational CC, drawbacks to a unidimensional approach should be considered, potentially including the lack of face validity of a single CC latent factor and the loss of nuance from parsing out intra-CC factors.
Also, the bifactor solution herein suggests a second and differing multidimensional approach with multiple correlated domains within a broader CC construct. This second approach is also supported by extant research. For example, a study by Siegel et al. (2011) describes the development and evaluation of a CC scale for use in public mental health settings that included a 3-factor structure. The three factors included (a) administrative elements (e.g., commitment, staff trainings), (b) activities to understand and serve the community (e.g., gathering data, instituting recruiting/hiring/retention policies), and (c) activities directly related to clinical care (e.g., having interpreters and bilingual/bicultural staff, developing new services).
The current study determined a 4-factor bifactor model, which, to its benefit, argues for both approaches. Although similar to Zeitlin Schudrich (2014) the findings herein support a general CC factor/scale, like Siegel et al. (2011) the findings also support the notion of three subfactors/scales. The three subfactors/scales identified in the current study focus on (a) organizational values, policies, procedures, and norms; (b) cultural knowledge and inclusion; and (c) DV cultural knowledge. The first two subfactors/scales reflect broad CC related to organizational support and cultural knowledge when working with Latinx clients (Suarez-Balcazar et al., 2011). The third subfactor/scale examines knowledge regarding DV among Latinx people, including DV perceptions, experiences, needs, help-seeking, and available resources. Notably, the items in the final, reduced measure reflect domains common across other organizational CC instruments and studies including: (a) values, policies, and procedures; (b) communication; (c) community and client engagement; (d) services and service delivery; and (e) organizational resources (Cherner et al., 2014; Harper et al., 2006; Lucero et al., 2020; Zeitlin Schudrich, 2014). Ultimately, the measurement of organizational CC, and specifically within a Latinx DV service provision context, remains open for further exploration. It is likely that multiple conceptualizations and measurement approaches are valid.
Measurement Scope
Regardless of approach, this study is clear in finding that the measure presented herein is likely associated with organizational characteristics, both converging and diverging with various variables as would be expected. Broadly these findings suggest that (a) CC as a latent construct does indeed vary across DV service providing organizations and (b) the CC measure developed herein has the ability to detect such differences.
The final measure demonstrated convergent validity, as the identified factors were significantly correlated with agency characteristics theoretically expected to be related to organizational CC. Despite limited research examining the psychometric properties of organizational CC measures, research regarding DV services and service provision for Latinx survivors has highlighted the importance of providing linguistically appropriate services, hiring Latinx and Spanish-speaking staff, and engaging in culturally specific outreach (O'Neal & Beckman, 2017; Parson et al., 2016; Serrata et al., 2020), all of which were associated with at least one of the resultant CC factors. Notably, organizational CC in the form of infrastructure and support is critical for the provision of such culturally competent and affirming practices. Further, at least three of the factors were associated with a higher percentage of staff attending CC or Latinx trainings, a higher percentage of clients who were Latinx or Spanish speakers, and a stronger presence in and relationship with the Latinx community, all of which would be expected to be positively correlated with organizational CC. The measure also demonstrated ample discriminant validity. As expected, none of the factors were associated with whether the organization served more than one county or only rural locations, the number of full- or part-time staff at the organization, the percentage of staff who had attended DV trainings, or the total number of clients served by the organization.
Importantly, these various significant and nonsignificant associations have practical utility for intra- and inter-organizational assessment. The characteristics that demonstrated convergent validity with the CC measure could be good candidates for identifying intervention targets alongside CC. These associations suggest, perhaps, that improvement on such characteristics may be associated with improvements to CC. Not every study or evaluation has the ability to ask in-depth questions of such organizations regarding their culturally competent practices. Yet basic information regarding variables such as the number of services, staff case-mix, and others could serve as proxy indicators of CC. The numerous divergent variables, meanwhile, provide further insight into what may not be important for assessing CC in this context. Overall, the measure developed herein helps to clarify the picture of organizational CC vis-à-vis Latinx DV service provision, an important contribution to an overlooked practice and research concern.
Limitations
The study's findings should be considered in light of several limitations. Primarily, despite the high response rate and significant buy-in from stakeholders within North Carolina, the study sample size was small for a measurement-focused analysis. Although the analyses attempted to mitigate this limitation via the use of robust analyses and a multiple-model plan, the results should be considered exploratory. This fact, coupled with the single-state location, limits the external validity of the findings and perhaps the overall generalizability of the CC measure to other DV service providers in other settings. Additional and more minor limitations include the potential for the survey not to have been comprehensive in its inclusion of CC-related items, the cross-sectional nature of the data, and the lack of survivor input into the measure's development.
Future Research
Although there remains a need for additional research regarding CC and Latinx DV service provision, the study's focus on measurement highlights specific foci for future examinations. To be sure, further research on the measure is likely required before widespread use can be recommended. It is also important to note that the chosen 4-factor bifactor model may not be the optimal solution. Researchers wishing to explore the similarly well-performing (a) 4-factor multidimensional or (b) 5-factor bifactor models should take the set of 29 items in Table 3 and delete items #6, #7, #12, #14, #15, and #29 to construct the former, or item #24 for the latter. Data gathered on larger samples in additional settings would engender robust tests of the measure presented herein. Beyond acquiring new and more representative samples, future research should include analyses that seek to both (a) further refine the measure's structure and (b) test the measure's performance via confirmatory factor analyses and predictive validity analyses (e.g., receiver operating characteristic curve analyses), among others. All such analyses would build evidence for the validity and utility of the measure. This evidence, in turn, would work toward achieving the important distal goal of applying the measure to (a) practice-based intraorganizational assessment and (b) organization-focused intervention and evaluation to improve CC among DV organizations serving Latinx survivors.
Conclusion
The current study contributes to the growing literature on organizational CC by developing and evaluating a preliminary measure tailored specifically to organizational CC in the context of Latinx DV service provision in the United States. In addition, the use of bifactor EFA advances the field, as this approach has heretofore been underutilized in violence measurement. Although this work is exploratory, both the general measure of CC as well as the three subfactors/scales have potential to inform the delivery and evaluation of services to Latinx DV survivors in future practice and research endeavors. Organizations can use the measure in practice to assess and enhance CC by identifying opportunities for growth. The measure can also be used in research to better understand the CC of organizations providing Latinx DV services, including the factors that impede and facilitate organizational CC as well as related client outcomes. A particular strength of this research was the centering of organizational CC specific to the delivery of services for Latinx DV survivors. By focusing directly on the measurement of CC, this work sought to echo calls for DV service provision that acknowledges the importance of cultural diversity while at the same time advancing research on such measures.
Declaration of Conflicting Interests: The author(s) declared a potential conflict of interest with respect to the research, authorship, and/or publication of this article: Dr. Cynthia Fraga Rizo is a member of the North Carolina Coalition Against Domestic Violence's Delta State Steering Committee. The goals of the Delta Project are to increase implementation of intimate partner violence primary prevention throughout North Carolina. As such, the committee members do not have any fiduciary responsibilities. Moreover, this committee's work did not lead to the decision to conduct the study, and Dr. Rizo was not a member of the committee before this study was conceptualized and initiated. No other authors have anything to disclose.
Lisi Martinez Lotz, PhD, is the director of planning and innovation at North Carolina Area Health Education Center. She has over 10 years of experience in the public health sector, with more than 8 of those years in the domestic and sexual violence advocacy field. Her expertise includes supporting organizations in partnership building, systems change, and developing culturally appropriate services, especially for the Latinx community. At the center of Lisi's work is health equity through the improvement of social determinants of health.
"year": 2021,
"sha1": "7048353811ee705e7d04131e14343512cb342ccf",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1177/08862605211025602",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "f519693471d308216274b20e7f2b440338e8614a",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Isolated elliptical galaxies in the local Universe
We have studied a sample of 89 very isolated, elliptical galaxies at z < 0.08 and compared their properties with elliptical galaxies located in a high-density environment such as the Coma supercluster. Our aim is to probe the role of environment in the morphological transformation and quenching of elliptical galaxies as a function of mass. In addition, we elucidate the nature of a particular set of blue and star-forming isolated ellipticals identified here. We study physical properties of ellipticals such as color, specific star formation rate, galaxy size, and stellar age as a function of stellar mass and environment based on SDSS data. We analyze the blue star-forming isolated ellipticals in more detail, through photometric characterization using GALFIT, and infer their star formation history using STARLIGHT. Among the isolated ellipticals, ~20% are blue, 8% are star forming, and ~10% are recently quenched, while among the Coma ellipticals ~8% are blue and just <= 1% are star forming or recently quenched. There are four isolated galaxies (~4.5%) that are blue and star forming at the same time. These galaxies, with masses between 7 x 10^9 and 2 x 10^10 h^-2 M_sun, are also the youngest galaxies, with light-weighted stellar ages <= 1 Gyr, and exhibit bluer colors toward the galaxy center. Around 30-60% of their present-day luminosity, but only < 5% of their present-day mass, is due to star formation in the last 1 Gyr. The processes of morphological transformation and quenching seem to be in general independent of environment, since most elliptical galaxies are 'red and dead', although the transition to the red sequence should be faster for isolated ellipticals. In some cases, the isolated environment seems to propitiate the rejuvenation of ellipticals by recent (< 1 Gyr) cold gas accretion.
Introduction
An essential aspect of the theory of galaxy formation and evolution is the understanding of the mechanisms behind the morphological transformation of galaxies and the quenching of their star formation. Among the wide diversity of morphological types, elliptical (E) galaxies (so defined by Hubble 1926) seem to be those in the final stage of transformation to quiescent objects with regular and smooth structures (cf. Vulcani et al. 2015). Given their spheroidal and compact structure, supported against gravity by velocity dispersion, E galaxies have been proposed to be the result of major and minor mergers of disk galaxies, after which the gas would have been depleted and star formation quenched. The mergers could have happened early, when the disks were gaseous, or late, between gas-poor stellar galaxies (dry mergers). Moreover, nearby E galaxies are in general located in high-density environments (Oemler 1974; Dressler 1980; Bamford et al. 2009), they have red colors and low values of specific star formation rate (i.e., they are quiescent), and their stellar populations are mostly old with high metallicities and α/Fe ratios (e.g., Roberts & Haynes 1994; Kauffmann 1996; Kuntschner 2000; Baldry et al. 2004; Bell et al. 2004; Thomas et al. 2005; Kuntschner et al. 2010; for a review and more references see Blanton & Moustakas 2009). All these facts motivated the idea that these 'red and dead' galaxies formed long ago by violent processes in a high-density environment that contributed to preventing further gas accretion onto the galaxy.
Recent detailed observational studies have shown that ellipticals, in spite of the fact that they are the most regular galaxies, present more complex structures and more variations in their properties than previously thought (see, e.g., Blanton & Moustakas 2009). A relevant question is how the properties of E galaxies vary with environment. Since the prevailing mechanism of E galaxy formation is that of major mergers (e.g., Hernquist 1993; Kauffmann 1996; Tutukov et al. 2007; Schawinski et al. 2014), which commonly happen in dense regions before they virialize, the environment is then expected to play a key role in the properties of ellipticals and their formation histories and quenching processes. However, it could be that local galaxy-halo processes and the mass scale rather than external processes are mostly responsible for the quenching and general properties of the post-merger systems. Hence, the study of isolated E galaxies is important since, in this case, the quenching mechanisms associated with the group/cluster environment (starvation, tidal, and ram-pressure gas stripping, etc.) are not acting. In general, isolated galaxies are optimal objects for constraining the internal physical processes that drive galaxy evolution.
The question of the formation of E galaxies in isolated environments is of great interest in its own right. Are these galaxies, for a given mass, different from those in clusters? Do both populations follow the same correlations with mass? Semianalytic models in the context of the popular Λ cold dark matter (ΛCDM) cosmology predict that on average the ellipticals formed in field haloes should have stellar ages comparable to those formed in rich clusters. However, in the field environment a more significant fraction of ellipticals with younger stellar populations is predicted than in clusters (Kauffmann 1996; Niemi et al. 2010). This may indicate different formation histories. Theoretical models suggest that ellipticals in clusters form through dissipative infall of gas and numerous mergers that took place at early epochs (5 to 10 Gyr ago), whereas some field ellipticals form through recent major mergers and are still in the process of accreting cold gas.
In most of the previous observational works, early-type (E and S0/a) galaxies in general were studied. These works find that early-type galaxies are mostly red/passive, but there is also a fraction of blue/star-forming objects; this blue fraction increases as the mass becomes smaller and the environment less dense (e.g., Schawinski et al. 2009; Kannappan et al. 2009; Thomas et al. 2010; McIntosh et al. 2014; Schawinski et al. 2014; Vulcani et al. 2015). The fraction of blue early-type galaxies at masses larger than log(M_s/M_⊙) ≈ 11 is virtually null, showing that the most massive galaxies formed very early and efficiently quenched their growth by star formation. Interestingly enough, the trends seen at z ∼ 0 are similar at higher redshifts, although the fractions of blue early-type galaxies increase significantly with z (Huertas-Company et al. 2010). For E galaxies in the field, which may include galaxies in loose and poor groups with dynamical masses < 10^13 M_⊙, it has been found that their colors are bluer on average and show more scatter than those of ellipticals in rich groups or clusters (de Carvalho & Djorgovski 1992). Regarding pure E galaxies in very isolated environments, the samples in these studies are usually composed of only a few bright (massive) objects (e.g., Colbert et al. 2001; Marcum et al. 2004; Reda et al. 2004; Denicoló et al. 2005; Collobert et al. 2006; Hau & Forbes 2006; Smith et al. 2010; Lane et al. 2013; Richtler et al. 2015; Salinas et al. 2015). In contrast to these works, Smith et al. (2004) and Stocke et al. (2004) presented relatively large samples of 32 and 65 isolated E galaxies, respectively, but they did not consider the radial velocity separation of companion galaxies when classifying an isolated candidate.
In view of the shortage of observational samples of well-defined E (pure spheroidal) galaxies in extreme isolation, we present here a relatively complete sample of these galaxies over a large mass range and compare some of their properties to those of ellipticals located in a high-density environment, the Coma supercluster. Our sample comes from the catalog of local isolated galaxies by Hernández-Toledo et al. (2010), which includes redshift information, yielding a robust sample of isolated pure Es. We explore whether very isolated ellipticals differ in some photometric, spectroscopic, and structural properties from those of the Coma supercluster, and whether both isolated and high-density ellipticals follow similar correlations with mass. Our final aim is to probe the role of environment in the morphological transformation and quenching of E galaxies as a function of mass. Isolated ellipticals whose properties are very different from those in the cluster environment are studied in more detail. In particular, we focus on a set of blue and star-forming (hereafter SF) galaxies. If the reason for their blue colors and recent star formation activity is a rejuvenation process produced by the recent accretion of cold gas, these isolated elliptical galaxies can be used as unique 'sensors' of the gas cooling from the cosmic web.
The outline of the paper is as follows. The selection criteria for isolated galaxies, along with the data set of galaxies in the Coma supercluster, are described in Sect. 2. We present the results on the properties and mass dependences of E galaxies in Sect. 3. The implications of our results are discussed in Sect. 4. The photometric and spectroscopic analysis of the particular subsample of blue and SF isolated elliptical galaxies is presented in Sect. 5. Finally, our conclusions are given in Sect. 6.
Throughout this paper we use the reduced Hubble constant h, where H_0 = 100 h km s⁻¹ Mpc⁻¹, with the following dependencies: stellar mass in h⁻² M_⊙, absolute magnitude in +5 log(h), size and physical scale in h⁻¹ kpc, and halo mass in h⁻¹ M_⊙, unless the explicit value of h is specified.
Data and selection criteria
Our main goal is to study the properties of local elliptical galaxies in very isolated environments. For this, we use a particular galaxy sample described in Sect. 2.1. In order to compare some of the properties of these galaxies with those in a much denser environment, where ellipticals are more frequent, we use a compilation of elliptical galaxies in the Coma supercluster as described in Sect. 2.2.
Isolated elliptical galaxies
The isolated elliptical galaxies studied here come from the UNAM-KIAS catalog of Hernández-Toledo et al. (2010), which gives full details; here we briefly describe the sample and selection criteria.
The Sloan Digital Sky Survey (SDSS; York et al. 2000; Stoughton et al. 2002) produced two galaxy samples: a sample flux-limited to an extinction-corrected apparent Petrosian r-band magnitude of 17.77 (the main galaxy sample), and a color-selected, flux-limited sample extending to r_Pet = 19.5 (the luminous red galaxy sample). Galaxies with r-band magnitudes in the range 14.5 ≤ r_Pet < 17.6 were selected from the DR4plus sample, which is close to the SDSS Data Release 5 (Adelman-McCarthy et al. 2007). The survey region covers 4464 deg², containing 312 338 galaxies. Hernández-Toledo et al. (2010) attempted to include brighter galaxies, but the spectroscopic sample of the SDSS galaxies is not complete for r_Pet < 14.5. Thus, they searched the literature and borrowed redshifts of the bright galaxies without SDSS spectra to increase the spectroscopic completeness. The final data set consists of 317 533 galaxies with known redshifts and SDSS photometry.
The isolation criteria are specified by three parameters. The first is the extinction-corrected Petrosian r-band apparent magnitude difference between a candidate galaxy and any neighboring galaxy, ∆m_r. The second is the projected separation to the neighbor across the line of sight, ∆d. The third is the radial velocity difference, ∆V. Suppose a galaxy i has a magnitude m_r,i and i-band Petrosian radius R_i. It is regarded as isolated with respect to potential perturbers if the separation ∆d between this galaxy and a neighboring galaxy j with magnitude m_r,j and radius R_j satisfies the conditions

∆d ≥ 100 R_j, (1)
∆V < 1000 km s⁻¹, (2)
m_r,j < m_r,i + ∆m_r, (3)

or ∆V ≥ 1000 km s⁻¹, or the conditions

∆V < 1000 km s⁻¹, (4)
m_r,j ≥ m_r,i + ∆m_r, (5)

for all neighboring galaxies. Here R_j is the seeing-corrected Petrosian radius of galaxy j, measured in the i-band using elliptical annuli to account for the flattening or inclination of galaxies (Choi et al. 2007). Hernández-Toledo et al. (2010) chose ∆m_r = 2.5. Using these criteria, they found a total of 1548 isolated galaxy candidates. We note that a magnitude difference of 2.5 in this selection criterion translates into a factor of about 10 in brightness, similar to that imposed by Karachentseva (1973).
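For illustration, a minimal sketch of this isolation test, assuming the criteria as reconstructed in eqs. (1)-(5) above and hypothetical NumPy arrays describing the neighbors of a candidate galaxy:

```python
# Illustrative implementation of the isolation test of eqs. (1)-(5):
# a candidate galaxy i is isolated if every neighbor j is either far enough
# in projection, offset by >= 1000 km/s in velocity, or at least
# delta_m_r = 2.5 mag fainter.
import numpy as np

def is_isolated(m_r_i, d_proj, dV, m_r_j, R_j, delta_m_r=2.5):
    """d_proj: projected separations to neighbors (same units as R_j);
    dV: |radial velocity differences| in km/s; m_r_j, R_j: neighbor
    magnitudes and seeing-corrected Petrosian radii."""
    far_enough = d_proj >= 100.0 * R_j           # eq. (1)
    velocity_offset = dV >= 1000.0               # large line-of-sight offset
    faint_enough = m_r_j >= m_r_i + delta_m_r    # eqs. (4)-(5)
    # No neighbor may be simultaneously bright, velocity-close, and nearby.
    return np.all(far_enough | velocity_offset | faint_enough)
```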
In Hernández-Toledo et al. (2010), isolated elliptical galaxies were classified after some basic image processing and presented in mosaics of images including surface brightness profiles and the corresponding geometric profiles (ellipticity ǫ, position angle PA, and the A_4/B_4 coefficients of the Fourier series expansion of deviations from a pure ellipse) from the r-band images, to provide further evidence of boxy/disky character and other structural details. A galaxy was judged to be an elliptical if the A_4 parameter showed: 1) no significant boxy (A_4 < 0) or disky (A_4 > 0) trend in the outer parts, or 2) a generally boxy (A_4 < 0) character in the outer parts. We also inspected for 3) the presence/absence of a linear component in the surface brightness-radius diagram. Morphologies were assigned according to a numerical code following the HyperLeda database convention; in particular, for early-type galaxies the following T morphological parameters (Buta et al. 1994) are applied: −5 for E, −3 for E-S0, −2 for S0s, and 0 for S0a types. In the UNAM-KIAS sample there are 250 isolated early-type galaxies that satisfy T ≤ 0 (E/S0), of which 92 galaxies are ellipticals (T ≤ −4), i.e., ≈ 6% of the sample.
Elliptical galaxies in the Coma supercluster
To perform a comparative study of the isolated elliptical galaxies in the UNAM-KIAS catalog, we have compiled a sample of elliptical galaxies in a dense environment, the Coma supercluster. This region is composed of the Coma and Leo clusters (Abell 1656 and Abell 1367, respectively) along with other galaxies in the filaments that connect these two rich clusters. For that purpose, we retrieve available data through the GOLDMine Database (Gavazzi et al. 2003). From the list of galaxies in the Coma supercluster (∼ 1000 objects), the quoted CGCG (Catalogue of Galaxies and of Clusters of Galaxies) principal name was used to cross-correlate with the HyperLeda database. In the latter catalog, 915 galaxies have a morphological classification, of which 131 objects correspond to pure elliptical galaxies. There are 113 elliptical galaxies with spectroscopic redshifts from the SDSS database using CasJobs. We select galaxies in the redshift range 4000 < cz < 9500 km s⁻¹ as true members of the Coma supercluster (Gavazzi et al. 2014). With this, we obtain a sample of 102 elliptical galaxies with a mean redshift of 0.023 ± 0.003 and r_Pet < 15.4. Figure 1 shows the spatial distribution of these galaxies in the Coma supercluster. Although some Es are not members of the two clusters, they probably belong to groups or regions in the outskirts of the clusters, which correspond to higher density environments compared to the low-density environment of isolated galaxies. Indeed, we checked that none of the elliptical galaxies in the Coma supercluster is classified as an isolated object by following eqs.
(1)-(5). In addition, we checked that the overall results and conclusions in this paper do not change if the velocity range is reduced (e.g., 6000 < cz < 8000 km s⁻¹).
Physical properties
Color measurements for the isolated galaxies and galaxies in the Coma supercluster were taken from the SDSS database with extinction corrected modelMag magnitudes (dered parameter in CasJobs). This magnitude is defined as the better of two magnitude fits: a pure de Vaucouleurs profile and a pure exponential profile.
We transform the SDSS colors to B − R colors and the absolute magnitude in the r-band to the R-band in Sect. 3 following the transformation equations (6) and (7) suggested by Niemi et al. (2010). We use the stellar mass-to-light ratio of Bell et al. (2003) to estimate the stellar mass, M_s, for the galaxy samples:

log[M_s/(h⁻² M_⊙)] = −0.306 + 1.097 [0.0(g − r)] − 0.1 − 0.4 (0.0M_r − 5 log h − 4.64), (8)

where 0.0(g_Pet − r_Pet) is the color with Petrosian magnitudes (petroMag parameter in CasJobs), which are measured within a circular aperture defined by the shape of the light profile. In addition to the correction for Galactic extinction (Schlegel et al. 1998), we perform the K-correction and evolution correction at z = 0 with kcorrect v4_2 (Blanton & Roweis 2007). The Petrosian r-band absolute magnitude is 0.0M_r^Pet − 5 log(h), which is also K-corrected and evolution corrected at z = 0. We include an extra correction to this absolute magnitude of −0.1 mag for elliptical galaxies, since Petrosian magnitudes underestimate the total flux for these galaxies (Bell et al. 2003; McIntosh et al. 2014). The term −0.1 in equation (8) implies a Kroupa (2001) initial mass function (IMF). The systematic error is 0.10-0.15 dex. The stellar mass estimation used here does not show systematic differences compared to other methods, for example, spectral energy distribution fittings (see Dutton et al. 2011). The specific star formation rate (sSFR) is simply defined as the star formation rate divided by the stellar mass. This quantity has been obtained from the MPA-JHU DR7 catalog (available at http://www.mpa-garching.mpg.de/SDSS/DR7/), which corresponds to an updated version of the estimates presented in Brinchmann et al. (2004) using a spectrophotometric synthesis fitting model.
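A small sketch of the stellar mass estimate, assuming equation (8) as reconstructed above (Bell et al. 2003 g−r coefficients, the −0.1 dex Kroupa IMF offset, and a solar r-band absolute magnitude of 4.64):

```python
# Stellar mass from the corrected Petrosian g-r color and r-band absolute
# magnitude, following eq. (8) as reconstructed above.
def log_stellar_mass(g_minus_r: float, M_r_minus_5logh: float) -> float:
    """g_minus_r: K- and evolution-corrected Petrosian g-r color at z = 0;
    M_r_minus_5logh: corrected Petrosian r-band absolute magnitude
    (including the extra -0.1 mag for ellipticals).
    Returns log10(M_s / (h^-2 M_sun))."""
    return (-0.306 + 1.097 * g_minus_r - 0.1
            - 0.4 * (M_r_minus_5logh - 4.64))
```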
The radii used in this work, R_deV, correspond to the de Vaucouleurs fit scale radius in the r-band (deVRad_r parameter in CasJobs). This radius is defined as the effective (half-light) radius of a de Vaucouleurs brightness profile, I(r) = I_0 e^{−7.67 (r/R_deV)^{1/4}}.
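As a quick sanity check on this definition, one can verify numerically that R_deV encloses half of the total light of the profile above. A minimal sketch, assuming an arbitrary normalization I_0 = 1 and an example radius:

import numpy as np
from scipy.integrate import quad

R_DEV = 3.0  # effective radius in kpc (arbitrary example value)

def devauc(r, r_dev=R_DEV, i0=1.0):
    """de Vaucouleurs surface brightness profile I(r)."""
    return i0 * np.exp(-7.67 * (r / r_dev) ** 0.25)

# Luminosity enclosed within R: integral of 2*pi*r*I(r) dr.
inner, _ = quad(lambda r: 2.0 * np.pi * r * devauc(r), 0.0, R_DEV)
total, _ = quad(lambda r: 2.0 * np.pi * r * devauc(r), 0.0, np.inf)

print(f"fraction of light inside R_deV: {inner / total:.3f}")  # ~0.500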
Finally, the luminosity-weighted stellar age is obtained from the STARLIGHT database (http://www.starlight.ufsc.br), through the population synthesis method developed by Cid Fernandes et al. (2005) and applied to the SDSS database.
Halo masses
We study how the properties of our isolated elliptical galaxies depend on the host halo mass. For this reason, we use the Yang et al. (2007; hereafter Y07) group catalog, which includes by construction the halo (virial) mass, M_h, of galaxies down to some luminosity. This kind of halo-based group finder has emerged as a powerful method for estimating group halo masses, even when there is only one galaxy in the group. The method can recover, in a statistical sense, the true halo mass from mock catalogs with no significant systematics.
The halo mass in Y07 is based on either the characteristic stellar mass or the characteristic luminosity in the group. We use the halo mass based on the characteristic stellar mass, which is defined as the sum of the stellar mass of all the galaxies in the halo with 0.1M_r − 5 log(h) ≤ −19.5, where 0.1M_r is the r-band absolute magnitude with K-correction and evolution correction at z = 0.1. They assume a one-to-one relation between the characteristic stellar mass and M_h by matching their rank orders for a given volume and a given halo mass function.
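The rank-order matching step can be illustrated schematically: groups are sorted by characteristic stellar mass and assigned halo masses so that the cumulative abundances of groups and haloes agree. The sketch below uses a toy halo mass function and is not the Y07 implementation:

import numpy as np

rng = np.random.default_rng(0)

# Characteristic stellar masses of groups (log10 Msun), toy values.
log_mstar_char = rng.normal(10.8, 0.5, size=1000)

# Toy "halo mass function": a pool of halo masses (log10 Msun) with the
# same number density as the group sample in the same volume, sorted
# from most to least massive.
log_mhalo_pool = np.sort(rng.normal(12.5, 0.8, size=1000))[::-1]

# Rank-order (abundance) matching: the i-th most stellar-massive group
# is assigned the i-th most massive halo, i.e., a monotonic one-to-one
# relation between characteristic stellar mass and halo mass.
order = np.argsort(log_mstar_char)[::-1]
log_mhalo = np.empty_like(log_mstar_char)
log_mhalo[order] = log_mhalo_pool

# The assignment preserves ranks: more stellar mass -> more halo mass.
assert np.all(np.diff(log_mhalo[order]) <= 0.0)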
However, single-galaxy groups are not complete in characteristic stellar mass, so for them the halo mass estimates under the one-to-one relation mentioned above are no longer reliable. For this reason, central galaxies fainter than the magnitude limit do not have halo mass estimates; the halo mass lower limit in the Y07 catalog is 10^11.6 h^-1 M_⊙. We point out that, because of the method used in Y07, the estimated halo masses are not measurements of the true halo masses, but they are a very good statistical approximation.
Redshift and stellar mass limits
Since the UNAM-KIAS catalog reaches mainly out to z = 0.08, throughout the text we use the redshift range 0 < z < 0.08. Furthermore, in a magnitude-limited sample, the minimum detected M_s depends on the redshift and on the stellar mass-to-luminosity ratio, where the latter depends on galaxy colors. For the SDSS sample and its magnitude limit, van den Bosch et al. (2008; see also Yang et al. 2009) calculated the stellar mass limit at each z above which the sample is complete. We adopt their limit, given by equation (9). Our final sample of isolated ellipticals (T ≤ −4) consists of 89 galaxies using eq. (9) (69 galaxies with halo mass estimates). For the average redshift of our sample of isolated E galaxies, z = 0.037, the corresponding average stellar mass limit is log(M_s,lim / h^-2 M_⊙) = 9.47. We notice that only three E galaxies of these 89 have stellar masses smaller than this average mass limit.
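The van den Bosch et al. (2008) completeness limit commonly quoted in this context has the following form, with D_L(z) the luminosity distance in h^-1 Mpc. We quote it as a reference sketch whose coefficients should be checked against the original work; reassuringly, it reproduces the limits cited in the text (log M_s,lim ≈ 9.47 at z = 0.037 and ≈ 9.0 at z = 0.023):

log(M_s,lim / [h^-2 M_⊙]) = [4.852 + 2.246 log D_L(z) + 1.123 log(1 + z) − 1.186 z] / (1 − 0.067 z).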
On the other hand, we obtain a final sample of 102 elliptical galaxies in the Coma supercluster according to equation (9). The stellar mass completeness limit at the average redshift of the Coma supercluster galaxies (z = 0.023) is log(M_s,lim / h^-2 M_⊙) = 9.0.
Properties and mass dependences of elliptical galaxies
3.1. Colors and star formation rates
Figure 2 shows the (g − i)-M_s diagram for our sample of isolated elliptical galaxies (black filled squares) and for ellipticals in the Coma supercluster (red filled triangles). The red solid line corresponds to the relation found by Lacerna et al. (2014) to separate red and blue galaxies, given in equation (10), where M_s is in units of h^-2 M_⊙. As can be seen from the figure, most of the ellipticals, located both in dense and isolated environments, are red according to this criterion. However, there is a fraction of galaxies with bluer colors. We find that 18 isolated ellipticals (≈ 20% of the sample) are below the red line, some of them far away from this division line. Instead, there are only eight (≈ 8%) ellipticals in Coma that are below the red line, and most of them are actually close to it, probably lying in what is called the green valley. Thus, the fraction of blue galaxies is higher in the isolated environment than in the Coma supercluster. The blue isolated ellipticals become bluer as their mass decreases. At relatively low masses, M_s < 10^10.4 h^-2 M_⊙, the blue population corresponds only to the isolated elliptical sample. At high masses, most of the galaxies are red. The top and right panels of Fig. 2 show the stellar mass and g − i color distributions, respectively, for isolated elliptical galaxies (gray solid histogram) and elliptical galaxies located in the Coma supercluster (red open histogram). These normalized density distributions were obtained using the Knuth method for estimating the bin width, implemented in astroML (Ivezić et al. 2014), which is also able to recognize substructure in data sets. The g − i mean and median values of isolated and Coma supercluster ellipticals are similar (see Table 1), although the distribution is slightly narrower in the case of Coma galaxies (see also the difference between the 16th and 84th percentiles). Some isolated elliptical galaxies seem to be more massive than the most massive elliptical galaxies in the dense environment (e.g., NGC 4889 and NGC 4874 of the Coma cluster), though the differences are within the mass uncertainties of 0.15 dex. We checked that our mass estimates for these galaxies are consistent with mass estimates based on spectral synthesis models from the MPA-JHU DR7 catalog.
Notes to Table 1. For each property, the columns correspond to the mean, its standard deviation, the median, and the 16th and 84th percentiles. (a) log10 of the stellar mass in units of h^-2 M_⊙. (b) log10 of the specific star formation rate in units of yr^-1. (c) log10 of the radius of a de Vaucouleurs fit in the r-band in units of kpc (h = 0.7). (d) log10 of the light-weighted stellar age in units of yr.
Fig. 2. g − i color as a function of stellar mass for our sample of isolated elliptical (T ≤ −4) galaxies out to z = 0.08 (black filled squares). In addition, we include a sample of elliptical galaxies located in the Coma supercluster (red filled triangles). The red line shows equation (10) to separate red/blue galaxies. Blue filled squares and green filled squares show the blue and red isolated ellipticals, respectively, which are also star-forming galaxies according to equation (11). The numbers are the IDs in the UNAM-KIAS catalog for the former. Magenta open squares correspond to the recently quenched elliptical (RQE) galaxies following the color-color criterion of Sect. 3.3. Top and right panels: normalized density distributions of stellar mass and g − i color for isolated elliptical galaxies (gray solid histogram) and elliptical galaxies located in the Coma supercluster (red open histogram), respectively. The integral of each histogram sums to unity.
Figure 3 shows sSFR as a function of stellar mass. We include as a red solid line the relation found by Lacerna et al. (2014) to separate passive and SF galaxies, given in equation (11), where M_s is in units of h^-2 M_⊙ and sSFR is in units of yr^-1. According to this relation, galaxies located above and below this line are considered SF and passive galaxies, respectively. Most of our isolated ellipticals are passive in terms of their star formation. In Figs. 2, 3, and the figures that follow, we plot with a blue (green) solid square those ellipticals that are blue (red) and SF galaxies; the magenta open squares highlight the 'recently quenched ellipticals' to be described below. There are four blue SF ellipticals and three red SF galaxies. In total, we have seven SF isolated ellipticals (≈ 8% of all the isolated ellipticals). In Sect. 3.5, we see that at least one blue SF galaxy could be classified as AGN/LINER in the BPT diagram (Baldwin et al. 1981), and in Appendix A that another blue SF galaxy could be an AGN because of its broad component in Hα. On the other hand, nearly all the E galaxies from the Coma supercluster (red filled triangles) are passive objects according to the above criterion. There is only one (≲ 1%) elliptical in Coma that would be a SF galaxy.
In general, the distributions of isolated and Coma ellipticals in the sSFR-M_s plane do not seem too different, except for the small fraction of isolated ellipticals at masses 10^9.7 < M_s < 10^10.7 h^-2 M_⊙ with relevant signs of star formation activity. The right panel of Fig. 3 shows the sSFR distributions for isolated elliptical galaxies (gray solid histogram) and elliptical galaxies in the Coma supercluster (red open histogram). The latter are slightly more passive, by ∼ 0.2 dex, according to the mean and median values reported in Table 1.
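For reproducibility, the red/blue and SF/passive splits can be expressed as simple cuts in the (g − i)-log M_s and log sSFR-log M_s planes. Since equations (10) and (11) are not written out here, the sketch below uses hypothetical placeholder coefficients (A_COLOR, B_COLOR, A_SSFR, B_SSFR); they must be replaced with the actual values from Lacerna et al. (2014):

import numpy as np

# Hypothetical placeholder coefficients for linear cuts; NOT the actual
# values of equations (10) and (11) in Lacerna et al. (2014).
A_COLOR, B_COLOR = 0.1, 0.0   # (g - i)_cut = A_COLOR * log10(Ms) + B_COLOR
A_SSFR, B_SSFR = -0.5, -6.0   # log10(sSFR)_cut = A_SSFR * log10(Ms) + B_SSFR

def classify(log_ms, g_minus_i, log_ssfr):
    """Return (is_blue, is_sf) for Ms in h^-2 Msun and sSFR in 1/yr."""
    is_blue = g_minus_i < A_COLOR * log_ms + B_COLOR   # below eq. (10): blue
    is_sf = log_ssfr > A_SSFR * log_ms + B_SSFR        # above eq. (11): SF
    return is_blue, is_sf

# Example: one galaxy with log Ms = 10.2, g - i = 1.1, log sSFR = -11.5.
print(classify(10.2, 1.1, -11.5))  # -> (False, False): red and passive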
The color-magnitude diagram
It is well known that, in general, more luminous galaxies tend to have redder colors. In Fig. 4, as an example of the color-magnitude diagram (CMD), we plot the B − R color against the R-band absolute magnitude for our samples of isolated and Coma E galaxies. The symbols for the E galaxies are the same as in Fig. 2. The trend that galaxies are redder as their luminosity increases is followed by both isolated and Coma E galaxies. This means that ellipticals exhibit roughly similar CMDs regardless of the environment. However, as in the case of the color-M_s diagram, there is a small fraction of isolated ellipticals that follow a different trend, toward bluer colors at fainter magnitudes. They show a steeper color-magnitude relation than the rest of the ellipticals, giving rise to a branch that systematically detaches from the red sequence. The blue SF galaxies identified in the previous figures (blue filled squares) lie precisely at the end of this branch.
The right panel of Fig. 4 shows the B − R color distribution for isolated E galaxies (gray solid) and Es in the Coma supercluster (red open). The mean and median B − R colors (reported in Table 1) are only slightly different between both samples. Elliptical galaxies in Coma are on average redder by ∼ 0.02 mag than those in isolation, but the distribution of the former is narrower than that of the latter (see also the 16th and 84th percentiles). The top panel of this figure corresponds to the distribution of the absolute magnitude in the R-band. The mean and median values are reported in Table 1. A slight bias is seen in the distribution toward brighter isolated ellipticals than in Coma.
Fig. 3. sSFR as a function of stellar mass; the red line shows equation (11) to separate star-forming and passive galaxies (above and below the line, respectively). Right panel: normalized density distribution of sSFR for isolated elliptical galaxies (gray solid histogram) and elliptical galaxies located in the Coma supercluster (red open histogram). The integral of each histogram sums to unity.
Recently quenched ellipticals and stellar ages
McIntosh et al. (2014) introduced criteria to define the population of ellipticals that have been quenched recently. These recently quenched ellipticals (hereafter RQE) are mostly blue, have light-weighted stellar ages shorter than 3 Gyr (according to age determinations by Gallazzi et al. 2005), and lack detectable emission from star formation. Such galaxies clearly experienced a recent quenching of star formation and are now transitioning to the red sequence. McIntosh et al. (2014) argue that these RQEs "have recent star formation histories that are distinct from similarly young and blue early-type galaxies with ongoing star formation (i.e., rejuvenated early-type galaxies)", and given their suprasolar metallicities, they "are consistent with chemical enrichment from a significant merger-triggered star formation event prior to the quenching". The authors conclude that RQEs are strong candidates for 'first generation' ellipticals formed in a relatively recent major spiral-spiral merger. McIntosh et al. (2014) found an empirical criterion in the (u − r)-(r − z) diagram to select RQEs. Figure 5 shows the (u − r)-(r − z) diagram for the isolated and Coma E galaxies. The green shaded triangle is the empirical region where RQEs lie according to McIntosh et al. (2014). There are nine (≈ 10%) isolated E galaxies that are RQEs according to this criterion; they are shown with magenta open squares in Fig. 5 and in other figures. Among these nine ellipticals, seven are blue and two are red according to equation (10), as can be seen in Fig. 2, and eight are passive according to equation (11), as can be seen in Fig. 3. In Sect. 3.5 we see that the (marginal) RQE that is a blue SF galaxy (highlighted with a blue solid square; UNAM-KIAS 1197) can instead be classified as a LINER according to the BPT diagram. There are no isolated RQEs more massive than M_s ≈ 7 × 10^10 h^-2 M_⊙ (see, e.g., Fig. 2), and they reside in haloes of masses ≲ 6 × 10^12 h^-1 M_⊙ (see Fig. 9 below), confirming that the most massive ellipticals were quenched long ago. The two RQEs classified as red galaxies are in fact the least massive of the nine RQEs; see Fig. 2.
Fig. 4. B − R color as a function of the absolute magnitude in the R-band (h = 0.7, K-corrected at z = 0). The symbols for the galaxies are the same as in Fig. 2. The linear fit of Niemi et al. (2010) for their (nonisolated) elliptical galaxies from a semianalytic model is shown as a dotted line. The fit for their simulated isolated elliptical galaxies is shown as a solid line. Top and right panels: normalized density distributions of absolute magnitude and B − R color for isolated elliptical galaxies (gray solid histogram) and elliptical galaxies located in the Coma supercluster (red open histogram), respectively. The integral of each histogram sums to unity.
In the case of the ellipticals in the Coma supercluster, there is only one RQE candidate, although it lies at the lower boundary of the region. Thus, while in the Coma supercluster there are virtually no RQEs (even those that are blue in Fig. 2 seem to have been quenched relatively early), RQEs amount to ≈ 10% of the isolated E galaxies. This environmental difference is consistent with McIntosh et al. (2014), who find that the vast majority of their RQEs are centrals in smaller groups (M_h ≲ 3 × 10^12 h^-1 M_⊙), i.e., they do not reside in high-density environments.
The black solid lines in Fig. 5 delimit the region of spectroscopically quiescent, red-sequence early-type galaxies according to Holden et al. (2012), extended to (u − r) = 1.9 in McIntosh et al. (2014). Galaxies outside this boundary are defined as pure star-forming (BPT H II emission). Most of the ellipticals in Coma (triangles) and isolated ellipticals (squares) qualify as non-SF systems in this diagram. There is a rough agreement between the galaxies defined as SF in Figs. 3 and 5. Blue SF isolated E galaxies (blue squares with numbers corresponding to their IDs in the UNAM-KIAS catalog) are located far away from the non-SF region, whereas red SF isolated E galaxies (green squares) are close to the border.
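Selecting RQEs from the (u − r)-(r − z) diagram amounts to a point-in-polygon test against the empirical triangle of McIntosh et al. (2014). A minimal sketch follows; the vertex coordinates are hypothetical placeholders, not the published region:

import numpy as np
from matplotlib.path import Path

# Hypothetical vertices of the RQE region in the (r - z, u - r) plane;
# replace with the empirical triangle of McIntosh et al. (2014).
RQE_TRIANGLE = Path([(0.25, 1.9), (0.55, 2.3), (0.25, 2.3)])

def is_rqe(r_minus_z, u_minus_r):
    """True for galaxies falling inside the (placeholder) RQE region."""
    points = np.column_stack([np.atleast_1d(r_minus_z),
                              np.atleast_1d(u_minus_r)])
    return RQE_TRIANGLE.contains_points(points)

# Example: two galaxies, one inside and one outside the placeholder region.
print(is_rqe([0.30, 0.70], [2.2, 2.6]))  # -> [ True False]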
In contrast to properties such as color and size (see Sect. 3.4), the luminosity-weighted stellar age does not show a significant dependence on stellar mass, as shown in Fig. 6. From this figure it is clear that the blue SF isolated ellipticals are the youngest galaxies, with luminosity-weighted stellar ages ≲ 1 Gyr. This could indicate recent processes of star formation for this particular class of objects. The right panel shows the stellar age distribution of isolated ellipticals (gray solid histogram) and elliptical galaxies located in the Coma supercluster (red open histogram). The mean and median values are reported in Table 1. Ellipticals in Coma appear to be older than isolated ellipticals in general (∼ 1 Gyr older), although this difference is within the statistical uncertainties.
Galaxy sizes
Figure 7 shows the effective radius in the r-band for the different samples of elliptical galaxies as a function of stellar mass. The general trend that massive galaxies are bigger, with no apparent dependence on environment, is observed. Blue SF isolated ellipticals are smaller than 2.5 kpc. The RQEs have R_deV ≲ 3.5 kpc. The right panel of Fig. 7 shows the radius distributions of the isolated and Coma supercluster ellipticals (gray solid and red open histograms, respectively). The radius distributions of Coma and isolated ellipticals are similar; if anything, the former is slightly shifted to smaller radii with respect to the latter. The mean and median values of the radii are reported in Table 1. On average, the radii of both samples are similar within the uncertainties, although the distribution of E galaxies in Coma is narrower than that of isolated ellipticals (e.g., see the 16th and 84th percentiles).
In Fig. 7 we also include the size-mass relation of early-type galaxies (n > 2.5) reported in Shen et al. (2003) as a dashed line. They use the half-light radius in the z-band to define the size of their galaxies with z ≥ 0.005. The dot-dot-dashed line shows the size-mass relation found by Guo et al. (2009) for early-type galaxies (usually n > 3.5) at z ≤ 0.08. They define the size as the half-light radius in the r-band. The solid line is the relation found by Mosleh et al. (2013) for their sample of early-type galaxies between 0.01 < z < 0.02. The morphology of these galaxies was obtained from the Galaxy Zoo Catalogue (Lintott et al. 2011); the size is also defined as the half-light radius in the r-band. Despite the fact that different size definitions, morphological criteria, and samples were used in each of these works, we find qualitatively good agreement among their studies and this work at masses M_s > 3 × 10^10 M_⊙. The best agreement is obtained with the result of Guo et al. (2009), whose sample is the most similar to ours (redshift limit, size definition, and more stringent criteria to select early-type systems). At lower masses, Mosleh et al. (2013) find a flatter size-mass relation. This is roughly followed by some of our low-mass isolated galaxies, although other low-mass ellipticals follow an extension of the trend exhibited by the massive galaxies. It seems that the size-mass relation is different, or not unique, at the low-mass end.
Fig. 7. Effective radius of a de Vaucouleurs fit, R_deV, in the r-band as a function of stellar mass (h = 0.7). The symbols for the galaxies are the same as in Fig. 2. We also include the mass-size relations of early-type galaxies by Shen et al. (2003; dashed line), Guo et al. (2009; dot-dot-dashed line), and Mosleh et al. (2013; solid line). Right panel: normalized density distribution of the radius for isolated elliptical galaxies (gray solid histogram) and elliptical galaxies in the Coma supercluster (red open histogram). The integral of each histogram sums to unity.
BPT diagram
A nuclear classification was carried out using the optical diagnostic diagrams initially introduced by Baldwin et al. (1981; BPT) and redefined by Veilleux & Osterbrock (1987). The BPT diagrams have been widely used to discriminate between different mechanisms of ionization and production of emission lines, separating photoionization by massive OB stars (typical of star-forming regions) from nonstellar sources such as the presence of an AGN in the core. We have cross-matched our sample of isolated and Coma supercluster E galaxies with the database of STARLIGHT, where the stellar population synthesis analysis developed by Cid Fernandes et al. (2005) was applied to SDSS galaxies. Figure 8 shows the BPT diagram for the isolated E galaxies with reported lines in the STARLIGHT database (solid squares). Those with reported lines correspond to 35 out of 89 (≈ 39%) isolated E galaxies. The rest are probably quenched and/or without an AGN at the present day. Out of the 35 galaxies, 24 are AGNs (19 LINERs and five Seyferts) and ten are transition objects (TOs); the interpretation of TOs as SF-AGN composites is not clear; for example, McIntosh et al. (2014) discuss that these galaxies may be neither SF nor AGN, and that their emission may instead be dominated by the same non-nuclear ionization sources as many LINERs. Only one galaxy (UNAM-KIAS 1394) is a SF nuclear (SFN) object. This galaxy is also a blue SF object as defined by us. Another two blue SF isolated Es are TOs, and one blue SF galaxy seems to actually be a LINER (UNAM-KIAS 1197). In Appendices A and C, we describe a more detailed spectroscopic analysis of these four blue SF isolated ellipticals, using lines with S/N > 7. Our results (blue open squares) are similar to those from the STARLIGHT database (blue solid squares). In the case of UNAM-KIAS 613, we find that this galaxy is actually a Seyfert 1.8 because of its broad components in Hα and Hβ (see details in Appendix A). Therefore, 25 isolated Es correspond to AGNs, nine are TOs, and one galaxy is a SFN.
In the case of the E galaxies located in the Coma supercluster (red open circles), 39 out of 102 (≈ 38%) have reported lines in the STARLIGHT database. The rest are probably quenched and without an AGN today. Twenty-nine galaxies are LINERs, two are Seyferts, and eight are TOs.
In general, there is a similar fraction of isolated and Coma supercluster ellipticals that present ionization emission lines associated with star formation or AGN activity. Among these, the fraction classified as SFN/TO is slightly larger for the isolated ellipticals than for the Coma ellipticals (11% vs. 8%). Thus, the environment seems to have a weak influence on the fraction of ellipticals with a mixture of emission coming from star formation and nuclear activity.
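A common way to implement such a classification on the [N II]/Hα versus [O III]/Hβ plane uses the Kauffmann et al. (2003) and Kewley et al. (2001) demarcation curves: galaxies below the former are SF, those above the latter are AGN, and those in between are TOs/composites. Note this is an illustrative assumption; the present work follows the Baldwin et al. (1981) and Veilleux & Osterbrock (1987) diagnostics, which need not coincide exactly with these curves:

import numpy as np

def kauffmann03(log_n2ha):
    """SF/composite demarcation of Kauffmann et al. (2003)."""
    return 0.61 / (log_n2ha - 0.05) + 1.3

def kewley01(log_n2ha):
    """Composite/AGN (maximum starburst) line of Kewley et al. (2001)."""
    return 0.61 / (log_n2ha - 0.47) + 1.19

def bpt_class(log_n2ha, log_o3hb):
    """Classify a galaxy on the [NII]-BPT diagram."""
    if log_n2ha < 0.05 and log_o3hb < kauffmann03(log_n2ha):
        return "SF"
    if log_n2ha < 0.47 and log_o3hb < kewley01(log_n2ha):
        return "TO/composite"
    return "AGN (Seyfert/LINER)"

# Example: a line-ratio pair lying between the two demarcations.
print(bpt_class(-0.3, 0.0))  # -> "TO/composite"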
Interpretation and implications
The results presented above are consistent with several previous works on early-type galaxies mentioned in the Introduction. Our sample differs from previous samples in that it selects very isolated environments and refers only to morphologically well-defined elliptical galaxies (T ≤ −4) over a wide mass range. This allows us to probe the morphological transformation and the quenching of star formation as a function of mass, hopefully free of environmental effects.
The work by Vulcani et al. (2015) is the closest to the present paper. Using an automatic tool designed to reproduce visual classifications (MORPHOT; Fasano et al. 2012), these authors made an effort to distinguish pure E galaxies from S0/Sa galaxies in a sample complete above log(M_s/M_⊙) = 10.25 at their redshift upper limit z = 0.11. For the galaxies that Vulcani et al. (2015) call singles (no neighbors within a projected mutual distance of 0.5 h^-1 Mpc and a redshift difference of 1500 km s^-1), 24 ± 3% are ellipticals. Of these, ≈ 83% are red according to their definition. The rest are blue and green (roughly 13% and 4%, respectively). In our sample, the fraction of blue isolated ellipticals (we separate them only into blue and red galaxies) above log(M_s/h^-2 M_⊙) = 9.94 (10.25 for h = 0.7) is ≈ 21%, which is slightly larger than in Vulcani et al. (2015) but within their error bars.
According to our analysis, the colors, sSFRs, sizes, and luminosity-weighted ages of very isolated E galaxies are only slightly different from those in the Coma supercluster (see Table 1 for mean and median values). In general, in both environments the fractions of blue and SF ellipticals are low. However, these fractions are larger for the isolated ellipticals (approximately 20% and 8%, respectively) than for the Coma supercluster (approximately 8% and ≲ 1%, respectively), as seen in Figs. 2 and 3. Moreover, these fractions deviate differently from the main red/passive sequences as a function of mass. The main difference is that the blue/SF isolated ellipticals deviate more from the main trends toward smaller masses, down to log(M_s/h^-2 M_⊙) ≈ 9.85 (at lower masses, there are no blue or SF E galaxies). The small fraction of blue ellipticals in the Coma supercluster deviate from the main trend only moderately, and below log(M_s/h^-2 M_⊙) ≈ 10.4 there are no blue ellipticals. Thus, in a high-density environment, only a small fraction of intermediate-mass galaxies lie slightly away from the red sequence today, and they seem to have intermediate-age stellar populations. The situation is not too different for the very isolated ellipticals, although, as mentioned above, the fraction of blue objects is larger and, especially at intermediate masses, some of them deviate significantly from the red sequence and are actually SF galaxies. An interesting question is why these very isolated galaxies remain blue and star forming after their morphological transformation: is it because this transformation happened recently through gaseous mergers, or because they accreted gas and underwent a rejuvenation process? We discuss this question in Sect. 5.
A main result to be discussed now is that E galaxies in the local Universe are mostly red and dead (passive), even those that are very isolated. It seems that the mechanisms responsible for the morphological transformation of galaxies produce efficient quenching of star formation and depletion of the cold gas reservoir, both for isolated and cluster E galaxies; environment is not the most effective (or only) mechanism of quenching, although it is expected to play some role (see Sect. 4.1 below).
In more detail, as can be seen in Figs. 2 and 4, both isolated and supercluster ellipticals follow the overall trend of being on average redder as they are more massive (luminous). Moreover, above M_s ≈ 8 × 10^10 h^-2 M_⊙ there are no blue or SF ellipticals at all in either the supercluster or isolated samples, and the luminosity-weighted ages are older than 4 Gyr for almost all of them. This confirms that massive E galaxies assembled their stellar populations early, remaining quenched since those epochs (cf. Thomas et al. 2005; Schawinski et al. 2009; Kuntschner et al. 2010; Thomas et al. 2010). As we go to lower masses, regardless of the environment, ellipticals tend to have on average bluer colors than the more massive E galaxies. This mass downsizing behavior of E galaxies is also seen as a function of halo mass. The left panel of Fig. 9 shows the B − R color as a function of M_h for the isolated E galaxies with an estimate of their halo (group) mass according to Y07. With a large scatter, we find that B − R increases with halo mass roughly as 0.12 log M_h. This trend, within the context of the merger-driven (morphological) quenching mechanism, might be explained by more gaseous and/or later mergers for less massive systems; indeed, observations show that lower mass galaxies have on average higher gas fractions (e.g., Avila-Reese et al. 2008; Papastergis et al. 2012; Calette et al. 2015; Lehnert et al. 2015). However, even if the merger is late and gaseous, the quenching seems to be so efficient and rapid that most of the low-mass ellipticals, both isolated and in the Coma supercluster, had already transited to the red sequence by z ∼ 0. Moreover, their present-day sSFRs are very low, close to those of the massive galaxies or haloes (see Fig. 3 and the right panel of Fig. 9).
Is it possible to evaluate more quantitatively whether the quenching of those E galaxies that are passive today happened early or recently? The very red colors of the most massive ellipticals (M_s ≳ 10^11 h^-2 M_⊙) suggest that these galaxies have not had active star formation since at least 1-2 Gyr ago. However, those (less massive) passive E galaxies with bluer colors and sSFRs slightly higher than the average could have been quenched more recently. As described in Sect. 3.3, from the plot presented in Fig. 5 we can select the RQE galaxies. At face value, our results show that the ellipticals in an environment such as the Coma supercluster underwent the quenching of star formation, and likely the previous morphological transformation, early, so that at z ∼ 0 almost all of them are passive and red; the small fraction that lies below the red sequence (≈ 8% in Fig. 2, likely in the green valley) are not RQEs except for one marginal case, i.e., these ellipticals started to be quenched long ago but are just now in the process of transitioning to the red sequence.
In the case of isolated ellipticals, among those that are blue but passive galaxies (∼ 15%), half are RQEs (see Figs. 2 and 5). The fact that, among the blue passive ellipticals, the isolated environment hosts a significantly larger fraction of RQEs than the Coma supercluster suggests that the transition to the red sequence is faster for isolated ellipticals than for supercluster ones. For the blue passive isolated E galaxies, we can even see a trend: the RQEs are bluer than those that were quenched earlier, and most of the latter already lie very close to the red sequence (see Fig. 2). Instead, almost all of the (few) blue Coma ellipticals were quenched earlier but have not yet arrived at the red sequence. Moreover, for masses below log(M_s/h^-2 M_⊙) ∼ 10.5, the isolated RQEs are close to, or already on, the red sequence (except for the peculiar case of the bluest galaxy to be discussed below; see Fig. 2). Therefore, they quenched and reddened very fast. In general, we see that the less massive the isolated RQEs are, the faster they seem to have reddened. What produces the cessation of star formation and gas accretion so efficiently in isolated low-mass E galaxies?
Quenching mechanisms of E galaxies
Our results are consistent with the conclusions of Schawinski et al. (2014) for early-type galaxies in general. These authors propose that a major merger could simultaneously transform the galaxy morphology from disk to spheroid and cause rapid depletion of the cold gas reservoir with a consequent quenching of star formation (morphological quenching). As a result of the drop in star formation, the galaxy moves out of the blue cloud, into the green valley, and onto the red sequence as fast as stellar evolution allows. The authors estimate that the transition process in terms of galaxy color takes about 1 Gyr for early-type galaxies; this time is much longer for late-type galaxies, i.e., they undergo a much more gradual decline in star formation. The rapidity of the gas reservoir destruction in E galaxies should be due to a very efficient star formation process and strong supernova- and/or AGN-driven feedback (winds, ionization, etc.). According to our results, on the one hand, isolated and Coma supercluster E galaxies in general share the same loci in the color-M_s, color-magnitude, and sSFR-M_s diagrams, and show evidence of rapid transition to the red sequence after they were quenched, especially the low-mass isolated ellipticals. On the other hand, the radius-mass relations of supercluster and isolated E galaxies are similar and roughly follow the relation determined for early-type galaxies from large samples (Fig. 7). Moreover, as is well known, ellipticals in general are more concentrated than late-type galaxies. This makes evident that strong dissipative processes are at the basis of the origin of most of these galaxies, operating in the same way in both the isolated and cluster environments. In conclusion, the processes of morphological transformation, quenching, and rapid transition to the red sequence of E galaxies seem to be in general independent of environment, except for a small fraction of isolated ellipticals that significantly deviate from the main sequences of E galaxies.
From an empirical point of view, the quenching of star formation has generally been found to be associated with mass and/or environment, mainly when the galaxy is a satellite (e.g., Peng et al. 2010b). Two main mechanisms of quenching associated with mass have been proposed: (1) the strong virial shock heating of the gas in massive haloes (e.g., White & Frenk 1991; Dekel & Birnboim 2006), and (2) the AGN-driven feedback acting in massive galaxies assembled by major mergers (e.g., Silk & Rees 1998; Binney 2004). Since the first models and simulations in which these mechanisms were implemented, it has been shown that they become gradually more efficient as the halo mass increases, starting from M_h ∼ 10^12 M_⊙ (e.g., Granato et al. 2004; Springel et al. 2005a; Di Matteo et al. 2005; Croton et al. 2006; Bower et al. 2006; De Lucia et al. 2006; Lagos et al. 2008; Somerville et al. 2008). This halo mass corresponds roughly to M_s ≈ 1.5 × 10^10 h^-2 M_⊙. As seen in Figs. 2 and 3, both isolated and Coma supercluster E galaxies more massive than this follow roughly the same correlations of color and sSFR with mass; that is, most of them are red and dead regardless of the environment. Therefore, for massive ellipticals, the physical mechanisms that depend on halo mass, rather than the environment, seem to be responsible for keeping E galaxies quiescent (see also Dekel & Birnboim 2006; Woo et al. 2013; Yang et al. 2013; Dutton et al. 2015; and more references therein). What is the situation for the lower mass ellipticals?
As seen in Figures 2 and 3, most of the E galaxies (isolated or in the Coma supercluster) with masses lower than M_s = 1.5 × 10^10 h^-2 M_⊙ are also red and passive, although for these E galaxies the quenching mechanisms associated with mass are no longer applicable. While the hostile environment of clusters contributes to removing cold gas and to suppressing new episodes of cold gas inflow in E galaxies, this is not the case for the isolated ellipticals. In fact, most galaxies with masses lower than M_s = 1-1.5 × 10^10 h^-2 M_⊙ in the local Universe are centrals, gas-rich, blue, and SF (see, e.g., Weinmann et al. 2006; Yang et al. 2009), but they are also of late type. The question for these galaxies is why they delayed their active star formation phase to a greater degree the less massive they are (referred to as downsizing in star formation rate; e.g., Fontanot et al. 2009; Firmani & Avila-Reese 2010; Weinmann et al. 2012). In the (rare) cases in which these low-mass galaxies undergo a morphological transformation into an elliptical, according to our results, they should also destroy their gas reservoirs and strongly quench star formation.
It is expected that a major merger induces an efficient process of gas exhaustion due to enhanced star formation, but if a fraction of gas is left after the merger and/or the galaxy subsequently accretes gas, then it could form stars again, become blue and SF, and even grow a new disk (for theoretical works see, e.g., Robertson et al. 2006; Governato et al. 2009; Hopkins et al. 2009; Tutukov et al. 2011; Kannan et al. 2015; for observational evidence see, e.g., Kannappan et al. 2009; Hammer et al. 2009; Puech et al. 2012). These processes are very unlikely to happen in a group/cluster environment, as mentioned above, but according to our results, late gas accretion and star formation do not occur in most of the very isolated E galaxies either, since they are mostly red (80%) and passive (92%), and among those that are blue (20%), fewer than one-fourth are SF.
The quenching associated with mass can be very efficient for isolated E galaxies formed in haloes much more massive than 10^12 M_⊙. In the case of ellipticals formed in less massive haloes, a possible mechanism for ejecting the remaining gas, or for preventing further cold gas infall, could be the feedback produced by type Ia supernovae (SNe Ia), which are not associated with the current star formation rate (SFR; e.g., Ciotti et al. 1991; Pellegrini 2011). Because the interstellar medium of E galaxies is very tenuous, the energy and momentum released by SNe Ia are expected to easily reach the intrahalo medium, heat it, and eventually eject it from the low-mass halo. Moreover, the smaller the halo, the more efficient the feedback-driven outflows are expected to be. According to Figures 2 and 3, the lowest mass isolated E galaxies are all red/passive, and there are pieces of evidence that the less massive the isolated elliptical, the faster it quenched and reddened. On the other hand, Peng et al. (2015) have recently suggested that the primary mechanism responsible for quenching star formation is strangulation (or starvation). In this process, the supply of cold gas to the galaxy is halted, with a typical timescale of 4 Gyr for galaxies with stellar masses below 10^11 M_⊙. However, it is not clear what the mechanisms behind strangulation could be in an isolated environment. One possibility to bear in mind is the removal of gas in early-formed, low-mass haloes due to ram pressure as they fly across pancakes and filaments of the cosmic web (Benítez-Llambay et al. 2013).
Comparisons with theoretical predictions
Within the context of the ΛCDM cosmology, the hierarchical mass assembly of dark haloes happens through accretion and minor/major mergers (see, e.g., Fakhouri & Ma 2010, and more references therein). The average mass accretion and minor/major merger rates depend on mass and environment (e.g., Maulbetsch et al. 2007; Fakhouri & Ma 2009). As a function of environment, present-day haloes in high-density regions suffered more major mergers on average and assembled a larger fraction of their mass in mergers than haloes in low-density regions (Maulbetsch et al. 2007). The latter continue growing today mainly by mass accretion. Therefore, at face value, we expect that the very isolated galaxies could be efficiently accreting mass at present due to the cosmological mass accretion of their haloes.
The accretion and merger histories of CDM haloes are the first step in calculating the mass assembly and morphology of the galaxies formed inside them. Along this line of reasoning, one could expect the isolated E galaxies formed in the ΛCDM cosmology to be on average significantly bluer and with higher SFRs than the 'normal' E galaxies formed in high-density environments, because the haloes of the former continue accreting mass today (see above). Nevertheless, the galaxy-halo connection is far from direct, as a result of the nonlinear dynamics of the infalling subhaloes and the complex physical processes of the baryons, as several semiempirical studies have shown (see, e.g., Stewart et al. 2009; Hopkins et al. 2010; Zavala et al. 2012; Avila-Reese et al. 2014). As a result, for instance, Zavala et al. (2012) have shown that even though the halo-halo major merger rates in the ΛCDM scenario are high, this does not imply excessively high galaxy-galaxy major merger rates with a consequent overabundance of bulge-dominated galaxies. Schawinski et al. (2009) compared their volume-limited SDSS sample of early-type galaxies (complete to M_r = −20.7 mag, which is slightly below M*) to the ΛCDM-based semianalytical models (SAMs) of Khochfar & Burkert (2005) and Khochfar & Silk (2006). They found that these SAMs predict a slightly (significantly) higher fraction of blue (SF) early-type galaxies than the observed sample. In another work, by means of numerical simulations, Kaviraj et al. (2009) found that the expected frequency of minor merging activity at low redshift can be consistent with the observed low level of recent star formation activity in some early-type galaxies (Kaviraj et al. 2007). However, the theoretical study closest to the analysis presented here is that of Niemi et al. (2010). These authors used the SAM results of De Lucia & Blaizot (2007) built on the Millennium Simulation (Springel et al. 2005b), and applied criteria to select isolated galaxies and to determine which galaxies are ellipticals. They find that 26% of the synthetic isolated E galaxies should exhibit colors bluer than B − R = 1.4 in the absolute magnitude range −21.5 < M_R < −20. They call this population the blue faint isolated ellipticals. In the UNAM-KIAS catalog we find that only three isolated E galaxies satisfy these criteria (see Fig. 4), which corresponds to 3.4% of our pure E galaxy sample (0% of ellipticals in Coma). These three isolated ellipticals are among the four blue SF galaxies (blue squares in all the plots shown in Sect. 3).
Part of the large discrepancy between the fractions of predicted and observed blue, faint galaxies can arise from the different isolation criteria and morphological definitions used in Hernández-Toledo et al. (2010) with respect to Niemi et al. (2010). The isolation criteria of the former consider neighbor galaxies as not relevant perturbers if they have a magnitude difference of ∆m_r ≥ 2.5 compared to an isolated galaxy candidate within a radial velocity difference ∆V < 1000 km s^-1, whereas in the latter the difference in magnitude is ∆m_B ≥ 2.2 inside a sphere of 500 h^-1 kpc, and this condition is relaxed to ∆m_B ≥ 0.7 for spheres with radii between 500 h^-1 kpc and 1 h^-1 Mpc. We note that the former uses the r-band whereas the latter uses the B-band. Regarding the morphological definitions, in our case galaxies are identified as ellipticals based on a structural and morphological analysis and are denoted as T ≤ −4 according to Buta et al. (1994). In the case of Niemi et al. (2010), they use the condition T < −2.5 to classify modeled galaxies as ellipticals, where the T parameter is based on the B-band bulge-to-disk ratio. Therefore, these authors may have included some S0 galaxies in their sample. While these differences in the isolation criteria and morphological definitions could reduce the difference between our observed sample and the SAM prediction of Niemi et al. (2010), they are unlikely to explain the disagreement by a factor of around eight in the fractions of blue, faint isolated ellipticals.
It is known that in models and simulations the star formation of galaxies is to some extent correlated with the dark matter accretion of their host haloes (e.g., Weinmann et al. 2012; González-Samaniego et al. 2014; Rodríguez-Puebla et al. 2016). The fact that a substantial population of the predicted blue, faint isolated galaxies is not observed suggests that the star formation activity of isolated E galaxies in the last Gyr(s) is overestimated in the SAMs. The overestimate in the star formation activity is likely due to the late (dark and baryonic) high mass accretion rates typical of isolated haloes (see the discussion above). Indeed, Niemi et al. (2010) report that haloes hosting model isolated ellipticals continue their dark matter accretion until z ∼ 0, whereas haloes hosting normal (nonisolated) ellipticals had assembled nearly all their mass by z ∼ 0.5. Furthermore, isolated ellipticals with halo masses < 10^12 h^-1 M_⊙ have half of their stellar mass in place by z ∼ 0.7, whereas isolated elliptical galaxies hosted by more massive haloes have half of their stellar mass in place by z ∼ 1.6. Therefore, the different mass assembly histories of the SAM isolated ellipticals hosted by low-mass haloes explain their bluer colors compared to normal model ellipticals and other more massive isolated ellipticals.
Thus, the large difference we find between observations and the SAMs in the fraction of blue faint isolated ellipticals strongly suggests that some gastrophysical processes are still missing in the SAMs, in particular at low masses. We have proposed that SN Ia-driven feedback could be a mechanism able to prevent a significant population of blue faint ellipticals in isolated low-mass haloes. In fact, the SAM of De Lucia & Blaizot (2007) included the effects of SN Ia feedback, but in a very simple (parametric) way. More detailed studies of this process, and of the above-mentioned 'cosmic web stripping' that affects low-mass haloes (Benítez-Llambay et al. 2013), are necessary.
In spite of the differences in the fractions of blue and faint isolated ellipticals between SAMs and observations, it should be said that the SAM predictions for the overall population of isolated E galaxies are in general consistent with our observed sample. The mean B − R color of the synthetic isolated E galaxies in Niemi et al. (2010) is 1.47 ± 0.23, which is bluer than the mean of our observed isolated ellipticals (1.62 ± 0.10, see Table 1), but within the scatter. On the other hand, the mean B − R color of their modeled normal (nonisolated) E galaxies is 1.58 ± 0.10, which is bluer than the mean of E galaxies in the Coma supercluster (1.64 ± 0.05), but again within the scatter. These results also show that, for both observations and models, the B − R color distribution of E galaxies in isolated environments is broader than in environments of higher density.
The linear fits of Niemi et al. (2010) to their normal and isolated ellipticals are shown in the CMD of Fig. 4 (dotted and solid lines, respectively). As can be seen, the ellipticals in the Coma supercluster follow the trend predicted by the model normal elliptical galaxies, and the same behavior is observed for many of our isolated ellipticals. Reda et al. (2004) found a similar result using a small observational sample of six bright isolated elliptical galaxies compared to the Coma cluster. However, there is a small fraction of isolated E galaxies that detach from this trend, toward bluer colors at fainter magnitudes. This trend is similar to that followed by the isolated ellipticals in the SAM, although, as reported above, the fraction of galaxies following it is much higher in the SAM than in observations. In the (B − R)-M_h diagram (left panel of Fig. 9), we plot with a dotted line a visual approximation to the trend of the model nonisolated ellipticals in Niemi et al. (2010). These authors predict that normal ellipticals have a roughly constant color over the whole halo mass range sampled (10^10 < M_h/h^-1 M_⊙ < 10^14), but isolated ellipticals show bluer colors at M_h < 10^12 h^-1 M_⊙. This behavior is somewhat similar to that of our observed isolated E galaxies, thus suggesting that some isolated ellipticals hosted by low/intermediate-mass haloes should exhibit bluer colors than normal ellipticals in haloes of the same mass.
We conclude that the ΛCDM-based models of galaxy evolution are roughly consistent with observations regarding the local population of E galaxies, both isolated and in cluster-like environments. However, the SAMs predict too high an abundance of blue, faint isolated ellipticals formed in low-mass haloes with respect to our observational inference. The mechanisms discussed above (SN Ia feedback and cosmic web stripping) could help prevent further gas accretion onto galaxies in low/intermediate-mass haloes.
The blue SF isolated ellipticals
In the previous sections, we have shown that isolated E galaxies, as well as those in the Coma supercluster, are mostly red and passive. However, in the isolated environment there is a small fraction of ellipticals that systematically deviate from the red/passive sequence toward smaller masses. The question is whether these few intermediate-mass ellipticals are blue and SF because they underwent the morphological transition recently and have not yet exhausted their gas reservoir (McIntosh et al. 2014; Haines et al. 2015), which can include the process of disk regeneration as suggested by Kannappan et al. (2009), or because these ellipticals were rejuvenated by recent events of cold gas accretion, as suggested by Thomas et al. (2010).
In general, several pieces of evidence show that the structural properties and correlations of blue SF isolated ellipticals do not differ significantly from those of the other isolated ellipticals or even the group/cluster ellipticals. For example, in Fig. 7 we show that the radius-M_s correlation of all ellipticals is roughly the same. Kannappan et al. (2009) found that blue-sequence E/S0s are more similar to red-sequence E/S0s than to late-type galaxies in the M_s-radius relation. Blue E/S0 galaxies are also closer to red E/S0s than to late-type systems in this relation at 0.2 < z < 1.4 (Huertas-Company et al. 2010). We need to go into more detail and explore whether the blue SF isolated ellipticals have some peculiarities that could suggest which mechanisms are dominant in making them blue and SF.
A first-order characterization of the nature of the blue SF isolated E galaxies is proposed here using the available gri images and spectra from the SDSS database. This includes only four galaxies, three of which coincide with the definition of blue, faint isolated ellipticals in Niemi et al. (2010). Table 2 summarizes the general properties of these galaxies. In Appendix A, we present an analysis of several structural-morphological and spectroscopic properties for each of the four observed blue SF isolated ellipticals. We point out that UNAM-KIAS 1197 shows evidence of being a LINER galaxy and that UNAM-KIAS 613 could actually be an AGN due to its broad component in Hα (see Appendix A for details). We have also carried out similar analyses for the other, more common red and passive isolated ellipticals (to be presented elsewhere). In the following, we discuss the results of our analysis with the aim of elucidating the nature of the blue SF isolated ellipticals.
From the surface brightness profiles in the g and i bands, we find that the four blue SF isolated ellipticals show radial color gradients with bluer colors toward the galaxy center (see the bottom-left panels in Figs. A.1, A.2, A.4, and A.5 in Appendix A), while most of the red and passive ellipticals show negative or flat radial color profiles (e.g., den Brok et al. 2011). The positive color gradient may be evidence of dissipative infall of cold gas, which promotes recent star formation in the central regions. This supports the rejuvenation scenario for the blue SF ellipticals. Suh et al. (2010) have suggested that positive color gradients in early-type galaxies are visible only for 0.5-1.3 billion years after a star formation event; afterward, the galaxies exhibit negative color gradients. Shapiro et al. (2010) have also proposed the rejuvenation scenario for some early-type galaxies in the SAURON sample (de Zeeuw et al. 2002) with red optical colors that show star formation activity in the infrared. These galaxies correspond to fast-rotating systems with concentrated star formation. They suggest that when the star formation ceases, over the course of ∼ 1 Gyr, the transiently star-forming galaxy returns to passive evolution. Furthermore, Young et al. (2014) have also proposed the rejuvenation scenario to explain the blue tail of early-type galaxies in the color-magnitude diagram of the ATLAS 3D sample (Cappellari et al. 2011).
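The sign of such a gradient can be measured directly from the g- and i-band surface brightness profiles: the color profile is mu_g(r) − mu_i(r), and its slope against log r gives the gradient (with the convention above, a positive slope means a bluer center). A minimal sketch with synthetic toy profiles, not our actual fits:

import numpy as np

# Synthetic surface brightness profiles, mu(r) in mag/arcsec^2, built so
# that the center is bluer (smaller g - i) than the outskirts.
r = np.logspace(-0.5, 1.0, 30)                  # radii in kpc
mu_g = 18.0 + 5.3 * np.log10(1.0 + r)           # toy g-band profile
mu_i = 17.2 + 5.0 * np.log10(1.0 + r)           # toy i-band profile

color = mu_g - mu_i                              # g - i color profile

# Color gradient: slope of (g - i) against log10(r).
grad = np.polyfit(np.log10(r), color, 1)[0]
print(f"d(g-i)/dlog(r) = {grad:+.2f}")           # > 0: bluer center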
From our detailed photometric analysis (Appendix B), we find that the structure of isolated E galaxies can in general be described by three Sérsic components (inner, intermediate, and outer), as is usually reported for ellipticals in groups and clusters (e.g., Huang et al. 2013b). However, the outer components of our blue SF isolated ellipticals (see the top-right panels of Figs. A.1, A.2, A.4, and A.5 and Table A.1 in Appendix A) are present in only two cases, UNAM-KIAS 613 and UNAM-KIAS 1197, where the latter shows a small value of n; in the other two cases, UNAM-KIAS 359 and UNAM-KIAS 1394, this component seems to be absent altogether. On the contrary, bright cluster galaxies have been reported to have extended stellar envelopes with large n indices (Morgan & Lesh 1965; Oemler 1974; Schombert 1986). For a sample of low-luminosity ellipticals, most of them in group-like environments, Huang et al. (2013b) measured a mean n value of 1.6 ± 0.5 and a mean effective radius r_eff = 7.4 ± 2.6 kpc for the outer component. Only UNAM-KIAS 613 has an n value larger than this mean, and all the blue SF isolated ellipticals have outer r_eff values smaller than the quoted mean. Therefore, at least three of the four blue SF isolated ellipticals seem to have an outer structure that is different from that of other ellipticals.
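For reference, each component in such decompositions is a Sérsic profile, I(r) = I_e exp{−b_n [(r/r_eff)^{1/n} − 1]}, where b_n ≈ 2n − 1/3 is a standard approximation ensuring that r_eff encloses half the light (for n = 4 it recovers the de Vaucouleurs constant 7.67 used earlier). A minimal sketch of a three-component model; the parameter values are illustrative placeholders, not our fits:

import numpy as np

def sersic(r, i_e, r_eff, n):
    """Sersic surface brightness profile with b_n ~ 2n - 1/3."""
    b_n = 2.0 * n - 1.0 / 3.0
    return i_e * np.exp(-b_n * ((r / r_eff) ** (1.0 / n) - 1.0))

def three_component_model(r, params):
    """Sum of inner + intermediate + outer Sersic components."""
    return sum(sersic(r, *p) for p in params)

# Illustrative placeholder parameters (I_e, r_eff [kpc], n) per component,
# loosely inspired by the small inner n and r_eff values quoted in the text.
PARAMS = [(100.0, 0.5, 1.5),   # inner: disky, n <= 2, r_eff <= 0.6 kpc
          (30.0, 2.0, 2.5),    # intermediate
          (5.0, 6.0, 1.5)]     # outer: small n, modest r_eff

r = np.logspace(-1, 1.3, 50)              # radii from 0.1 to ~20 kpc
profile = three_component_model(r, PARAMS)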
According to Huang et al. (2013a), the different structural components of E galaxies may be explained by a two-phase scenario (Oser et al. 2010; Johansson et al. 2012). The inner and/or intermediate components are the outcome of an initial phase characterized by dissipative (in situ) processes such as cold accretion or early gas-rich mergers. The outer, extended component of ellipticals is related to a second phase dominated by nondissipative (ex situ) processes such as dry minor mergers after the quenching of the galaxy. This could also explain the buildup of E galaxies in isolated environments. Galaxies with very small outer n and r_eff values, or with no outer component, which is the case for three of the four blue SF isolated ellipticals, may not have undergone ex situ processes recently. This again supports the rejuvenation scenario for the blue SF ellipticals, at least for three of them.
Regarding the inner components of the blue SF isolated ellipticals, our results show that their best-fit Sérsic indices have n ≤ 2.0 and r_eff ≤ 0.6 kpc (see Table A.1 in Appendix A), whereas Huang et al. (2013b) obtained mean values of n = 3.2 ± 2.1 and r_eff = 0.7 ± 0.4 kpc for their sample of ellipticals. Thus, though within the scatter, the blue SF isolated ellipticals seem to have a more disky inner component than other low-luminosity ellipticals. This can be consistent with both the rejuvenation by cold gas infall and the post-merger disk regeneration scenarios.
In Appendix B.1, we list the different fine structure and residual features that can be found in the images of E galaxies and their association with different levels of interaction/merging. Our analysis (see Table A.1) shows that two of the blue SF isolated ellipticals (UNAM-KIAS 613 and UNAM-KIAS 1394) do not present convincing evidence of (recent) disturbances of any type, while the other two, UNAM-KIAS 1197 and UNAM-KIAS 359, present evidence of weak disturbance effects through low surface brightness (LSB) outer shells. We found no evidence of tidal tails or broad fans of stellar light, both of which are associated with dynamically cold components produced by an accreted major companion. No evidence of significant sloshing (> 10%) in the inner kpc region was found either. An additional inspection of the residuals in the very central regions suggests the presence of a few thin localized patches, which we tentatively interpret as dusty features. This is consistent with the reddened central subregions observed in the corresponding color maps. Such presumably nuclear dust structures may be associated with the inner or intermediate disk-like components found in our decomposition analysis, and could be evidence of centralized star formation due to cold gas infall. Thus, the lack of evident fine structure and residual features and the presence of nuclear dust structures in the four blue SF isolated ellipticals again support the rejuvenation mechanism rather than recent mergers and disk regeneration. We caution, as noted by George & Zingade (2015), that the SDSS images are not the most adequate for detecting finer details in early-type galaxies; deeper imaging data are preferable.
Finally, in Appendix C we describe our spectroscopic analysis of the four blue SF isolated elliptical galaxies, for which the SDSS spectra were used. Recall that SDSS provides only one optical fiber of 3 arcsec aperture centered on each galaxy. For our four objects, this corresponds to the inner ≈ 1-2 kpc, which roughly corresponds to their de Vaucouleurs effective radii (Fig. 7). Our analysis of the emission lines gives similar results in the BPT diagram to those reported in the STARLIGHT database and plotted in Fig. 8. Our analysis then suggests that the ionization mechanism could have a large contribution from a nuclear nonthermal component (e.g., LINER) in two of the four blue SF isolated ellipticals. Thus, the star formation rate in these galaxies could be lower than that calculated from the Hα flux.
The stellar populations encode the mass assembly history over the lifetime of the blue SF isolated elliptical galaxies, which is important to gain insights into their formation and evolution. By applying a stellar population synthesis analysis (see Appendix C), the obtained mass-weighted star formation histories show that the four blue SF ellipticals formed < 5% (< 20%) of their present-day stellar masses in the last 1 (3) Gyr (see Fig. C.1 and Table A.2). The obtained light-weighted star formation histories show that 30-60% of the present-day luminosity is due to star formation in the last 1 Gyr. These results suggest that the blue SF isolated ellipticals formed most of their stars early, but in the last ∼ 1 Gyr they had a period of enhanced star formation. This enhanced period of star formation is reflected in the very small luminosity-weighted average ages obtained for these galaxies with respect to the rest of the ellipticals (see Fig. 6). We can also estimate the star formation timescale (SFTS; Plauchu-Frayn et al. 2012) of the activity in each galaxy by calculating the difference of the mass-weighted and light-weighted average stellar ages (age_mw and age_lw, respectively) as ∆(age) = 10^log(age_mw) − 10^log(age_lw) (e.g., age_mw = 10 Gyr and age_lw = 1.5 Gyr give ∆(age) = 8.5 Gyr). These values are reported in Table A.2. The SFTS is an indicator of how fast a galaxy created its stellar population, or of how long the stellar activity was prolonged in this galaxy. A typical elliptical galaxy, for example, where star formation stopped long ago, would be expected to have a short SFTS. The blue SF isolated E galaxies have SFTS values of 8.5 Gyr on average. As a comparison, Plauchu-Frayn et al. (2012) find that early-type galaxies in Hickson compact groups (Hickson 1982; Bitsakis et al. 2010) have SFTS values of 3.3 Gyr, whereas similar isolated galaxies have values of 5.4 Gyr, meaning that the former formed their stars over shorter timescales than isolated early-type galaxies. The blue SF isolated galaxies show higher values than both of those samples of early-type galaxies, which suggests a more prolonged star formation activity in these galaxies.
Our photometric and spectroscopic analyses of the four rare blue SF isolated ellipticals are not conclusive. However, our results suggest that, in general, these galaxies do not present evidence of strong recent disturbances or mergers in their structure and morphology, but they have been forming stars in their central regions during the last ∼ 1 Gyr; in two cases there is also some evidence of AGNs. We conclude that it is more plausible that these isolated E galaxies assembled early, as other ellipticals did, but were rejuvenated by recent (< 1 Gyr) accretion events of cold gas. On the other hand, integral field spectroscopy (IFS) has allowed the study of the kinematic and stellar population properties of early-type galaxies, for example, in the SAURON (Bacon et al. 2001; Kuntschner et al. 2010; Shapiro et al. 2010), ATLAS3D (Cappellari et al. 2011; Young et al. 2014; McDermid et al. 2015), and CALIFA (Sánchez et al. 2012; González Delgado et al. 2014) projects. Future observations with IFS will be crucial to shed more light on the formation and evolution of blue isolated elliptical galaxies. In the Introduction we stated the possibility of using pure E galaxies in very isolated environments as 'sensors' of gas cooling from the intergalactic medium. This cool gas, once trapped by a galaxy, should form stars. The fact that only a negligible fraction (≈ 4%) of our sample of local isolated ellipticals are blue and SF suggests that the process of cooling and infall of gas from the warm-hot intergalactic medium is very inefficient.
Conclusions
We have studied a sample of 89 local very isolated E galaxies (z = 0.037 on average) and compared their properties with those of E galaxies located in a higher density environment, the Coma supercluster. The samples studied here refer only to morphologically well-defined elliptical (pure-spheroid) galaxies in the mass range 6 × 10⁸ ≲ M_s/(h⁻² M⊙) ≲ 2 × 10¹¹, in contrast to other works that select early-type galaxies in general, including S0 objects. Our main results and conclusions are as follows.
(i) The correlations of color, sSFR, and size with mass followed by most of the isolated E galaxies are similar to those of the Coma supercluster E galaxies. Notwithstanding the environment, most ellipticals are 'red and dead'. All E galaxies more massive than M_s ≈ 5 × 10¹⁰ h⁻² M⊙ (M_h ≈ 10¹³ h⁻¹ M⊙) are quiescent. At smaller masses or luminosities, both isolated and Coma ellipticals become bluer, although on average they remain in the red sequence. However, a few intermediate-mass ellipticals become moderately blue in Coma, while in the case of the isolated ellipticals a fraction of them deviates systematically toward the blue cloud. The extreme of this branch is traced by those isolated ellipticals that are blue and SF at the same time; these are only four ellipticals, with intermediate stellar masses between 7 × 10⁹ and 2 × 10¹⁰ h⁻² M⊙. These blue SF isolated ellipticals are also the youngest galaxies, with light-weighted stellar ages ≲ 1 Gyr, which could indicate recent processes of star formation in them.
(ii) In terms of fractions, among the isolated ellipticals ≈ 20% are blue, 7% are SF, and ≈ 4.5% are blue SF, while among the Coma ellipticals ≈ 8% are blue, ≲ 1% are SF, and there are no blue SF objects. On average, the galaxies in Coma have sSFR values that are lower than those of isolated ellipticals by ∼ 0.2 dex and are older by ≲ 1 Gyr. Based on a color-color criterion, ≈ 10% of the isolated ellipticals show evidence of recent quenching. All of these isolated RQEs are less massive than M_s ≈ 7 × 10¹⁰ h⁻² M⊙ and are approaching the red sequence (the two least massive ellipticals are actually already red), which suggests that the quenching and reddening happened quickly in the isolated environment. In the Coma supercluster, except for one marginal case, there are no RQEs, even among those that are still blue; for the latter, the quenching and reddening seem to have proceeded more gradually.
(iii) Around 40% of the E galaxies have detectable (S/N > 3) emission lines in both the isolated and dense environments. According to the BPT diagram, most of these are AGNs. However, the fraction of those classified as SFN/TO is slightly larger for the isolated ellipticals than for those in the Coma supercluster (11% and 8%, respectively).
Our results show that all massive ellipticals (M_s ≳ 5 × 10¹⁰ h⁻² M⊙), and a large fraction of the less massive ellipticals, assembled their stellar populations early and have remained quenched since those epochs, regardless of whether they are isolated or belong to the Coma supercluster. Moreover, in both of these different environments a downsizing trend is observed: as the mass becomes lower, the ellipticals are on average less red and have higher sSFR. Thus, rather than the environment, it seems that the processes involved in the morphological transformation of E galaxies are those that dominate their efficient star formation shut-off, the depletion of their cold gas reservoir, and their downsizing trends. On the other hand, new episodes of cold gas inflow are very unlikely to happen in the environment of clusters or for isolated galaxies living in massive haloes; hence, the E galaxies are expected to remain quenched. However, our study shows that most intermediate- and low-mass isolated ellipticals have also transitioned to the red/passive sequence by z ∼ 0. We suggested two possible mechanisms to explain why most low-mass E galaxies in an isolated environment could be devoid of gas: (1) the galactic winds produced by the feedback of SNe Ia, and (2) the removal of gas in low-mass haloes due to ram pressure as they fly across pancakes and filaments of the cosmic web.
Interestingly enough, the predictions of ΛCDM-based SAMs for the population of E galaxies (both in clusters/groups and isolated) agree in general with the results of our study, except that these models predict too high an abundance (by a factor of 8) of blue, faint (low-mass) isolated ellipticals with respect to our results. This suggests that some gastrophysical processes at low masses, for example those mentioned above, are still missing or underestimated in SAMs.
Our results show that E galaxies in the isolated environment are not too different from those in the Coma supercluster at a given mass, but the fractions of blue or SF objects are larger in the former case than in the latter. Hence, in some cases, the isolated environment seems to favor the rejuvenation or a late formation of the ellipticals. The extreme examples are those ellipticals that are blue and SF at the same time; they exist only in the isolated environment. In Appendix A, we presented a structural/spectroscopic analysis of these four ellipticals with the aim of inquiring into their nature. We found the following for them:

(iv) The four blue SF isolated ellipticals have radial color gradients with bluer colors toward the galaxy center. Furthermore, at least three out of the four blue SF isolated ellipticals have only two (inner/intermediate) structural components, lacking the third outer component seen in classical ellipticals. The four ellipticals lack significant fine structure and residual features, and show the presence of nuclear dust structures.
(v) The spectroscopic analysis suggests that the ionization mechanism can have a large contribution from a nuclear nonthermal component (e.g., LINER) in two of the four blue SF isolated ellipticals. On the other hand, 30−60% of their present-day luminosity, but only < 5% of their present-day mass, is due to star formation in the last 1 Gyr. This suggests that these galaxies formed most of their stars early, but in the last ∼ 1 Gyr they had a period of enhanced star formation. Their high SFTS values suggest that they formed their stars over prolonged timescales.
The positive color gradient in the four blue SF isolated ellipticals may be evidence of recent cold gas infall, which supports the rejuvenation scenario over that of a recent merger and/or disk regeneration. The presence of only inner and intermediate structural components, which are related to dissipative processes such as cold accretion or early gas-rich mergers, and the lack of an outer component, which is related to nondissipative processes (e.g., dry mergers) after the quenching of the galaxy, suggest that these ellipticals did not undergo recent dry mergers but probably experienced cold gas accretion. This is supported by the lack of fine structure and residual features and by the nuclear dust structures.
We conclude that it is more plausible that the blue SF isolated E galaxies assembled early, as other ellipticals did, but were rejuvenated by recent (< 1 Gyr) accretion events of cold gas. Further work with powerful observational methods such as IFS is needed to investigate the spatially resolved kinematic and stellar population properties of the blue SF isolated elliptical galaxies. These galaxies can be used to trace and estimate the fraction of recent gas cooling from the cosmic web.
Our 2D residual images show a set of localized filaments in the central region, probably associated with dust structures. This galaxy does not show any significant sloshing in the inner kpc. We do not detect any low surface brightness features in this galaxy.
At the distance of UNAM-KIAS 1394, the 3 arcsec fiber spectrum subtends 2.1 kpc. The spectroscopic analysis shows definite star formation in the nuclear region (SFN) for this galaxy (see Fig. 8).
Appendix B: Methodology of the photometric analysis
The homogeneous collection of 0.39 arcsec/pixel optical images available in the SDSS was used to carry out a photometric characterization of our E galaxies, in particular the four blue SF isolated galaxies. Searching for fine structure in the inner/outer regions of elliptical galaxies can be achieved by filter enhancement of the images. This procedure involves taking the original image of the galaxy and filtering it with a Gaussian kernel of two to three times the size of the features to be enhanced. Filter enhancement was applied to the r-band SDSS images. Features in elliptical galaxies can also be revealed by color maps. Since we possess data from the SDSS ugri bands, we generated g − i color maps to look for color features/gradients toward the center.
The most important procedure in the present section is the modeling of the two-dimensional (2D) light distribution with GALFIT (Peng et al. 2002, 2010a). From the 2D image decomposition we try to reproduce the observed surface brightness distribution and explore how these local isolated elliptical galaxies may contain photometrically distinct substructures that can shed light on their evolutionary history.
GALFIT can fit an arbitrary number or combination of parametric functions. For the present study, the Sérsic function is adopted because (i) it is flexible enough to model the main structural components in galaxies and (ii) it has been widely used in the literature, which makes it useful for comparisons with published results. Each galaxy is fit with a series of models, each consisting of one to four Sérsic components. These components were ranked by physical size (according to their effective radius, r_eff) and were generically designated as inner, intermediate, and outer components. As a first-order check of our multicomponent fitting, we extracted the azimuthally averaged 1D surface brightness profile from our best 2D models and compared it with the corresponding profile derived from the original data. PSF images were built by following procedures in the IRAF DAOPHOT package. Good estimates of the seeing profiles are mandatory, since residual errors in the seeing estimate could lead to mismatches between the model and the data. The different components are always fit by sharing the same central position.

[Notes to Table A.2 — columns: name in the UNAM-KIAS catalog; physical scale (kpc, h = 0.73) subtended by the 3 arcsec SDSS fiber at the distance of each galaxy; type of spectrum and ionization source associated with the nuclear region from the modeled SDSS spectrum; log10 of the average light-weighted age (yr) and its error; log10 of the average mass-weighted age (yr) and its error; and the star formation timescale (Gyr).]
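For readers who want to reproduce the spirit of this decomposition without GALFIT, the sketch below shows a minimal two-component 2D Sérsic fit with shared centers using astropy; this is an illustrative stand-in, not the GALFIT setup actually used, and the synthetic image, grid size, and initial parameters are all assumptions.

```python
import numpy as np
from astropy.modeling import models, fitting

# Illustrative pixel grid (in practice: a sky-subtracted r-band cutout)
y, x = np.mgrid[0:200, 0:200]

# Two-component model: inner + intermediate Sersic profiles
inner = models.Sersic2D(amplitude=5.0, r_eff=8.0, n=2.0,
                        x_0=100, y_0=100, ellip=0.2, theta=0.5)
inter = models.Sersic2D(amplitude=1.0, r_eff=30.0, n=1.0,
                        x_0=100, y_0=100, ellip=0.2, theta=0.5)
model = inner + inter

# Both components share the same central position, as described in the text
model.x_0_1.tied = lambda m: m.x_0_0
model.y_0_1.tied = lambda m: m.y_0_0

# Synthetic "data" standing in for a real galaxy image
data = model(x, y) + np.random.normal(0.0, 0.05, x.shape)

fitter = fitting.LevMarLSQFitter()
best = fitter(model, x, y, data, maxiter=500)
residual = data - best(x, y)  # inspect the residual for fine structure
```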
Since the presence of field stars can introduce potential uncertainty into the GALFIT component models, these stars were PSF-subtracted or interpolated over, both close to the galaxy center and in its neighborhood, prior to any fitting. At the end of the modeling, we carefully inspected the residual image with the original image overlaid.
As a by-product of our image analysis, a measure of the nonaxisymmetry of the surface brightness distribution in the inner kpc of the UNAM-KIAS isolated elliptical galaxies is presented. The r-band images from the SDSS database were analyzed by fitting elliptical isophotes whose centers were first (i) kept fixed and then (ii) allowed to vary. A comparison of the centers of the isophotes in (i) and (ii) may reveal a sloshing pattern or spatial variation in the central kpc that could indicate a mass asymmetry and/or dynamically unrelaxed behavior in the central regions of these galaxies.
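A sketch of such a fixed-center versus free-center isophote comparison is given below, written with photutils rather than the IRAF task actually used here; the synthetic image, starting geometry, and the 10% threshold are illustrative assumptions.

```python
import numpy as np
from photutils.isophote import Ellipse, EllipseGeometry

# Synthetic stand-in for a sky-subtracted r-band cutout: a smooth
# elliptical (Gaussian) galaxy centered at pixel (100, 100)
y, x = np.mgrid[0:200, 0:200]
image = 100.0 * np.exp(-(((x - 100) / 20.0) ** 2 + ((y - 100) / 16.0) ** 2))

geom = EllipseGeometry(x0=100.0, y0=100.0, sma=10.0, eps=0.2, pa=0.5)

# (i) isophotes with the center held fixed at the galaxy photocenter
fixed = Ellipse(image, geom).fit_image(minsma=1.0, fix_center=True)

# (ii) isophotes with the center free to wander
free = Ellipse(image, geom).fit_image(minsma=1.0)

# Offset of each free-fit isophote center from the fixed center, as a
# fraction of its semimajor axis; values > 0.1 would flag > 10% sloshing
sel = free.sma > 0
offset = np.hypot(free.x0[sel] - 100.0, free.y0[sel] - 100.0) / free.sma[sel]
print("max fractional center offset:", offset.max())
```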
Surface brightness profiles in the g and i bands were extracted by imposing a fixed center and also fixed ellipticity ε and position angle estimates during our isophotal analysis. This guarantees homogeneous color estimates. The g − i color gradient is corrected for mean Galactic extinction. From the stellar population synthesis fits we also derived the cumulative distributions of the light-weighted and mass-weighted ages of the stellar populations that best reproduce the observed spectra.
The average values are reported in Table A.2. | 2016-01-26T21:38:47.000Z | 2015-11-27T00:00:00.000 | {
"year": 2015,
"sha1": "407127f635a5dc132ca83496ec04a28166a22d81",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2016/04/aa27844-15.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "407127f635a5dc132ca83496ec04a28166a22d81",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
74871597 | pes2o/s2orc | v3-fos-license | Nonsilica Oxide Glass Fiber Laser Sources: Part I
Abstract
Nonsilica oxide glasses have been developed and studied for many years as promising alternatives to the most widely used silica glass for the development of optical fiber lasers with unique features and properties. Depending on the glass former of choice, these glasses can offer very distinctive physical properties compared to silica-based glasses. With regard to the development of photonic fiber devices, these key properties include low phonon energy, high rare-earth ion solubility, high optical nonlinearity and easy handling procedures. This chapter, part I of a detailed study concerning nonsilica oxide glass-based optical fiber laser sources, reviews the main properties of three different nonsilica oxide glass families, namely phosphate, germanate and tellurite. The manufacturing process of an optical fiber using these glass materials is also discussed in Section 3 of this chapter.
Introduction
Inorganic glasses have been playing a key role in the development of optical devices and instruments, thanks to a unique combination of properties: they are transparent in the visible region, mechanically stiff and resistant, chemically durable, and can be easily manufactured into highly homogeneous forms of different shapes and sizes. Starting from ancient times, transparent glasses were fabricated to make windows and goblets; later, thanks to the improvements in glass manufacture brought about by Venetian masters in the Middle Ages, stable glass compositions were processed into eyeglasses, lenses and mirrors. High-quality optical glasses became key enabling materials for the fabrication of lenses for telescopes and microscopes, thus enabling the tremendous development of modern science [1]. Finally, long-haul optical fiber-based backbone networks are based on extremely pure and low-loss glasses, which are thus enabling materials for the internet revolution [2,3].
Due to their outstanding ultra-low propagation loss and intrinsic thermomechanical properties, silica-based glasses have been the material of choice for most optical fiber-related applications. As passive media, they were crucial in allowing the deployment of long-haul fiber networks and found applications even as nonlinear frequency conversion fiber laser sources, despite the low intrinsic nonlinearity of silica. When doped with rare-earth (RE) ions, they have been used for fiber lasers and amplifiers with outstanding performance in the near-infrared wavelength region [4].
Despite its success, however, silica glass possesses several intrinsic limits:

1. The high phonon energy of silica glass, around 1100 cm⁻¹ [5], allows exploiting only a restricted range of the possible emission wavelengths offered by RE ions. Silica glass fibers have proved outstanding for the development of 1, 1.5 and 2 μm laser sources, but numerous applications require alternative wavelengths, in particular in the mid-infrared (mid-IR) wavelength region.
2. The RE doping concentration level is limited in silica glass [6]: both the nature of the silica glass network and the doping process itself limit the achievable doping concentration. Thus, silica glass cannot be used to develop the short-length devices that are required for single-frequency fiber lasers and for low-nonlinearity booster amplifiers for high peak power lasers.
3. Additionally, the short infrared transmission edge of silica glasses restricts their use for numerous high-impact applications, such as mid-IR lasers, chemical sensing and infrared imaging [7].
The so-called soft glasses are based on alternative glass formers exhibiting a different nature and structure, which offer alternative phonon energies and transmission characteristics, high RE ion doping levels (up to 10²¹ ions/cm³) and high optical nonlinearity (orders of magnitude higher than that exhibited by silica glass). Soft glasses include oxide and nonoxide glasses. Nonoxide glass compositions of great interest for laser emission in the mid-IR wavelength region are mainly based on fluoride [8] and chalcogenide [9,10] glass formers. They provide a wide transmission window extending well above 2 μm into the mid-IR, but their low chemical stability and poor mechanical properties have so far strongly limited their use in devices for harsh environments. Oxide glasses, although exhibiting a shorter wavelength range of operation in the infrared, are suitable for integration with commercial fiber-based components and have demonstrated reliability for incorporation in operational environments.
In this chapter, we present a detailed overview of the most promising oxide-based soft glass systems, namely phosphate, germanate and tellurite, together with the fabrication of fibers based on these glass families. The synthesis and individual physical properties of these glasses are presented in Section 2 to identify their prospects and range of applications. The engineering techniques used to manufacture optical fiber preforms out of these glasses and the fiber drawing are then discussed in Section 3.
Nonsilica oxide glasses
Engineering a glass for a specific photonic application requires the knowledge of some key parameters that enable its use. In reviewing the main oxide glass systems alternative to silica-based compositions, we will focus on the following properties: solubility of RE ions, chemical durability, thermal stability, mechanical reliability, ease of fabrication, fiber drawing ability, nonlinearity and phonon energy. This last property is less familiar outside the community of glass scientists working on active materials for lasers. In studying a suitable material for coherent sources in the mid-IR, a particular effort is required to design hosts that minimize their interaction with the electronic transitions of the ions leading to the emission of photons. High phonon energy glasses cause the decay from an excited state to a lower state to occur via nonradiative emission of heat, in the form of phonons, thus decreasing the overall efficiency of the laser emission [4]. In this section, these properties of phosphate, germanate and tellurite glasses are reviewed.
Synthesis of nonsilica oxide glasses
The synthesis of silica glass has been subjected to continuous evolution, with the aim of reducing the causes of extrinsic absorption due to the presence of impurities, namely transition metal and hydroxyl ions. Multicomponent soft glasses, in contrast, are typically synthesized by melting solid precursors in a crucible and casting the melt into a mold. Because of the corrosive nature of some of the chemical precursors involved, this melt casting process leads to an inevitable degree of optical contamination, related both to the initial purity of the chemical precursors and to possible cross-contamination occurring throughout the whole preparation process. The crucible material is of prime importance, as some degree of dissolution in the glass can occur during the melting process [19]. Although the final glass material produced through this process does not compete with the purity of standard silica glass preforms, optical losses below 100 dB/km are readily achievable under meticulous and clean melting conditions.
Moreover, the melt casting process provides the advantage that the glass can be easily shaped using an adequate casting mold geometry. This feature is largely exploited when preparing multicomponent glass preforms, as discussed in the following paragraphs.
Phosphate glasses
Pure phosphate glasses have historically not been as popular as silicate glasses because of their poor chemical stability and mechanical properties [20], which, however, can be significantly improved by the addition of proper modifier and intermediate ions [21,22]. Phosphate glasses were mainly used for HF-resistant glasses and for other niche applications [20]. Later on, research on phosphate glasses was stimulated by the wide range of potential and commercial applications of these materials, from the treatment of hard water [23] to biomedicine [24] and the storage of radioactive wastes [25].
Optical quality phosphate glasses, initially developed by Schott and coworkers, were of interest also for their UV transparency [26], but found no significant applications due to their poor stability. However, the need for a suitable gain medium for high-peak power lasers, such as the ones developed within the framework of inertial confinement fusion (ICF) research, led to a resurgence in their employment, after careful engineering of the compositions [27].
Structure
The basic units that constitute phosphate glass are P-tetrahedra, with a central phosphorus atom surrounded by four oxygen atoms. These are connected through bridging oxygen (BO) atoms to give different phosphate anions. The tetrahedra are classified using the Q^i terminology [28], where 'i' represents the number of tetrahedra linked to the unit (shown schematically in Figure 1).
Depending on the oxygen-to-phosphorus ratio, phosphate glasses can be classified into a series of subcategories, from ultra-phosphate (O/P ≤ 3) to ortho-phosphate (O/P = 4).
The O/P ratio in vitreous phosphate (v-P2O5) is derived from the stoichiometry of the pure compound and is equal to 2.5. The basic unit of the structure of v-P2O5 is the Q^3 tetrahedron, which has three covalent bonds via BO atoms with the neighboring tetrahedra and a shorter terminal bond via a nonbridging oxygen (NBO) atom.
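As a small worked example of this classification (our own illustration, with arbitrary compositions), the O/P ratio of a binary x R2O · (1 − x) P2O5 glass follows directly from the stoichiometry:

```python
def o_to_p_ratio(x_modifier: float) -> float:
    """O/P ratio of a binary x R2O - (1 - x) P2O5 glass (mole fractions).
    Each R2O unit brings 1 oxygen; each P2O5 unit brings 5 oxygens and 2 phosphorus."""
    oxygen = x_modifier * 1.0 + (1.0 - x_modifier) * 5.0
    phosphorus = (1.0 - x_modifier) * 2.0
    return oxygen / phosphorus

print(o_to_p_ratio(0.0))  # 2.5 -> vitreous P2O5
print(o_to_p_ratio(0.5))  # 3.0 -> metaphosphate composition
```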
The structural strength and chemical durability of optical phosphate glass can be improved by adding appropriate components, as described in several patents and papers [29][30][31]. Metal oxides added to v-P2O5 can improve the physical properties and chemical stability of the system. In more detail, alkali metal oxides R2O (R = Li, Na, K, Rb and Cs) can be added to the glass to increase the RE solubility [32]. Network intermediates R2O3 (R = B and/or Al) are also added to phosphate glasses to improve their chemical durability and mechanical properties and to decrease their solubility in water. If the amount of R2O3 is too low, the glass is water soluble, while if the amount of R2O3 is too high, an increase in the glass transition temperature (Tg) and the crystallization temperature occurs [33]. In particular, even a small addition of R2O3 can significantly improve the mechanical properties of the phosphate glass. This is due to the particular behavior of R3+ ions, which can adopt both tetrahedral and trigonal coordination [33][34][35]. The presence of alkali-earth oxides MO (M = Mg, Ca, Ba, Sr and Zn) in the glass prevents devitrification and improves the chemical durability [35]. When the amount of MO is too low, the glass is hygroscopic and has poor chemical durability and poor optical quality; when the amount of MO is too high, the glass tends to devitrify [29].
In conclusion, the addition of various dopants, such as alumina, alkali and alkali-earth oxides, was demonstrated to reinforce the phosphate glass network. Moreover, when RE ions were added, the glasses proved to be suitable materials for lasers, showing an interesting combination of low nonlinear refractive index and high optical gain.
Phosphate glasses as laser material
Phosphate glasses doped with Nd3+ ions have proved suitable for the fabrication of the large monolithic active material sections constituting the high-peak power laser at the Lawrence Livermore National Laboratory and in other laser ignition facility infrastructures around the world [27]. This was made possible by developing compositions with high durability, which displayed a high emission cross-section, low nonlinear refractive index and high energy storage and extraction characteristics [36]. Besides, phosphate glasses, with respect to silicate glasses, can be fabricated free of Pt inclusions, which may cause catastrophic damage to the optical active material [37].
More recently, the development of high-peak pulsed optical amplifiers called for materials able to incorporate high amounts of RE ions, and phosphate glasses became an ideal candidate since up to 10²¹ ions/cm³ of RE can be accommodated without clustering effects [38]. This is important for pulsed optical amplifiers, where nonlinear optical effects must be minimized: phosphate glass allows reducing the length of the amplifier with respect to its silica counterpart. In addition, phosphate glasses are also less susceptible than silica to photodarkening [39] and display higher emission cross-sections.
Finally, their mechanical reliability allows the integration of phosphate fibers with commercial silica fibers through cleaving and arc fusion splicing [40].
Phosphate glasses used for lasers in the eye-safe wavelength region usually incorporate erbium (Er3+) as the activator ion, with emission centered at around 1550 nm corresponding to the radiative decay from the 4I13/2 excited state to the 4I15/2 ground state (see Figure 2).
As in the case of Nd3+, the cross-section of Er3+ ions in a phosphate host is high: peak values of around 7.0 × 10⁻²¹ cm² are reported, compared to about 5.5 × 10⁻²¹ cm² for silica [4,41].
In order to improve the overall efficiency of the lasing process, ytterbium (Yb3+) ions are often employed as a sensitizer in combination with Er3+ ions: thanks to the superior absorption cross-section of these ions at the pump wavelength (980 nm), excitation from the 2F7/2 ground state to the 2F5/2 excited state takes place. The energy is then transferred to the 4I11/2 excited state of erbium, which decays through the nonradiative 4I11/2 → 4I13/2 transition followed by the radiative 4I13/2 → 4I15/2 transition. This energy transfer reduces the threshold of the laser emission and improves the efficiency of the device [42,43]. The interplay between ytterbium and erbium ions is depicted in Figure 2. Phosphate glass represents, with respect to silica, an ideal host because its high phonon energy allows obtaining transfer efficiencies of up to 95% [44].
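To make the sensitization scheme concrete, the toy rate-equation model below integrates a minimal three-level Yb→Er chain. All rate constants are illustrative placeholders (only the ~8 ms Er lifetime and the ~95% transfer efficiency echo values quoted in this section), so this is a qualitative sketch rather than a model of any specific glass.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (placeholders, not measured values):
R      = 1.0e22   # Yb excitation rate density, ions/(cm^3 s)
tau_yb = 1.0e-3   # Yb3+ 2F5/2 lifetime, s
W      = 2.0e4    # Yb -> Er transfer rate, 1/s (chosen so ~95% of excited Yb transfers)
tau_nr = 1.0e-6   # fast multiphonon 4I11/2 -> 4I13/2 relaxation, s
tau_er = 8.0e-3   # Er3+ 4I13/2 lifetime in phosphate (~8 ms), s

def rates(t, n):
    n_yb, n_er2, n_er1 = n
    d_yb  = R - n_yb / tau_yb - W * n_yb      # pumping, Yb decay, transfer to Er
    d_er2 = W * n_yb - n_er2 / tau_nr         # 4I11/2: fed by transfer, drained by phonons
    d_er1 = n_er2 / tau_nr - n_er1 / tau_er   # 4I13/2: upper laser level
    return [d_yb, d_er2, d_er1]

sol = solve_ivp(rates, (0.0, 0.05), [0.0, 0.0, 0.0], method="LSODA")
print(f"steady-state 4I13/2 population: {sol.y[2, -1]:.2e} ions/cm^3")
print(f"Yb -> Er transfer efficiency:   {W / (W + 1.0 / tau_yb):.0%}")  # ~95%
```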
The lifetime values of the excited state corresponding to the upper laser level provide useful indications of the population inversion ability of the emitter. A higher lifetime value is preferred because it allows the large population inversion needed for high-gain and low-noise optical amplifiers. In the case of lasers, high lifetime values permit a lower pump power to reach the laser threshold, with a resulting higher efficiency in laser emission and lower heat accumulation in the material. Silica, given the lower oscillator strength of the 4I13/2 → 4I15/2 transition, generally displays a higher lifetime value (10.80 ms) than phosphate glass (8.25 ms) [4]. However, phosphate glasses maintain high lifetime values even at high erbium concentrations: values of 7.5 ms are reported for an erbium concentration of 6 × 10²⁰ ions/cm³ [45].
Germanate glasses
Pure germanium oxide was obtained in its glassy state around 90 years ago [19], showing properties similar to silica in terms of mechanical strength and chemical stability, although the high cost of the raw materials did not make it an attractive alternative. However, since the Ge–O bond displays a lower energy than Si–O, germanate glasses present a shift of their phonon energy to lower wavenumbers with respect to silica, from 1100 to 900 cm⁻¹, thus extending the transparency window up to 4.5 μm [46]. GeO2 glass was thus proposed as an optically transparent material alternative to silica for telecom applications, thanks to its intrinsically low attenuation at the wavelength of 2 μm [47]. Lead germanate glasses [48] were developed and studied in view of laser beam delivery above 1.5 μm [49]. RE-doped alkali germanate glasses were explored as magneto-optic materials: Faraday angle rotation was measured, providing a linear variation of the Verdet constant with the concentration of RE ions [50].
Structure
The structural units of pure germanium dioxide glass are GeO4 tetrahedra. Binary alkali germanate glasses undergo a change from 4-fold Ge to 6-fold Ge (GeO6 octahedra), with a corresponding increase in density and refractive index, which reaches a maximum at around 15 mol% of M2O modifier. Higher modifier contents produce a progressive formation of 4-fold coordinated Ge, accompanied by a gradual depolymerization of the network through an increase of nonbridging oxygens. This behavior is also known as the "germanate anomaly" [51,52].
Lead germanate glasses are made of a mixture of 4- and 6-fold coordinated Ge, which, with increasing lead content, turns into a predominance of GeO4 tetrahedral units.
These characteristics, together with high RE solubility, make them very interesting for the development of laser devices operating above 1.5 μm.
Germanate glasses as laser material
Germanate glasses, among oxide soft glasses, are the best in terms of mechanical properties, thanks to the great similarity of GeO2 with SiO2. Lead germanate glasses show an outstanding resistance to devitrification [53] and a wide transmission window, while offering a suitable environment for RE ions thanks to their low phonon energy of 920 cm⁻¹.
For the above-mentioned reasons, the main studies on germanate glasses have focused on those RE ions emitting at wavelengths above 1.5 μm, namely Tm3+, Ho3+ and Er3+.
Tm3+ ions are of interest for emission in the mid-IR wavelength region at around 2 μm. A maximum output power of 346 mW and a slope efficiency of 25.6% were obtained when pumping a 1 mol% Tm-doped germanate glass with a 790 nm laser diode [54]. The glass was characterized by good forming ability and chemical durability and exhibited a large emission cross-section of 8.69 × 10⁻²¹ cm², a high quantum efficiency of the Tm3+:3F4 level of 71% and a low nonradiative relaxation rate of the 3F4 → 3H6 transition of 0.09 ms⁻¹.
In view of enhancing the intensity of the 1.8 μm emission of Tm3+ ions, Yb3+ codoping is commonly adopted due to the large absorption of Yb3+ at the diode-pumping wavelength of 980 nm. Among the interesting sensitizers, Yb3+ presents the advantage of displaying a simple energy level scheme, which is beneficial for obtaining large absorption and emission cross-sections and for avoiding any undesirable excited state absorption under intense pumping [55].
The radiative characteristics and spectroscopic properties of Yb3+/Tm3+-codoped bismuth germanate glasses with different concentrations of Yb2O3 were thoroughly investigated under excitation with a conventional 980 nm laser diode [56]. The efficient sensitization of Tm3+ ions by Yb3+ ions was proved by the large energy transfer coefficient (4.81 × 10⁻⁴⁰ cm⁶/s) and high energy transfer efficiency (89%) from Yb3+ to Tm3+ ions. Moreover, a noticeable peak emission cross-section value of 7.66 × 10⁻²¹ cm² was calculated based on the emission spectrum.
It is worthwhile noting, however, that the intense upconversion emissions at 480 and 800 nm generated by the strong excited state absorption in Tm3+/Yb3+-codoped glasses [57] and the lack of flexible pump sources make the sensitization of Tm3+ ions with Yb3+ ions not always advantageous. To overcome these drawbacks, transition metal (TM) Cr3+ ions have been successfully employed as a sensitizer thanks to their two intense and broad absorption bands extending from the ultraviolet to the near-infrared range, which offer a variety of selective pump wavelengths. An enhanced 1.8 μm emission band of Tm3+:3F4 → 3H6 over an extremely extended excitation band of 380-900 nm was obtained in fluorogermanate glasses through strong sensitization by Cr3+ when pumped with an 808 nm laser diode [58]. An energy transfer efficiency from Cr3+ to Tm3+ as high as 91.10% was calculated based on experimental data, thus proving that these Cr3+/Tm3+-codoped fluorogermanate glasses are promising matrices for applications in near-infrared eye-safe fiber lasers and amplifiers.
Besides Tm3+, other promising RE ions for the fabrication of high-power and efficient laser sources in the wavelength region around 2 μm are Ho3+ ions. The energy levels of the two ions are reported in Figure 3. The emission cross-section of Ho3+ is about five times higher than that of Tm3+ and, in addition, the fluorescence lifetime of Ho3+ is promising in view of developing Q-switched lasers [59]. Unlike Tm3+, Ho3+ cannot be pumped directly using the common commercially available laser diodes operating at 808 or 980 nm, for the lack of a suitable absorption band. One intriguing approach to overcome this issue consists in the sensitization of Ho3+ ions with other RE ions exhibiting strong absorption bands near the wavelengths of existing commercial laser diodes, such as Yb3+, which displays strong absorption near 980 nm.
The mid-infrared emission properties at around 2.85 μm of Ho3+/Yb3+-codoped germanate glasses, characterized by a noticeably low OH⁻ absorption coefficient of 0.24 cm⁻¹ and also by a low phonon energy of 790 cm⁻¹, were reported [61]. The glasses exhibited a large spontaneous transition probability of 36.66 s⁻¹, corresponding to the Ho3+:5I6 → 5I7 transition, and a broad 2.85 μm fluorescence. Moreover, a peak emission cross-section of 9.2 × 10⁻²¹ cm² and a predicted maximum gain per unit length at 2.85 μm of 4.3 dB/cm were achieved.
Another interesting research work thoroughly investigated the 2.05 μm emission of Ho3+:5I7 → 5I8 and the energy transfer mechanisms of Ho3+ sensitized by Tm3+ and Er3+ in novel Ho2O3, Tm2O3 and Er2O3 triply doped germanate glasses [62]. The maximum value of the emission cross-section of Ho3+ at around 2.05 μm proved to be 8.003 × 10⁻²¹ cm², and a noticeable enhancement of the 2.05 μm emission of Ho3+:5I7 → 5I8 was observed when adding the proper amounts of Er2O3 and Tm2O3 under excitation at 808 nm. The maximum Ho3+ 2.05 μm emission intensity was obtained at concentrations of Ho2O3, Tm2O3 and Er2O3 equal to 1, 1 and 2 mol%, respectively.
Among the different RE ions able to efficiently emit in the mid-IR wavelength region, a prominent role is played by Er3+, which is an ideal luminescent center for the 2.7 μm emission corresponding to the 4I11/2 → 4I13/2 transition and can be directly pumped using the commercially available and low-cost 808 or 980 nm laser diodes.
Tellurite glasses
TeO2 cannot form a noncrystalline solid even if quenched rapidly, since the compound does not obey Zachariasen's rules for glass forming [65]. Stable glasses are obtained when a modifier ion is added, such as BaO, ZnO or Na2O, the first discovery of glass formation dating back to Berzelius in 1834 [19]. Tellurite glasses have been studied and developed mainly for photonic applications: they offer an interesting alternative to silica mainly because of their high refractive index, good chemical stability and the lowest phonon energy among oxide glasses [12]. They were initially investigated for their potential use as optical amplifiers in the third telecom window. They represent a valid alternative to fluoride glasses as host materials for Tm3+ ions operating at the wavelength of 1.47 μm, as part of the thulium-doped fiber amplifier (TDFA). Indeed, tellurite glasses display a wider bandwidth, better depopulation of the 3F4 level and higher absorption and emission cross-sections, which increase the efficiency of the amplification [66]. Another interesting feature of tellurite glasses, unique among oxide glasses, is their high refractive index, which opens perspectives for the use of these materials for supercontinuum generation in the mid-IR wavelength region [67]. Faraday angle rotation using both passive and RE-doped tellurite glasses has also been investigated [68].
Structure
Tellurium oxide-based glasses are structured as a predominance of TeO4 units in a trigonal bipyramid arrangement (4-fold coordinated Te, sp3d hybridization), which, with the addition of modifier ions, change into TeO3 trigonal pyramids (3-fold coordinated Te, sp3 hybridization). An intermediate TeO3+1 polyhedron structure was also detected using several types of spectroscopic techniques [69,70].
In tellurite glasses, the TeO4 trigonal bipyramid (tbp) structural units contain two axial oxygen atoms (O_ax) at a distance of 191 pm from the Te atom and two equatorial oxygen atoms (O_eq) at a distance of 209 pm. The angles between O_ax–Te–O_ax and O_eq–Te–O_eq atoms are 169° and 102°, respectively. With the addition of glass modifiers, the Te–O_ax bonds become weaker and longer, which causes the structural change of some TeO4 units into TeO3+1 units and, in a following step, with increasing amounts of glass modifiers, into TeO3 units. The different structural units are reported in Figure 4. Such a process is caused by electron transfer from the glass modifier to the more electronegative (TeO4)^δ− unit (where 0 < δ < 1 is a parameter representing the ionic character of the Te–O bond) [71,72].
Tellurite glasses as laser material
Tellurite glass has been studied for laser emission since 1978, when the first Nd-doped bulk glass laser was demonstrated [73] by exciting it at the Ar-ion laser emission wavelength of 514.5 nm.
Among oxide glass systems, tellurite glasses are a promising host for near-IR and mid-IR lasers thanks to their peculiar properties. The high RE ion solubility (up to ~10²¹ ions/cm³) within their amorphous matrix allows the realization of highly compact devices. Moreover, tellurite glasses display the lowest phonon energy among all oxide glasses (in the range of 650-800 cm⁻¹ depending on the composition), which allows transmission further into the infrared (up to ~5 μm), and a high refractive index (~2.0), which means high absorption and emission cross-sections [74,75]. Tellurite glasses are also more chemically, environmentally and thermally stable than nonoxide glasses, making them an attractive option for reliable fiber device manufacturing [12].
A drawback of tellurite glasses, like most oxide glasses fabricated from solid-state precursor materials, is the presence of hydroxyl ions (OH⁻), which absorb in the mid-IR wavelength region. These ions can decrease the fluorescence intensity and ultimately lead to deterioration of the laser performance and even inhibit the laser output [75]. In the tellurite glass system, significant results of RE ion emission in the mid-IR region have been reported for Tm3+, Ho3+ and Er3+ [76][77][78][79][80][81][82]. Thulium (Tm) is an ideal choice for the realization of glass lasers in the ~2 μm wavelength range, since it displays one of the broadest fluorescence bands among RE ions [83], due to the Tm3+:3F4 → 3H6 transition centered at around 1.8 μm. Tm3+ has also the advantage of having an absorption band conveniently located at around 800 nm, which coincides with the output of low-cost and high-power commercial laser diodes. This pumping scheme allows obtaining two ions in the upper laser level for each pump photon, thanks to a phonon-assisted cross-relaxation process that can potentially result in laser action with 200% quantum efficiency [84,85]. It is also possible to pump Tm3+ directly into the upper laser level 3F4, which has the potential benefit of a low quantum defect, but has the disadvantage of the lack of convenient low-cost and high-power sources operating in this wavelength range. Possible options that have been demonstrated are an Er3+/Yb3+-codoped tellurite fiber laser [84] and low-power semiconductor laser diodes [86].
Spectroscopic properties of Tm3+-doped TZN (80 TeO2-10 ZnO-10 Na2O) and TZNG (75 TeO2-10 ZnO-10 Na2O-5 GeO2) glasses were reported and studied in [76]. The measured full width at half maximum (FWHM) of the Tm3+:3F4 fluorescence band in TZN glass was 200 nm, compared to 125 nm reported in ZBLAN [87] and 150 nm in silica [88], thus resulting in an enhanced tuning range obtainable in a tellurite glass host. In this work, lasing was also demonstrated in TZNG bulk glass pumped at 793 nm by a Ti:sapphire laser, with a maximum output power of 124 mW and a slope efficiency of 28% with respect to the absorbed pump power. In order to enhance the quantum yield of the Tm3+ 1.8 μm emission, codoping with Yb3+ was proposed [89][90][91] due to its efficient absorption at 980 nm, which is readily available from InGaAs laser diodes. Moreover, the simple energy level structure of the ytterbium ion offers an additional benefit by avoiding undesirable excited state absorption (ESA). In [89], an efficient energy transfer between Yb3+ and Tm3+ was demonstrated, a transfer that increases along with the Tm3+ doping concentration. This study was, however, limited to low doping concentrations, while in [91] an investigation of the effect of Yb3+ codoping on the Tm3+ ion spectroscopic properties for Yb3+ contents higher than 2 mol% was conducted with the aim of identifying a good candidate active material for short-cavity fiber lasers. This work showed that the Yb quenching concentration is of the order of 13 mol% and far larger than the Tm quenching concentration, thus allowing a Yb:Tm ratio up to 3:1 even for very high Tm concentrations.
The main shortcoming of the widely used TZN tellurite glass as a laser material is its low thermal stability, which makes it less durable given the large amount of heat generated in a laser. To alleviate this problem, a novel Tm3+-doped tungsten tellurite glass composition was developed, with a 50% higher Tg and a 36% lower coefficient of thermal expansion (CTE) [92]. The glass was used to demonstrate laser emission in fiber form under excitation with a commercial laser diode operating at 803 nm, although quite a limited slope efficiency (20% with respect to the absorbed pump power) was achieved.
Besides thulium, another RE capable of generating ~2 μm laser emission is holmium (Ho), thanks to the Ho3+:5I7 → 5I8 transition. Considering the larger emission cross-section and longer lifetime of the lasing state, Ho3+ is suitable for ~2 μm lasers, particularly for reducing the laser threshold [93]. However, one of the major shortcomings of Ho3+ is the lack of ground state absorption transitions that overlap with convenient high-power pump sources, so codoping with another RE with a strong absorption band at around 800 or 980 nm, such as Yb3+ or Tm3+, is commonly used [76,79,94].
In [79], the 2.0 μm emission characteristics of Ho3+ ions, both by direct excitation and by sensitized excitation through energy transfer from Yb3+ ions in a codoped barium-tellurite glass, are detailed. A fluorescence band of 160 nm and an emission cross-section of 1.45 × 10⁻²⁰ cm² were reported. These values are higher than those previously published for other glass systems. The emission intensity of Ho3+:5I7 → 5I8 was measured to be 8 times higher under excitation at 980 nm through the 2F5/2 energy level of the Yb3+ ion than under direct excitation of Ho3+. This is due to the high absorption cross-section of the Yb3+ ion alongside the highly efficient (86%) sensitized energy transfer from Yb3+:2F5/2 to Ho3+:5I6.
Holmium also presents another interesting mid-IR emission at 2.9 μm, from the Ho3+:5I6 → 5I7 transition. An extensive investigation of this holmium emitting level in a TZGB glass (74.5 TeO2-12.2 ZnF2-6.4 GeO2-4.2 Bi2O3) was conducted in [78]. The results indicate that the main issues with this glass are water incorporation and the low luminescence efficiency of the 5I6 level (8%). The reported numerical simulations indicated that the prospect for continuous wave (CW) operation on the 5I6 → 5I7 transition in Ho3+-doped tellurite glasses is low.
In [95], the 2.9 μm emission from a Yb3+/Ho3+-codoped tellurite glass (80 TeO2-15 (BaF2+BaO)-3 La2O3) is investigated. The FWHM of the emission was 180 nm and the peak emission cross-section was 9.1 × 10⁻²¹ cm², comparable to other hosts and even better than ZBLAN fluoride glass. The emission intensity increased manyfold upon Yb3+ excitation at 985 nm compared to direct Ho3+ ion excitation, thanks to the high absorption of ytterbium at the pump wavelength followed by the resonant energy transfer from Yb3+ to Ho3+ ions.
Concerning the erbium ion, it is an ideal choice for emission in the mid-IR wavelength range, thanks to its fluorescence at 2.7 μm corresponding to the Er3+:4I11/2 → 4I13/2 transition and the possibility of using 808 or 980 nm laser diodes as the pumping source.
The potential of erbium-doped tellurite glass for the realization of compact laser devices at this wavelength was extensively investigated in [82]. In this work, it is shown that the presence of OH⁻ groups in current state-of-the-art Er3+-doped tellurite glass is high enough to suppress the 3 μm emission in the glass, due to a large energy transfer from the excited state to the OH⁻ radical. Moreover, it was calculated from numerical simulations that, in the absence of OH⁻ impurities, the pumping intensity required for population inversion in an Er3+-doped tellurite CW fiber laser pumped at 976 nm is ~80 kW/cm² for Er2O3 concentrations ≥ 2.65 mol%. It was also established that a pump ESA process at 976 nm would have a detrimental impact on the performance of the fiber laser.
More recently, a barium tellurite glass host was proposed for obtaining 2.7 μm emission from erbium [80]. This glass possesses better thermal properties compared to the typical TZN glass and a lower OH⁻ content, thanks to the addition of BaF2. An optical fiber was prepared using the developed glass, and 2.7 μm fluorescence was measured upon excitation with a 980 nm laser diode. The feasibility of an Er3+-doped tellurite fiber laser operating at 2.7 μm based on this novel glass was also theoretically investigated, showing that the barium tellurite fiber is a promising candidate for the development of efficient mid-infrared fiber lasers [80].
Optical fibers
The next paragraphs present the very basic concepts of an optical fiber that are relevant to understanding the technological challenges behind the manufacturing of multicomponent oxide glass fibers.
A complete description of the concepts and working principles of optical fibers is beyond the scope of this chapter. For a detailed description, the reader can refer to the excellent textbooks given in [96,97].
The typical configuration of an optical fiber is shown in Figure 5. It consists of a core made of a glass with a refractive index value n_core surrounded by a cladding glass layer of refractive index n_cladding. Although this is not always implemented at the academic level, a thin polymer coating (polyamide- or acrylate-type polymer) should be applied during the drawing process to mechanically strengthen the fiber and to protect it from long-term moisture degradation or other possible sources of chemical contamination.
Electromagnetic radiation is confined in the core provided that the refractive index values of the core and cladding glasses meet the condition n_core > n_cladding.
Under this condition, at least one of the so-called electromagnetic or optical modes can be confined and propagate down the optical fiber core. To a first approximation, the modes can be understood as a set of constructive interference patterns along the fiber. For illustration purposes, the intensity profiles of the electromagnetic fields of a few modes are shown in Figure 6.
The number of propagating modes depends on the core dimension and on the refractive index difference between the core and the cladding glasses.
The normalized frequency parameter, V, for a step-index optical fiber is given by V = (2πa/λ)·√(n_core² − n_cladding²), where λ is the wavelength in vacuum and a is the radius of the fiber core. If V < 2.405, the optical fiber can support only one propagating mode in the core. If any power is launched into the other modes at the fiber input, it will leak into the cladding material.
For numerous applications, and in particular for the development of optical coherent sources, single-mode operation is highly desirable. The spatial and temporal properties of the propagating beam in a single-mode fiber can be managed with better control, making this fiber configuration more suitable for the development of high-performance sources.
Typically, a difference as small as 10⁻³ between the refractive index values of the core and cladding glasses can be achieved. According to the expression for V above, such a refractive index difference implies that, to maintain single-mode operation, the core diameter must lie below approximately 15 and 30 μm for wavelengths of 1 and 2 μm, respectively.
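The following sketch (our own check, with assumed indices n_core = 1.551 and n_cladding = 1.550, i.e., Δn = 10⁻³) computes the largest single-mode core diameter from the V-number expression above and recovers the quoted limits:

```python
import math

V_CUTOFF = 2.405  # single-mode cutoff for a step-index fiber

def v_number(core_diam_um, wavelength_um, n_core, n_clad):
    """Normalized frequency V = (2*pi*a/lambda) * sqrt(n_core^2 - n_clad^2)."""
    a = core_diam_um / 2.0
    na = math.sqrt(n_core**2 - n_clad**2)  # numerical aperture
    return 2.0 * math.pi * a / wavelength_um * na

def max_single_mode_diam_um(wavelength_um, n_core, n_clad):
    """Largest core diameter that keeps V below the single-mode cutoff."""
    na = math.sqrt(n_core**2 - n_clad**2)
    return 2.0 * V_CUTOFF * wavelength_um / (2.0 * math.pi * na)

for lam in (1.0, 2.0):  # wavelengths in micrometers
    d = max_single_mode_diam_um(lam, 1.551, 1.550)
    print(f"lambda = {lam} um -> max single-mode core diameter ~ {d:.1f} um")
# -> about 14 um at 1 um and 27 um at 2 um, consistent with the limits above
```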
Double-cladding structure for high-power fiber lasers and amplifiers
The double-cladding strategy was developed to exploit the high pump power available from laser diodes [98]. The structure of the fiber allows launching high pump power into the first cladding surrounding the core, as shown in Figure 7. The pump power is confined within the first cladding thanks to the second cladding. Along the fiber length, the pump radiation interacts numerous times with the core glass material. At each interaction, the RE ions contained in the core absorb part of the pump power. The excited RE ions subsequently reemit part of the absorbed power by stimulated emission. The reemission being confined within the core, the double-cladding structure essentially converts low-brightness laser diode power into a high-brightness fiber laser output.
Preform fiber drawing technique: process and main parameters
The drawing of a soft oxide glass fiber directly from the molten state has been reported [99]; however, the versatility of this approach remains very limited, as it requires substantial modification of the drawing facilities in order to change the fiber core/cladding ratio and diameter. Most importantly, diameter and structural control of the fiber is difficult to achieve, while glass crystallization often occurs at the edges of the crucible walls, impairing both the optical transmission of the fiber, due to the presence of scattering crystals, and its mechanical robustness.
Actually, as for the advanced silica glass fiber technology, the most widely employed technique for drawing multicomponent glass fibers is preform drawing [100]. In this approach, the so-called preform, which is a "macroscopic" version of the fiber, is first manufactured using one of the procedures described in Section 2.1. For multicomponent oxide glass fibers, the typical dimensions of the preform range from 10 to 20 mm in diameter and from a few cm to 20 cm in length.
The preform is then placed in a drawing tower, where it is heated up until the glass reaches a viscosity of about 10⁵ Pa·s. A schematic illustration of a drawing tower is shown in Figure 8.
Under the combined effect of gravity and surface tension forces, the softened part of the preform drops down and thins into a fiber, which is then pulled either using a capstan or attached directly onto a rotating drum at the bottom of the tower. The control of the preform dimensions at the mm scale and the relatively high tensions of the drawing process, typically 0.1-1 N, allow for a very precise control of the final fiber dimensions and geometry.
The control of the fiber diameter is achieved by tuning the speed at which the preform is fed into the furnace and the speed at which the fiber is drawn from the preform. For an incompressible liquid, mass conservation considerations lead to the following equation for the fiber diameter d_fiber: d_fiber = d_preform · √(v_preform / v_fiber), where d_preform is the preform diameter and v_preform and v_fiber are, respectively, the preform feed speed and the pulling speed.
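A worked example of this mass-conservation relation, with illustrative (not prescriptive) draw parameters:

```python
import math

def fiber_diameter_um(d_preform_mm, v_preform, v_fiber):
    """d_fiber = d_preform * sqrt(v_preform / v_fiber), for an incompressible melt.
    Feed and pull speeds must be expressed in the same unit (e.g., mm/min)."""
    return d_preform_mm * 1000.0 * math.sqrt(v_preform / v_fiber)

# Illustrative draw: a 15 mm preform fed at 1 mm/min and pulled at 10 m/min
print(fiber_diameter_um(15.0, 1.0, 10_000.0))  # -> 150.0 um
```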
Drawing tower facilities
Commercial towers for soft glasses can be acquired from several specialized manufacturers; however, a cost-effective drawing tower can be developed in-house and leads to similar results in terms of fiber diameter fluctuations, which are typically of ± 1 μm over a few tens of meters of fiber. For multicomponent oxide glasses, the main source of fluctuations/contaminations in the final fiber arises at the production stage of the preform, not during the drawing process itself.
The fiber drawing process of a multicomponent glass preform is carried out at a typical speed ranging from a few m/min to 30 m/min at most. This is in contrast to the very high drawing speed used to produce telecom silica glass optical fiber, which reaches up to 20 m/s [100].
Because of this slow drawing speed, automated diameter adjustment through diameter monitor feedback can be rather inefficient, especially in the academic field, where very often one fiber differs from the next in terms of its glass composition or structure. As such, the furnace configuration and the procedure for feeding the preform into the furnace become crucial to ensure diameter stability during the drawing process. Besides obvious parameters such as the temperature stability of the furnace, ensuring a steady laminar gas flow around the preform during the drawing procedure is key. The choice of the gas used (N2/O2, Ar or N2) depends strongly on the glass composition. A low H2O content in the gas is, however, necessary to avoid any detrimental effects, optical or mechanical, on the drawn fiber.
The furnace can be based either on resistive elements or on an induction head where the susceptor consists of a simple metal or graphite ring. The latter approach offers the possibility to tailor the hot zone easily and cost-effectively by simply changing the susceptors.
Rod-in-tube
In the rod-in-tube technique, the preform consists, in its simplest form, of a rod of core glass inserted into a cladding glass tube. When heated inside the drawing tower furnace, the cladding tube collapses around the core rod under the effect of gravity and surface tension forces. The two glass materials are then drawn together as a single concentrically structured fiber. For the process to take place in a controlled manner and to avoid excessive residual stress within the fiber, several important material aspects need to be taken into consideration. The two glasses must match in terms of thermomechanical properties: the glass working temperatures, glass transition temperatures and thermal expansion values of the two glasses should match as closely as possible. In practice, these constraints imply that the two glasses have similar compositions, which in turn limits the upper range of the achievable refractive index difference.
It is also preferable that the tube inner diameter and the rod diameter match each other closely, to avoid structural deformation of the core or trapped air at the interface between the two glasses. The latter issue can be addressed by applying vacuum to the top part of the preform.
As illustrated in Figure 9, to achieve a small core diameter or to manufacture a double-cladding structure, the preform preparation includes an intermediate step in which a core/cladding preform is thinned down into a cane, which is then inserted into another cladding tube to form a new preform. This process can be repeated several times, depending on the thermal stability against crystallization of the glass compositions involved. This cane drawing process is carried out in the drawing tower, but at higher viscosity and under higher tension than the fiber drawing process.
Core glass rod manufacture
The core glass rod is obtained from a single glass bulk cast into either a cylindrical- or rectangular-shaped mold. In the latter case, the bulk can then be machined into a cylindrical rod of the desired dimension. In both cases, the core glass rod needs to be polished, preferably using a nonaqueous cooling liquid, so as not to impair the optical transmission of the fiber. The cladding tube can be manufactured through different techniques. Drilling is carried out either using an ultrasonic drilling setup and/or using specialized diamond drilling bits. This approach allows tubes to be machined reliably and with great precision, making possible a precise control of the fiber dimensions through the drawing process. In addition, the glass does not go through a heating cycle, which could favor its crystallization tendency.
There are, however, a number of drawbacks. Because of the brittleness of the glass, drilling is a slow and therefore time-consuming process. Drilling small-diameter holes cannot be achieved over long lengths due to the mechanical flexibility of the drill bit itself. Thin-walled tubes are also difficult to manufacture. Adding to the processing time, following drilling the tube must undergo an additional polishing process, not only to smooth the wall roughness but also to clean the walls from loose glass particles that can be prone to crystallization during the drawing process.
Extrusion
An overview of the overall procedure and equipment of the extrusion process is given in [101, 102]. In the extrusion process, a glass bulk, typically 30 mm in diameter and 30-50 mm high, is loaded into a furnace apparatus open at the top and bottom. A scheme of the process is reported in Figure 10. The glass is heated up to a temperature corresponding to a viscosity of 10^8 Pa·s.
On the top part, a press ram applies a force on the glass bulk. The softened glass exits through the bottom part of the apparatus, which consists of a funnel-shaped die in which a spider setup allows a pin to be held in the center of the die.
Typically, the pressure applied ranges from 1 to 6 GPa. For multicomponent oxide glasses, the dies can be manufactured out of standard stainless-steel material, although less reactive (more stable and inert) metals such as Inconel are sometimes preferable, depending on the temperatures and glass compositions involved. The die surface finish plays an important role in the surface quality of the extruded glass tube itself. Indeed, it is possible to extrude tubes with a very high-quality surface finish, also because the process is carried out in a viscosity range where surface tension is still effective.
Some swelling effect can occur, which tends to distort the preform and modify its dimensions with respect to the die dimensions. However, this effect can be limited through pertinent die design and choice of the operating temperature. Compared to the drilling technique, the main disadvantage of extrusion is that the glass goes through an extra heating cycle above the glass transition temperature (T g ), which can promote glass nucleation and crystallization. Nevertheless, the negative effect of this cycle is limited by the fact that the viscosity range considered is substantially high.
Rotational casting
Rotational casting [103] is carried out by casting the molten glass into a cylindrical mold (Figure 11), which is then tilted horizontally and rotated at a speed ranging typically from 1000 to 2000 rpm while the glass inside the mold is still liquid. As the liquid cools down, it forms a glass tube inside the mold, which is then loaded into a furnace for glass annealing. Despite being a "manual craft" operation, if processed under the same conditions, the tube inner diameter value is commonly reproducible within ±5%. The typical roughness value of the inner tube surface is below 10 nm. Such a pristine surface is particularly suitable for the development of optical fibers.
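The inner diameter obtained for a given charge of glass follows from simple volume conservation in the mold. The short sketch below illustrates this; the mold size, glass mass, and density are assumed values for illustration, not data from the text.

```python
import math

def inner_radius_mm(mold_radius_mm: float, mold_length_mm: float,
                    glass_mass_g: float, density_g_cm3: float) -> float:
    """Inner radius of a rotationally cast tube from volume conservation.

    The glass spreads as a shell of outer radius R (the mold radius), so
    pi * (R^2 - r^2) * L = m / rho; this is solved here for r.
    """
    volume_mm3 = glass_mass_g / density_g_cm3 * 1000.0  # cm^3 -> mm^3
    r_squared = mold_radius_mm**2 - volume_mm3 / (math.pi * mold_length_mm)
    if r_squared <= 0:
        raise ValueError("too much glass for this mold: no bore remains")
    return math.sqrt(r_squared)

# Example: an 8 mm mold radius, 100 mm length, and 50 g of glass at
# 5.5 g/cm^3 leave a bore of ~5.9 mm radius.
print(f"{inner_radius_mm(8.0, 100.0, 50.0, 5.5):.1f} mm")
```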
The rotational casting process takes place in a matter of seconds, making it a very fast production technique compared to the two approaches described above. The main limitation of the rotational casting technique concerns the range of achievable inner tube diameters: uniform tubes with small or large inner diameters can be difficult to achieve in a reproducible manner. Also, the technique is mostly restricted to glass compositions that display a low viscosity once molten. Silicate glasses, for instance, are impractical for the rotational casting technique.
Built-in-casting and suction casting approaches
The built-in-casting and suction casting techniques [104, 105] were developed to avoid some of the issues inherent to the rod-in-tube technique, the main purpose being to manufacture a single-unit core/cladding structured preform. Both techniques involve casting the core material in a liquid state inside a cladding tube, for the former approach, or on top of the cooling cladding glass, for the latter approach.
These techniques can provide substantially low-loss optical fibers and have the advantage of requiring little processing time. However, control over the dimensions and shape of the core is rather limited, resulting in very low reproducibility. Some degree of diffusion also occurs at the interface between the two glasses. Because of these features, these techniques are used only at an academic level.
Figure 1. Types of tetrahedral sites occurring in phosphate glasses depending on their composition.

Figure 2. Energy level diagrams of Yb 3+ and Er 3+ ions. The main pumping mechanism of the sensitizer-activator scheme is also reported.

Figure 3. Energy levels of Tm 3+ and Ho 3+ ions of interest for the emission in the mid-IR wavelength region.

Figure 5. (a) Scheme of a typical optical fiber and (b) cross-section illustration of a typical optical fiber structure.

Figure 6. Spatial distribution of the electromagnetic field amplitude of a few optical modes in a low numerical aperture multimode fiber: (a) LP 01 mode or "fundamental mode," (b) LP 02 mode, and (c) LP 03 mode.

Figure 7. (a) Cross-section image and refractive index profile of a double-cladding fiber for high-power amplifiers and lasers; (b) illustration of the concept of the double-cladding structure for a high-power amplifier. Pump laser beam in green, input signal and output amplified signal in blue.

Figure 8. (a) Schematic illustration of a fiber drawing tower, which implements the preform drawing approach; (b) photograph of an in-house developed drawing tower installed at Politecnico di Torino.

Figure 9. (a) Implementation of an optical fiber preform using the rod-in-tube technique. Core glass rod in blue, cladding glass tube in red; (b) glass cane with a core/cladding structure obtained by drawing the preform shown in (a); (c) implementation of a small core/cladding diameter ratio preform by inserting the rod shown in (b) into an additional cladding glass tube.

Figure 10. Schematic of the extrusion process for manufacturing glass tubes. A glass billet (in red) is heated up until the glass reaches a viscosity of typically 10^8 Pa·s. A high pressure is then applied onto the top billet surface through a mandrel. The softened glass is then slowly pushed out through the bottom orifice of the die. The orifice arrangement, with a pin in its center, allows for producing a glass tube.

Figure 11. Schematic of the rotational casting procedure for manufacturing glass tubes: (a) the molten glass is cast into a mold held in a vertical position; (b) the mold is tilted to the horizontal direction and then rotated at high speed. A glass tube forms along the mold internal walls.
Table 1. Main properties of nonsilica oxide glass systems.

…metal ions and water. This led to the development of chemical vapor deposition methods: these synthesis routes utilize chemical precursors in the vapor phase as starting reagents, which are transformed into the final oxide components by reaction with oxygen at very high temperatures. These techniques allowed the fabrication of optical fibers capable of providing <0.2 dB/km ultra-low loss in the third telecom window. Chemical precursors involved in the synthesis of multicomponent nonsilica oxide glasses have very distinct vapor pressures, making high-purity vapor-based fabrication techniques unsuitable for this type of glass composition. Instead, traditional glass melting techniques must be implemented. Chemical precursors are weighed and batched into a crucible typically made of alumina, silica, or a noble metal such as Pt or Au. The glass batch is then melted in a high-temperature furnace for a few hours under a controlled atmosphere. Typical melting temperatures for the glasses under consideration in this chapter are 800, 1200, and 1300°C for tellurite, germanate, and phosphate glasses, respectively. | 2018-12-30T10:05:14.168Z | 2018-06-06T00:00:00.000 | {
"year": 2018,
"sha1": "099a9e5f08049b8bb4ad26e90209bb9a0b907115",
"oa_license": "CCBY",
"oa_url": "https://api.intechopen.com/chapter/pdf-download/59886.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "50558e467ca0a2aa9272d45f6b6e5faf6f4bfb9c",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
254098114 | pes2o/s2orc | v3-fos-license | Strategies for mercury speciation with single and multi-element approaches by HPLC-ICP-MS
Mercury (Hg) and its compounds are highly toxic for humans and ecosystems, and their chemical forms determine their behavior and transport as well as their potential toxicity for human beings. Determining the various species of an element is therefore more informative than knowing only its overall concentration in samples. For this reason, several studies focus on the development of new analytical techniques for the identification, characterization, and quantification of Hg compounds. Commercially available hyphenated technology, such as HPLC-ICP-MS, supports the rapid growth of speciation analysis. This review aims to summarize and critically examine different approaches for the quantification of mercury species in different samples using HPLC-ICP-MS. The steps preceding the quantification of the analyte, namely sampling and pretreatment, will also be addressed. The scenarios evaluated comprise single and multi-element speciation analysis, to create a complete guide to mercury content quantification.
Introduction
As is well known, mercury is regarded as a persistent and toxic element that impacts human and ecosystem health. The interaction between the metal and the environment starts when Hg species are mobilized from natural deposits into the biosphere. Once mobilized, Hg can be transported through air and water and, after undergoing various reactions, it can change its speciation, biomagnify, and bioaccumulate in food chains. Mercury species, especially organomercury compounds, can easily accumulate in fat tissues due to their lipophilic properties (Sundseth et al., 2017). Many studies show that mercury exposure may result in an important risk of cardiovascular disease, and it can have a negative impact on the reproductive and immune systems; in particular, the majority of Hg present in living organisms is in the form of methylmercury (CH 3 Hg + , abbreviated as MeHg hereafter), which is a developmental neurotoxin with fetal neurotoxicity. The accumulation of literature data led to the development of greater awareness and the creation of new public health policies. The most common sources of mercury exposure are contaminated water and food, especially the tissues of animals at the top of the food chain, which are subject to bioaccumulation phenomena. In addition, in some developing countries where artisanal and small-scale gold mining (ASGM) is carried out, the inhabitants can inhale mercury vapors produced by the extraction process. High levels of mercury exposure are not restricted to certain populations but occur worldwide. However, the total mercury content in the environmental compartments is low (Mergler et al., 2007; Gibb and O'Leary, 2014; Sundseth et al., 2017). Furthermore, element speciation is key to comprehending an analyte's potential toxicity, mobility, and bioavailability; this kind of study provides information not obtainable from total mercury quantification (Templeton and Fujishiro, 2017). For these reasons, it is essential to improve and develop new analytical strategies for determining different mercury species with high sensitivity and good accuracy. Most approaches rely on hyphenated techniques, coupling the separation of metal species by High-Performance Liquid Chromatography (HPLC) with their sensitive determination by Inductively Coupled Plasma Mass Spectrometry (ICP-MS).
The popularity of HPLC-ICP-MS is due to the ability to determine many elements in a single analytical run, the simple connection of separation and detection systems, and the versatility and sensitivity of the detector.
High-Performance Liquid Chromatography is a separation technique that can quickly separate non-volatile compounds of high and low molecular weight. It can be used with different separation modes, providing high versatility; analytes can be separated depending on their chemical and physical properties, such as polarity, solubility, ionic charge and size, and molecular mass. The most used approach is reverse phase (RP), which involves the separation of molecules based on interactions between the analytes in the polar mobile phase and the lipophilic ligands attached to the stationary phase of the column. After the separation of the compounds, the procedure requires a detector to correctly determine the concentration of each element species. ICP-MS is particularly suitable for this purpose, both for its intrinsic qualities and because it can be easily coupled with this chromatographic technique. It has low detection limits (<ppt levels), good linearity (over up to eight orders of magnitude), and can examine one or more masses at the same time with high resolution. The advantage of using these two techniques in sequence is to combine an effective separation of the species under examination with a sensitive and versatile detector (Ponce de León et al., 2002; Gao et al., 2012).
This review addresses the quantification of mercury species in several matrices by HPLC-ICP-MS. Since the sensitivity and accuracy of the response depend on the pretreatment procedures, these steps will also be discussed. Both speciation and multielement analysis will be considered. Papers published between 2000 and 2022 were reviewed. A bibliographic search on Web of Science was carried out mainly using the keywords "HPLC-ICP-MS", "Mercury speciation", and "Multielement speciation". In the literature, there are many procedures to investigate single-element speciation or total concentration. However, a single-element approach requires a huge investment of time and capital. Switching to a multielement approach makes it possible to decrease the volume and amount of reagents required and to reduce the analysis time. In some cases, it is even possible to develop multielement speciation analysis, optimizing the use of time and resources. However, building a procedure to analyze different elements is challenging, because the pretreatment, determination, and separation conditions must be optimized for all analytes at once (Wolf et al., 2011; Sun et al., 2015). Thanks to this review, it will be possible to select the best working conditions in the pretreatment, separation, and quantification phases according to the purpose of the different methodologies.
Sample pretreatment
Sample pretreatment is a crucial issue in trace Hg speciation. The procedures for total Hg determination or Hg speciation analysis are easily affected by interconversion of species, contamination, and analyte losses, especially if they involve steps of transport and storage of the samples. The affinity of inorganic Hg species for surface adsorption and their high volatility increase the probability of analyte loss. Sources of contamination can be found in storage containers and in the stabilizing reagents used to decrease Hg evaporation or wall adsorption. It is also necessary to pay attention to possible interconversion reactions between Hg species, which depend on the procedures used and the matrix type. For example, part of the Hg(II) may transform into MeHg through extraction, derivatization, or measurement processes (Castillo et al., 2010; Abad et al., 2017). Different reviews, such as the one by Brown et al. (Pandey et al., 2011), paid particular attention to the need for clean sampling procedures and proper sample storage.
Water
Pyrex and Teflon (PTFE or FEP) are the best materials for containers in both the storage and processing phases of water samples, because of their low affinity for metal species. There are several efficient methods for cleaning sampling vials and other equipment that comes into contact with samples, to avoid contamination (e.g., aqua regia, chromic acid, nitric acid, and BrCl) (Bravo et al., 2018). Water samples are filtered with 0.22 or 0.45 μm pore size filters to eliminate particulate material (Amde et al., 2016) and pretreated to stabilize volatile compounds, such as Hg° and dimethylmercury (DMHg), and reduce their loss. If the purpose of the procedure is to determine total mercury, it is sufficient to acidify the sample with HCl or HNO 3 or stabilize it with the addition of an oxidant (BrCl) to preserve the analytes and prevent microbial growth; for MeHg, the samples can be acidified with HCl or stored unpreserved, deep-frozen. Changing the initial conditions of the solution favors interconversion reactions: therefore, to perform a speciation analysis, acids or oxidants cannot be used for stabilization. Instead of stabilizing liquid samples for transport and storage, it is possible to selectively adsorb the analyte on a solid phase (e.g., gold, Carbotrap, Tenax, etc.). These processes can advantageously be performed in the field. In situ preconcentration procedures are a valid alternative to overcome the problems that occur during sample transport and storage and to lower the limits of detection (LODs) (Leermakers et al., 2005).
Air
There are several ways to accumulate gaseous or particulate-bound mercury; among these, each researcher chooses the most accurate, precise, and suitable method for the type of analysis of interest. Usually, a huge volume of air is required to obtain a detectable quantity of Hg, since Hg concentrations are extremely low in this matrix. Therefore, it is useful to adopt preconcentration to get concentrations above the LOD. For these reasons, the first step in sampling gaseous Hg or Hg in particulate matter is usually to draw a measured volume of air through a collection material able to retain the species of interest (see below). The process should be performed with pumps designed for trace-level pollutant sampling. The volume of air sampled must be measured to calculate the concentration per liter or cubic meter of air, for example with a dry test meter (DTM) placed in the vacuum line, between the pump and the sample box. Most of the studies focus on the determination of total gaseous mercury (TGM or Hg°), reactive gaseous mercury (RGM), organic species (such as MeHg and DMHg), or mercury bound to particulate matter (Hgp). These compounds can be collected separately using different materials, as reported in the literature (Lu and Schroeder, 1999; Pandey and Kim, 2008); for example, MeHg and DMHg are retained by Carbotrap (graphitized carbon) or Tenax TA (a polymer of 2,6-diphenylphenol), while Hg° can be trapped by gold amalgamation (Leermakers et al., 2005; Pandey et al., 2011).
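As a minimal sketch of the concentration calculation described above (the collected mass and sampled volume are invented for illustration):

```python
def air_concentration_ng_m3(hg_mass_ng: float, sampled_volume_l: float) -> float:
    """Concentration of Hg in air: collected mass divided by the sampled volume.

    hg_mass_ng       : Hg mass recovered from the trap (ng)
    sampled_volume_l : air volume measured by the dry test meter (L)
    Returns ng/m^3 (1 m^3 = 1000 L).
    """
    return hg_mass_ng / (sampled_volume_l / 1000.0)

# Example: 0.45 ng of Hg collected from 300 L of air -> 1.5 ng/m^3
print(f"{air_concentration_ng_m3(0.45, 300.0):.1f} ng/m^3")
```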
Biota and sediment samples
The majority of Hg present in living organisms is in the form of MeHg, which is easily absorbed from the gastrointestinal tract and has a high biomagnification factor (up to 10^6) in the food chain because of its high liposolubility (Gao et al., 2012). Samples of biota tissue are usually stored at low temperatures, and sometimes lyophilized or sterilized. It is advisable to avoid repeated freezing and unfreezing cycles to prevent MeHg decomposition (Leermakers et al., 2005). Sediment samples are commonly packed in PE bags and transported to the laboratory in a cooler at low temperatures. In the second step, they are usually dried, homogenized, sieved, and stored at 4°C in acid-cleaned high-density PE containers (Amde et al., 2016). Soils can be processed and analyzed fresh or after a lyophilization or drying procedure. The influence that these pretreatments have on the determination of MeHg and other analytes is a subject of debate in the scientific community: some studies do not find differences between fresh and dried (lyophilized) sediments (Muhaya et al., 1998), while others have found lower concentrations in the wet samples than in the dried ones (Muhaya et al., 1998; Leermakers et al., 2005). Further investigation in this field is required. Sediments and biota samples are solid; therefore, the analyte must be extracted from these matrices with adequate recovery. In the literature, a wide variety of combinations of strong oxidizing acids and elevated temperatures and pressures are reported and suggested for the determination of the total Hg content. The main issues in this phase are sample contamination and volatilization and adsorption losses, especially during elevated-temperature and -pressure digestion procedures (Collasiol et al., 2004). The majority of the recommended methods rely on microwave-assisted digestion using closed vessels with acid and/or oxidant solvents (in this case, concentrated sulphuric and nitric acid, and 30% hydrogen peroxide) (Murphy et al., 1996). This approach is characterized by rapid sample preparation and a reduced risk of contamination from the laboratory environment and analyte loss, but the relatively high amount of reagent used can increase the blank values and the detection limits. There are various alternatives, such as ultrasound-assisted leaching. The collapse of gas or vapor bubbles creates areas of high temperature (which increases solubility and diffusivity) and pressure (which promotes penetration and transport) at the interface between an aqueous or organic phase and a solid matrix. This condition, combined with the oxidative energy of radicals created during sonolysis of the solvent (hydroxyl and hydrogen peroxide for water), results in high extractive power with a low reagent volume (Ruiz-Jiménez et al., 2003; Collasiol et al., 2004). At present, considerable research has focused on reagents for extracting mercury species. These works aim to select the most appropriate reagent based on the nature of the investigated samples and to validate the analyses performed (Issaro et al., 2009).
Speciation analysis
Research on the elemental speciation of an analyte is crucial to understanding its toxicity, mobility, and bioavailability in the environment (Templeton and Fujishiro, 2017). Organomercury compounds, especially MeHg, are considered the most toxic forms of mercury since they are lipophilic and, for this reason, easily bioaccumulated and biomagnified in the food chain. Moreover, all the mercury emitted into the environment can go through biogeochemical transformation processes and be converted into MeHg. For these reasons, MeHg is the subject of numerous studies dealing with different matrices and ecosystems. Other organomercury species, such as dimethylmercury (CH 3 -Hg-CH 3 , abbreviated as DMHg hereafter), are considered by only a few studies, due to their different chemical-physical properties and consequent lower impact on ecosystems. In fact, DMHg is a neutral, volatile organomercurial present in marine environments that is not expected to bioaccumulate to levels of concern (Baya et al., 2015; West et al., 2022). Consequently, the different mercury species have different impacts on the health of the biota and the ecosystem. Therefore, monitoring total mercury content in the environment does not provide enough information, and speciation analysis is needed to evaluate mercury's toxicity and health risks. These methods are usually based on hyphenated techniques coupling chromatographic separation with atomic spectrometric detection, such as HPLC-ICP-MS (Gao et al., 2012; Cheng et al., 2018).
Preconcentrating procedures
Highly sensitive detection of potentially toxic elements has always been pursued, due to their low concentration range in environmental and biological samples. Consequently, it is important to study simple, efficient, and economical ways to increase the sensitivity of mercury determination. In the last few years, several preconcentration processes have been proposed; they provide an improvement in sensitivity and can also separate the analytes from the sample matrix to decrease interferences. In Table 1, different procedures are listed: among these, those based on solid phase extraction (SPE) are the most tested. SPE has different benefits, such as elevated recovery and enrichment factors, easy management and automation, low consumption of organic solvents, and high flexibility, which allows numerous stationary phases to be studied with new eluents. For example, in 2019, Jia et al. used zwitterion-functionalized polymer microspheres (ZPMs) as the sorbent phase of SPE, finding high enrichment factors and low limits of detection for Hg(II), MeHg, and ethylmercury (EtHg) (Table 1).
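A minimal sketch of how an enrichment factor for such a preconcentration step is typically quantified; the volumes and recovery below are assumptions for illustration, not values from the cited work.

```python
def enrichment_factor(sample_volume_ml: float, eluate_volume_ml: float,
                      recovery: float) -> float:
    """Enrichment factor of a preconcentration step.

    EF is the ratio of the analyte concentration after and before
    preconcentration, i.e. the volume ratio scaled by the fractional recovery.
    """
    return recovery * sample_volume_ml / eluate_volume_ml

# Example: 100 mL of sample eluted into 1 mL with 95% recovery -> EF = 95
print(enrichment_factor(100.0, 1.0, 0.95))
```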
HPLC separation conditions
There are different types of approaches in liquid chromatography, used for different applications depending on the characteristics of the analytes, such as relative polarity, solubility, and molecular mass. Such approaches include normal- and reversed-phase, reversed-phase ion-pair, micellar, ion-exchange, size exclusion, and chiral LC. However, when LC is hyphenated to ICP-MS, some of them create problems in terms of plasma stability, waste disposal, or safety. For example, using a non-polar mobile phase is challenging in HPLC-ICP-MS, so reversed-phase LC is now the most used method of partition chromatography. Usually, the stationary phases are made of siloxanes bound to hydrophobic substituent groups containing eighteen, eight, or one carbon atom(s): with this separation technique, the analytes are separated according to their different hydrophobicity, and the selectivity of the separation is adjusted by regulating the types and proportions of the components of the mobile phase. Binary, tertiary, or quaternary combinations of solvents may be used to achieve the desired selectivity (Sutton and Caruso, 1999). Other types of liquid chromatography can pose difficulties in combination with ICP-MS as a detector, such as size exclusion chromatography, due to the high salt concentration often used in the mobile phase (Beauchemin, 2020). Table 2 summarizes the chromatographic conditions adopted for the separation of Hg species. Several authors suggest the use of reverse phases with L-cysteine, 2-mercaptoethanol, methanol, and ammonium acetate in various combinations as mobile phases. L-cysteine and 2-mercaptoethanol are commonly used together as thiol ligands for Hg, because the retention time is too long when 2-mercaptoethanol is used alone, and it is not possible to completely separate the mercury species with L-cysteine only. Ammonium acetate is often used as a buffer to control the pH, and the use of methanol leads to an increase in Hg detection sensitivity and a decrease in the retention time for concentrations up to 4-5% v/v. According to several authors, the reason for this effect is that methanol at low concentrations provides appropriate elution strength for the 2-mercaptoethanol or cysteine complexes of the mercury species, because these complexes are quite hydrophobic, while higher concentrations cause a loss of sensitivity due to a reduction in plasma energy (Chen et al., 2009a; de Souza et al., 2010; Rodrigues et al., 2010).
ICP-MS conditions
Hyphenated techniques have the disadvantage that the compatibility of the procedure with all the techniques involved must be considered. For instance, as reported above, the eluent composition can strongly affect ICP-MS efficiency. Usually, the mobile phases chosen for mercury speciation analysis comprise buffers, organic modifiers, ion pairs, or chelating agents. Organic solvents, like methanol, can cause carbon deposition on the sampling and skimmer cones or plasma discharge instability. These problems can be mitigated by optimizing the working conditions; in the study of Chen et al. (2009b), the authors decided to increase the plasma forward power, cool the spray chamber, and add an optional gas flow containing oxygen to mitigate the methanol effect. They obtained a rise in the signal-to-noise ratio for mercury, with a maximum value at an optional gas flow rate of 0.3 L min−1. In addition, even after running a mobile phase (35% methanol and 40% acetonitrile) for 8 h in the HPLC-ICP-MS system, carbon deposition on the sampling cone could hardly be observed (Chen et al., 2009c).
The choice of the mobile phase can be affected by the preconcentration technique applied before the chromatographic step. For example, using cloud point extraction (CPE), water-soluble mercury species are converted into water-insoluble chelates through a suitable chelating reagent: to separate these hydrophobic chelates, reverse-phase HPLC with a high organic solvent content must be adopted. Also in this case, Chen et al. (2009b, 2009d) optimized the working conditions. The plasma forward power was set at 1500 W to stabilize the plasma discharge, and the temperature was maintained at -5°C through a Peltier cooling system: this approach made it possible to remove most of the organic solvent from the sample aerosol. As in the previous example, they applied an optional gas flow containing oxygen to reduce carbon deposition on the sampling cone: the signal-to-noise ratio of mercury reached its maximum at an optional gas flow rate of 0.3 L min−1 (Chen et al., 2009b).
Multi-element speciation analysis
Although most procedures allow for speciation analysis of a single element, this approach is not practical. Developing a separate method for each element and carrying out a large number of analyses to determine the speciation of several analytes in a sample requires a large investment of time and capital (Sun et al., 2015). Instead, multi-element speciation procedures drastically decrease the analysis time, the volume of reagents needed, and, therefore, the amount of chemical waste. The multi-element approach seems to be a suitable tool, especially for simultaneously evaluating human exposure to different toxic element species in polluted regions and for examining elements that coexist or interact with each other (Wolf et al., 2011). Several methodologies allowing for multi-elemental speciation analysis have been developed (Marcinkowska and Barałkiewicz, 2016a).
However, elaborating methods for multi-element speciation analysis in a single run is challenging, since it is necessary to simultaneously optimize the pretreatment, separation, and detection conditions for all analytes. It is necessary to adjust the preconcentration and HPLC-ICP-MS conditions, considering the behavior of the various species, to achieve: 1) retention of each analyte on a chromatographic column and its elution in a reasonable time; 2) complete separation of the analytical signals; 3) maintenance of the stability of the different species throughout the whole analytical procedure; 4) elimination of potential interferences; 5) sufficiently low detection limits (Sun et al., 2015).
Preconcentration procedures
As pointed out for single-element speciation analysis, directly evaluating the element content in environmental samples is often challenging due to the complexity of the matrices and/or the low concentration range: for instance, the concentration range for total mercury in unpolluted natural water can be 0.03-90 ng/L, while that of lead is also on the order of several ng/L (Song et al., 2021). In these cases, it is useful to apply preconcentration procedures to increase the concentration level. In the literature, there are some examples of methods that could be used for this purpose; they are the same as those mentioned for single-element preconcentration: for example, cloud point extraction (Jia et al., 2019), ionic liquid extraction (Falish Ramandi and Shemirani, 2015), liquid-liquid micro-extraction (Akramipour et al., 2018), and SPE/SPME (Płotka-Wasylka et al., 2016). Among these sample pretreatment procedures, SPE is one of the most adequate to readily couple with HPLC-ICP-MS, as discussed above.
Many new procedures and materials have recently been tested, as shown in Table 3 (SPE, solid-phase extraction; HF-LPME, hollow fiber membrane extraction). Song et al. (2022) built a method to simultaneously determine Cr, Cd, Hg, and Pb species at the ng/L level by integrating online SPE into HPLC-ICP-MS. They retained the elements on a C 18 column and eluted them with the mobile phase subsequently used for HPLC separation. The procedure reached low LOD values (0.001-0.007 ng/L), satisfactory enrichment factors (827-2,656-fold), and good repeatability (Song et al., 2022).
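For context, LOD figures of this kind are commonly estimated with the 3-sigma criterion from replicate blank measurements and the calibration slope. A minimal sketch with invented numbers follows; this is the general convention, not necessarily the exact procedure of the cited study.

```python
import statistics

def lod_3sigma(blank_signals: list[float], calibration_slope: float) -> float:
    """Limit of detection via the 3-sigma convention.

    LOD = 3 * standard deviation of the blank signal / calibration slope,
    expressed in the concentration units of the calibration (e.g., ng/L).
    """
    return 3.0 * statistics.stdev(blank_signals) / calibration_slope

# Example: ten blank readings (counts) and a slope of 5000 counts per ng/L
blanks = [102.0, 98.0, 101.0, 97.0, 103.0, 99.0, 100.0, 98.0, 102.0, 100.0]
print(f"LOD = {lod_3sigma(blanks, 5000.0):.4f} ng/L")  # ~0.0012 ng/L
```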
HPLC separation conditions
One of the main tasks in obtaining an efficient multi-element speciation procedure is the selection of the separation parameters. The optimization of the chromatography phase is challenging because the conditions for the species of different elements may vary significantly (Marcinkowska et al., 2015). The key is to choose the best stationary and mobile phases, starting from the data in the literature and applying the necessary changes to adapt the procedures to each case study. Zhang et al. (2020) selected the best mobile phase for the separation of As, Hg, and Pb. They report that many studies use alkyl ammonium bases/salts, such as tetrabutylammonium hydroxide (TBAH) and hexadecyl trimethyl ammonium bromide, as mobile phases in an ion-pairing RPC mechanism for the separation of arsenic species. For the speciation of mercury, instead, the mobile phase used often contains thiol compounds (Cys, 2-mercaptoethanol, etc.), while negatively charged alkyl sulfonic acids/salts (sodium 1-pentanesulfonate (SPS), sodium dodecyl sulfate, etc.) are commonly adopted for the separation of cationic lead forms. They then investigated the behavior of each possible mobile phase for each single-element speciation. In the next phase of the investigation, they utilized mobile phases containing mixed solutions of two of the three compounds (TBAH, Cys, and SPS). The best results were obtained with a mixture of TBAH and Cys, which is suitable for the simultaneous separation of all species of the three analytes (Chang et al., 2007; Marcinkowska and Barałkiewicz, 2016b). On the detection side, the introduction of the Dynamic Reaction Cell (DRC) to the market proved to be an effective tool for eliminating spectral interferences, paving the way for the use of quadrupole ICP-MS for multi-element analyses. High-resolution ICP-MS instruments were already capable of reaching this performance, but their high cost and limited robustness prevented their widespread diffusion. As already specified, in multi-element studies by HPLC-ICP-MS, only one set of operating conditions must be chosen for all analytes simultaneously. This is a complex task because each element has its own interfering ions, with different mechanisms of elimination. Furthermore, the effect of this process could impact even elements not suffering from interferences, and it must be considered during the optimization of the DRC. All details concerning the collision/reaction cell working conditions applied in multi-elemental speciation analysis are presented in Table 5 (Marcinkowska and Barałkiewicz, 2016a).
Other approaches to dealing with interferences
Since the ICP-MS detector can monitor multiple isotopes of an analyte, it is possible to quantify the elements by analyzing the isotopes least affected by interferences from polyatomic ions. However, the use of less abundant isotopes leads to a decrease in instrumental sensitivity, with a consequent increase in the detection limit. For example, the chromium isotopes most abundant in nature are 52 Cr, with 83.8%, and 53 Cr, with an abundance of 9.5% (Markiewicz et al., 2015). In the study performed by Mulugeta et al., the less abundant isotope of Cr was used to overcome the interference deriving from the 40 Ar 12 C + polyatomic ion; the corresponding LOD was relatively high but sufficient to enable the quantification of the element in the samples under examination (leachates from cement-based material) (Mulugeta et al., 2010). Alternatively, it is possible to correct spectral interferences with mathematical methods, which are especially suitable for monoisotopic elements such as As, for which it is not possible to analyze a minority isotope. Optimization of the chromatographic separation can also help; for example, Roig-Navarro et al. (2001) noticed that in their work argon chloride did not give interference problems in the determination of arsenic species, because arsenic and chloride were separated chromatographically (Roig-Navarro et al., 2001).
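A minimal sketch of such a mathematical correction, in the spirit of the elemental correction equation commonly used for arsenic at m/z 75 (the coefficients follow the widely used EPA Method 200.8 convention, which should be taken as an assumption here; the count rates are invented):

```python
def as75_corrected(i75: float, i77: float, i82: float) -> float:
    """ArCl correction for arsenic at m/z 75.

    The 40Ar35Cl+ contribution at m/z 75 is estimated from the signal at
    m/z 77 (40Ar37Cl+), scaled by the 35Cl/37Cl abundance ratio (~3.127),
    after first subtracting the 77Se contribution estimated from 82Se.
    """
    return i75 - 3.127 * (i77 - 0.815 * i82)

# Example with invented count rates: a raw m/z 75 signal of 12000 counts,
# part of which is ArCl, leaves roughly 9.3e3 counts attributable to As.
print(f"{as75_corrected(12000.0, 900.0, 60.0):.0f} counts")
```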
Competitive alternatives to HPLC-ICP-MS
Gas chromatography (GC) coupled to mass spectrometry (GC-MS) or to inductively coupled plasma mass spectrometry (GC-ICP-MS) appears to be a hyphenated technique capable of competing with the performance of HPLC-ICP-MS. GC is another high-resolution chromatographic separation technique capable of effectively separating volatile and semi-volatile compounds. Several methods have been developed for the accurate quantification of organomercury species through GC; many of them include a derivatization step to convert these non-volatile compounds into volatile species suitable for analysis by GC with sensitive detectors. Zachariadis et al. (2008) developed a method for the simultaneous determination of MeHg and inorganic Hg in human body fluids using GC-MS. Analytes were derivatized by in situ ethylation with sodium tetraethylborate (NaBEt 4 ) in aqueous solution and extracted with a headspace solid phase microextraction (HS-SPME) procedure. MeHg and inorganic Hg species extracted from spiked human urine, saliva, and serum were separated by GC and detected by MS, obtaining good repeatability and LOQ values (Zachariadis et al., 2008). Lusilao-Makiese et al. (2012), instead, tested a method to investigate the distribution of Hg in coal from South Africa, using a four-step sequential leaching procedure and isotope dilution with GC-ICP-MS. Although the process is quite complex, it makes it possible to determine inorganic Hg and MeHg efficiently; the chromatograms also showed unknown Hg peaks, which were identified as EtHg (Lusilao-Makiese et al., 2012). Simultaneous isotope dilution analyses with GC-MS and GC-ICP-MS on certified bivalve samples were compared by Cavalheiro et al. (2014). Overall, both techniques performed well in measuring inorganic Hg and MeHg concentrations in the CRM, showing excellent linearity and precision. Using GC-ICP-MS, it is possible to obtain better sensitivity, making it possible to work with high solvent volumes (low preconcentration factors). However, it must be considered that GC-MS is less sophisticated than GC-ICP-MS, requiring less qualified personnel to operate the equipment and lower maintenance costs (Cavalheiro et al., 2014). Isotope dilution GC-MS and HPLC-ICP-MS were compared by Wang et al. (2013) for the determination of MeHg in fish samples. The authors did not find any differences between the performances of the two techniques; nevertheless, they highlighted that HPLC-ICP-MS is the most used due to its high sensitivity and lack of need for prior derivatization, i.e., the possibility of separating native Hg species. In addition, isotope dilution GC-MS is more time-consuming, since it requires overnight digestion and signal-matched isotope dilution spiking (Wang et al., 2013).
Conclusion
This review presents an overview of recent analytical strategies for determining different mercury species with high sensitivity and good accuracy using HPLC-ICP-MS. It must be pointed out that the reliability of the results also depends on the pretreatment procedures. Over the years, numerous preconcentration methods have been developed for mercury species and multielement speciation analysis. Among these options, SPE is the simplest preconcentration system to connect with a liquid chromatographic instrument, and the most tested one in the literature because of its several advantages: high recoveries and enrichment factors, ease of management and automation, use of low amounts of organic solvents, and high flexibility, which allows its application in a wide range of research contexts. Its flexibility also helps in choosing the HPLC phases, which is one of the most critical tasks in achieving an efficient procedure.
L-cysteine and 2-mercaptoethanol are commonly used in the mobile phase for mercury speciation analysis, as thiol ligands for Hg, in a mixture with ammonium acetate as a buffer and methanol to increase Hg detection sensitivity and reduce the retention time. Detector conditions are consequently optimized to minimize any problems caused by the components of the mobile phase mixture (such as organic solvents). On the other hand, the conditions used in multielement speciation analysis methods depend on the target elements. The optimal pretreatment and separation conditions may vary significantly depending on the element under examination: for this reason, the method must be adjusted to obtain compromise conditions suitable for all the species involved. In the literature, there are numerous examples with different analytes and applications from which to start optimizing a proper method for specific studies.
In conclusion, it is possible to affirm that the numerous studies on methodologies and the frequent improvement of the available instruments make it possible to obtain fast and effective analysis procedures for mercury speciation. Furthermore, because of the multielement capabilities of ICP-MS, it is possible to extend the single-element speciation procedures to determine the species of multiple elements in a single analytical run, reducing the time and costs needed.
Author contributions
LF: conceptualization, writing-original draft preparation, writing-review and editing. AG and MM: supervision, writing-review and editing. PI, AD: data curation, investigation, writing-review and editing. OA: supervision, methodology, data curation, writing-original draft. | 2022-12-01T14:07:21.251Z | 2022-12-01T00:00:00.000 | {
"year": 2022,
"sha1": "1f228d2398502b00c48f89cd852ae17a06ba70ee",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "1f228d2398502b00c48f89cd852ae17a06ba70ee",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": []
} |
221560051 | pes2o/s2orc | v3-fos-license | A Novel Homozygous Mutation of Thyroid Peroxidase Gene Abolishes a Disulfide Bond Leading to Congenital Hypothyroidism
Congenital hypothyroidism (CH) is the most prevalent congenital endocrine disorder and causes mental retardation. A male Japanese patient whose parents were first cousins was diagnosed with CH at 10 months. He was born before the introduction of mass screening for CH. With continuous thyroid hormone replacement therapy, normal thyroid hormone status was maintained until adulthood. Genetic screening by next-generation sequencing was performed at the age of 52 years, and we identified a new homozygous thyroid peroxidase (TPO) gene mutation (GRCh38.p13, chromosome 2 at position 1493997, c.1964 G>T, p.Cys655Phe). TPO is an important enzyme for producing thyroid hormone. As demonstrated by a homology analysis of TPO proteins among different species, the cysteine 655 residue is highly conserved, suggesting an important role in maintaining TPO function and structure. An in silico study of the three-dimensional structure of the novel mutation suggested that the mutation abolishes the disulfide bond between the cysteines at positions 598 and 655. An in vitro functional analysis using HEK293 cells revealed that the TPO activity of the mutant was significantly impaired compared with that of the wild type. Furthermore, immunohistochemistry showed that the localization of TPO in cells did not differ between the wild type and the mutant. In conclusion, this new homozygous TPO mutation, p.Cys655Phe, which abolishes a single disulfide bond, reduced TPO activity and caused congenital hypothyroidism without affecting the subcellular localization of TPO proteins.
Introduction
Congenital hypothyroidism (CH) is the most prevalent congenital endocrine disorder and one of the preventable causes of mental retardation [1]. The prevalence of CH is approximately 1 in 2,000-4,000 newborns all over the world [2]. In Japan, mass screening for CH was introduced in 1979 and is usually performed in the neonatal period. Genetic screening for CH has been performed for research purposes, and mainly eleven genes are related to CH: TSHR, PAX8, NKX2-1, FOXE1, TG, TPO, SLC5A5, SLC26A4, IYD, DUOX2, and DUOXA2.
Thyroid dysgenesis accounts for 80-85% of cases, while 10 to 20% of CH cases are due to abnormalities in thyroid hormone synthesis [3]. Thyroid peroxidase (TPO) deficiency due to a biallelic TPO mutation is a representative genotype of CH [4]. The inheritance pattern of CH due to TPO mutation is autosomal recessive. Most patients with biallelic TPO mutations exhibit permanent CH.
TPO plays essential roles in thyroid hormone production. Iodide oxidized by TPO attaches to tyrosyl residues in thyroglobulin (Tg) to form monoiodotyrosine and diiodotyrosine, a process also catalyzed by TPO. Next, these iodotyrosyl residues couple, in another TPO-mediated reaction, to form an iodothyronine, triiodothyronine (T3), or thyroxine (T4) [5]. Therefore, individuals with low TPO activity may have insufficient thyroid hormone synthesis.
Here, we report a new homozygous TPO mutation (GRCh38.p13, chromosome 2 at position 1493997, c.1964 G>T, p.Cys655Phe) identified via genetic screening based on next-generation sequencing. To date, approximately 70 TPO mutations have been recorded in the Human Gene Mutation Database (http://www.hgmd.cf.ac.uk/ac/index.php). However, this is an unrecorded, novel mutation. Therefore, we performed conformational prediction and in vitro analyses of the novel TPO mutation in this case.
Patient.
This study was approved by the Tokyo Medical University medical ethics committee (SH2932). Written informed consent was obtained from the proband and his elder brother. A male patient, the fourth child of healthy Japanese consanguineous parents, was born at term after an uneventful pregnancy and delivery. He was born in 1979, before the introduction of mass screening for CH. His family reported that the patient presented persistent drowsiness, could not drink breast milk, and was hospitalized soon after birth. Although he was treated with nutrition therapy, height and weight gains were delayed. He was diagnosed with CH at the age of 10 months based on a blood test, and thyroid hormone replacement therapy was initiated. His thyroid hormone status has remained normal ever since.
There were no problems in his growth process, but he has a mild intelligence deficit. His age at the last visit was 52 years. He was 162 cm tall and weighed 58 kg. He has been our outpatient since the age of 37 years and has received thyroid hormone replacement therapy. At present, we continue administering levothyroxine at 150 μg/day. On palpation, his thyroid gland was soft, mobile, and symmetric. His thyroid-related hormone levels were within the normal range at the last visit: serum TSH 1.04 μU/mL (reference 0.46-3.50) and free T4 1.50 ng/dL (reference 0.90-1.80). Serum Tg values during the follow-up period varied between 59.0 and 546.0 ng/mL (reference <32.7) in a few measurements. Serum levels of anti-Tg antibody and anti-TPO antibody were in the normal range.
Detection and In Silico Analysis of TPO Mutation.
Peripheral venous blood samples were obtained from the proband and his elder brother. Genomic DNA was extracted from peripheral blood leucocytes using the Gentra Puregene Blood Kit (Qiagen, Germany) according to the manufacturer's protocol. The CH capture panel contained 11 known CH-related genes, 3 of which (PAX8, NKX2-1, and FOXE1) are involved in thyroid dysplasia [4]. TSHR is a hormone receptor involved in TSH signaling abnormalities. The remaining seven genes (TG, TPO, SLC5A5, SLC26A4, IYD, DUOX2, and DUOXA2) are involved in thyroid dyshormonogenesis [6].
In silico studies were performed on the wild type (WT) and mutant TPO variants using PyMOL 0.9 to evaluate the three-dimensional (3D) structure of p.Cys655Phe TPO [13]. For the conformational prediction of TPO, the human myeloperoxidase (MPO) 3D structure was used as a template. MPO is the closest homolog to TPO and shares 47% sequence identity with the MPO-like domain of TPO [14, 15]. The X-ray crystal structure of human MPO has been previously determined (PDB accession code 3F9P) [16].
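For readers who wish to reproduce this kind of structural inspection, a minimal PyMOL (Python API) sketch for loading the MPO template and measuring a candidate disulfide distance is given below. The residue numbering follows the TPO model discussed in the text, so carrying it over to the 3F9P template is an assumption; on an actual TPO homology model object the selections would apply directly.

```python
# Minimal PyMOL scripting sketch (run inside PyMOL or with the pymol module).
from pymol import cmd

cmd.fetch("3f9p")  # human MPO crystal structure used as the modeling template

# Select the sulfur atoms of the two cysteines of interest. The residue
# numbers below are those of the TPO model in the text (Cys598 and Cys655).
cmd.select("sg1", "resi 598 and resn CYS and name SG")
cmd.select("sg2", "resi 655 and resn CYS and name SG")

# A disulfide bond corresponds to an S-S distance of roughly 2.0-2.1 A;
# cmd.distance draws the measurement and returns the value in Angstroms.
d = cmd.distance("ss_check", "sg1", "sg2")
print(f"S-S distance: {d:.2f} A")
```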
Cell Culture and Functional Analyses

HEK293 cells were maintained in Dulbecco's modified Eagle's medium supplemented with 50 U/mL penicillin, 50 μg/mL streptomycin, and 10% fetal bovine serum. An expression vector encoding C-terminally hemagglutinin-tagged human TPO (TPO-HA) was created by inserting the TPO cDNA sequence into pEGFP-N1 (Clontech Laboratories, Palo Alto, CA), as previously described [17]. An expression vector for stable TPO expression (WT) was created by inserting the TPO cDNA sequence into pB513B-1 (System Biosciences, Palo Alto, CA, USA). The novel TPO mutation (p.Cys655Phe) expression vector was created by site-directed mutagenesis. Stable human embryonic kidney 293 (HEK293) cell lines expressing each TPO protein (WT, p.Cys655Phe) were established using the PiggyBac system according to the manufacturer's protocol.
Cell transfection was performed using Lipofectamine 3000 reagent (Invitrogen, Carlsbad, CA, #L3000008). The cells were seeded into 12-well tissue culture plates at a density of 0.1 × 10 6 cells/well to reach approximately 70%-90% confluence for transfection. Transfection reagent and 1 μg of DNA (composed of 750 ng of the p.Cys655Phe expression vector and 250 ng of transposase) were added to 100 μL of Opti-MEM medium and kept at room temperature for 5 minutes. The mixtures were added to the seeded cells, which were cultured in an incubator at 37°C and 5% CO 2 for 3 hours. After incubation, the medium was replaced with conventional medium, and the cells were selected with puromycin and cultured sequentially for 48 hours. Transfected cells were continuously cultured, as described above, and stable HEK293 cell lines expressing each TPO protein (WT, p.Cys655Phe) were established.
For TPO activity measurement, we prepared 90% confluent stable cells expressing each TPO protein (WT, p.Cys655Phe) in 12-well dishes. Cells were trypsinized, washed with phosphate-buffered saline, and resuspended in 1X Earle's balanced salt solution (Sigma Aldrich, St. Louis, MO, USA) containing 100 μM Amplex Red reagent (Thermo Fisher Scientific) and 2 mM H 2 O 2 , as previously described [18]. After transfer to a 96-well plate, the mixtures were incubated at 37°C for 1 hour and stirred frequently. Fluorescence emission was measured using an EnSpire™ Alpha (Molecular Devices, PerkinElmer, Inc., Waltham, MA, USA) to quantify the TPO activity of the cells. The TPO activity of the new mutant is expressed as a percentage (mean ± standard error of the mean (SEM)) of the WT activity. The background activity, measured using nontransfected cells (control), was set to 0%. The above experiment is representative of procedures performed independently three times, with similar results. P values <0.05 obtained using Student's t-test were considered significant.
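A minimal sketch of the normalization described here, expressing the mutant fluorescence as a percentage of WT after background subtraction and testing the difference with Student's t-test; the fluorescence readings are invented for illustration.

```python
import statistics
from scipy import stats

def percent_of_wt(sample: list[float], wt: list[float], bg: list[float]) -> float:
    """Background-subtract replicate fluorescence readings and express the
    sample mean as a percentage of the WT mean (WT = 100%, background = 0%)."""
    bg_mean = statistics.mean(bg)
    wt_span = statistics.mean(wt) - bg_mean
    return 100.0 * (statistics.mean(sample) - bg_mean) / wt_span

# Invented replicate fluorescence readings (arbitrary units)
background = [120.0, 118.0, 122.0]
wild_type = [980.0, 1010.0, 995.0]
mutant = [255.0, 270.0, 262.0]

print(f"mutant activity: {percent_of_wt(mutant, wild_type, background):.1f}% of WT")
t, p = stats.ttest_ind(wild_type, mutant)  # two-sample Student's t-test
print(f"P = {p:.4g}")
```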
2.5. Immunohistochemistry. HEK293 cells overexpressing wild-type or mutant TPO were seeded on a chambered coverglass and then transfected with the pDsRed2-ER vector (Clontech Laboratories, Palo Alto, CA, 632409). Twenty-four hours after transfection, the cells were fixed with 10% formalin-phosphate buffer solution (PBS) for 15 minutes at room temperature. After washing three times with PBS, the cells were treated with or without 0.5% Triton X-100 for permeabilization, blocked with Blocking One (Nacalai Tesque, Kyoto, Japan, 03953-66), and incubated with an anti-TPO antibody (Abcam, Cambridge, MA, USA; ab109383) for 2 hours at room temperature. After washing four times with PBS, the cells were incubated with an Alexa Fluor 488-labeled goat anti-rabbit IgG (H+L) secondary antibody (Thermo Scientific, #A32731) for 1 hour at room temperature in a dark room. After washing four times with PBS, the cells were mounted with ProLong Gold with DAPI solution (Thermo Scientific, #3693).
Mutation Analysis of the TPO Gene.
Genetic screening by next-generation sequencing was performed for the proband, and a new homozygous TPO gene mutation (GRCh38.p13, chromosome 2 at position 1493997, c.1964 G>T, p.Cys655Phe) was identified. He did not have any mutations in the remaining genes analyzed. The homozygous novel TPO mutation was confirmed via Sanger sequencing (Figure 1(a), red arrow; Figure 1(c), II-4). His sibling (Figure 1(b); Figure 1(c), II-1) carried the same mutation in the heterozygous state, as confirmed by Sanger sequencing. His parents were first cousins (Figure 1(c), I-1 and I-2) and probably carriers of the heterozygous TPO mutation. The patient's mother had already died, and his father was too old to visit.
Therefore, blood samples from the parents could not be collected.
Thyroid ultrasonography showed a mildly enlarged thyroid (estimated volume, 22.9 mL; Figure 1), altered internal echo imaging, and increased internal blood flow (Figure 1(e)). His elder brother (Figure 1(c), II-1) has no goiter and has normal thyroid hormone levels, with no conspicuous abnormalities observed in growth, intelligence, or adolescent development. His elder sister (Figure 1(c), II-2) and second brother (Figure 1(c), II-3) died shortly after birth. Detailed records were not available. According to family information, autopsy of both children revealed enlarged thyroid glands. The family had no history of thyroid cancer.
Homology analyses of protein sequences across species were performed around Cys655 of the human TPO protein using ClustalW 2.1 software. The Cys655 residue substituted in the mutant is highly conserved among mammalian species (Figure 2).
Conformational Prediction.
As demonstrated by the mutation detection, the cysteine 655 residue lies within a highly conserved region of TPO, suggesting its important role in TPO function and structure. Cysteine side chains can join via disulfide bonds as part of the secondary and tertiary structures of proteins. A comparison of the predicted tertiary structures of the WT and mutant proteins revealed that the novel TPO mutation p.Cys655Phe abolishes the disulfide bond between the cysteines at positions 598 and 655 (Figures 3(a)-3(c)).
Functional Analysis.
We performed in vitro expression experiments to ascertain the pathogenicity of the novel mutation (p.Cys655Phe). HEK293 cell lines stably expressing each TPO protein (WT or mutant) were established using the PiggyBac system. Western blots were performed in triplicate on wild-type, mutant, and control cells (Figure 4(a)). Comparing expression levels with ImageJ, mutant TPO expression normalized against beta-actin was significantly reduced compared with the wild type. When the wild-type TPO expression level was set to 1, the mutant TPO expression level was 0.274 (Figure 4(b), P = 0.005 < 0.05). In addition, there was no significant difference between wild-type and mutant RNA expression measured by real-time PCR (P = 0.990; Figure 4(c)). Peroxidase activity in these cell lines was then measured. The Amplex Red assay showed that p.Cys655Phe-TPO had strikingly low peroxidase activity, 16.4 ± 8.6% (P < 0.001) of that of WT-TPO (Figure 4(d)).
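A minimal sketch of the densitometry normalization described above; the band intensities are invented so that the mutant lane comes out near the reported ratio.

```python
def relative_expression(tpo_band: float, actin_band: float,
                        wt_tpo_band: float, wt_actin_band: float) -> float:
    """ImageJ-style densitometry: each TPO band is normalized to its
    beta-actin loading control, then expressed relative to the WT lane
    (WT = 1 by construction)."""
    return (tpo_band / actin_band) / (wt_tpo_band / wt_actin_band)

# Invented band intensities: a mutant lane whose actin-normalized TPO signal
# is ~0.27 of the WT lane, in line with the ratio reported in the text.
print(f"{relative_expression(310.0, 1000.0, 1130.0, 1000.0):.3f}")
```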
Immunohistochemistry of TPO.
Immunofluorescence studies were performed to determine the localization of the TPO proteins. Immunocytochemical analyses were performed with each cell line expressing wild-type or mutant TPO under both permeabilized and nonpermeabilized conditions (Figures 5(a) and 5(b)). The results indicate that both wild-type TPO and mutant TPO localize to the cell membrane and the endoplasmic reticulum.
Discussion
Using next-generation sequencing, we revealed a novel homozygous TPO mutation (c.1964G>T, p.Cys655Phe) in a patient with CH. To date, more than 70 TPO mutations have been reported, but only some of them have been assessed in vitro for enzyme activity [17][18][19][20][21][22]. We demonstrated through in vitro experiments that the TPO activity of the new mutant is significantly lower than that of the WT. Using a molecular graphics tool, we created a three-dimensional image of the molecular structure of this newly mutated TPO and confirmed that the disulfide bond disappears with the amino acid substitution.
Proteins inside the endoplasmic reticulum fold correctly by forming disulfide bonds. It has previously been discussed that substitution of a cysteine residue disrupts disulfide bridges and induces CH [23]. It is unclear whether TPO mutations prevent intracellular translocation to the plasma membrane surface in thyroid follicular cells. In a previous study, the TPO mutation p.Cys825Arg reported by Zhao et al. involved substitution of a disulfide-forming cysteine residue in the TPO protein [24]. Our experiments revealed that the mutant TPO protein (p.Cys655Phe) was abundant not only in the cell membrane but also in the cytoplasm, especially in the endoplasmic reticulum, even under the nonpermeabilized condition.
Figure 4 legend (continued): (b) Each TPO expression signal is expressed in relative arbitrary units after normalization against beta-actin (TPO/beta-actin ± SEM). The wild-type TPO expression level was set to 1, and the control TPO expression level was set to 0. The mutant TPO expression level was significantly lower than that of the wild type (n = 3, Student's t-test, *P < 0.05). (c) The results of quantitative PCR performed using SYBR Green are shown in arbitrary units of TPO mRNA/housekeeping gene hprt1 as the mean ± standard error of the mean. There was no significant difference between wild-type and mutant TPO mRNA expression (n = 5, Student's t-test, P = 0.990). (d) Peroxidase activity was measured using Amplex Red reagent. Peroxidase activity of the mutant TPO protein (Mut) was normalized to that of the wild type (WT; 100%) and that of the mock-transfected control (0%). The results of three independent experiments are expressed as the mean ± standard error of the mean. *P < 0.05, Student's t-test (Mut vs WT).
Conclusion
In conclusion, a new homozygous TPO mutation (p.Cys655Phe) was identified in a Japanese family. This single disulfide-bond-loss mutation reduced TPO activity and caused congenital hypothyroidism without affecting the subcellular localization of TPO proteins.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request. | 2020-09-03T09:11:53.229Z | 2020-08-30T00:00:00.000 | {
"year": 2020,
"sha1": "6afac7e29a6742aaf3645016417a8761923b2f4b",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/ije/2020/9132372.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1607a457fcb479061724dcc587251c5e8d3cc64d",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266786783 | pes2o/s2orc | v3-fos-license | Assessing the integrity of auditory sensory memory processing in CLN3 disease (Juvenile Neuronal Ceroid Lipofuscinosis (Batten disease)): an auditory evoked potential study of the duration-evoked mismatch negativity (MMN)
Background We interrogated auditory sensory memory capabilities in individuals with CLN3 disease (juvenile neuronal ceroid lipofuscinosis), specifically for the feature of “duration” processing. Given decrements in auditory processing abilities associated with later-stage CLN3 disease, we hypothesized that the duration-evoked mismatch negativity (MMN) of the event related potential (ERP) would be a marker of progressively atypical cortical processing in this population, with potential applicability as a brain-based biomarker in clinical trials. Methods We employed three stimulation rates (fast: 450 ms, medium: 900 ms, slow: 1800 ms), allowing for assessment of the sustainability of the auditory sensory memory trace. The robustness of MMN directly relates to the rate at which the regularly occurring stimulus stream is presented. As presentation rate slows, robustness of the sensory memory trace diminishes. By manipulating presentation rate, the strength of the sensory memory trace is parametrically varied, providing greater sensitivity to detect auditory cortical dysfunction. A secondary hypothesis was that duration-evoked MMN abnormalities in CLN3 disease would be more severe at slower presentation rates, resulting from greater demand on the sensory memory system. Results Data from individuals with CLN3 disease (N = 21; range 6–28 years of age) showed robust MMN responses (i.e., intact auditory sensory memory processes) at the medium stimulation rate. However, at the fastest rate, MMN was significantly reduced, and at the slowest rate, MMN was not detectable in CLN3 disease relative to neurotypical controls (N = 41; ages 6–26 years). Conclusions Results reveal emerging insufficiencies in this critical auditory perceptual system in individuals with CLN3 disease. Supplementary Information The online version contains supplementary material available at 10.1186/s11689-023-09515-8.
Introduction
CLN3 disease, also known as juvenile neuronal ceroid lipofuscinosis (JNCL; Batten disease), is a childhood-onset neurodegenerative disorder resulting from pathogenic variants in CLN3 that lead to the pathological accumulation of ceroid lipofuscin in lysosomes of multiple cell types, with neurons displaying particular vulnerability [1,2]. CLN3 disease is one condition in a genetically heterogeneous class of rare neuronal lysosomal storage disorders, collectively known as neuronal ceroid lipofuscinoses (NCLs). While individually rare, collectively the NCLs constitute the leading known cause of childhood neurodegenerative disorders worldwide [1,3]. Symptoms typically onset between 4-7 years of age, with progressive neurodegeneration persisting for approximately 20-25 years, leading to premature mortality [4][5][6]. The most common initial symptom is a loss of vision that progresses to severe blindness within 2-4 years, which is typically followed by cognitive decline, onset of seizures and Parkinsonism [4][7][8][9]. Throughout adolescence and early adulthood, there is progressive loss of cognitive functioning, speech and motor skills [5,6,10]. Because of the combination of progressive vision loss, motor dysfunction, and cognitive decline, it can be challenging to accurately assess the extent of the progressive neurocognitive decline in this population as the disease takes its course, since the administration of conventional cognitive evaluations that require visual presentation of information is not feasible [11]. As such, there is limited knowledge about perceptual and cognitive capabilities across the progressive clinical stages of CLN3 disease. The consequences of these limitations affect both clinical evaluation and examination of efficacy during clinical trials. There is a pressing need to identify specific quantitative measures of brain function (i.e., neuromarkers or endophenotypes) that could be tracked objectively across the natural course of CLN3 disease. Such measures would mitigate subjective outcomes associated with conventional cognitive evaluation, serve as surrogate biomarkers of disease severity, and could provide more precise evidence of treatment effects.
Event-related potential (ERP) recordings are an increasingly appealing option in both human patients and animal models of rare diseases [12][13][14][15][16][17][18]. This easy-to-apply, non-invasive technique provides the opportunity to acquire objective quantitative measures of brain activity, including cortical network dynamics, without the need for overt behavioral responses from participants (e.g., [19][20][21]), and its exquisite temporal resolution allows for assessment of information flow across the cortical hierarchy, from sensory to perceptual to cognitive stages of processing [22]. Since the peripheral visual system is affected severely and early in CLN3 disease, primarily due to macular dystrophy [23], here we deployed the ERP technique to measure auditory sensory-perceptual processing as a means to assess the integrity of early cortical processing in CLN3 disease. This is an important point, since the intention here is to specifically test central cortical processing. As such, the presence of variable peripheral deficits means that visual stimulation cannot be reasonably used to assay the integrity of cortical processing. In contrast, the peripheral auditory system appears to be intact in CLN3 disease, and as such, stimulation can be faithfully delivered to assess central processing dynamics.
The integrity of early auditory processing, auditory discrimination, and sensory memory can be studied by recording the well-characterized mismatch negativity (MMN) component of the ERP [24,25]. MMN is evoked pre-attentively by introducing occasional changes (deviants) in a regularly occurring stream of auditory inputs (standards), typically by manipulating features such as frequency, location, loudness, phonemic boundaries or duration [26][27][28]. MMN experimental designs do not require participant engagement or the ability to follow complex tasks, which makes them ideal for assessment of individuals with limited attention or cognitive impairments. The fact that the MMN has been shown to be generated pre-attentively (automatically) is a key factor in its use in clinical conditions where cognition and attentional functioning are compromised, since its generation does not require engagement with the inputs, and individuals can be engaged in other activities (e.g., reading a book, watching a movie, or even performing a demanding visual task). A substantial literature has shown that MMN responses to simple feature deviants like "duration" are pre-attentively generated and that attention does not detectably modulate the component [29][30][31][32][33][34].
Here, we set out to interrogate auditory sensory memory capabilities in individuals with CLN3 disease, specifically for the feature of "duration" processing, an important cue in auditory perception and consequently in task performance [35][36][37]. Our primary hypothesis was that the duration-evoked MMN would be reduced in amplitude in CLN3 disease. Based on prior work by our research group using this identical paradigm in multiple other rare disease populations (Rett syndrome [12]; 22q11 deletion syndrome [38]; and cystinosis [13,39]) and in neurotypical controls [40], we had clear precedence to define both the electrodes where the duration MMN is seen to be maximal over frontal scalp (F3, Fz and F4) and the appropriate timeframe within which to make measurements of its maximal amplitude (~200-240 ms).
An additional design feature of our paradigm was the use of three different rates of stimulation (fast: 450 ms, medium: 900 ms, slow: 1800 ms). This manipulation allows for assessment of the sustainability and robustness of the auditory sensory memory trace, as the amplitude of the MMN is directly related to the rate at which the regularly occurring stream of stimuli is presented [41,42]. That is, when stimuli occur at a rapid rate, the occasional deviants are highly detectable and tend to "pop out" from the background sequence, evoking a robust MMN. As the rate of presentation is slowed, however, the robustness of the sensory memory trace is diminished, the deviant stimulus does not pop out in a highly discriminable manner, and the MMN is reduced or even absent. Thus, by manipulating presentation rate, one can parametrically vary the strength of the sensory memory trace, providing a greater degree of sensitivity for detecting potential auditory cortical dysfunction. Therefore, our secondary hypothesis was that compared to TD controls, duration-evoked MMN amplitude reductions in CLN3 disease would be more pronounced at the slower presentation rates, where greater demand was placed on the sensory memory system.
Finally, since recruitment of participants in rare diseases like CLN3 disease necessitates inclusion of individuals across a large age-range in order to ensure an adequately powered study and to study the progressive stages of the disease, age must also be a consideration in subsequent analyses. Auditory responses are well known to continue maturing across typical development [43][44][45]; we therefore assessed whether the robustness of the MMN would increase with age in these cohorts.
Participants
Twenty-five participants with CLN3 disease (i.e., genetically confirmed bi-allelic mutations of CLN3) and forty-one age-matched, neurologically typically developing individuals (TD) were enrolled. Participants with CLN3 disease were recruited through the University of Rochester Batten Center and TDs were recruited from the local community. The CLN3 disease cohort consisted of 12 females and 13 males, while 16 of the 41 TD participants were male. Four participants with CLN3 disease (3 females; 1 male) were excluded due to excessively noisy EEG data, where fewer than 50 accepted trials per condition were retained after artifact rejection (see details below). In the case of one additional CLN3 disease participant, where fewer than 50 trials were retained after artifact processing, the ERP data were nonetheless retained in the main analyses due to acceptable signal-to-noise properties (i.e., their evoked potentials did not differ significantly (3 SD) from the group-averaged mean waveform). The final cohort consisted of 21 individuals with CLN3 disease (mean age: 16.9 ± 5.5 years; range 6-28 years) and 41 TDs (mean age: 13.9 ± 5.2 years; range 6-26 years). There was no difference in age between the TD and CLN3 disease groups (t(60) = -1.39, p = 0.17). All participants with CLN3 disease underwent detailed phenotypic assessment, accompanied by detailed medical history questionnaires completed by their caregivers. All had clinically defined CLN3 disease [46]. Symptom severity was measured using a disease-specific instrument, the Unified Batten Disease Rating Scale (UBDRS) [8,9], and severity stage was assigned using the CLN3 Staging System (CLN3SS) [7]. The UBDRS includes assessments of physical impairment, seizures, mood and behavior, and functional capability. The CLN3SS categorizes individuals with CLN3 disease into four stages based on the occurrence of core features of vision loss, seizure onset, and loss of independent ambulation. The lower the disease stage score (stages 0-3), the less severe the symptoms (i.e., individuals in stage 1 have a less progressed disease state compared to those in stage 3). Using the CLN3SS, 9 individuals with CLN3 disease were classified in stage 1, 10 in stage 2, and 6 in stage 3. Clinical demographics, including age, sex, race, ethnicity, disease stage, age at symptom onset, and medications, are listed in the supplementary materials (Supplementary Table 1).
The following exclusion criteria were applied to individuals with CLN3 disease: onset of seizures before 4 years of age, developmental concerns not related to CLN3 disease that occurred before the age of 4, and clear outlier status based on preservation of independent function after the age of 30 years [7]. These criteria did not apply to any individual in our cohort. Other exclusion criteria included uncorrected hearing loss or ear infection on the day of EEG acquisition. Neurotypical (TD) participants were excluded if they had a familial history of a neurodevelopmental disorder, or any self-reported or parent-reported neurological or psychiatric disorders.
Experimental design
We presented an auditory oddball mismatch negativity (MMN) paradigm while recording electroencephalography (EEG). Experimental procedures were similar to those described in our prior work [15]. Tympanometry was used to rule out middle-ear conductive hearing loss in all participants on the day of EEG acquisition. Participants sat in a sound-attenuated and electrically shielded booth (Industrial Acoustics Company, Bronx, New York) on a caregiver's lap or in a chair/wheelchair. They watched a muted movie of their choice on a laptop (Dell Latitude E640) while passively listening to auditory stimuli presented at an intensity of 75 dB SPL using a pair of Etymotic insert earphones (Etymotic Research, Inc., Elk Grove Village, IL, USA). An oddball paradigm was implemented in which regularly occurring standard tones (STD, 85%) were randomly interspersed with deviant tones (DEV, 15%). These tones had a frequency of 1000 Hz with a rise and fall time of 10 ms. Standard tones had a duration of 100 ms while deviant tones were 180 ms long. The tones were presented with stimulus onset asynchronies (SOAs) of either 450, 900, or 1800 ms in separate SOA blocks, referred to here as conditions. The order of these conditions (450 SOA, 900 SOA, 1800 SOA) was randomized, and each SOA condition block consisted of 500, 250, or 125 trials, respectively (Supplementary Fig. 1A). Participants were presented with 14 blocks in total, consisting of 2 × 450 SOA condition, 4 × 900 SOA condition, and 8 × 1800 SOA condition within the experimental session, resulting in 1000 trials per condition. The entire task takes one hour to complete without interruptions.
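To make the stimulus statistics concrete, a minimal MATLAB sketch of how one block of such an oddball sequence could be generated is given below. The randomization scheme (simple Bernoulli draws with no constraint on deviant spacing) is an assumption for illustration and may differ from the exact presentation software used in the study.

% Minimal oddball block generator (illustrative; constraints on deviant
% spacing used by real presentation software are omitted).
soa     = 0.900;                             % stimulus onset asynchrony in seconds
nTrials = 250;                               % trials in one 900 ms SOA block
pDev    = 0.15;                              % deviant probability
isDev   = rand(1, nTrials) < pDev;           % 1 = deviant, 0 = standard
durs    = 0.100 + 0.080 * isDev;             % 100 ms standards, 180 ms deviants
onsets  = (0:nTrials-1) * soa;               % nominal onset times in seconds
fprintf('%d deviants out of %d trials (%.1f%%)\n', ...
        sum(isDev), nTrials, 100*mean(isDev));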
EEG acquisition
A BioSemi ActiveTwo system (BioSemi B.V., Amsterdam, Netherlands) with a 32-channel electrode array was used to record continuous EEG signals. Electrodes were positioned according to the BioSemi equiradial system, with another 2 electrodes located over the left and right mastoids. The setup included an analog-to-digital converter and a fiber-optic pass-through to a dedicated acquisition computer (digitized at 512 Hz; DC-to-150 Hz passband). EEG data were referenced online to an active common mode sense (CMS) electrode and a passive driven right leg (DRL) electrode.
EEG data processing
EEG data were processed and analyzed offline using custom scripts and routines that included functions from the EEGLAB toolbox [47] and FieldTrip toolbox [48] for MATLAB 2016b (The MathWorks, Natick, MA, USA). The EEG data were first resampled to 128 Hz using the decimate function in MATLAB. The decimate function incorporates an 8th-order low-pass Chebyshev Type I infinite impulse response (IIR) antialiasing filter. EEG data were then band-pass filtered using a Chebyshev Type II filter with a bandpass set at 1-40 Hz. Continuous EEG data were passed through a channel rejection algorithm, which identified bad channels using measures of standard deviation and covariance with neighboring channels (3-7 channels). Rejected channels were then replaced through spherical spline interpolation (EEGLAB). Data were then divided into epochs that started 100 ms before the presentation of each tone and extended to 800 ms post-stimulus-onset. Bad trials containing severe movement artifacts or particularly noisy events were rejected if voltages exceeded ±150 μV, followed by a threshold set at two standard deviations over the mean of the maximum values for each epoch (the largest absolute value recorded in the first 500 ms of a given epoch, across all channels for each trial in each condition). The number of accepted trials for each condition and group is presented in Supplementary Fig. 2. All epochs were then baseline-corrected to the 100 ms pre-stimulus interval (-100 to 0 ms). Next, the epochs were averaged as a function of stimulus condition to yield the auditory evoked potential to the standard and deviant tones. To maximize the ERP at frontal sites, the data were re-referenced offline to the left inferior temporal scalp site T7, or T8 (i.e., the equivalent scalp site over the right inferior temporal region) if T7 was a noisy channel in a given participant. This approach takes advantage of the inversion of the MMN that is seen between fronto-central and inferior temporo-parietal sites [49,40]. Finally, we applied de-noising using independent component analysis, usually removing only one or two components reflecting eye-movement-related artifacts, following definitions provided by Debener and colleagues [50].
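A schematic MATLAB rendering of the core preprocessing steps described above (resampling, band-pass filtering, epoching, amplitude-based rejection, and baseline correction) follows. The filter order, variable names (eeg, onsetSamples), and data layout are illustrative assumptions; the actual pipeline used EEGLAB/FieldTrip routines and additional channel-rejection and ICA steps not shown here.

% Schematic preprocessing for a channels x time matrix 'eeg' sampled at 512 Hz.
fsIn = 512; fsOut = 128;
eegDs = zeros(size(eeg,1), ceil(size(eeg,2) * fsOut / fsIn));
for ch = 1:size(eeg,1)
    eegDs(ch,:) = decimate(eeg(ch,:), fsIn/fsOut);  % built-in Chebyshev antialiasing
end

[b, a] = cheby2(4, 40, [1 40] / (fsOut/2), 'bandpass');  % order/attenuation assumed
eegF = filtfilt(b, a, eegDs')';                          % zero-phase band-pass 1-40 Hz

pre = round(0.100 * fsOut); post = round(0.800 * fsOut); % -100 to 800 ms epochs
nEp = numel(onsetSamples);               % onsetSamples: assumed stimulus-onset indices
ep  = zeros(size(eegF,1), pre + post + 1, nEp);
for k = 1:nEp                            % no bounds checking: sketch only
    ep(:,:,k) = eegF(:, onsetSamples(k) - pre : onsetSamples(k) + post);
end

good = squeeze(max(max(abs(ep), [], 2), [], 1)) <= 150;  % +/-150 uV rejection
ep   = ep(:,:,good);
ep   = ep - mean(ep(:, 1:pre, :), 2);    % baseline to pre-stimulus samples (R2016b+)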
The window for measurement of the MMN was based on four previous studies by our research team using this paradigm [12,13,51], where the maximal window for measuring MMN amplitude was found to be between 200 and 260 ms, with a peak typically found between 220-230 ms [12]. We confirmed this timing here by subtracting the grand mean ERP to standard tones from the grand mean ERP to deviant tones (i.e., MMN: DEV-STD). In TDs, the resulting distribution of activity showed a maximal difference at approximately 220 ms (Fig. 2A-C), fully consistent with the timing seen in this prior work [12,13,51]. We then defined a time window of 40 ms centered around 220 ms (i.e., 200-240 ms) to obtain average MMN amplitudes for every individual and across each SOA. Composite averages generated from the F3, Fz, and F4 scalp electrodes were used for further statistical analysis. Please note that in prior work, we have used fronto-central scalp sites (FC3, FCz and FC4) for these measures, but due to the use of a less dense 32-channel electrode cap in the current work, these fronto-central electrode sites were not available for analysis. However, the MMN is also very well represented at the nearby frontal scalp electrodes (F3, Fz and F4) [52].
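Given the cleaned per-condition averages, the per-subject MMN measure reduces to a mean over the 200-240 ms window of the deviant-minus-standard difference at the frontal composite, as in this short sketch (variable names erpStd, erpDev, t, and chanLabels are hypothetical).

% MMN amplitude per subject (variable names hypothetical).
% erpStd/erpDev: channels x time averages; t: time vector in ms; chanLabels: cellstr.
frontal = ismember(chanLabels, {'F3','Fz','F4'});
win     = t >= 200 & t <= 240;                 % a priori MMN window
mmnWave = mean(erpDev(frontal,:), 1) - mean(erpStd(frontal,:), 1);
mmnAmp  = mean(mmnWave(win));                  % scalar MMN amplitude (uV)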
Statistical analyses
The primary analysis employed linear mixed-effects modeling (LME) and was implemented to analyze electrophysiological data and clinical staging scores based on the CLN3SS, using the fitlme function in MATLAB with the restricted maximum likelihood (REML) method. Our analyses included both discrete and continuous data across multiple levels. Advantages of this approach over standard analysis of variance (ANOVA) have been detailed previously [53,54]. Post-hoc analyses were performed using linear hypothesis testing on linear regression model coefficients (coefTest). Mixed-effects models account for multiple comparisons and tested the fixed estimates of Condition (DEV vs STD), SOA (450, 900, 1800 ms) and Group (CLN3 disease vs TD), while participants were characterized as random effects. The first analyses explore the effects of SOA and Condition on the two participant groups and their interactions, while accounting for the potential influence of age as a random factor. The effect of age is assumed to vary randomly across individuals in the sample, and the effects were measured to account for individual differences in the outcome variables. By treating age as a random factor, the LME model allows for individual differences in the outcome variables that are associated with age to be accounted for. This can help to improve the accuracy and precision of the estimates for the fixed effects of interest (i.e., SOA and Condition), as well as the random effects associated with individual participants. Using Wilkinson notation [55], the following linear-model expression was used: ERPamplitude ~ 1 + SOA + Condition + Group + SOA*Condition*Group + (1 + Age | SubjectID), fitted with method = "REML". Next, an LME was implemented to explore effects of the CLN3SS on electrophysiological MMN ERP amplitudes (DEV-STD) within the CLN3 disease group as a function of SOA.
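In fitlme terms, the model expression above corresponds to a call of roughly the following form on a long-format table. The table construction and column names are assumed for illustration; only the formula mirrors the Wilkinson notation given in the text.

% tbl: long-format table with one row per subject x SOA x condition cell.
% Columns (assumed names): Amplitude, SOA, Condition, Group, Age, Subject.
lme = fitlme(tbl, ...
    'Amplitude ~ 1 + SOA*Condition*Group + (1 + Age | Subject)', ...
    'FitMethod', 'REML');
disp(anova(lme));            % F-tests for the fixed effects
% Post-hoc linear hypothesis tests on fixed-effect coefficients, e.g.:
% [p, F] = coefTest(lme, H)  % with H a contrast row vector over fixed effects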
Estimating Bayes factor t-test
As well as using frequentist probability-based statistics, we also used the Bayesian analog of a t-test (bf.ttest) as a post-hoc approach to allow us to explicitly determine the amount of evidence in favor of the null hypothesis (H0: no interaction) [56]. We estimated the Bayes factors (BF10) using MATLAB code adapted from RStudio (R-Core-Team, 2016; the function anovanBF in the Bayes factor toolbox [57]). We adopted the commonly used Jeffreys-Zellner-Siow (JZS) prior with a scaling factor of 0.707 [58,59]. Monte-Carlo resampling with 10^6 iterations was used for the BF10 estimation. Subjects represented the random factor. Importantly, this estimation allows us to quantify evidence that our experimental factors and interactions explain variance in the data above the random between-subject variations. Standard convention stipulates that any BF10 exceeding 3 is evidence in favor of the alternative hypothesis (H1), while a BF10 below 1 is in support of the null hypothesis (H0), and a BF10 ranging between 1 and 3 is taken as weak evidence [60].
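With the bayesFactor-style toolbox the text cites, the paired-comparison Bayes factor can be estimated along the lines sketched below. The exact call signature is an assumption based on that toolbox's typical usage; only the JZS scale of 0.707 and the interpretation thresholds come from the text.

% Assumes a MATLAB bayesFactor toolbox exposing a bf.* namespace is on the path.
% devAmp/stdAmp: per-subject mean amplitudes in the MMN window for one SOA.
[bf10, pVal] = bf.ttest(devAmp - stdAmp, 'scale', 0.707);  % JZS prior, paired design
if bf10 > 3
    fprintf('BF10 = %.2f: moderate or better evidence for an MMN\n', bf10);
elseif bf10 < 1
    fprintf('BF10 = %.2f: evidence favors the null (no MMN)\n', bf10);
else
    fprintf('BF10 = %.2f: weak/anecdotal evidence\n', bf10);
end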
Finally, using Spearman correlation analyses with bootstrap resampling [61], it was possible to test the relationship between MMN amplitudes and age for each group. To achieve this, data from each SOA were concatenated together in one analysis, allowing for a more statistically robust examination of MMN maturation across SOAs [62]. However, each individual linear model fit was overlaid to show relationships for each SOA, while 95% confidence bounds for the concatenated group data were used. Correlations resulting in significant p-values were quantified using Robust Correlation [61]. This approach stringently checks for false-positive correlations using bootstrap resampling, including six additional validation tests. Due to the limited sample size and the unequal age distributions within age categories, the bootstrap method was chosen as the most appropriate test to explore the linear relationship between age and clinical severity, rather than including age in the LME model. The bootstrap method allows for random resampling of the original dataset, which creates multiple simulated datasets with replacement. This resampling method can help to address the issue of unequal age distributions and improve the accuracy of the correlation estimate. Moreover, the robust correlation method is particularly useful in situations where the data may contain outliers or have non-normal distributions, which can potentially bias the correlation estimates. This method uses robust estimation techniques that are less sensitive to extreme values and non-normal distributions, which can help to produce more accurate and reliable results [63,64].
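The bootstrap-based Spearman analysis can be approximated with built-in MATLAB routines, as in the minimal sketch below; the Robust Correlation toolbox applies additional outlier-resistant checks beyond this version, and the variable names are assumptions.

% Minimal bootstrap Spearman correlation (the Robust Correlation toolbox
% adds outlier-resistant validation tests beyond this sketch).
% age, mmn: vectors concatenated across SOAs for one group.
rs  = corr(age(:), mmn(:), 'Type', 'Spearman');
nB  = 10000;
rsB = bootstrp(nB, @(i) corr(age(i), mmn(i), 'Type', 'Spearman'), (1:numel(age))');
ci  = prctile(rsB, [2.5 97.5]);               % percentile bootstrap 95% CI
fprintf('rs = %.2f, 95%% CI [%.2f, %.2f]\n', rs, ci(1), ci(2));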
A secondary exploratory analysis was also planned to more thoroughly explore the rich spatio-temporal dynamics of the entire data matrix. Nonparametric cluster-based permutation testing (see Fig. 5) was employed [65,25]. By clustering neighboring channels expressing the same effect, this test controls for issues associated with multiple comparisons while jointly accounting for the dependency of the data. For each SOA condition, a paired-samples t-test was computed between deviant and standard trials (i.e., MMN: DEV-STD) across each channel-time pair. Significant clusters were defined wherein neighboring spatially connected channels and temporally arranged time-pairs exceeded the statistical threshold of p < 0.05 (corrected, a priori threshold), and then the sum of the corresponding t-values was calculated for each of the resulting clusters (cluster-level statistic, maxsum). Next, the critical p-value for each cluster was calculated using the Monte Carlo estimate. For each cluster, this involved randomly dividing the data into two subsets and calculating a new summed t-value for each iteration. By randomizing the data across the deviant and standard trials (i.e., DEV vs STD) and recalculating the test statistic 2000 times, we obtained a reference distribution of maximum cluster values against which to evaluate the statistics of the actual data. Finally, empirical clusters were considered significant at p < 0.05 if their summed t-value was smaller than the 2.5th percentile (i.e., less than an alpha-level of 0.05, two-tailed), or higher than the 97.5th percentile of the permutation distribution.
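In FieldTrip, the cluster statistic described above maps onto a configuration of roughly the following form. The per-subject timelocked data structures and the channel-neighbour definition are assumed to have been prepared upstream; the parameter values echo those stated in the text (maxsum, 2000 randomizations, two-tailed alpha of 0.05).

% Cluster-based permutation contrast of DEV vs STD (FieldTrip sketch).
% devTL{i}, stdTL{i}: per-subject timelocked averages (assumed prepared upstream).
cfg                  = [];
cfg.method           = 'montecarlo';
cfg.statistic        = 'ft_statfun_depsamplesT';   % within-subject contrast
cfg.correctm         = 'cluster';
cfg.clusteralpha     = 0.05;
cfg.clusterstatistic = 'maxsum';
cfg.numrandomization = 2000;
cfg.tail             = 0;  cfg.clustertail = 0;    % two-tailed
cfg.alpha            = 0.025;                      % per-tail alpha
cfg.neighbours       = neighbours;                 % channel adjacency (assumed built)
nSubj                = numel(devTL);
cfg.design           = [ones(1,nSubj) 2*ones(1,nSubj); 1:nSubj 1:nSubj];
cfg.ivar             = 1;  cfg.uvar = 2;           % condition row, subject row
stat = ft_timelockstatistics(cfg, devTL{:}, stdTL{:});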
Results
Figure 1 displays the ERPs elicited by standard and deviant tones for each group as a function of stimulation rate, and the corresponding difference waveforms, over frontal scalp sites (averaged over F3, Fz, and F4). As expected, TDs show clear canonical MMN responses with a robust negativity in the period from 200-240 ms post-stimulus onset across all SOAs. MMNs to the fast (450 ms) and medium (900 ms) SOA conditions are also evident in the waveform comparisons for the CLN3 disease group, but at the slowest rate (1800 ms), there is no clear MMN in evidence. The MMNs in TDs showed the typical topographic distribution, with a prominent fronto-central negative distribution accompanied by bilateral positivity over the mastoids (Fig. 2), consistent with the main generators of the duration MMN lying in auditory cortices along the supra-temporal plane [49]. Despite weaker magnitudes, individuals with CLN3 disease produced somewhat similar topographical distributions, although at the slowest rate (SOA = 1800 ms), the typical fronto-central distribution was not present (Fig. 2, Panel C).
Modeling effects between groups and stimulus parameters
LME models were implemented with ERP amplitude as the dependent measure, averaged over the time window of interest (200-240 ms), to explore its interrelationships across the two participant groups as a function of SOA and trial type (Condition) as the independent variables, and to test their interactions while age and participants were treated as random effects. Outcomes are reported as beta coefficients or F values depending on the model analyses. First, a multilevel LME comparing all independent variables revealed significant main effects for Group (F(1,351.69) = 6.91, p = 0.008), SOA (F(2,326.43) = 5.34, p = 0.005), Condition (F(1,326.43) = 48.71, p = 1.66x10^-11) and Age (F(1,307.12) = 3.51, p = 0.051). These results indicate that there was a generalized MMN effect across participant groups. It is noteworthy that a likelihood ratio test showed that including age as a beta covariate provided a better fit for the data than a model without it (χ2(4) = 1390.4, p = 7.36x10^-10). To this end, all LME analyses included age as a covariate, as it improved the model significantly. Furthermore, there was a significant difference between the DEV vs STD trial conditions across both participant cohorts (β = 1.01, SE = 0.14, p = 1.39x10^-11, CI[0.72, 1.28]), indicating a meaningful relationship between participant groups across Conditions as a function of SOA.
In further exploring the relationship of the fixed effects between the CLN3 and TD groups, we found a significant positive relationship between Groups (β = 0.46, SE = 0.17, p = 0.008, CI[0.11, 0.81]). This shows that as the MMN effect increases in the TD group, the mean MMN effect in the CLN3 group increases, demonstrating a positive relationship between the two groups. However, the absolute group means were significantly different from each other; overall, the mean of the TD group is 0.46 higher than the mean of the CLN3 group. Furthermore, there was a significant relationship between participant Group and SOA (β = 0.48, SE = 0.14, p = 1.71x10^-10). Examination of the full model interactions indicated that there was no significant effect between the 450 ms and 900 ms SOAs between Groups. There was, however, a significant effect between the 1800 ms and 900 ms SOAs for the CLN3 disease cohort compared to the TD group using age as a covariate (F(1,155.15) = 33.20, p = 4.34x10^-08). Similarly, there was a significant effect between the 450 ms and 1800 ms SOAs (F(1,173.70) = 24.01, p = 2.18x10^-06). These results indicate a robust MMN response across groups as a function of Condition, but also point to differential ERPs as a function of presentation rate (SOA).
Next, the analyses focused on the CLN3 group and explored more specifically for MMN effects as a function of SOA within Group. The model revealed an overall significant relationship between the DEV and STD trials (β = 1.26, SE = 0.41, p = 0.02, CI[0.44, 2.07]), showing that on average the DEV trial mean amplitudes were 1.26 larger than the STD trial amplitudes. Furthermore, focusing on the association between SOA conditions, there was a significant difference between the 1800 ms SOA and the 900 ms SOA. Exploring MMN effects within each SOA condition, there was a significant MMN effect in the 900 ms SOA (t(1,20) = 1.69, p = 3.21x10^-04, CI[0.72, inf]). However, there was no significant difference in the 450 ms SOA (p = 0.053, CI[-0.01, inf]) and no difference in the 1800 ms SOA (p = 0.23, CI[-0.51, inf]). It was noteworthy, though, that in the case of the 450 ms SOA, the result was on the bound of conventional statistical significance (p = 0.053), whereas this was not the case for the 1800 ms SOA (p = 0.23). As detailed above, we applied complementary Bayesian statistics as an alternative method to test for a true MMN effect [56]. The Bayesian analog of the t-test allows us to explicitly determine the amount of evidence in favor of the null hypothesis (H0); for details, see the Methods section. The results revealed that in the 450 ms SOA condition we had a Bayes factor of BF10 = 3.71, which represents moderate evidence for the alternative hypothesis (H1) as defined by Lee and Wagenmakers [60]. In other words, this indicates that there is moderate evidence in support of a significant MMN effect in the 450 ms SOA condition. We interpret this finding as evidence for an approach to conventional levels of significance. Furthermore, additional cluster-based permutation statistics on the spatio-temporal distribution, reported below (section: Exploratory spatio-temporal statistical cluster analyses), provide additional evidence in support of a true MMN effect in the 450 ms SOA using Monte Carlo statistics. Next, when further exploring the MMN in the 1800 ms SOA, the results revealed a Bayes factor of BF10 = 1.51, which represents anecdotal evidence for the null hypothesis (H0). This finding indicates that there may not be a true MMN in the 1800 ms SOA condition, as there is only very weak evidence of a significant difference between the DEV and STD trials at this slowest stimulus presentation rate. Additional exploratory cluster-based statistics, reported below, provide further evidence that there is no significant MMN effect across channels and time in the 1800 ms SOA. In sum, these findings suggest that in the CLN3 cohort, the 900 ms SOA evoked a robust MMN effect as compared to the other stimulus presentation rates. In the fastest SOA (450 ms), the MMN approached conventional levels of significance, but there was no statistical support for an MMN at the slowest (1800 ms) SOA.
Fig. 2 Topographic representation of the differences between deviant and standard tones across SOAs. An MMN with typical spatial distribution, with negativity (blue) over fronto-central scalp and positivity (red/yellow) over the mastoids and posterior scalp, is clearly seen in all conditions for the TD group. In the CLN3 disease group, the strongest negativity occurs in the 450 ms and 900 ms SOA conditions, but is substantially reduced with atypical distribution in the 1800 ms SOA condition (Panel C).
As a comparison, we report the same analyses as above (only the frequentist probability-based statistics) within the TD group. As done before, the analyses explored more specifically for MMN effects as a function of SOA within the TD population. The model revealed that there was an overall significant relationship between the DEV and STD trials (β = 1.09, SE = 0.11, p = 4.68x10^-21, CI[0.89, 1.31]), showing that on average the DEV trial mean amplitudes were 1.09 larger than the STD trial amplitudes. Focusing on the association between SOA conditions, there was no significant difference between the 450 ms SOA and the 1800 ms SOA (β = -0.06, SE = 0.17, p = 0.71, CI[-0.41, 0.27]) or the 900 ms SOA (β = -0.06, SE = 0.17, p = 0.72, CI[-0.41, 0.28]). Finally, there was no significant difference between the 1800 ms and 900 ms SOA (β = -0.02, SE = 0.11, p = 0.85, CI[-0.21, 0.18]). These results show that MMN performance was largely similar across all SOA conditions. Next, to explore MMN effects within each SOA condition, planned comparisons were carried out comparing DEV vs STD trial conditions. This showed a significant MMN effect in the 450 ms (t(1,40) = -6.47, p = 1.04x10^-07, CI[-1.38, -0.72]), 900 ms (t(1,40) = -5.95, p = 5.52x10^-07, CI[-1.49, -0.73]), and 1800 ms (t(1,40) = -6.82, p = 3.39x10^-08, CI[-1.45, -0.79]) SOAs. As expected, these findings show robust MMN effects across all SOA conditions in TD individuals.
Next, an LME was implemented to explore effects of CLN3 disease stage (given by the CLN3SS) on electrophysiological MMN amplitudes (DEV-STD) within the CLN3 disease group as a function of SOA. First, a likelihood ratio test indicated that including age and CLN3 disease stage provided a better fit for the data than the model without them (χ2(9) = 500.06, p < 0.05). Using age as a covariate, LME results showed a significant effect between SOA and CLN3 disease stages 1 and 3 (β = 1.36, SE = 0.66, p = 0.04), while there was no significant effect between stages 1 and 2 nor between stages 2 and 3. These results suggest that the larger gap between CLN3 disease stages 1 and 3 is able to distinguish between MMN amplitudes in the current patient cohort. Finally, there was no significant interaction between SOA and CLN3 disease stage (Fig. 3).
Testing the relationship between age and MMN as a function of SOA across groups
Robust correlation analysis was used to test the relationship between MMN amplitudes and age for each participant group. The results revealed a significant negative relationship between age and MMN for the TD group (rs = -0.17, p < 0.05, 95% CI[-0.33, -0.01]), while there was a significant positive relationship between MMN and age within the CLN3 disease group (rs = 0.25, p < 0.05, 95% CI[0.01, 0.49]). These findings revealed that in TDs, with increasing maturation, the MMN effect becomes stronger. In contrast, individuals with CLN3 disease show that with increasing age, MMN effects become weaker as amplitudes approach baseline (zero). The latter result is not surprising given the age-associated deterioration of both physical and functional capabilities in CLN3 patients observed following the initial onset of disease symptoms [8,7]. Here we repeated the correlation between MMN and age as a function of SOA while data were grouped based on the CLN3SS (see Fig. 4, color-coded data). The results revealed a significant positive relationship (rs = 0.25, p < 0.05, 95% CI[0.01, 0.4]). These findings corroborate existing literature that demonstrates an age-associated worsening of functional capabilities as well as physical symptom severity in individuals with CLN3 disease [8,7].
Fig. 3 Mean MMN amplitude for each SOA in TD and CLN3 disease groups. Each of the scatter plot, box plot, and violin plot columns represents individual participants' MMN amplitude values (averaged over F3, Fz, and F4) as a function of SOA condition. These amplitudes are calculated for the time window between 200 ms and 240 ms. Horizontal lines represent the interquartile range (solid thin lines), the median (dashed thick line in box), and the upper and lower fences that are ± 1.5 times the interquartile range from the median (solid). The blue and green violin plots represent the kernel density estimation for the distributions. Significant effects are between the 900 and 1800 ms SOA (p = 4.34x10^-08) and between the 450 ms and 1800 ms SOA (p = 2.18x10^-06).
Exploratory spatio-temporal statistical cluster analyses
To further explore potentially significant spatio-temporal distributions of task-related activity, an additional exploratory analysis was conducted using cluster-permutation statistics to identify clusters of electrodes and periods of time showing significant differences between standard and deviant tones across SOA conditions. Figure 5 shows the outcomes of this post-hoc analysis (p < 0.05 corrected, Nperm = 2000). For the TDs, there were significant clusters across most channels within the time window of interest (200-240 ms) at all of the SOAs. Contrasts reveal negative magnitudes over frontal electrodes. In line with the ERP waveforms, significant clusters were detected for the 450 ms and 900 ms SOAs in the time window of interest in individuals with CLN3 disease. In contrast, the comparison for the 1800 ms SOA condition showed no clear MMN distribution in the CLN3 disease group at this slowest rate.
Discussion
The aim of the current study was to utilize the amplitude of the mismatch negativity (MMN) component to assess auditory sensory memory for duration in individuals with CLN3 disease, on the premise that this easy-to-test neurophysiological marker might be sensitive to subtle changes in auditory cortical processing in this progressive neurodevelopmental disorder. Linear mixed-effects analysis pointed to an MMN that was intact in the clinical group at the medium presentation rate (900 ms SOA), reflecting a generally preserved ability to discriminate auditory duration deviance and to establish auditory sensory memories. However, the MMN was quite clearly compromised at the longer (slower) presentation rate (i.e., the 1800 ms SOA) as greater demand was placed on the sensory memory system, in line with our main hypothesis. Results also suggested that at the most rapid stimulation rate (450 ms SOA), the MMN was weaker in the CLN3 cohort, an effect that was not predicted. Finally, we found that age significantly predicted neurophysiological correlates of sensory memory in CLN3 disease; that is, the MMN showed a progressive reduction in amplitude with increasing age (i.e., disease progression), exactly opposite to what was observed in TD control participants.
In what follows, we describe these results in more detail. First, there was a significant positive relationship between TD and CLN3 participants, demonstrating that overall neurophysiological responsivity was relatively comparable across groups as a function of condition, although the absolute group MMN means were significantly different between the cohorts. In addition, the a priori hypothesis that age would be significantly related to the MMN effect was supported, as the overall fit of the model improved when age was added as a covariate. Additionally, there was a main effect for DEV vs STD trials, which further demonstrated that there was an overall generalized MMN effect across both participant groups. A key finding was that presentation rate (i.e., the variation in SOA) was a significant predictor of participant group, indicative of an interaction between groups as a function of rate of presentation. When exploring this further using age as a fixed effect, the results showed that there was only a significant difference between the fast and medium SOAs (i.e., 450 ms and 900 ms), but not between the fast and slow SOAs (i.e., 450 ms and 1800 ms) or the slow and medium SOAs (i.e., 1800 ms and 900 ms). This suggested that the MMN in CLN3 participants was equally disrupted at the fast and slow presentation rates and that the largest gap in auditory sensory memory performance was between the fast and medium presentation rates.
Next, using a within-subjects model to explicitly test for the MMN effect in the CLN3 group, no significant difference between DEV vs STD trials (i.e., MMN effect) was observed at either the fast or the slow presentation rate, whereas there was a robust MMN effect at the medium SOA rate. In partial contrast to the LME results, exploratory post-hoc cluster statistics did show a significant MMN effect at the fast presentation rate in CLN3, whereas both statistical approaches showed no evidence of an MMN at the slowest rate. This difference in the absence and presence of the MMN effect at the fastest presentation rate (SOA = 450 ms) is most likely due to the inherent methodological differences in these statistical approaches [66]. The computation of cluster-based statistics takes into consideration adjacent temporal and spatial information and uses cluster-based correction methods to account for multiple comparisons, while the LME approach relies on specific time-windows of interest at a fixed region of interest (i.e., pre-specified electrodes). Additionally, the LME model used age as a covariate. Lastly, it is worth noting that while the MMN was not strictly significant at the 450 ms SOA in CLN3, it did approach conventional levels of significance in the LME (p = 0.053), and Bayesian analysis pointed to moderate evidence for an MMN at this presentation rate. When focusing on the TD cohort, the MMN was clearly present and highly stable across all SOAs, as expected.
Prior work has shown that the strength of the MMN is highly dependent on stimulation rate, with reduced MMN responses observed at slower rates [41,42,67]. The current understanding of this phenomenon is that the strength of the auditory sensory memory depends on a temporal integration window, such that establishment of a robust sensory memory depends on the presentation of a number of standards within this window, against which the deviant will ultimately be compared. Perceptually, this is very obvious in a design such as the one used here. At rapid rates of presentation (e.g., an SOA of 450 ms), the duration deviant pops out strongly from the rapid stream of standards, whereas when the rate of presentation is slowed (an SOA of 1800 ms here), this pop-out is diminished. In extremis, the reader can readily imagine that if the standard tones were presented once per minute or at even longer lags, it would become very difficult to detect a duration deviant relative to these sporadic standards, and this would certainly not be achieved automatically (pre-attentively). The fact that the duration MMN is absent at the slowest and most demanding presentation rate here in CLN3 disease may point to the early stages of a breakdown in the automatic detection and integration of these stimuli in auditory sensory memory. It is also worth pointing out that the presence or absence of an MMN during passive tasks is known to correspond closely with behavioral performance when individuals are asked to actively discriminate the deviants in follow-up behavioral studies. Only deviants that can be discriminated above chance levels are found to also evoke MMN responses [67,68].
We did not behaviorally assess the auditory discrimination abilities of the participants, given the associated loss of vision, speech, and motor decline. Many of the participants with CLN3 disease would not have been able to perform the task. Rather, we employed the passive MMN design to assess the evoked neural activity. It will fall to future work to determine what the perceptual and cognitive implications of this breakdown are [10,11,69]. However, prior work has shown that a weakened ability to sustain information in sensory memory can reflect cognitive deterioration in various clinical conditions [42,45,46]. It will also be of significant interest to further investigate the duration-evoked MMN at even slower presentation rates. This may better reveal the extent of this difference, and it remains to be determined whether this difference is peculiar to the feature of duration or whether it will also be evident for other basic auditory features such as pitch, loudness and location. Manipulations of presentation rate are not the only way in which the auditory sensory memory system can be parametrically manipulated. Whereas the presentation-rate manipulation used here is presumed to test the temporal integration window of the MMN system, the sensitivity of the system can also be assessed by manipulating the extent to which the deviant stimulus differs from the standards. Here, a deviant of 180 ms was used against a standard tone of 100 ms, which represents a large and highly discriminable duration change known to evoke large-amplitude MMN responses in neurotypical controls [40]. By parametrically manipulating the extent of the duration deviance, prior work has shown that the amplitude of the MMN tracks with the size of the difference, such that at small differences (e.g., 130 ms versus 100 ms), the MMN is highly diminished or even absent in neurotypical controls [40].
There are parallels between the current findings of diminished MMN responses at slower presentation rates and prior work in other rare neurodevelopmental diseases, specifically Rett syndrome (RTT) and cystinosis [12,13]. In Rett participants, for example, the duration-evoked MMN was only detected when stimuli were presented at the most rapid presentation rate of 450 ms SOA, and unlike the CLN3 disease participants reported here, no MMN was evident at the 900 ms SOA, nor at the 1800 ms SOA, suggesting a more severe disease course in this population. Likewise, participants diagnosed with cystinosis, another of the rare lysosomal storage disorders, produced robust MMNs comparable to those seen in TDs only in response to the fastest presentation rate (i.e., at 450 ms SOA) [13], with clear atypicalities in the MMN at the two slower rates (900 ms and 1800 ms SOA). Taken together, these data suggest that the duration-evoked MMN may be a sensitive measure of disease severity across a number of neurodevelopmental disorders.
An unanticipated finding here was the weakened MMN response in CLN3 disease at the fastest presentation rate (i.e., the 450 ms SOA). This is the rate at which one expects the most robust MMN to be produced, whereas it was at the medium rate (900 ms SOA) that this occurred in CLN3. Since this was not explicitly predicted, the effect warrants replication in an independent cohort before any strong conclusions can be drawn. Nonetheless, these data suggest that there may be an emerging deficit in the ability to generate auditory sensory memories for duration at rapid presentation rates in CLN3 disease.
Clinically, some of the most striking differences observed in individuals with CLN3 disease are in memory, attention and speech functions [5,6,10,11]. This cognitive decline generally begins around the time of onset of vision impairment, but continues to progress over years, even after vision loss is maximal [9,70]. To date, quantitative characterization of these differences has not been well defined [11]. As such, the relationships between cognitive impairments and other clinical features of CLN3 disease are not yet well understood. For instance, the onset of visual decline and of cognitive deterioration have been a subject of debate [6]. It is generally accepted that the onset of observable cognitive decline begins within two years of the onset of visual decline [6,10]. This has been shown in some individuals with CLN3 disease, while in others, this decline seemed to precede visual deterioration or even emerge at a much later stage [71][72][73][74]. These inconsistencies in the manifestation of the onset of cognitive decline were taken to highlight the importance of careful acquisition of patient history in those suspected to have CLN3 disease [6]. Similarly, understanding the extent of cognitive regression in CLN3 disease is an important component in identifying reliable neurophysiological biomarkers of this disease. As far as we know, the use of electrophysiological assays to evaluate cognitive abilities, including attention and memory, has not yet been leveraged in this population. The current work serves as a good first step in exploring and developing objective neural markers of pathology (biomarkers) that can be easily measured noninvasively throughout the progressive stages of CLN3 disease.
Genetically manipulated mouse models of disease are remarkably powerful research tools, providing essential insights into the neurobiological substrates of neurodevelopmental disorders like CLN3 disease [75][76][77], and yet many of the outcome measures used to quantify or track disease progression in a mouse cannot be meaningfully applied in humans. Obviously enough, invasive electrophysiological recordings, ubiquitous in model-systems work, are not feasible in humans. Similarly, many of the behavioral assays used to assess disease progression and severity in a mouse are only loosely related to human behaviors [78], and higher-order functions such as cognitive control and language cannot be readily interrogated. Establishing objective neurophysiological markers of disease progression in human patients is a crucial step towards bridging this inter-species translational divide. In humans, measures of brain electrophysiology are almost exclusively made using non-invasive scalp recordings that assay the activity of large distributed neuronal ensembles across the entire brain (i.e., circuit-level analysis). In mouse models, typical assays involve single- or multi-unit neuronal recordings in vivo (usually in anesthetized preparations) or in vitro slice preparations where synaptic plasticity can be assessed. Again, while the approaches used in each species are certainly powerful in their own right, the researcher is mostly left to infer or speculate about correspondences across species. However, ERP markers like the MMN can be readily recorded in mice using wholly similar, if not identical, experimental procedures [79,80]. It will be important to determine going forward whether the MMN phenotype seen here can be recapitulated in mouse models of Batten disease. If so, it will present an excellent cross-species neuromarker.
Study limitations
A few limitations of the current study need to be acknowledged. Given that auditory responses continue to mature with typical development [43,81], the relatively wide participant age-range is a limitation, and follow-up studies will ideally work within more delimited age-ranges. Of course, given that CLN3 disease is a rare disease, recruitment within restricted age bands is very challenging. In addition, although age was correlated with MMN amplitude in the TDs, it was not associated with manipulations of stimulus rate. This suggests that the differences seen among groups as a function of presentation rate were not affected by age, but rather represent frank differences in brain function in CLN3 disease. It will be crucial for future work to follow up with parametric studies to assess the limits of the auditory sensory memory system in CLN3 disease for these and other fundamental auditory features (i.e., frequency, duration, location, and loudness) and their implications for higher-order cognitive processing. Future studies should also follow up with evaluation of the relationship between ERP measures and the four disease stages based on the CLN3SS, with more patients representing each disease stage. In this study, exploring the effects of CLN3 disease stage on MMN amplitudes as a function of SOA while controlling for age proved to be the best model for the LME analysis. Although including age as a covariate in the LME model improved performance, the outcomes should be interpreted with caution due to the relatively restricted sample size in each of the stages of CLN3 disease. Again, recruitment within restricted disease stages, just as with restricted age bands, is very challenging given that CLN3 is a rare disease. We did not include biological sex as a variable in our analyses, and this may be of importance in future work given the reported sex differences in symptom severity and progression in CLN3 disease [82][83][84]. The current study was not adequately powered to examine this variable (9 females versus 12 males in our CLN3 cohort). It is worth pointing out, though, that there is no clear evidence for biological sex differences in the generation of the MMN [85]. Lastly, non-invasive recordings such as those conducted here are limited in their ability to shed light on the mechanisms by which CLN3 protein dysfunction leads to auditory cortical processing differences. Work using similar paradigms in murine models of CLN3 disease will be highly instructive in this regard [86,87,77].
Conclusions
This study points to a preserved ability of individuals with CLN3 disease to automatically decode duration deviations in the auditory stream when stimuli are presented at a medium presentation rate. Despite this, automatic detection of duration changes was atypical in these individuals when the presentation rate of the stimulus stream was slowed to the lowest value (1800 ms SOA) used in the current study, and also at the fastest rate (450 ms SOA), suggesting that when additional demand is put upon auditory sensory memory, more subtle atypicalities are revealed. We speculate that this attenuation in the duration of sensory memory might have significant implications for different aspects of information processing, task performance and language acquisition. The exact mechanisms underlying this decline, as well as behavioral outcomes, represent important avenues of research to increase knowledge of CLN3 disease and its perceptual and cognitive sequelae. Measures such as these could potentially serve as surrogate biomarkers with the ability to index disease severity and treatment response.
Fig. 1 Group-averaged waveforms for typically developing (TD) and CLN3 disease groups over frontal scalp sites (composite average of F3, Fz and F4). Auditory event-related potentials (ERPs) to standard tones (blue trace) and deviant tones (red trace) are presented with the standard error of the mean indicated by gray shading. Stimulus onset was at 0 ms, indicated by the vertical dotted line. Panel A shows responses for the fastest stimulation rate (450 ms stimulus onset asynchrony (SOA)). Plotted in the panel below the ERPs (yellow trace) is the subtraction waveform (deviant minus standard), isolating the MMN-related activity. TD controls are shown to the left of each panel and CLN3 disease individuals to the right. Panel B shows the responses for the medium-paced rate (900 ms SOA), and Panel C shows responses for the slowest rate (1800 ms SOA). A clear MMN (difference between standard and deviant traces) was evident at all SOAs for the TD control group. However, a clear MMN was present only for the 450 ms and 900 ms SOAs in the CLN3 disease group. The time period of interest is depicted by light blue shaded panels representing the defined window in which average MMN amplitudes were obtained for every individual and each SOA.
Fig. 4 MMN amplitude correlation with Age across SOA conditions. Individual dots represent participants and are color coded according to SOA. Correlations were assessed using robust Spearman's rank correlation (bootstrap permutation test p < 0.05). This was done for each individual SOA (colored lines), as well as collapsed across all conditions (shaded area representing 95% confidence interval). Subplots show the distribution of the data in terms of Age (A, left) and MMN Amplitude (B, below). Ranked CLN3SS correlation with MMN Amplitude and Age across SOA conditions is shown in Fig. 5 | 2024-01-07T06:16:06.321Z | 2024-01-06T00:00:00.000 | {
"year": 2024,
"sha1": "9a5924dbdbbf110b32b63d56744671e5a8984642",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "8c1fd1428c1b1e935685b515a364125c8c789c13",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
197308254 | pes2o/s2orc | v3-fos-license | SYNTHESIS OF 2,4-DIARYLIMIDAZOLES THROUGH SUZUKI CROSS-COUPLING REACTIONS OF IMIDAZOLE HALIDES WITH ARYLBORONIC ACIDS
- The Suzuki coupling, a Pd-catalyzed cross-coupling reaction of a boronic acid with an aryl halide, was used to prepare several 2,4-diarylimidazoles. Iodinated and brominated imidazoles (3) and (9) proved to be suitable aryl halides for Suzuki coupling. These imidazole halides readily reacted with phenyl-, naphthyl- and biphenylboronic acids under Suzuki conditions to give arylated imidazoles.
INTRODUCTION
Diarylimidazoles have been described as pharmacologically active compounds in several publications over the last twenty years. 1,2 2,4-Diarylimidazoles, for example, showed NPY5 receptor antagonist activity 3 as well as antiinflammatory activity. 4 Most of these diarylimidazoles were prepared either by condensing amidines with α-halogenated ketones or by condensing α-amino ketones with KSCN. However, condensation reactions did not always prove to be successful in the preparation of imidazoles, and yields in many cases were low. 4 In the nineteen-nineties, several chemists successfully introduced cross-coupling reactions such as the Stille reaction, 5 the Negishi reaction 6 or the Suzuki reaction 7 for the arylation of protected imidazoles. Of all these cross-coupling reactions, the Suzuki coupling was the most successful in terms of product variety and yield.
EXPERIMENTAL

1H and 13C NMR spectra were recorded on a Bruker Avance DPx200 spectrometer, using TMS as an internal standard. MS spectra were recorded on a Shimadzu QP 5000. Column chromatography was performed on Merck silica gel 60, 0.063-0.200 mm. Melting points were determined with a Kofler melting point apparatus and are uncorrected. Microanalyses were performed by Johannes Theiner at the Institute of Physical Chemistry of the University of Vienna.
5-Iodo-1-methoxymethyl-2-phenyl-1H-imidazole (3)
To 5.647 g (0.030 mol) of 1 in 250 mL of anhydrous THF at -78 °C under argon, 20 mL of 1.6 M n-BuLi in hexane (0.032 mol) was slowly added. After 30 min, a solution of 8.630 g (0.034 mol) of iodine in 20 mL of dry THF was added. The reaction mixture was stirred for 1 h at -78 °C and then allowed to warm to rt. After that, a solution of sodium hydrogen sulfite (10%) was added until the reaction mixture turned clear. The reaction mixture was then washed with 150 mL of a saturated aqueous solution of ammonium chloride. The organic layer was dried over anhydrous sodium sulfate and evaporated.
2-(4-Biphenylyl)-4-(4-ethylphenyl)-1H-imidazole (17)
A solution of 0.455 g (0.001 mol) of 15 in 30 mL of acetone, 7 mL of concentrated hydrochloric acid and 5 mL of water was refluxed for 16 h. After cooling to rt, the reaction mixture was neutralised with 6 M sodium hydroxide. The solvent was evaporated and the residue was extracted with dichloromethane. The combined organic extracts were dried over anhydrous sodium sulfate and evaporated. The residue was subjected to column chromatography with ethyl acetate/n-hexane (2+8 | 2019-04-06T13:07:04.088Z | 2005-08-01T00:00:00.000 | {
"year": 2005,
"sha1": "2c03eb41db855283390a5251b8e42c3ab5c7a8eb",
"oa_license": null,
"oa_url": "https://doi.org/10.3987/com-05-10445",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d40b2bdaa441703446841ef481cdfc915cae9295",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
226639886 | pes2o/s2orc | v3-fos-license | CALCOCO2 silencing represents a potential molecular therapeutic target for glioma
Introduction
A glioma is a type of malignant tumour that occurs in the neuroectoderm of the central nervous system with high morbidity, mortality, and recurrence rates [1]. It is the most common primary intracranial tumour, with an annual incidence of 3-10 per 100,000 in the United States, accounting for 46% of intracranial tumours and 2% of all malignant tumours [2,3]. Current predominant treatments are surgical resection, radiation therapy, and chemotherapy [4][5][6]. However, due to the invasion and metastasis proper-ties of gliomas, it is difficult to attain total removal by surgical resection [7]. In addition, the resistance to radiotherapy and chemotherapy frequently lead to the progression and recurrence of tumours [8]. The prognosis of patients with high-grade glioma is still poor, with a median survival time of 14 months [9]. However, gene therapy has emerged as a promising treatment for glioma, with fewer side effects and greater specificity compared to those of traditional therapies. Therefore, it is necessary to identify therapeutic targets based on the molecular mechanisms underlying tumour occurrence and progression for effective glioma treatment.
Autophagy is a homeostatic process in which cellular metabolic waste is recycled to support cellular metabolism via autophagosomes [10,11]. Recently, researchers have proven that autophagy can promote tumour growth. Activated autophagy is a mechanism by which tumour cells adapt to extreme conditions, such as hypoxia and high metabolic demand [12]. Autophagy occurs during glioma chemoresistance after the use of temozolomide, contributing to the failure of chemotherapy. Drugs targeting autophagy in glioma are urgently needed. CALCOCO2 encodes a coiled-coil domain-containing protein [13]. The protein can combine with ubiquitin-coated bacteria, recognise microtubule-associated protein 1 light chain 3 (LC3) in autophagy, and deliver bacteria to autophagosomes for elimination [14]. However, the role of CALCOCO2 in glioma is unclear.
In this study, the role of CALCOCO2 in the pathogenesis and progression of glioma was investigated.
Material and methods

Cell culture
Human glioma U87 and U251 cell lines were obtained from the Shanghai Institute of Cell Biology, Chinese Academy of Sciences. The cells were cultured in DMEM supplemented with 10% FBS and 1% antibiotics at 37°C with 5% CO 2 .
The present study was approved by the Ethics Committee of China-Japan Union Hospital of Jilin University and The First Hospital of Jilin University.
CALCOCO2 expression in glioma cell lines

CALCOCO2 expression in four glioma cell lines, i.e., U87, U251, U373, and A-172, was assessed by quantitative real-time polymerase chain reaction (RT-qPCR). Briefly, total messenger RNAs (mRNAs) of cells were extracted using TRIzol reagent (Invitrogen, Carlsbad, CA, USA). The mature mRNAs (2 µg per sample) were reverse transcribed into cDNAs using Super MMLV reverse transcriptase (BioTeke, Beijing, China). The mRNA levels of CALCOCO2 were determined by RT-qPCR using the Bio-Rad Connect real-time PCR platform. RT-qPCR consisted of an initial denaturation step at 95°C for 15 s, followed by 30 cycles of 95°C for 5 s and 60°C for 30 s. The mRNA expression levels were determined by a comparative CT (2^-ΔΔCt) analysis.
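The comparative CT method named above reduces to a few lines of arithmetic. The sketch below is illustrative only (not the authors' code), and the Ct values in the example are hypothetical.

```python
# Minimal sketch of the comparative CT (2^-ΔΔCt) relative-expression calculation.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene vs. a control sample, normalised to a
    reference gene (e.g. GAPDH), using the 2^-ΔΔCt method."""
    d_ct_sample = ct_target - ct_ref              # ΔCt of the sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # ΔCt of the control
    dd_ct = d_ct_sample - d_ct_control            # ΔΔCt
    return 2 ** (-dd_ct)

# Hypothetical Ct values: target gene vs. GAPDH in knockdown and control cells
print(relative_expression(ct_target=26.5, ct_ref=18.0,
                          ct_target_ctrl=23.5, ct_ref_ctrl=18.0))  # ~0.125
```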
Construction of lentivirus vectors targeting CALCOCO2
A short hairpin RNA (shRNA) was designed according to the sequence of CALCOCO2. The shRNA oligos were synthesised and inserted into the plasmid GV115 (GeneChem, Shanghai, China), and then recombinant lentiviruses were constructed by plasmid co-transfection of 293T cells according to the manufacturer's instructions. The viral supernatant was collected and filtered through a 0.45-µm filter (Millipore, Billerica, MA, USA) at 72 h post-transfection, and the viral titre was determined. Subsequently, the viral supernatant was added to the U87 and U251 cell lines, and the expression of CALCOCO2 in cells was observed under a fluorescence microscope at 48 h (Olympus America, Melville, NY, USA). The cells infected with shCALCOCO2 and control shRNA were termed shCALCOCO2 and shControl, respectively.
Silencing efficiency assessment
The silencing efficiency of CALCOCO2 at the protein level was assessed by western blotting. Briefly, after U87 and U251 cells were infected with shCALCOCO2 or shControl for 5 days, they were collected and lysed with protein lysate (100 mM tris(hydroxymethyl)aminomethane hydrochloride (pH 6.8), 10 mM ethylenediaminetetraacetic acid, and 4% sodium dodecyl sulphate) for 20 min. The lysates were centrifuged, and the supernatants were collected. The total protein was measured by a BCA protein assay (HyClone-Pierce, Rockford, IL, USA), separated by 12.5% sodium dodecyl sulphate polyacrylamide gel electrophoresis, transferred to polyvinylidene difluoride membranes, and blocked for 1 h at room temperature (25°C). The membranes were then incubated with rabbit anti-GAPDH or rabbit anti-CALCOCO2 primary antibodies (1 : 500; Santa Cruz Biotechnology, Santa Cruz, CA, USA) and incubated at 4°C overnight. The membranes were washed with Tris-buffered saline and Tween, and a moderate volume of secondary antibody (goat anti-mouse IgG, 1 : 5000; Santa Cruz Biotechnology) was added and incubated for 3 h at room temperature. The membranes were then detected using enhanced chemiluminescence (ECL) reagent (ECL-Plus/Kit; Amersham, Piscataway, NJ, USA).
Cell counting
Multiparametric high-content screening (HCS) was utilised to determine the cell growth status.
Briefly, U87 and U251 cells in the logarithmic phase in shCALCOCO2 or shControl groups were seeded on 96-well plates at a density of 4000 cells/well. Subsequently, the cells were incubated for five days, and every day the living cells exhibiting green fluorescence in each plate were recognised and counted using ArrayScan™ HCS software (Cellomics Inc., Pittsburgh, PA, USA).
MTT assay
A 3-(4,5-dimethylthiazol-2-yl)-2, 5-diphenyl-tetrazoniumbromide (MTT) assay was performed to assess cell viability. Briefly, the exponential growth cells infected with shCALCOCO2 or shControl were seeded on 96-well plates at a density of 4000 cells/well and incubated for 1, 2, 3, 4, or 5 days. At a predetermined timepoint, 20 µl of MTT was added to the cells, followed by incubation for 4 h. The supernatants were removed, and 100 µl of dimethyl sulphoxide (DMSO) was added to decompose formazan. The viability of cells was analysed by detecting absorbance at 490 nm using a microplate reader (BioTek Instruments, Winooski, VT, USA).
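As a rough illustration of how such OD490 readings are typically reduced to relative viability, here is a minimal sketch; the blank-correction step, function name and example values are assumptions rather than details taken from the paper.

```python
# Minimal, hypothetical sketch of relative viability from MTT absorbance readings.
import numpy as np

def relative_viability(od_treated, od_control, od_blank=0.0):
    """Mean viability of treated wells relative to control wells (OD490)."""
    treated = np.asarray(od_treated) - od_blank   # blank-corrected treated wells
    control = np.asarray(od_control) - od_blank   # blank-corrected control wells
    return treated.mean() / control.mean()

# Hypothetical day-5 OD490 values for knockdown vs. control wells:
print(relative_viability([0.61, 0.58, 0.63], [1.10, 1.05, 1.12], od_blank=0.05))
```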
Flow cytometry
U87 and U251 cells infected with shCALCOCO2 or shControl were seeded on six-well plates after lentivirus infection for 5 days at a density of 3 × 10⁵ and cultured for 48 h. Subsequently, the cells were harvested, centrifuged, washed with PBS twice, and then resuspended using staining buffer at a cell concentration of 1.0 × 10⁶/ml. The cell suspensions were then stained with Annexin and PI at room temperature for 15 min in the dark and evaluated by flow cytometry (FCM, FACSCalibur; BD Biosciences, Franklin Lakes, NJ, USA).
Caspase-Glo 3/7 assay
Caspase-Glo 3/7 reagent was prepared by mixing caspase-Glo 3/7 with the substrate and was then stored at 4°C. Cells transfected with shCALCOCO2 or shControl at the logarithmic phase were seeded on 96-well plates at a density of 4000 cells/well and then cultured for 1, 2, 3, 4, or 5 days. The caspase-Glo 3/7 reagent was added to the cells at an amount equivalent to the volume of the culture, shaken for 30 s, and cultured for 0.5-3 h at room temperature according to cell conditions. The fluorescence of each well was assessed using a microplate reader.
Gene microarray
The genome-wide effect of the silencing of CALCOCO2 in the U87 cell line was investigated using a GeneChip ® PrimeView™ Human Gene Ex-pression Array (Affymetrix; Thermo Fisher Scientific, Inc., Waltham, MA, USA). Briefly, after cells were treated with shControl or shCALCOCO2 for 72 h, the total mRNA was extracted, quantified, reverse-transcribed, and labelled with biotin using the GeneChip ® 3' IVT Express Kit (Thermo Fisher Scientific). Subsequently, the labelled cDNAs were used to hybridise the GeneChip ® PrimeView™ Human Gene Expression Array consisting of 20,000 genes according to the manufacturer's protocol. After hybridisation, the gene chips were washed and scanned using a GeneChip ® Fluidics Station 450, and images were acquired using GeneChip operating software. Data were summarised, and GeneSpring software was used for data analysis. Differentially expressed genes generated from the microarray analyses were analysed by the Ingenuity Pathway Core Analysis (IPA ® , QIAGEN, Redwood City, CA, USA) to interpret the underlying molecular mechanisms. The enrichment of gene networks was analysed based on the overlap score (p-value and z-score). Three main analyses were performed using IPA, i.e. analyses of diseases and functions, gene networks, and downstream targets.
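As a hedged illustration of what a differential-expression filter of this kind computes, the sketch below combines a fold-change cutoff with a two-sample t-test. The actual study used GeneSpring and IPA, so the thresholds, array shapes and random example data here are purely hypothetical.

```python
# Minimal sketch of a fold-change + t-test differential-expression filter.
import numpy as np
from scipy import stats

def diff_expressed(expr_sh, expr_ctrl, fc_cutoff=2.0, p_cutoff=0.05):
    """expr_sh, expr_ctrl: arrays of shape (n_genes, n_replicates), linear scale.
    Returns a boolean mask of genes passing both cutoffs."""
    fc = expr_sh.mean(axis=1) / expr_ctrl.mean(axis=1)   # per-gene fold change
    _, p = stats.ttest_ind(expr_sh, expr_ctrl, axis=1)   # per-gene two-sample test
    big_change = (fc >= fc_cutoff) | (fc <= 1.0 / fc_cutoff)
    return big_change & (p < p_cutoff)

rng = np.random.default_rng(1)
ctrl = rng.lognormal(mean=5.0, sigma=0.2, size=(1000, 3))   # hypothetical intensities
sh = ctrl * rng.lognormal(mean=0.0, sigma=0.3, size=(1000, 3))
print(int(diff_expressed(sh, ctrl).sum()), "genes flagged")
```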
Assessment of downstream target proteins
To investigate the role of CALCOCO2 in the pathogenesis of glioma, the expression levels of related proteins in U87 and U251 cells infected with shCALCOCO2 or shControl were assessed by western blotting. The specific methods were as described above, and the protein levels were measured.
Statistical analysis
All experiments were repeated thrice, and the results are expressed as means ± standard deviation. Statistical differences were evaluated using paired Student's t-tests implemented in SPSS 23.0 (SPSS Inc., Chicago, IL, USA). P < 0.05 indicated statistical significance, and p < 0.01 and p < 0.001 were considered highly significant.

Results

CALCOCO2 expression in glioma cells and the silencing of CALCOCO2 in U87 and U251 cell lines

As shown in Figure 1 A, CALCOCO2 mRNA was overexpressed in all four cell lines, and the expression was highest in U87 and U251 cells. Moreover, as shown in Figures 1 B and C, most U87 and U251 cells were positive for green fluorescent protein under a microscope, indicating the efficient silencing of CALCOCO2 in both cell lines. The relative CALCOCO2 mRNA levels for shCALCOCO2-treated cells were 0.138 ±0.014 and 0.375 ±0.025 for the U87 and U251 cell lines, respectively, which were significantly lower than those of shControl cells (1.002 ±0.071 and 1 ±0.032, respectively, p < 0.001; Figures 1 D, E). Consistent with these findings, the protein levels of CALCOCO2 were significantly downregulated compared with those in shControl U87 and U251 cells (Figures 1 F, G).

Figure 1. CALCOCO2 silencing in the U87 and U251 cell lines. A – Expression of CALCOCO2 in glioma cell lines. qRT-PCR was performed to evaluate the expression levels of CALCOCO2 in four glioma cell lines (U87, U251, U373, and A-172). B, C – Microscopic images of U87 and U251 cell lines in the shControl and shCALCOCO2 groups. D, E – qRT-PCR analysis of the efficiency of CALCOCO2 silencing at the mRNA level. F, G – Western blot analysis of the efficiency of CALCOCO2 silencing at the protein level. Data are shown as means ± SD (n = 5; *p < 0.05, **p < 0.01, and ***p < 0.001).
Effects of CALCOCO2 silencing on cell growth
As shown in Figure 2 A, CALCOCO2 silencing significantly inhibited U87 cell growth compared to that of the shControl group (p < 0.05). The cell counting results showed that the proliferation fold change values for the shCALCOCO2 group in the U87 cell line at days 4 and 5 were 2.8 ±0.07 and 4.1 ±0.11, respectively, which were obviously lower than those for shControl cells (4.13 ±0.05 and 6.42 ±0.2, respectively). Similarly, there was a significant difference in cell counts between the shCALCOCO2 and shControl groups in the U251 cell line, especially on days 4 and 5, indicating the inhibitory effect of CALCOCO2 silencing on cell growth. An MTT assay also showed that both U87 and U251 cells exhibited slower proliferation and growth after the silencing of CALCOCO2, and these effects were even more pronounced on day 5, when the proliferation fold changes in the shCALCOCO2 group were 2.163 ±0.0068 and 1.696 ±0.0672 in U87 and U251 cells, respectively, while the proliferation fold changes in the shControl group were 3.882 ±0.0547 and 3.117 ±0.0793, respectively.
Effects of CALCOCO2 silencing on cell apoptosis
As shown in Figures 3 A-D, the percentages of cell apoptosis in shCALCOCO2-infected U87 and U251 cell lines, as detected by FCM, were 11.44 ±0.1178% and 9.32 ±0.0955%, respectively, on day 4 post-CALCOCO2 silencing, while the shControl group exhibited significantly lower apoptotic percentages of 4.13 ±0.1308% and 4.3 ±0.1%, respectively (p < 0.001). In addition, caspase 3/7 measurements showed that the expression levels of caspase 3/7 in U87 and U251 cells were approximately 1.72 and 2.07 times greater than those in the shControl group after infection with shCALCOCO2 for 3 days (Figures 3 E, F).
Molecular mechanisms underlying the effects of CALCOCO2 in gliomas
In the gene microarray analysis, there were 586 differentially expressed genes, including 357 genes that were downregulated and 229 genes that were upregulated (Figure 4 A). These discriminative genes were functionally analysed by IPA. As shown in Figure 4 B, 17 CALCOCO2-related functions and diseases were detected by IPA, and infectious diseases, cancer, and organismal injury and abnormalities were the highest-ranked categories.
Discussion
CALCOCO2 is a coiled-coil domain-containing protein-coding gene with an important role in autophagy [15]. It serves as an autophagy receptor that interacts with targets and transfers them to autophagosomes by binding to LC3. The abnormal expression of CALCOCO2 is related to inflammation and Crohn's disease [16]. However, its potential role in tumours, especially gliomas, remains a mystery. U87, U373, U251, and A-172 are glioma cell lines commonly used in cellular experiments.
In the present study, we examined these four cell lines and selected those with the most abnormal CALCOCO2 expression for the subsequent experiments. Our results showed that CALCOCO2 is overexpressed in these four glioma cell lines, with the highest expression in U87 and U251 cells, suggesting that it may be an important tumour-associated factor in the pathogenesis and progression of glioma. In the present study, CALCOCO2 was successfully silenced using a lentiviral vector, and the role of CALCOCO2 in cell growth and apoptosis was evaluated in the U87 and U251 cell lines. Cell counting and MTT assays demonstrated that the silencing of CALCOCO2 significantly inhibited cell growth and proliferation. These results suggested that CALCOCO2 silencing had antitumour effects via its anti-proliferative functions.
In addition to proliferation, apoptosis also has a profound impact on the pathogenesis and progression of tumours. Apoptosis accomplishes programmed cell death via cell shrinkage and nuclear and DNA fragmentation [17]. Further, multiple human diseases are influenced by apoptosis, including tumours, immunological diseases, sepsis, and neurodegenerative changes [18][19][20][21]. Previous studies have demonstrated an important role of apoptosis in glioma; promoting apoptosis of glioma cells is a potential strategy for tumour therapy [22,23]. In this study, FCM and caspase-glo 3/7 assays indicated that tumour apoptosis increased significantly after CALCOCO2 silencing. Thus, apoptosis is a crucial mechanism by which CALCOCO2 influences gliomas.
To further assess the molecular mechanisms underlying CALCOCO2-associated glioma, the U87 glioma cell line was evaluated by a microarray analysis, and the results were analysed by IPA. The silencing of CALCOCO2 influenced the expression of hundreds of genes associated with various functions and diseases. CALCOCO2 was most strongly associated with cancer, supporting the important role of CALCOCO2 in gliomas. Other relevant functions, such as cell cycle, cell death and survival, cell growth, and proliferation, are also correlated with pathogenesis and progression [24,25]. To further clarify the downstream biological alterations, several genes involved in cancer development were chosen, and a core CALCOCO2 network including multifarious genes related to cancer was mapped. Several cancer-related genes exhibited significant differential expression after the silencing of CALCOCO2. In particular, the well-known pro-apoptosis genes FAS and CASP1 were significantly upregulated and the autophagy-related gene BECN1 was markedly downregulated by CALCOCO2 silencing.
We then used western blotting to investigate the expression of BECN1, CASP1, FAS, GSK3B, BIRC5, and IL-1β at the protein level. FAS is a key death receptor; when it binds FasL, the conformation of FAS is altered, which then triggers the apoptotic cascade [26,27]. Interleukin-1β (IL-1β) is a cytokine in the chemokine family, also known as lymphocyte stimulating factor [28]. It is mainly produced by activated mononuclear macrophages and is related to the immune response [29,30]. CASP1 plays a crucial role in innate immunity by activating the proinflammatory cytokine IL-1β [31]. GSK3β is a proline-directed serine/threonine protein kinase involved in energy metabolism, nerve cell development, and body morphogenesis [32]. BIRC5 is a member of the inhibitor of apoptosis (IAP) gene family, which encodes negative regulatory proteins that prevent apoptotic cell death [33,34]. It has been reported that activating CASP1, FAS, GSK3B, BIRC5, and IL-1β may induce cell apoptosis [35][36][37][38][39]. In this study, the protein levels of CASP1, FAS, GSK3B, BIRC5, and IL-1β were upregulated by the silencing of CALCOCO2, in accordance with the microarray results. These results suggested that the upregulation of CASP1, FAS, GSK3B, BIRC5, and IL-1β induced by the inhibition of CALCOCO2 results in increased tumour apoptosis. In addition, we detected significant downregulation of BECN1 after CALCOCO2 silencing. BECN1 is a key autophagy-promoting gene that maintains the balance between cell death and survival [40,41]. The activation of tumour autophagy is decreased by a BECN1-targeted microRNA [42]. Additionally, the size and number of breast carcinoma cells decrease after the knockdown of BECN1 [43]. Autophagy in tumour cells is activated in response to cellular stress [12]. The silencing of autophagy-related genes can decrease tolerance to extreme external conditions and even contribute to tumour cell death [44][45][46]. Previous studies have shown that there is a relationship between CALCOCO2 and autophagy [13,15]. The results of the western blotting and gene microarray analyses in this study suggest that the mechanisms underlying CALCOCO2-mediated glioma pathogenesis and progression are also associated with autophagy.
In conclusion, the results of this study demonstrated that the knockdown of CALCOCO2 could inhibit glioma by influencing autophagy and promoting apoptosis via the activation of FAS and CASP1. | 2020-07-02T10:11:06.425Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "ccd0322619797e0eb1d73e07f4fdc5870f5fb53b",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.archivesofmedicalscience.com/pdf-120367-59596?filename=CALCOCO2%20silencing.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "653d298913d768c9eff6f080eeafa95daa270de4",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3936201 | pes2o/s2orc | v3-fos-license | Hypokalemia: a clinical update
Hypokalemia is a common electrolyte disturbance, especially in hospitalized patients. It can have various causes, including endocrine ones. Sometimes, hypokalemia requires urgent medical attention. The aim of this review is to present updated information regarding: (1) the definition and prevalence of hypokalemia, (2) the physiology of potassium homeostasis, (3) the various causes leading to hypokalemia, (4) the diagnostic steps for the assessment of hypokalemia and (5) the appropriate treatment of hypokalemia depending on the cause. Practical algorithms for the optimal diagnostic, treatment and follow-up strategy are presented, while an individualized approach is emphasized.
Introduction
Hypokalemia is present when serum levels of potassium are lower than normal. It is a rather common electrolyte disturbance, especially in hospitalized patients, with various causes, and it sometimes requires urgent medical attention (1). It usually results from increased potassium excretion or intracellular shift and less commonly from reduced potassium intake. Although many chapters, clinical statements and guidelines refer to hypokalemia, they do so mainly in the context of other clinical entities. The aim of this comprehensive review is to provide current knowledge regarding the definition, prevalence and etiology of hypokalemia, as well as an individualized guide for the optimal diagnostic management and follow-up strategy.
Materials and methods
In order to identify publications on hypokalemia, a literature search was conducted in PubMed using combinations of the key-terms: 'potassium' OR 'hypokalemia' OR 'hypokalaemia' OR 'electrolyte disturbances' AND 'guide' OR 'algorithm' OR 'guidelines'. In addition, a manual search of key journals and abstracts from the major annual meetings in the fields of endocrinology and nephrology was conducted. This review collected, analyzed and qualitatively re-synthesized information regarding: (1) the definition and prevalence of hypokalemia, (2) the physiology of potassium homeostasis, (3) the various causes leading to hypokalemia, (4) the diagnostic steps for the assessment of hypokalemia and (5) the appropriate treatment of hypokalemia depending on the cause.

Physiology of potassium homeostasis

Potassium (K+) is the predominant intracellular cation: approximately 98% of total body K+ resides inside cells, and almost all cells have the pump called 'Na+-K+-ATPase', which pumps sodium (Na+) out of the cell and K+ into the cell, leading to a K+ gradient across the cell membrane (intracellular K+ > extracellular K+) that is partially responsible for maintaining the potential difference across the membrane. Many cell functions rely on this potential difference, particularly in excitable tissues, such as nerve and muscle. Two percent of K+ exists in the extracellular fluid (ECF) at a concentration of only 4 mEq/L (3). Enzyme activities, as well as cell division and growth, are catalyzed by potassium and are affected by its concentration and alterations thereof.
Of great importance, intracellular K+ participates in acid-base regulation through exchange for extracellular hydrogen ions (H+) and by influencing the rate of renal ammonium production (4). Counterregulatory mechanisms exist in order to defend against potassium alterations. These mechanisms serve to maintain a proper distribution of K+ within the body, as well as to regulate the total body K+ content. Excessive ECF potassium (hyperkalemia) decreases the membrane potential (depolarization), while hypokalemia causes hyperpolarization and non-responsiveness of the membrane (5). If potassium balance is disrupted (hypokalemia or hyperkalemia), this can also lead to disruption of cardiac electrical conduction, dysrhythmias and even sudden death. Potassium balance also directly affects hydrogen ion (H+) balance at the intracellular and extracellular level and, thereby, overall cellular activity.
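The review gives no formula, but the hyperpolarization it describes can be quantified with the standard Nernst equation for the K+ equilibrium potential, E_K = (RT/zF)·ln([K+]out/[K+]in). In the sketch below, the extracellular concentration follows the ~4 mEq/L quoted in the text, while the intracellular value of ~140 mEq/L is a conventional textbook figure, not one stated in this review.

```python
# Minimal sketch: K+ equilibrium potential via the Nernst equation.
import math

def nernst_potassium(k_out, k_in, temp_c=37.0):
    R, F, z = 8.314, 96485.0, 1          # J/(mol*K), C/mol, valence of K+
    T = temp_c + 273.15                  # body temperature in kelvin
    return 1000 * (R * T / (z * F)) * math.log(k_out / k_in)  # millivolts

print(nernst_potassium(4.0, 140.0))   # ~ -95 mV  (normokalemia)
print(nernst_potassium(2.5, 140.0))   # ~ -108 mV: hypokalemia hyperpolarizes
```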
Balance of K +
External potassium balance is determined by the rate of potassium intake (normally 100 mEq/day) and rate of urinary (normally 90 mEq/day) and fecal excretion (normally 10 mEq/day). The distribution of potassium in muscles, bone, liver and red blood cells (RBC) and ECF has a direct effect on internal potassium balance (6, 7) (Fig. 1).
The kidney is primarily responsible for maintaining total body K+ balance. However, renal K+ excretion is adjusted over several hours; therefore, changes in extracellular K+ concentrations are initially buffered by movement of K+ into or out of skeletal muscle. The regulation of K+ distribution between the intracellular and extracellular space is referred to as internal K+ balance. Under normal conditions, insulin and catecholamines play the most important role in this regulation (8). Potassium controls its own ECF concentration through feedback regulation of aldosterone release. An increase in K+ levels leads to a release of aldosterone through the renin-angiotensin-aldosterone mechanism or through direct stimulation of aldosterone release from the adrenal cortex cells (9). More specifically, an increase in extracellular potassium concentration stimulates aldosterone secretion (via angiotensin II), which in turn increases urinary K+ excretion. In the steady state, K+ excretion matches intake; approximately 90% is excreted by the kidneys and 10% in the stool. Potassium reabsorption along the proximal nephron is fairly constant. By contrast, the rate of K+ secretion by the distal nephron varies and is regulated according to the physiological needs. The cellular determinants of K+ secretion in the principal cell include the intracellular K+ concentration, the luminal K+ concentration, the potential (voltage) difference across the luminal membrane and the permeability of the luminal membrane for K+. Conditions that increase cellular K+ concentration, decrease luminal K+ concentration or render the lumen more electronegative will increase the rate of K+ secretion. Conditions that increase the permeability of the luminal membrane for K+ will increase the rate of K+ secretion (8,9).
Two principal determinants of K + secretion are mineralocorticoid activity and distal delivery of Na + and water. Aldosterone is the major mineralocorticoid in humans and mediates the renal excretion of K + and Na + reabsorption by binding to the mineralocorticoid receptors in the distal tubules and collecting ducts of the nephron. Aldosterone increases intracellular K + concentration by stimulating the activity of the Na + -K + -ATPase in the basolateral membrane, stimulates Na + reabsorption across the luminal membrane, which increases the electronegativity of the lumen, thereby increasing the electrical gradient favoring K + secretion and lastly has a direct effect on the luminal membrane to increase K + permeability (10). Under conditions of volume depletion, activation of the renin-angiotensin system leads to increased aldosterone release. The increase in circulating aldosterone stimulates renal Na + retention, contributing to the restoration of ECF volume, but occurs without a demonstrable effect on renal K + secretion. When hyperkalemia occurs, aldosterone release is mediated by a direct effect of K + on cells in the zona glomerulosa. The subsequent increase in circulating aldosterone stimulates renal K + secretion, restoring the serum K + concentration to normal, but does so without concomitant renal Na + retention. The ability of aldosterone to signal the kidney to stimulate salt retention without K + secretion in volume depletion and stimulate K + secretion without salt retention in hyperkalemia has been referred to as the aldosterone paradox (11).
Furthermore, K+ is freely filtered by the glomerulus, and almost all the filtered K+ is reabsorbed in the proximal tubule and loop of Henle. This absorption in the proximal part of the nephron passively follows that of Na+ and water, whereas reabsorption in the thick ascending limb of the loop of Henle is mediated by the Na+-K+-2Cl− carrier (NKCC2) in the luminal membrane. The connecting segment, the principal cells in the cortical and outer medullary collecting tubule, and the papillary (or inner medullary) collecting duct secrete K+ via luminal potassium channels (12). The renal outer medullary K+ (ROMK) channel is one of the two populations of K+ channels that have been identified in the cells of the cortical collecting duct and is considered to be the major K+-secretory pathway. This channel is characterized by low conductance and a high probability of being open under physiologic conditions. The maxi-K+ channel (also known as the large-conductance K+ (BK) channel) is characterized by a large single-channel conductance and quiescence in the basal state, with activation under conditions of increased flow. In addition to increased delivery of Na+ and dilution of the luminal K+ concentration, recruitment of maxi-K+ channels contributes to flow-dependent increases in K+ secretion (11,12).
Returning to the function of the collecting segments, they secrete varying quantities of K+ according to physiologic requirements and are responsible for most of the urinary potassium excretion. Secretion in the distal segments is also balanced by K+ reabsorption through the intercalated cells in the cortical and outer medullary collecting tubules (13). The active H+-K+-ATPase pump in the luminal membrane mediates both proton secretion and K+ reabsorption. The kidneys are far more capable of increasing than of decreasing K+ excretion. As a result, inadequate intake can lead to K+ depletion and hypokalemia. Hyperkalemia usually occurs when renal excretion is impaired (glomerular filtration rate (GFR) < 20 mL/min).
Definition and prevalence of hypokalemia
Hypokalemia is an electrolyte disturbance characterized by low serum potassium concentrations (normal range: 3.5-5.0 mEq/L). Severe and life-threatening hypokalemia is defined as potassium levels <2.5 mEq/L. In outpatient populations undergoing laboratory testing, mild hypokalemia can be found in almost 14% (14). Furthermore, as many as 20% of hospitalized patients are found to have hypokalemia, but in only 4-5% is it clinically significant (15). Severe hypokalemia is relatively uncommon. Approximately 80% of patients who are receiving diuretics become hypokalemic, and many patients with hypokalemia also have an associated systemic disease. There are no significant differences in its prevalence between males and females (16).
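For illustration, the serum cut-offs just quoted can be encoded directly; the intermediate "mild to moderate" label in this sketch is a common convention rather than a threshold defined by this review.

```python
# Minimal sketch encoding the stated serum K+ cut-offs
# (normal 3.5-5.0 mEq/L; severe, life-threatening hypokalemia < 2.5 mEq/L).
def classify_potassium(k_meq_per_l: float) -> str:
    if k_meq_per_l < 2.5:
        return "severe hypokalemia"
    if k_meq_per_l < 3.5:
        return "hypokalemia (mild to moderate)"   # conventional label, assumed
    if k_meq_per_l <= 5.0:
        return "normokalemia"
    return "hyperkalemia"

print(classify_potassium(2.9))  # -> hypokalemia (mild to moderate)
```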
Causes of hypokalemia
Hypokalemia can be caused either by decreased intake of potassium or by excessive losses of potassium in the urine or through the GI tract (17,18). The latter is more common. Excessive excretion of potassium in the urine (kaliuresis) may result from the use of diuretic drugs, endocrine diseases such as primary hyperaldosteronism, kidney disorders and genetic syndromes affecting renal function (19). Gastrointestinal losses of potassium usually are due to prolonged diarrhea or vomiting, chronic laxative abuse, intestinal obstruction or infections. An intracellular shift of potassium can also lead to severe hypokalemia. Insulin administration, stimulation of the sympathetic nervous system, thyrotoxicosis and familial periodic paralysis are some of the causes of this phenomenon (20). Congenital adrenal hyperplasia due to enzymatic defects is a genetic syndrome strongly associated with hypertension and hypokalemia, resulting from excessive mineralocorticoid effects. Drugs, such as diuretics and penicillin, can often be the underlying cause of hypokalemia. Finally, hypomagnesemia is of particular importance. More than 50% of clinically significant hypokalemia is accompanied by magnesium deficiency, which is clinically most frequently observed in individuals receiving loop or thiazide diuretic therapy. Concomitant magnesium deficiency has long been appreciated to aggravate hypokalemia, and hypokalemia associated with magnesium deficiency is often refractory to treatment with K+ alone (21) (Table 1).
Signs and symptoms
The severity of hypokalemia's clinical manifestations tends to be proportionate to the degree and duration of the serum potassium reduction. Symptoms generally do not appear until serum potassium falls below 3.0 mEq/L, unless it falls rapidly or the patient has a potentiating factor, such as the use of digitalis, which predisposes to arrhythmias. According to the severity of hypokalemia, symptoms can range from none to lethal cardiac arrhythmias (22). Symptoms usually resolve with correction of the hypokalemia.
More specifically, the manifestations can be categorized according to the affected system. Regarding renal function, the effects of hypokalemia can include metabolic acidosis, rhabdomyolysis (in severe hypokalemia) and, rarely, impairment of tubular transport, chronic tubulointerstitial disease and cyst formation. When the nervous system is affected, the patient can suffer from leg cramps, weakness, paresis or ascending paralysis. Constipation or intestinal paralysis and respiratory failure often present as signs of severe hypokalemia. Last but not least, hypokalemia can have detrimental effects on the cardiovascular system, leading to electrocardiographic (ECG) changes (U waves, T-wave flattening and ST-segment changes), cardiac arrhythmias (sometimes lethal) and heart failure (23) (Table 2).
Laboratory investigation of hypokalemia

General diagnostic approach
The underlying cause of hypokalemia is usually apparent after obtaining a detailed medical history and physical examination (24). In order to evaluate the severity of hypokalemia and to initiate effective treatment, assessment of serum and urinary potassium levels is needed. Depending on the above findings, tests and imaging of the endocrine glands may be appropriate, but they should not be first-line tests unless the clinical index of suspicion for such a disorder is high. A basic biochemical laboratory panel (including serum sodium, potassium, glucose, chloride, bicarbonate, BUN and creatinine) is the core of screening in patients with hypokalemia. Urine electrolytes (potassium and chloride) in spot urine are useful in differentiating renal from non-renal causes of hypokalemia. An arterial blood gas (ABG) analysis should be performed to detect metabolic acidosis or alkalosis when the underlying cause is not apparent from the history. As the difference in potassium levels between arterial and venous blood samples is not clinically significant, measurement of potassium in a venous blood sample is not contraindicated in the emergency department. Further urinalysis and urine pH measurement should follow to assess for the presence of renal tubular acidosis. Serum magnesium, calcium and/or phosphorus levels are important to exclude associated electrolyte abnormalities, especially if alcoholism is suspected. Urinary calcium excretion is very important for excluding Bartter syndrome. Serum digoxin levels should also be measured if the patient is on digitalis. In cases with a high clinical index of suspicion for a specific disorder, a drug screen in urine and/or serum for diuretics, amphetamines and other sympathomimetic stimulants should be conducted. Assessment of TSH levels is required in cases of tachycardia or clinical suspicion of hypokalemic periodic paralysis (25).
In general, there are two major components of the diagnostic evaluation: (a) assessment of urinary potassium excretion in order to distinguish renal potassium losses (e.g., diuretic therapy, primary aldosteronism) from other causes of hypokalemia (e.g., gastrointestinal losses, transcellular potassium shifts) and (b) assessment of acid-base status, since some causes of hypokalemia are associated with metabolic alkalosis or metabolic acidosis. We present a diagnostic algorithm for the assessment of hypokalemia.
Assessment of urinary potassium excretion
Potassium excretion in a 24-h urine collection is the best way to assess urinary potassium excretion (26). If this excretion is above 15 mEq of potassium per day, renal potassium loss is the most likely cause of the hypokalemia (27). Measurement of the potassium and creatinine concentrations in a spot urine sample is an alternative if collection of a 24-h urine is not feasible. A spot urine potassium-to-creatinine ratio greater than 13 mEq/g creatinine (1.5 mEq/mmol) usually indicates inappropriate renal potassium loss. After determining whether renal potassium wasting is present, assessment of acid-base status can further narrow the differential diagnosis (28,29).
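The two cut-offs above amount to a pair of one-line decision rules. The following sketch is illustrative only: function names and unit handling are assumptions, and it is not a validated clinical tool.

```python
# Minimal sketch of the two urinary cut-offs described in the text:
# 24-h urinary K+ > 15 mEq/day, or a spot urine K+/creatinine ratio
# > 13 mEq/g creatinine (~1.5 mEq/mmol), points to renal potassium wasting.
def renal_wasting_24h(urine_k_meq_per_day: float) -> bool:
    return urine_k_meq_per_day > 15.0

def renal_wasting_spot(urine_k_meq_l: float, urine_cr_g_l: float) -> bool:
    ratio = urine_k_meq_l / urine_cr_g_l   # mEq K+ per gram of creatinine
    return ratio > 13.0                    # equivalent to ~1.5 mEq/mmol creatinine

print(renal_wasting_24h(40.0), renal_wasting_spot(30.0, 1.5))  # True True
```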
Assessment of acid-base status
Once urinary potassium excretion has been measured, the following diagnostic possibilities should be considered in the patient with hypokalemia of uncertain origin. A metabolic acidosis with a low rate of urinary potassium excretion in an asymptomatic patient is suggestive of lower gastrointestinal losses due to laxative abuse or a villous adenoma. On the other hand, a metabolic acidosis with urinary potassium wasting points to diabetic ketoacidosis or to type 1 (distal) or type 2 (proximal) renal tubular acidosis. Furthermore, surreptitious vomiting (common in bulimic patients trying to lose weight) or diuretic use can be the cause of a metabolic alkalosis with a low rate of urinary potassium excretion. In addition, some patients with laxative abuse present with metabolic alkalosis, rather than with the expected metabolic acidosis (30).
On the other hand, when a metabolic alkalosis with urinary potassium wasting is present and the patient has normal blood pressure, the diagnosis is diuretic use, vomiting, or Gitelman or Bartter syndrome. In this setting, measurement of the urine chloride concentration is often helpful: it is normal (equal to intake) in Gitelman or Bartter syndrome, whereas with diuretics it is high or low, depending upon the duration of action of the diuretic. In cases of vomiting, the urine chloride concentration is low at a time when urinary sodium and potassium excretion may be relatively high, owing to the need to maintain electroneutrality as some of the excess bicarbonate is being excreted (31). In the presence of hypertension, surreptitious diuretic therapy in a patient with underlying hypertension, renovascular disease, or one of the causes of primary mineralocorticoid excess comes first in the differential diagnosis (32).
Patients with either Bartter or Gitelman syndrome may present with constipation, muscle cramps and weakness, and non-specific dizziness and fatigue. The biochemical features of both syndromes can include hypokalemic, hypochloremic metabolic alkalosis associated with high plasma renin activity and high aldosterone concentrations (33). Patients with Bartter syndrome present in early childhood; their failure to thrive is more severe and is accompanied by a great deal of growth retardation. Gitelman syndrome is associated with less severe failure to thrive, and the growth retardation is milder. Symptoms of Gitelman syndrome are similar to those of thiazide diuretic abusers with salt wasting. Indeed, Gitelman patients are mostly thought to be asymptomatic (34). They often present for workup of isolated, asymptomatic hypokalemia, but when questioned more closely, 80% of Gitelman patients complain of dizziness and fatigue, 70% complain of muscle weakness and cramps, and 50% report nocturia and polyuria, of whom 90% are subsequently found to be salt wasters. Normal blood pressure in patients with Bartter syndrome is a feature thought to differ from the occasional hypotension of Gitelman syndrome. Focal segmental glomerulosclerosis has been described in Bartter syndrome (35). In contrast, Gitelman patients often complain of nocturia and polyuria. Persistent hypokalemia may give rise to interstitial nephritis, signaled by urinary anomalies. Urinary calcium excretion is important because it distinguishes the two syndromes: in contrast to the hypocalciuria of Gitelman syndrome, Bartter patients are often documented to have hypercalciuria. Medical noncompliance with potassium chloride supplementation and other therapy is an important issue in the long-term follow-up of Bartter and Gitelman patients (36). Although the chronic hypokalemia can be mildly symptomatic, it can be aggravated by diarrhea or vomiting, precipitating a prolonged QT interval, increased risk of rhabdomyolysis, cardiac arrhythmia, syncope and sudden death. Alcohol abuse, cocaine or other drug abuse can also precipitate life-threatening arrhythmia; electrolyte and fluid repair, oral potassium supplementation, potassium-sparing diuretics, cyclo-oxygenase inhibitors and renin-angiotensin blockers become life-saving in such emergencies (37). Finally, in familial cases, both conditions are conveyed by autosomal recessive transmission. The site of the defect in Bartter syndrome is the thick ascending limb (TAL) of the loop of Henle, whereas in Gitelman syndrome, the defect resides at the distal convoluted tubule (DCT) (38).
Liddle syndrome is a rare form of autosomal dominant hypertension with early penetrance and impressive cardiovascular sequelae. In addition to severe hypertension, many of the patients have overt hypokalemia. Despite a clinical presentation typical of primary aldosteronism, the actual rates of aldosterone excretion are markedly suppressed, accounting for the descriptive term 'pseudoaldosteronism.' Liddle syndrome is an extreme example of low-renin, volume-expanded hypertension. In general, inappropriate renal Na+ retention with subsequent volume expansion, low plasma renin activity and hypertension are the consequences of 'pseudoaldosteronism', which results from constitutive activation of the amiloride-sensitive epithelial Na+ channel (ENaC) in the terminal nephron segments. Cardiovascular and cerebrovascular complications of hypertension are much more common findings, and they are the usual cause of death in undiagnosed or untreated patients (39).
A diagnostic approach to a patient with hypokalemia is presented in Fig. 2.
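For readers without access to Fig. 2, the branching logic of the two preceding sections can be restated as a small decision function. This sketch only transcribes the prose above; it omits clinical nuance (the urine chloride step is left as a comment) and is not a validated algorithm.

```python
# Minimal sketch of the diagnostic branching described in the text (and Fig. 2).
def hypokalemia_differential(acid_base: str, renal_wasting: bool,
                             hypertension: bool = False) -> list[str]:
    if acid_base == "acidosis":
        if not renal_wasting:
            return ["lower GI losses (laxative abuse, villous adenoma)"]
        return ["diabetic ketoacidosis", "renal tubular acidosis (type 1 or 2)"]
    if acid_base == "alkalosis":
        if not renal_wasting:
            return ["surreptitious vomiting", "diuretic use"]
        if hypertension:
            return ["diuretic therapy in a hypertensive patient",
                    "renovascular disease", "primary mineralocorticoid excess"]
        # Normotensive: urine chloride helps separate these (see text above)
        return ["diuretic use", "vomiting", "Gitelman syndrome", "Bartter syndrome"]
    return ["re-assess: acid-base status unclear"]

print(hypokalemia_differential("alkalosis", renal_wasting=True, hypertension=False))
```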
Endocrine causes of hypokalemia
Screening for primary aldosteronism (PA) is recommended for any case with spontaneous or diuretic-induced hypokalemia and hypertension. In such cases, the plasma aldosterone to plasma renin activity ratio (ARR) should be assessed. If it is higher than 20, further confirmatory testing is necessary (oral sodium loading test, saline infusion test, fludrocortisone suppression test, captopril challenge test) (40). Conclusively, in patients with hypertension who are at high risk for primary hyperaldosteronism (patients with a relatively high prevalence of PA, including patients with stage I (160-179/100-109 mmHg), stage II (>180/110 mmHg) or drug-resistant hypertension; hypertension with adrenal incidentaloma; or hypertension and a family history of early-onset hypertension or cerebrovascular accident at age younger than 40 years), the plasma aldosterone concentration/plasma renin activity ratio should be determined. An adrenal computed tomography scan is recommended in all patients with PA. This test also excludes large masses that may represent adrenocortical carcinoma. Moreover, adrenal venous sampling by an experienced radiologist is recommended to distinguish between unilateral and bilateral adrenal disease when surgical treatment is feasible and the patient is willing to undergo the procedure. Genetic testing for glucocorticoid-remediable aldosteronism (GRA) is suggested in patients whose confirmed PA begins before 20 years of age and in those with a family history of PA or of strokes at 40 years of age or younger. This condition is also called familial hyperaldosteronism type 1. In very young patients with PA, testing for germline mutations in KCNJ5 is suggested (41). Rare causes of hypertension and hypokalemia include 11-beta hydroxylase and 17-alpha hydroxylase deficiency, which are characterized by increased production of cortisol and aldosterone precursors due to chronic stimulation of the adrenal cortex by ACTH (42). In 11-beta hydroxylase deficiency, 11-deoxycortisol is markedly elevated in the classic form, whereas in cases with the non-classic variants, 11-deoxycortisol may be normal, and an ACTH stimulation test is then indicated. In the classic form, deoxycorticosterone (DOC), urinary 17-ketosteroids, urinary tetrahydrometabolites, adrenal androgens, testosterone and 17-hydroxyprogesterone are elevated. By contrast, in 17-alpha hydroxylase deficiency, 17-hydroxyprogesterone, 11-deoxycortisol, cortisol, adrenal androgens and testosterone are all decreased or absent. The urinary metabolites 17-hydroxycorticosteroids and 17-ketosteroids are also decreased or absent. The diagnosis is established by markedly elevated levels of 11-deoxycorticosterone and corticosterone (43). If there are clinical features of hypercortisolemia (e.g. Cushing's syndrome) and after excluding exogenous corticosteroid use, a diagnostic approach to confirm autonomous cortisol production is recommended. At least two of the following tests with high diagnostic accuracy are needed: 24-h urinary cortisol, late-night salivary cortisol, and the 1 mg overnight or 2 mg 48-h dexamethasone suppression test. In some cases, a serum midnight cortisol or a dexamethasone-CRH test may be useful to establish the diagnosis of endogenous hypercortisolemia (44).
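The ARR screen described at the start of this section is a single ratio with a cut-off of 20. The sketch below assumes the conventional units (plasma aldosterone in ng/dL, plasma renin activity in ng/mL/h), which the text does not specify; it is illustrative only, not clinical software.

```python
# Minimal sketch of the aldosterone-to-renin ratio (ARR) screen, cut-off > 20.
def arr_screen(aldosterone_ng_dl: float, pra_ng_ml_h: float) -> tuple[float, bool]:
    """Return the ARR and whether it exceeds the screening cut-off of 20."""
    ratio = aldosterone_ng_dl / pra_ng_ml_h
    return ratio, ratio > 20.0

ratio, positive = arr_screen(25.0, 0.5)   # hypothetical values
print(f"ARR = {ratio:.0f}, positive screen: {positive}")  # ARR = 50, True
```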
Apparent mineralocorticoid excess
Apparent mineralocorticoid excess (AME) is an autosomal recessive disease caused by deficiency of the enzyme 11beta-hydroxysteroid dehydrogenase type 2 (11beta-HSD2) (45). 11beta-HSD2 converts cortisol into inactive cortisone and thereby prevents stimulation of the mineralocorticoid receptor by cortisol. In patients with AME, enhanced stimulation of mineralocorticoid receptors by cortisol in the distal nephron causes increased sodium reabsorption and potassium excretion. Sodium retention leads to severe low-renin hypertension. The diagnosis of AME is based on the detection of an increased urinary concentration of cortisol metabolites together with a low or undetectable concentration of cortisone metabolites (46). Molecular analysis of the HSD11B2 gene confirms the diagnosis. AME is successfully treated with potassium-sparing diuretics, sometimes in combination with loop diuretics (furosemide). Mild forms of AME may occur more frequently than is currently recognized and should be suspected in patients with hypertension, hypokalemia and decreased plasma renin concentration. Since glycyrrhetinic acid, the active ingredient of liquorice, reversibly inhibits 11beta-HSD2 and can thereby induce the clinical picture of AME, patients suspected of having AME should not consume liquorice (47).
Glucocorticoid resistance syndrome
Familial glucocorticoid (GC) resistance is a rare syndrome characterized by diminished cortisol action mediated through the GC receptor (GR) (48). As a consequence, compensatory stimulation of adrenocorticotropic hormone (ACTH) secretion by the pituitary occurs, resulting in elevated circulating levels of GCs, mineralocorticoids and androgens. The syndrome is inherited as an autosomal recessive or dominant disease, and several rare mutations of the GR gene (NR3C1) have been described in association with clinical signs and symptoms of generalized GC resistance.

Under normal conditions, the secretion of GCs is regulated by the hypothalamus, which receives stimuli from the central nervous system. When the GR is defective, cortisol acts poorly through the GR, the central negative feedback of GCs is diminished, GC production by the adrenal gland rises, and cortisol binds with high affinity to the mineralocorticoid receptor (MR). The symptoms of cortisol resistance are the consequence of this compensatory hyperactivity of the hypothalamus-pituitary-adrenal (HPA) axis (49). Owing to elevated ACTH, patients suffer from an overproduction of mineralocorticoids, leading to hypertension, hypokalemic alkalosis and fatigue. Females also show signs of hyperandrogenism, such as hirsutism, male-pattern baldness and menstrual irregularities, due to increased adrenal production of androgens; in males, the much larger gonadal production of androgens outweighs the increased adrenal output. Under physiological conditions, tissues with an important mineralocorticoid function (e.g. the kidneys) are protected from high cortisol levels by 11beta-hydroxysteroid dehydrogenase type 2, which rapidly converts cortisol to inactive cortisone; in cortisol resistance, cortisol levels exceed the capacity of this enzyme and thereby contribute to increased mineralocorticoid effects.

GC-resistant patients can, however, also be asymptomatic or suffer from chronic fatigue as their only complaint, which has been suggested to result from a relative GC deficiency due to insufficient compensation by the HPA axis. GCs act on virtually all tissues and are essential for cardiovascular and metabolic homeostasis and for many functions of the central nervous system. Accordingly, the syndrome of (partial) GC resistance is rare, and complete resistance to GCs is probably not compatible with life (50).
Geller's syndrome (constitutive activation of the mineralocorticoid receptor)
This is an autosomal dominant condition caused by gain-of-function mutations in the MR gene, located on chromosome 4q31. The onset of hypertension is before the age of 20 years. Pregnancy may exacerbate hypertension in these patients, because elevated progesterone levels, combined with the altered ligand specificity of the mutated MR, turn progesterone and traditional MR antagonists into potent MR agonists. The biochemical profile includes normal potassium with low plasma renin, plasma aldosterone and urinary aldosterone levels. Genetic testing of the MR is required; the causative missense mutation, S810L, is located in the hormone-binding domain of the receptor. Spironolactone is contraindicated in MR-L810 carriers (51).
Further testing

Imaging
Imaging of the adrenal glands (computed tomography (CT) or magnetic resonance imaging (MRI)), if there is a suspicion of mineralocorticoid, glucocorticoid or catecholamine excess, or MRI of the pituitary gland (in order to exclude Cushing's disease), is useful in establishing the cause of hypokalemia. Moreover, an abdominal CT scan should be performed if clinical and laboratory features of VIPoma are present, such as watery diarrhea that persists with fasting (stools are tea-colored and odorless, with stool volumes exceeding 700 mL/day), mild or absent abdominal pain, flushing episodes, lethargy, nausea, vomiting, and muscle weakness and muscle cramps (present in 20 percent of patients, in whom symptoms are related to hypokalemia and dehydration). If the CT is inconclusive, it may be necessary to perform radiolabeled pentetreotide scintigraphy or endoscopic ultrasound to confirm the diagnosis (52).
ECG
An ECG is recommended for all patients with hypokalemia. Typically, there is suppression of the ST segment, a decrease in the amplitude of the T wave and an increase in the amplitude of U waves (often seen in the lateral precordial leads V4 to V6). A variety of arrhythmias may be associated with hypokalemia, including sinus bradycardia, premature atrial and ventricular beats, paroxysmal atrial or junctional tachycardia, atrioventricular block, and ventricular tachycardia or fibrillation (53).
Treatment of hypokalemia
The treatment of hypokalemia has four aims: (a) reduction of potassium losses, (b) replenishment of potassium stores, (c) evaluation for potential toxicities and (d) determination of the cause, in order to prevent future episodes, if possible. The major goal of treatment should be the management of the underlying disease or the elimination of the causative factor. Discontinuation of laxatives, use of potassium-neutral or potassium-sparing diuretics (if diuretic therapy is required, such as in heart failure), treatment of diarrhea or vomiting, use of H2 blockers in patients with nasogastric suction and effective control of hyperglycemia when glycosuria is present are some measures in this direction (54).
Whether potassium is administered orally or intravenously should be decided according to the severity of the hypokalemia. It is important to remember that every 1 mEq/L decrease in serum potassium represents a potassium deficit of approximately 200-400 mEq, although this calculation can either overestimate or underestimate the true deficit. Patients with potassium levels of 2.5-3.5 mEq/L (mild to moderate hypokalemia) may need only oral potassium replacement. If potassium levels are less than 2.5 mEq/L, intravenous (i.v.) potassium should be given, with close follow-up, continuous ECG monitoring and serial potassium measurements. The i.v. route should also be chosen in patients with severe nausea, vomiting or abdominal distress (55). In patients with renal impairment, potassium should be replaced very cautiously, and the renal team should be contacted if the patient is on dialysis or has severe renal impairment. Oral potassium should be given with plenty of fluid (between 100 and 250 mL of water, depending on the formulation of the tablet) and preferably with or after meals (56). For i.v. therapy, 0.9% sodium chloride is the preferred infusion fluid, as 5% glucose may cause a transcellular shift of potassium into cells; premixed i.v. infusions should be preferred. It is also critical to correct serum magnesium levels in order to achieve adequate treatment of hypokalemia (57). An extensive description of the treatment of hypokalemia can be found in Table 3.
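To make the replacement arithmetic above concrete, the following is a minimal Python sketch (for illustration only, not clinical guidance): it applies the rough 200-400 mEq-per-1 mEq/L heuristic and the oral-versus-i.v. thresholds described in the text. The function names and the 4.0 mEq/L reference value are our illustrative assumptions.

```python
def estimate_potassium_deficit(serum_k_meq_l, reference_k_meq_l=4.0):
    """Rough total-body K+ deficit using the ~200-400 mEq per 1 mEq/L heuristic.

    Returns a (low, high) range in mEq. As noted in the text, this heuristic
    can either overestimate or underestimate the true deficit.
    """
    drop = max(0.0, reference_k_meq_l - serum_k_meq_l)
    return 200.0 * drop, 400.0 * drop

def replacement_route(serum_k_meq_l, gi_intolerance=False):
    """Oral replacement for mild to moderate hypokalemia (2.5-3.5 mEq/L);
    i.v. below 2.5 mEq/L or when nausea/vomiting/abdominal distress
    precludes oral intake."""
    if serum_k_meq_l < 2.5 or gi_intolerance:
        return "i.v. (continuous ECG monitoring, serial potassium levels)"
    return "oral (with 100-250 mL of water, with or after meals)"

# Example: serum K+ of 2.8 mEq/L -> estimated deficit 240-480 mEq, oral route
low, high = estimate_potassium_deficit(2.8)
print(f"Estimated deficit: {low:.0f}-{high:.0f} mEq; route: {replacement_route(2.8)}")
```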
Conclusion
In most patients presenting with hypokalemia, the cause is apparent from the history (e.g., vomiting, diarrhea, diuretic therapy). The diagnostic evaluation has two major components: (a) assessment of urinary potassium excretion, in order to distinguish renal potassium losses (e.g., diuretic therapy, PA) from other causes of hypokalemia (e.g., gastrointestinal losses, transcellular potassium shifts), and (b) assessment of acid-base status, since some causes of hypokalemia are associated with metabolic alkalosis or metabolic acidosis. Renal potassium excretion is best assessed by a 24-h urine collection; the potassium concentration or, preferably, the potassium-to-creatinine ratio on a spot urine sample are alternatives. Management of the underlying disease or contributing factors constitutes the cornerstone of the therapeutic approach. Potassium should be replaced gradually, preferably by oral administration if clinically feasible. In cases of severe or symptomatic hypokalemia and cardiac complications, i.v. administration with continuous ECG monitoring is recommended. In some patients, such as those with endocrine-related hypokalemia, a multidisciplinary diagnostic and therapeutic approach is needed.
"year": 2018,
"sha1": "3dce0a068e2a2f9ae3402177428de05dac2ead55",
"oa_license": "CCBYNC",
"oa_url": "https://ec.bioscientifica.com/downloadpdf/journals/ec/7/4/EC-18-0109.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "51cea914e8df18296cb8ce74fc7d101ac6ceace9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Surgical Skills Beyond Scientific Management
During the Great War, the French surgeon Alexis Carrel, in collaboration with the English chemist Henry Dakin, devised an antiseptic treatment for infected wounds. This paper focuses on Carrel's attempt to standardise knowledge of infected wounds and their treatment, and looks closely at the vision of surgical skill he espoused and how it differed from the visions associated with the doctrines of scientific management. Examining contemporary claims that the Carrel-Dakin method increased rather than diminished demands on surgical work, this paper further shows how debates about antiseptic wound treatment opened up a critical space for considering the nature of skill as a vital dynamic in surgical innovation and practice.
Introduction
Speaking at the sixty-fifth annual session of the American Medical Association in 1914, the renowned surgeon of the Johns Hopkins Hospital, John Finney, remarked on the desirable qualities of the surgical practitioner in a climate of professional change. 'Among the requisites necessary for a surgeon', he declared, 'is a certain saneness of mind, better understood than described. While now and then some erratic genius will, meteor-like, appear on the surgical horizon, a closer analysis will usually show that like the celestial visitor he shines with great brilliancy for a moment, but leaves behind him little that is tangible or lasting.' 1 Finney's topic was the 'standardization of the surgeon', and his aim was to convince a troubled audience of the need to embrace currents of change visible more broadly across American medicine of the early twentieth century, lest such standards be imposed arbitrarily upon them to the disadvantage and possible devastation of their inscrutable craft. Fortunately for Finney, surgeons of the celestial kind were rare exceptions to a terrestrial norm, and solutions to the problem of standardisation were various. Here I consider one attempt to standardise a therapeutic innovation during the First World War: the Carrel-Dakin antiseptic treatment for infected wounds. The focus is on its principal innovator, Alexis Carrel, the Nobel prize-winning surgeon and self-defined disciple of science, and his novel approach to the question of standardisation. Wary of his celestial status among a surgical profession of sharply varying abilities, Carrel faced the problem of promoting a difficult technique in unfavourable conditions. In doing so, he espoused a vision of surgical skill that emphasised broad experience, attention to detail and spectatorship in education. He sought not to corrode or devalue surgical skills but to enjoin surgeons to the scientific principles of antisepsis in order to improve its practice on the front lines.
Such an interpretation of Carrel's efforts can appear strange alongside much existing historiographical work on developments in scientific medicine during the years just prior to the First World War. For at least three decades, historians have contended that medical disciplines of the period both imported and contributed to a maelstrom of wider changes, loosely connected to themes of efficiency, rationalisation, economy and 'scientific management'. Ranging from the traffic of organic and industrial metaphors between physiology and industry to the importation of cost accounting by medical institutions, scholars have arrayed rich and varied evidence in support of the mutuality of medical practice on the one hand, and strategies developing in business and engineering on the other. 2 Standardisation was but one element in a much wider spectrum of changes associated with the longer rise of scientific medicine. 3 Historians have paid special attention to the influence of Frederick Winslow Taylor's famous theories of scientific management, a conscious response to the perceived inefficiency of late-nineteenth-century American labour, 4 noting how scientific medicine in America and Europe exemplified a comparable will to control and standardise practitioners and patients, turning the tools of science against the varied ills of modern life. By confronting a historiographical tradition that both presumed and imposed artificial divisions between medicine-in-particular and society-at-large, this scholarship has made a powerful case that medicine and science have been as much a part of broader historical forces as their passive recipients, not divorced from but thoroughly entwined with the varied dynamics and pressures of their shifting historical scenes. 5 Yet raising such parallels presents its own risks. It can, for example, obscure divergences between the realms of scientific medicine and management (of the sort considered here), and can reproduce assumptions about the forces of standardisation in medical practice. One such assumption, common to attacks made on the managerial philosophies of Taylor and his followers, is that standardisation leads inherently to a general deskilling of work, or to the devaluation of clinical skills. 6 Claims about deskilling typically present 'skill' as a stable and self-evident category. In contrast, this paper foregrounds the historical contingency and mutability of skill as a concept, the shifting definitions of which were implicated in the history of surgery and surgical innovations during the Great War, as well as in the growing frontier of scientific management. Claims of deskilling stand in opposition to the contemporary view that the Carrel-Dakin method was skill demanding: admirers and detractors alike maintained that it presupposed lavish facilities and highly trained practitioners; their discussions reveal a conceptual preoccupation with skill as a driving force in surgical innovation.
The aim here is to supplement rather than to contradict the insights of earlier historiography, and to stress variations among the broader processes it describes. Like the multiple meanings of science in medicine, the scientific management of medicine proceeded along various routes -as did the pursuit of standardisation. 7 Undeniably, Alexis Carrel's science of wounds bore much synthetic resemblance to Frederick Taylor's reform of management. As discussed in the first two sections of this essay, Carrel presented his innovation as thoroughly scientific, and drew simultaneously on the tools of engineering and experimentation to build his case for antisepsis. 8 Like Taylor, he strove to standardise his routines across military and civil surgery, imploring colleagues to adopt his scientifically-founded method and to follow his instructions precisely. Yet differences abound in the sense of what skill comprised. The latter two sections attempt to clarify these differences. The third notes the (material and embodied) frustrations of achieving a standardised treatment of infected wounds, and the kinds of demands it placed on military surgeons. The fourth looks at Carrel's response to these difficulties, examining how he attempted to propagate the scientific principles of his wound treatment at a specially-appointed War Demonstration Hospital in New York City. This final section draws out the contrast between concepts of skill specific to the pedagogical practices of scientific management and those fathomed by Carrel, showing the latter to be rooted in older surgical traditions of spectatorship, apprenticeship, and direct demonstration. 9
The Carrel-Dakin Method
Late in 1916, the French surgeon and scientist Alexis Carrel received a congratulatory notice from the former President of the United States, Theodore Roosevelt, regarding his work on the treatment of infected wounds. Evident in Roosevelt's praise was much of the optimism and anxiety that underpinned the American predicament in the early decades of the twentieth century: the promise of medicine to the industrial priorities of a nation; progress equated brazenly with science; harmony among the interests of humanity and economy; efficiency and productivity; the dream of standardisation. 10 'Even a layman like myself', Roosevelt confided, 'can see the immense value your discovery will have not only in military but in civil, especially industrial, surgery. If accepted in the army your new method of treatment will not only conserve life and limb -which from the economic and military standpoint is of vital importance -but will also alleviate most of the pain and suffering of the wounded. I wish it were possible to standardize this method of treatment so as to give the wounded the best that science affords.' 11 This much was clear to Carrel. Never doubtful of his innovations' wider significance, by the time he won Roosevelt's praise he had already enjoyed an illustrious career. By 1912, the year he received the Nobel Prize in physiology or medicine for his contributions to vascular surgery, 12 he embodied the celestial genius about which Finney would later be so derisive. 13 The young Carrel had openly rejected the authority of his medical superiors in his hometown of Lyon, and was bluntly critical of the bureaucratic, hierarchical and antiscientific nature of French medicine he held responsible for stifling surgical innovation. 14 In 1904, following the presentation of his work on blood vessel repair at a medical congress in Montreal, Carrel won several invitations to work in America. He left France that year for the Hull Physiology Laboratory at the University of Chicago, where he continued work on vascular surgery; then in 1906 he moved to New York to the newly-established Rockefeller Institute for Medical Research. Just four years old, the institute was a citadel for the 'ideal of clinical medicine as a science', 15 and therefore a perfect destination for Carrel, who by this time had shored up divisions between clinic and laboratory, and chosen sides. 'I hate medical practice', he conceded in 1906 to the neurosurgeon Harvey Cushing: 'I would like better to make little money in doing scientific work than a great deal in doing surgical operations'. 16 The institute gave him access to unparalleled facilities, and its director, Simon Flexner, allowed Carrel complete autonomy in the organisation of his workspace and choice of research, which over the next three decades spanned vascular surgery, organ transplantation and replantation, the study of wounds, techniques of tissue culture, 17 and, towards the end of his career, social and political theory. 18 Carrel honed an infamous style, donning dark overalls in surgical theatres rendered utterly in black (the reason he gave was to reduce the sun's glare; the rooms were lit by natural light), and exercising fastidious control upon his various assistants. 19
Following his death in 1944, and partly due to the fascistic political beliefs he conveyed in his 1935 book Man the Unknown, his memory and reputation dimmed (as in Finney's prediction), to the extent that a generation of celebratory biographers now lament the faded legacy of a surgical genius and hero. 20 At the outbreak of war in Europe in 1914, Carrel had volunteered for the French Medical Corps as aide-major whilst holidaying in France. In December that year he spent several days on the front, where he was impressed by the spirit of the French fighters, disillusioned by a chance encounter with Madame Curie ('a most conceited and ugly old woman' 21 ), and regretful about the formidable problems of military surgery. 'I can not write what I saw in the hospitals', he confessed to Flexner, with whom he maintained regular contact: 'It is the failure of Medicine.' 22 Most unsettling for Carrel was the inadequate treatment of war wounds, which led to 'the frequent occurrence of gangrene, suppuration, and infections of all kinds' among the injured men. He noted with regret that '[m]edical and surgical science then has done very little . . . for the treatment of the infected wounds of war'. 23 The failure of therapeutics was not based on ignorance about causation, however -bacteriologists had determined quickly that such wounds were the product of modern agriculture, French soils heavily cultivated with manure whose mingled debris penetrated deep into the clothing and torn flesh of combatant soldiers. 24 In 1917, an American journalist offered a graphic summary of the living conditions that predisposed them to infection: 'soldiers, in the main, live in inordinately filthy surrounds. Their trenches are dug in ancient barnyards; their bodies are sweaty and dirty; their clothes are covered with mud.' 25 The result was a catastrophic rate of gas gangrene, suppuration and amputated limbs that fed the morbid iconography of the Great War and its aftermath, a grim tragedy for countless fighters, dead or dismembered, and the surgeons helpless to revive them. 26 By the later months of 1914, a 'stampede of discouragement' was pervading the ranks of military surgeons, who frenzied forth 'innumerable ideas' to combat the problems of sepsis. 27 Yet this was not the first time Carrel had confronted the puzzle of wounds. He had long maintained that science had failed to optimise the rate of human reparation, which he thought could be accelerated to a hitherto unimagined extent. 28 In September 1909, his chance witnessing of an event at Lourdes fortified his convictions: 'A few days ago, I could make two very important observations on the activation of wound healing. I went to Lourdes, and . . . was allowed to observe a few patients. On a small ulceration, I saw the epithelisation occurring in a few minutes. This fact had never been exactly observed. I am more than pleased to have seen it. It demonstrates that my hypothesis of the possibility of an enormous activation of cicatrisation of tissues is not a dream. Unfortunately, I have not the faintest idea of the cause of the phenomenon.' 29 Carrel had started work on cicatrisation in 1907, shortly after his arrival in New York. 30 By 1910 he was emboldened to make claims to readers of the Journal of the American Medical Association about the unrealised potentials of wound treatment, and make public his wish to activate and accelerate the hidden processes of human reparation: 'wounds which heal in a few days could possibly be caused to heal in a few hours.' 31
With the constant traffic of wounded soldiers, the war provided Carrel the opportunity to explore these potentials on a dramatically increased scale, and by early 1915 he had agreed with the French Minister of War on a plan to establish a military hospital partly funded by the Rockefeller Institute. 32 He chose a location in the buildings of a once-fashionable hotel, the Rond Royal, on the edge of the Forêt de Compiègne, just 12 km from the frontline. 33 Boasting unique facilities, it was an ideal venue to continue research on wound infection and reparation. When Cushing paid his friend a visit in April 1915, he could not help but remark on its lavish appointment: 'There are at present 51 beds with 86 attendants, including slaveys of all kinds -11 scientific, medical, and administrative officers; 13 experienced Swiss nurses supplied by Theodor Kocher; numerous secretaries, laboratory technicians, linen-room people, scrub women, ambulance men; and 47 soldier orderlies who do everything from boots to waiting on tables and keeping up the gardens. It is indeed a research hospital de luxe. . .' At Compiègne, research progressed rapidly. Among Carrel's team was a talented English chemist, Henry Dakin. Carrel had determined early on to pursue an antiseptic solution to the problem of infection 35 and worked closely with Dakin to find an appropriate germicide. By March 1915 he was ready 'to try a new treatment of wounds and some of Dakin's substances', which had already produced encouraging results. 36 By July they had agreed on an antiseptic and had devised the method's essential components: 'The work of Dakin has given excellent results and we are about to try his substances on a larger scale in some of the first line ambulances.' 37 This antiseptic procedure differed from others insofar as it was founded on Carrel's earlier and ongoing experimental work on cicatrisation, and on the formulas of Pierre Lecomte du Noüy, an officer with mathematical training who, at the end of 1914, found himself in Compiègne in charge of food provisions for a division of the French army. Carrel approached Lecomte du Noüy with the problem of how to accurately determine the surface area of wounds and how to establish the geometric law of cicatrisation. In his recollections of working with Carrel, whom he deemed a 'spiritual godfather', 38 Lecomte du Noüy recalled how biologists working on the same problem had been 'paralysed' by their acute awareness of the multiple factors of cicatrisation: 'My ignorance of these elements freed me from the chains which fettered them [. . . ] Dr Carrel had foreseen that a brain trained in such methods [i.e. mathematics] was better adapted to attack this problem than one inhibited by a mass of knowledge and by habits of thought.' 39 In 1916, a series of research papers in the Rockefeller Institute's journal, The Journal of Experimental Medicine, began outlining experiments conducted first in America and then in France, which sought to unveil the hidden laws of wound reparation. It was in the first of these that Carrel made reference to the 'planimeter' -an engineering tool for determining the area of a surface in square centimetres, suggested to him by Lecomte du Noüy -as a means for establishing geometric order upon the anarchic complexity of sterile wound reparation. 40
During early experiments in New York, Carrel determined that the rate of cicatrisation of a wound is greater at the start of a period of repair than at the end and, most important, that the curve representing the contraction of an aseptic wound is regular and geometric, thus offering a standard for determining the antiseptic power of germicidal agents. 41 This standard, to be expressed mathematically by Lecomte du Noüy, was vital in that it allowed Carrel both to quantify the effects of his method and to defend against sceptical attacks (see Figure 1).

Figure 1. To establish the curve, Carrel made measurements of the wound at regular four-day intervals, tracing the area onto transparent cellophane with a wax pencil. The cellophane drawings were then reproduced on a sheet of paper, from which the area of the wound (S) and the area of the wound and the cicatrix (S and C) were estimated in square centimetres by means of the planimeter. Carrel obtained the daily rate (R) of cicatrisation by dividing the difference of two consecutive surface estimates by the time elapsed between each observation. In this way, Carrel explained, he could ascertain the size of the wound, the size of the cicatrix, and the rate or 'velocity' of wound repair. He was further able to examine the relations between the size of a wound and the rate of cicatrisation. From Alexis Carrel and Alice Hartmann, 'Cicatrization of Wounds I. The relation between the size of a wound and the rate of its cicatrization', The Journal of Experimental Medicine, 24, 5 (1916), 429-50: 432.
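In modern notation, the rate described in the caption above is a simple difference quotient. As a minimal sketch (the notation is ours, not Carrel's), with S(t_i) the planimeter estimate of the wound area at the i-th observation:

$$ R = \frac{S(t_1) - S(t_2)}{t_2 - t_1}, \qquad t_2 - t_1 = 4 \text{ days under Carrel's four-day observation protocol.} $$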
Enrolling these experimental insights into the treatment of wounds, Carrel saw himself as rejuvenating the neglected wisdom of the antiseptic tradition associated with Joseph Lister. The Frenchman hoped to rehabilitate antisepsis on a rational basis, and thereby to compensate for the striking failures of aseptic, open-air and physiologic methods common at the time. Yet it was a deeply controversial position to take on -antisepsis had been troubled during the rise of asepsis, the barbarity of Lister's sterilising agents cited as justification for the pre-emptive elimination of germs 42 -and in doing so Carrel stoked a storm of fierce debate about the proper approach to infection. 43 Prominent and polemical opponents of antisepsis claimed that germicidal agents were simply ineffective for the sterilisation of projectile wounds, in which fragments of shell, shrapnel, manure and mud lingered deeply among lacerated flesh beyond the reach of topical agents. According to the most prominent and vocal critic of antisepsis, the English pathologist Sir Almroth Wright, logic alone confirmed the futility of germicides against infection: 'the microbes are inaccessible. They have been carried down deep into the tissues, and lie on the inner face of a torn and ragged track; and that track is blocked by blood clot and hernia of muscle.' 44 Carrel had therefore to confront influential objections in order to demonstrate the effectiveness of his method to the wider surgical community in France, Britain, America and elsewhere. This was one reason why his experimental work on cicatrisation was so important. With scientific findings, Carrel could explain why antiseptic interventions had hitherto failed, and how with critical revisions they might succeed. 45 'The idea must be grasped', he wrote in 1917, 'that a given antiseptic substance, applied at a certain concentration, and during a certain time, is able to destroy microbes without damaging the normal tissues to any appreciable extent.' 46 Hence it would not be by means of 'the marvellous properties of a new drug' 47 that such results would follow, but from systematic experimentation with a whole range of chemical antiseptics applied at specific concentrations for precise intervals.
Dakin considered around 200 in total. 48 Like Carrel, the chemist insisted it was not merely bactericidal quality that counted for success in antisepsis, but a medley of factors working together. These included the penetrative power of a germicide through human tissues, its toxicity and solubility, its antiseptic power among flesh and pus, and, most important, the degree of irritation it caused to patients. 49 The experimental establishment in 1915 of hypochlorite of soda as the most appropriate antiseptic was therefore an important step not only in the development of the Carrel-Dakin method, but also in the broader defence of antisepsis. 50 In particular, it provided Carrel and Dakin a rejoinder to accusations of 'the fallacy of taking the figures for an antiseptic acting on microbes in watery suspension and seeing in these an all-round formula of efficacy'. 51 Moreover, Carrel's experimental efforts offered him a means for countering the a priori doubts about antisepsis common to such sceptics as Sir Almroth Wright. 52 To the latter's insistence that germicidal agents were incapable of penetrating deep enough into human tissues to eliminate microbial infections, Carrel produced an experimentally-based rebuttal which, based on the normal curve of the planimeter, testified to the restorative power of antiseptics under controlled conditions. Indeed, Carrel and Dakin met Wright's objection not only in their experimental identification of an appropriate antiseptic but also in their claim that the effectiveness of any antiseptic hinged on its role in a wider system of wound treatment. 'In the sterilisation of a wound,' Carrel and Dehelly later explained, 'the antiseptic plays a part comparable to that of the scalpel in a surgical operation. It is only an instrument, and does not constitute a method. But the choice of a good instrument is a factor indispensable to success. Chloramines and Dakin's hypochlorite are admirable instruments.' 53 Having found a suitable antiseptic solution in Dakin's hypochlorite, the fundamentals of the Carrel-Dakin method -as opposed to the details of its specific instruments -could be set out in full. The procedure was outlined most lucidly in 1917 across several chapters of The Treatment of War Wounds (one of two monographs to appear that year on the subject), and included four distinct but occasionally simultaneous phases. 54 The first was the careful preparation of the wound for sterilisation by the debridement or 'mechanical cleansing' of infected surface tissues, in order to enable the necessary 'intimate contact' between the antiseptic solution and invading microbes. The timing of this stage was vital: Carrel, like most military surgeons of the time, attached paramount importance to the rapid treatment of war casualties and debridement of wounds. Initial cleansing was followed by the chemical sterilisation of the second stage, the intermittent or continuous instillation of the sterilising agent across all portions of the wound by means of small rubber tubes with perforated holes at half-inch intervals (Figure 2). 55 To monitor the effects of antisepsis and the progress of cicatrisation, daily clinical and bacteriological examinations were necessary (the third stage: 'control' 56 ), which preceded stage four, the timely closure of the wound, permissible once bacteriological smear tests had failed to detect microbes for three consecutive days, coincident with improved clinical signs in the patient (a regular temperature and a good condition of the limb). 57
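To make the stage-four closure criterion concrete, here is a deliberately anachronistic minimal sketch in Python of the decision rule as just described; the data representation, the 37.5 °C cut-off for a 'regular temperature' and the function name are our illustrative assumptions, not Carrel's.

```python
def wound_may_be_closed(daily_records):
    """Stage-four rule as described above: closure is permissible once smear
    tests have detected no microbes for three consecutive days, coincident
    with good clinical signs (regular temperature, limb in good condition).

    daily_records: list of (microbes_detected, temperature_c, limb_ok)
    tuples, oldest first. The 37.5 C fever threshold is an assumption.
    """
    if len(daily_records) < 3:
        return False
    return all(
        not microbes and temperature_c < 37.5 and limb_ok
        for microbes, temperature_c, limb_ok in daily_records[-3:]
    )

# Example: one septic day followed by three clean, afebrile days -> True
records = [(True, 38.2, False), (False, 37.0, True),
           (False, 36.9, True), (False, 37.1, True)]
print(wound_may_be_closed(records))
```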
Carrel insisted that in all cases success demanded the rigorous observation of each stage in its myriad specificity. Any deviation from the procedure would proffer negative, if not disastrous, results. 58 The method produced by Alexis Carrel and his colleagues thus converged disparate spheres of expertise upon the singular problem of infected wounds. It was grounded in experimentally-produced knowledge of cicatrisation and adhered to the Listerian tradition of antisepsis. By determining the normal rate of human reparation, Carrel had developed not just a method of wound treatment but a standard upon which this and other interventions could be adjudicated. Furthermore, he was keenly aware that his scientific solution to sepsis went beyond the limbs of men into the industrial and economic heartland of nations. 'Our antiseptic treatment of wounds is very successful', he announced in November 1915: 'If it were properly applied, it would save to France many men and many millions.' 59
The Science of Wounds and the Science of Management
Carrel viewed his intervention as thoroughly scientific. It was a mechanical and chemical method of sterilisation founded on scientific principles, and on knowledge generated scientifically (that is, quantitatively and experimentally). After its formulation in 1915, Carrel struggled with resistance from French surgeons, which he attributed to an antiscientific mindset. Fashioning himself as a true disciple of science, he confided to Flexner that, 'the French surgeons cannot realize that Dakin and myself, that is two laboratory workers, have found what they have failed to find': namely, a solution to the problem of infected and suppurating wounds. He intimated further that since science had enabled the effective application of antiseptics, it would be resistance to science that would impair the success of the Carrel-Dakin method elsewhere. Writing in the same letter on the problem of shock, he commented that 'the men in charge of the Service de Santé do not understand that important results can be obtained from scientific studies.' For Carrel, 'true progress comes only from scientific research, and not from clinical work.' 60 Even as the irony of a war in which science had 'perfected the art of killing' became clear, such unmitigated praise was not anomalous. Long before Carrel wielded scientific means against conflicts and miracles, others had already employed them in service of peacetime dilemmas. In the late nineteenth century, an engineer named Frederick Winslow Taylor began developing a system of management that became emblematic of the broader American 'efficiency craze' of the period. Historians have noted parallels between Taylor's project and simultaneous developments in medicine. Where sometimes this has meant redefining the term 'scientific management' to include a much broader spectrum of changes than it originally encompassed, 61 for Taylor and his disciples the term denoted a specific means for the (re-)organisation of labour in factories for the purpose of increased productivity at lower costs. Though not the first attempt to tackle such problems, Taylor's effort to cure the natural and systemic 'soldiering' of workers was unique in both its employment and its valuation of scientific means. His dream was the application of science to the analysis of work, his means the decomposition of work into its elementary operations, the systematic improvement of each part, and their recombination into an optimal whole. 62 With the tools of science, Taylor maintained, work could be improved and improved work standardised.
To achieve these goals, Taylor's solution was not to multiply divisions of labour or to introduce technological innovations (the historian Samuel Haber points out that Taylor regarded both strategies with suspicion); 63 rather, he sought to improve labour by separating the conception of work from its execution, and then transferring all brainwork away from the shop floor into the hands of management. This radical and absolute division of thinking from doing resulted in the dissociation of workers from the labour process and assured the absolute control of the labour process by a centralised planning department. 64 Not only were labourers excluded from conceptual work, it was crucial that they could not derive or comprehend the ideas that management controlled. 65 The science of work itself was founded on a scrupulous management of the motions of each labourer. Taylor explained that his 'whole system rests upon an accurate and scientific study of unit times, which is by far the most important element in scientific management.' 66 Hence the stopwatch, Taylor's means for dividing an activity into its elementary operations, the best of which could be reconstituted into a newly-efficient whole. If Taylorite revolutions hinged on the concentration of brainwork away from the shop floor, it was the body of the worker that constituted the target of reform in the application of science to industry. Thus the diminishment of thought among labourers accompanied a scrupulous focus on the physicality of their work. What made this focus scientific was measurement: 67 the stopwatch studies favoured by Taylor, and later the time-and-motion methods perfected by his early disciples and later antagonists, Frank and Lillian Gilbreth. 68 As Anson Rabinbach remarked, according to the new doctrines of scientific management the 'rationalization of production was predicated on the rationalization of the body.' 69 Ways to rationalise bodies reached an apotheosis in the efforts of the Gilbreths, who, in their attempts to extend scientific management beyond the sphere of industry, devised various methods to visualise, measure, anatomise and quantify the physical motions of labour. 'Motion study', which determined what path a motion was to follow, gave visual enrichment to 'time study', which determined how swiftly a path was to be traversed; time-and-motion study was an effort to identify skills and transfer them among workers. Notably, Frank Gilbreth had taken a special interest in hospital management and surgery, criticising Taylor's alleged glorification of the surgeon as the 'best mechanic', and insisting to the contrary that 'surgeons could learn more about motion study, time study, waste elimination, and scientific management from the industries than the industries could learn from the hospitals.' 70 Modern surgery was full of wasteful motions, he contended, and in place of its culture of haphazard guesswork imagined 'a race of superskilled' surgeons whose elementary habits of motion were to be cleaved from a genius minority. 71 Here the image of skill was unambiguously joined to a critique of charisma in industry, and to the rejection of all forms of ineffable knowledge. It was a 'democratic' vision that sought to prise skills from the facets of personality and recast them as transferable quantities. Despite the novelty of the techniques, their underlying concerns were not unorthodox.
By the time the Gilbreths published on the topic of surgical superskills in 1916, standardisation in surgery was being widely discussed. Finney's address had not been the first to consider it. Others had outlined more positive programs for achieving a standardised surgical practice, such as the Brooklyn gynaecologist Robert Dickinson, who, inspired by the time-and-motion studies of the Gilbreths, believed that surgery could import the insights of scientific management to great effect. 73 It was the sort of external interference that Finney would attack just months later from his platform at the American Medical Association. The Gilbreths, after all, had denied any qualitative difference in unit motions across practical domains (the surgeon's motions were ultimately at one with the bricklayer's), and further maintained that the effective reform of hospital management required intervention by outsiders: '[a] concession that must be made is in the willingness to allow a man not trained either in surgery, medicine, or hospital management to apply the measurement and determine the resulting standards.' 74 True to this advice, Dickinson's idea was to reform the very physicality of surgery to instil the most efficient habits of motion, a resolution that captured a defining aspect of scientific management consistent across its various guises: that having claimed conceptual control of the work process, an external force of management should reform workers to the new relations of production; the inherent limitations of workers, in terms of talent or intellect, placed no necessary boundaries on the possibilities of reform.
These features of scientific management have been the focus of much contemporary and historical criticism, a great portion of which has targeted the implications of Taylor's ideas for skilled work and craft-based trades. 'This process', wrote one early critic, 'separates skill and knowledge even in their narrower relationship. When it is completed, the worker is no longer a craftsman in any sense, but is an animated tool of the management.' 75 Such criticisms held equally for the Gilbreths, who in revising Taylor's ideas had become preoccupied with the question of skill transference. Although their idea of skill was ostensibly democratic, they could define a 'superskilled' operator as merely the aggregate and executor of the best elementary motions, motions which had been disassembled from the bodies of others and reconstituted afresh: 'there is some one best way for doing each thing that is done, but the complete best way is seldom in the consecutive acts of any one person', hence: 'The ultimate [best] method will be a synthesis of the best elements of all methods submitted.' 76 This process, the synthesis of best elements in any one person, constituted the transfer of skill. But it was a process that necessarily voided 'skill' of any conceptual requirement beyond moving habitually through the best 'elementary motions'. 77 This was not, by any measure, the sort of skill lamented by critics of Taylor and his acolytes, nor would such critics likely accept a definition of craftwork as the synthetic aggregate of best elements. On the Gilbreths' thinking, skill dwelt on the surface of workers, in their waste-eliminating motions, cycles of decomposed practice captured stereoscopically and rendered transferable. A kind of regimented mimicry, it upheld Taylor's strict division of thinking and doing, suppressed creative workers and their ideas, and diminished scope for judgement, impulse, creed or whim. 78 Despite disagreement among historians as to the role of scientific management in deskilling industrial work, it is nonetheless easy to see why Carrel's standardisation of wounds has raised comparable accusations with regard to surgical practice. Although he never drew directly on Taylorite doctrines, there are striking parallels between his rehabilitation of antisepsis and the scientific reform of management. First was the shared reverence for science, the conviction that scientific methods would yield imperishable truths about best practice. The ironies of war, which Carrel noted, did not temper his zeal for scientific remedies. He defined himself as a laboratory worker and dedicated his time to research. For its lack of scientific promise he abhorred the clinic. Just like Taylor, he fused science to progress, and strove to substitute scientific solutions for subjectivity, guesswork and the 'rule of thumb'. 79 Also like Taylor, he insisted on the strict observation of his method. Both men attempted to quantify chaotic realities, both to build standards on quantified grounds. Both, moreover, linked quantification to the figure of the engineer -idolised across Taylor's writings -and both intimated the artistic destinations of their respective quests, Carrel by reference to the opposed arts of killing and healing, Taylor in his utopian vision of management based on fixed principles. 80 Yet as it manifested qualities of scientific management, the Carrel-Dakin method also challenged a simplistic equation of standardisation with simplicity, deskilling or the general devaluation of surgical skill.
The following two sections will argue that, far from lifting it from the agenda, the attempted standardisation of the Carrel-Dakin method provoked contemporary accusations of a highly skilled technique, figuratively expanding skill as a central dynamic in surgical innovation and training.
Problems of Standardisation
By 1916, the Carrel-Dakin method had spurred widespread discussion in the medical literature of America and Europe. Convinced early on of its efficacy, Carrel hoped that the treatment he devised with Henry Dakin would be standardised throughout the French military and beyond. Yet his desires were not shared universally -especially in France, he came up against considerable resistance. By the autumn of 1915 he remarked regretfully that 'I have to spend almost all my time trying to have the doctors understand that a complete change in the results of the treatment of the wounded has already been obtained and should be obtained everywhere.' 81 'The insane opposition of the French surgeons goes on', he complained the following summer. 'It is very distressing that so many young men lose their life and their limbs, when they could be saved. Their extreme conceit has [led] the French doctors to crime.' 82 Though Carrel was in little doubt about his major obstacle, it was not just perceived dogmatism that hampered his quest for standard practice. As discussions unfolded, commentators identified three major obstacles inhibiting the wider uptake of his technique. The first concerned the extent of its dependence on the special milieu of the forest hospital. As admirers and sceptics alike maintained, Carrel's wound treatment presupposed and demanded material resources unavailable to most military surgeons. 83 On the question of whether the Carrel-Dakin method might successfully be imported to English hospitals, for instance, one cautious enthusiast remarked: 'Carrel's clinic is really an experimental hospital, provided with elaborate assistance in the way of laboratory, and medical and nursing staffs. The work is carried out by those who are through long experience intimately versed in the details -a really important matter -and as keenly interested in the success of the work as is their chief. The demands made by our hospitals upon surgeons and nurses, and the limited supply, render it impossible for us to have anything approaching the personnel of the Carrel hospital.' 84 Rockefeller money had lavished Carrel with an elaborate workspace for surgical and physiological experimentation unavailable to most other military surgeons, and entirely unrepresentative of other wartime hospitals. These unique resources were embedded in the model of wound care he espoused -hence just as important as the institutional origins that had enabled his treatment were the material constraints it ignored. Even Sir Watson Cheyne, eminent defender of antisepsis and principal adversary of Sir Almroth Wright, doubted the extent to which Carrel's results could be replicated in most other wartime hospitals: 'Carrel has the advantage which I suppose very few others have had, that he has been able to keep a patient under his own treatment for any number of days that he chooses. If on the other hand hospitals are being constantly evacuated and the patients transferred from one surgeon to another there is not continuity in the work and no method of treatment has a chance of being thoroughly tested . . . [S]ome means would need to be devised by means of which the patient either is retained near the Front if badly injured or a series of teams are established of men of the same way of thinking so that the continuity of treatment is maintained.' 85
Thus Cheyne presented the uniqueness of the hospital as a limitation: the Carrel-Dakin method was the product of researchers uniquely funded and favourably located, but self-consciously distant from the day-to-day realities of surgical practice.
This first obstacle to standardisation related to a second: namely, the alleged complexity of the technique. A pioneer of tissue culture, Carrel had been criticised in 1910 for his unduly complex laboratory procedures, and by mid-century had left a legacy of difficult methods and theatricality. 86 From the autumn of 1917, a similar mysticism was arising around the treatment of infected wounds by antisepsis. Admirers and critics agreed that in almost all its components, the Carrel-Dakin method was skill demanding. Even the most convinced supporters, such as the renowned American pathologist and President of the Scientific Board of the Rockefeller Institute, William Welch, could not deny that 'the technic of the Carrel treatment is elaborate and requires an intelligence and skill on the part of the surgeon which cannot be counted on for the average surgeon. The preparation of the Dakin solution also requires chemical skill. There are certainly difficulties in carrying out the Carrel treatment under the condition of actual warfare, and opinions may differ as to the extent of its applicability under these conditions. . .' 87 To this Welch added: 'Halsted has been using the Carrel method in suitable cases for a long time, for over a year, and is most enthusiastic over it, but seems to feel that not many surgeons will master it.' 88 The remark is noteworthy since by 1917 Halsted considered himself a most staunch supporter of the method, writing to Carrel in February that year to express his personal support: 'I doubt if anyone is more enthusiastic about it than I am. A relatively non-toxic, nonirritating antiseptic opens vistas which I have dreamed of for many years, and others of which I had no vision.' 89 Such remarks intimate a lingering ambivalence among the method's enthusiasts: to assert its success was simultaneously to pay homage to the unusual skill of its innovators, and therefore to doubt its wider application through wartime hospitals. 90 In November 1917, a report of the Surgical Committee to the Director General of the British Army Medical Services, though ultimately recommending the adoption of the technique by the British army, noted that it 'is more elaborate than that of most wound dressing', and 'Dakin's fluid is more difficult to prepare, and its preparation has to be carried out with great precision if its proper composition is to be maintained.' 91 Other commentators likewise presented skill as a thinly and unevenly distributed quality among the surgical professions of America and Europe. Carrel was among them. In the summer of 1915, as he prepared to apply his methods on a wider scale, he wrote to Henry James Jr at the Rockefeller Institute about anticipated difficulties: 'The results that we obtain in our hospital are far better than those I observed elsewhere. But that may be due partly to the skill of our surgeons and nurses. I want to be sure that the treatment in an ordinary hospital is efficient. A surgical method is practical only when it can succeed in the hands of unskilled and ignorant doctors.' 92 Carrel wrote from experience. His method for suturing blood vessels -which he put to sensational effect with the blood transfusion of a dying baby, Mary Lambert, in New York City in 1906 and which won him the Nobel Prize -had faced comparable difficulties. 93 'The operation requires delicate technic', wrote two surgeons in Chicago, 'such as is possessed only by those who have had extensive experience in blood-vessel surgery.' 94
Another remarked: 'Its general applicability has . . . been considerably restricted owing to its difficult technic. The suture of the vessels requires a marked degree of skill, and even in the hands of men more or less experienced . . . it often fails'. 95 The result was incremental changes to the blood transfusion procedure until it required only the most basic and well-known surgical skills, such that it could be rendered in step-by-step instructions and imitated with ease. Such changes were based on recognition that there were consequential gradations across the surgical profession. 96 Such a conception of skill as unevenly distributed among American surgeons framed a specific problem of standardisation: how to make a method that could overcome the common limitations of a surgical profession. Carrel believed that besides the hospital at Compiègne, it was at only a handful of other medical facilities that 'the method is employed in its integrity.' 97 One common explanation for failure was the lack of appropriate surgical training in the technique. This resulted in surgeons and chemists skipping or misapprehending crucial parts of the procedure, most frequently in the preparation and application of Dakin's hypochlorite solution. The large list of possible errors (as noted by various authors) attested both to the difficulty of the method itself and to the requirement of following it exactly (see Figure 2). 98 The issue resolved into the third problem hampering the standardisation of wounds: that learning and applying the technique required direct demonstration. 99 To avoid poor results, Carrel and his followers stressed the importance of dedicated firsthand experience as the only means for learning the proper application of antisepsis. So it was that on hearing from a supportive colleague, Charles Langdon Gibson, that a 'very excellent surgeon who enjoyed in Paris the reputation of having mastered the technic' was obtaining poor results, Carrel had merely smiled and responded 'The gentleman stayed here only an hour.' 100 Gibson reported that the principal operator of Carrel's hospital had been recently detached and replaced by a surgeon 'equally experienced and competent, but unfamiliar with the method [. . . ] Carrel told me that it took about a month for the later arrival to become familiar with the factors necessary to complete success.' 101 These assumptions were reiterated in an exchange in the pages of the Journal of the American Medical Association in 1917, when Arthur Dean Bevan of Chicago wrote an open letter to Welch expressing his reservations about some sensationalist reporting of the Carrel-Dakin method in portions of the popular press. Bevan maintained that the results of the antiseptic method had been strongly overstated, that its foundations were not scientific, and that further controls were needed to establish its therapeutic superiority over other methods of wound treatment. His remarks incited strong reactions.
In a response published the following week, the surgeon Arthur McCormack criticised Bevan for having relied on a medley of 'letters from Joe this, and Fred that, and one operation done by Josh somebody else', and condemned Bevan's lack of experiential knowledge and his related need to base the condemnation of a scientific method on a mere analysis of

a little précis or manual which plainly states that its chief purpose is to refresh the memory of those who have participated in the course of study of the treatment under one who has had it demonstrated to him until he has mastered it. 102

Despite his hostility, McCormack's response unwittingly confirmed the severe practical problems of the Carrel-Dakin method. The very fact that the technique had failed in the amateurish hands of 'Joe this', 'Fred that' and 'Josh somebody else', a trio of average surgeons lacking the lavish clinics of the Rockefeller or trips to French military hospitals presided over by Nobel laureates, resonated with wider concerns about its standardisation. Further embedded in this rejoinder was a specific vision of surgical learning that ceded the epistemological paucity of textual knowledge to the priority of apprenticeship and face-to-face education. If not ineffable, surgical skills were nonetheless hard-won and would not transmit from word to hand by some mystical procedure of close reading. Instead, mastery of the method depended upon sustained observation. The 'little précis' in question, Carrel and Dehelly's Treatment of Infected Wounds, insisted as much: 'The best way to learn the method is to see it applied', it stated. 'Hence this book is especially intended to recall essential details of the technique to those who already know something of its application.' 103
Means and Workers
In the autumn of 1915, Carrel had presented surgical skill as a quality scarcely distributed among military surgeons. This conception resolved the problem of standardisation into how to make a method successful in the hands of 'unskilled and ignorant doctors'. 104 His was not the normative stipulation that high skill should not play a dominant role in the progress of surgery but the descriptive estimation that it did not and indeed plausibly could not play such a role, and that the paucity of skill, conceived as a distributed quality, explained the need to work around the limits of a surgical profession. Carrel had written twice in 1916 of his desire to simplify his technique, particularly its chemical component. 105 His preference for modification signals a first departure from the doctrines of scientific management. Rather than reforming the bodies and motions of labourers to optimise surgical performance, he approached the problem of wound care from the technical point of view: first by devising an innovation, then by simplifying its components. Where Taylor saw technology as fixed and bodies as malleable, Carrel sought to develop technologies around the fixed limits of (un)skilled bodies.
Yet despite his early intentions, the Carrel-Dakin method proved difficult to simplify. As the war went on, detractors and admirers alike maintained that the method required exceptional skill for its successful application. To this extent, Carrel had faltered in his original wish to create a technique easily adaptable to the hands of the average surgeon. Yet his mounting belief that the method should be taught face-to-face had led him to another solution to the problem of standardisation. Turning his attentions from the technique to the surgeon, he and his supporters at the Rockefeller sought to educate military and civilian practitioners in the correct principles and routines of antisepsis. This represents a second point of divergence from scientific management since teaching surgeons the technique required training them in the principles of antisepsis -it required surgeons to fully comprehend the conceptual 'brainwork' behind the procedure.
In the spring of 1917, following Carrel's insistence that the work of Compiègne was nearing an end, Simon Flexner proposed the construction of a 100-bed War Demonstration Hospital on the grounds of the Rockefeller Institute in Manhattan (see Figure 3). Designed as a movable wartime hospital, its establishment was rapid. Construction began on 1 June and staff admitted the first patient on 26 July. 106 The hospital served three principal functions: to make available to civilian and military patients the Carrel-Dakin treatment; to demonstrate and teach the method to American civil and military surgeons and nurses; and to test the feasibility of a portable military hospital unit modelled on those on the Western Front. 107 As well as its pedagogical and clinical features, the hospital included a large laboratory space (unlike Western Front hospitals) for research into the chemical component of the treatment. 108 Instruction covered four areas in two-week courses that ran from July 1917 to March 1919: a surgical course, a chemistry course, a laboratory course and a course on special instruction. 109 Over 800 surgeons attended, many of whom kept up correspondence with hospital staff to report successes and difficulties, or to enquire about the availability of equipment and solutions. After several months of teaching, Carrel concluded the following:

Experience has shown that it is comparatively difficult for the average surgeon to learn these techniques, because it requires more accuracy than they are accustomed to practice. It was observed also that the training of the surgeons to use these methods has a very good influence even in their improvement in other branches of surgery, because it teaches the advantage of a precise method. 110

In the Carrel-Dakin method, the science of wound care succeeded not by the rigid partitioning of thought and deed but by enjoining surgeons to partake in the correct scientific and surgical principles of antisepsis; surgeons were to be improved generally and across several domains. At the War Demonstration Hospital, confrontation with surgeons' bodies was not through the identification and control of their elementary actions but proceeded by enjoining surgeons to science, and by instilling in them the conceptual as well as the physical components of surgical precision. This positive programme for education was a foil to Carrel's earlier indictments of French surgeons as much as it was a counterpoint to the standardisation of industrial labour.
Both at Compiègne and at the War Demonstration Hospital, Carrel saw the disciples of his wound treatment as pupils of an exact method founded on scientific principles, the various stages of which demanded sound training in chemistry and detailed knowledge of such complex phenomena as wound topography and cicatrisation. His emphasis on adherence to rules owed to his strictly holistic understanding of antiseptic action rather than to a will to transform the relations of surgery along managerial lines -it was above all the artefact of a consciously intricate antiseptic procedure combined with a dim view of a surgical profession thrust suddenly into the trials of war. The division of labour it implied was not in the spirit of divvying elements of a complex task among equivalent workers, as, say, in the production lines of Ford's great factories, but of converging disparate fields of expertise upon a single objective, and expecting the utmost of each participant. Carrel hinted at this in his explanation of why so many surgeons before him had failed in their attempts to perfect wound care: 'Experimenters have attempted, working alone, researches which needed the co-ordinated efforts of chemists, pathologists, bacteriologists, trained in scientific technique . . . Despite the academic toil of many surgeons, wounds suppurate to-day as freely as ever.' 111 Science, if it was to be successful, must be collaborative. The solution to infected wounds was therefore to expand the therapeutic procedure across multiple domains of expert knowledge. Only jointly could the precision of the chemist, the dexterity of the surgeon and the fastidious care of the nurse guarantee the remarkable healing phenomena reported at Compiègne. In the Carrel-Dakin method, each role summoned the full force of its bearer, and each certainly demanded no less than the traditional cicatrisation of wounds by nature in which the surgeon alone had struggled, and only then as passive witness to the caprice of fate.
The hospital at Compiègne came under attack and was evacuated on 21 March 1918. It was destroyed completely the following day. The Great War ended on 11 November and Carrel was discharged from the French army in January the following year, after which he resumed work at the Rockefeller Institute. 112 With the destruction of the hospital and the cessation of hostilities, the Carrel-Dakin method began to lose its practical relevance as medical priorities shifted and surgeons turned to new problems. For all the emphasis on Carrel's method during the conflict, it was Dakin's hypochlorite that survived as an innovation in antisepsis. As Carrel implicitly foresaw, his irrigation technique was not amenable to most surgeons. His original image of skill prevailed, confining his method first to a minority and then to obscurity.
Conclusion
In his study of how the treatment of wounds by the Carrel-Dakin method became standardised during the Great War, the historian Perrin Selcer has argued convincingly that the story of the ill-fated technique exemplifies the political dynamics of standardisation, how it acts to 'reconfigure and formalize power relations in medical practice.' 113 This paper has contended that a conception of surgical skill was central to those processes. It has argued further that the relationship of standardisation to skill does not appear through history as an inverse tendency. According to contemporary observers, the Carrel-Dakin wound treatment (and the divisions of labour it implied) heightened rather than diminished the requirement for exemplary surgical, chemical and diagnostic skills. Such commentators presented Carrel as fatally disconnected from the common realities of wartime surgery. This was the innovation of a man removed from his peers in terms of both personal and material resources. The division of labour in the Carrel-Dakin method was reflective of this unique positioning. It represented not the simplification of a method by way of dividing its elements among equals but the coordination of disparate experts around the shared dream of antiseptic control.
As such, attempts at standardisation did not dissolve debates about the status of skill in surgery; rather, they entered and complicated those debates, and compelled disputants into conceptual struggles over the nature of skill as an embodied but scarce quality. From late 1914, the Carrel-Dakin method was bound to a figurative expansion of surgical skill: how a technique of wound antisepsis was to be shared uniformly and universally prompted various engagements with the idea of skill relative to surgical practice. Carrel's early view, resonant with the later remarks of Welch, Halsted and others, was that surgical skill was thinly distributed among surgeons ill-prepared for the horrors of war. Despite his reverence for science, and despite his intentions to employ its procedures against infection, Carrel's dismal estimation that a sloppy surgical workforce posed intractable barriers to the treatment of infected wounds jarred markedly with the distinctive core optimism of Taylor's scientific management, which denied that the (in)competence of labour placed fixed limits on production in abundance. 114 Skill as a fixed limitation was a consequential idea: it delimited the innovation, promotion and standardisation of an antiseptic wound treatment. Failing in his efforts to simplify the technique, Carrel insisted on sustained direct demonstration. When finally he confronted the surgical body in the War Demonstration Hospital, it was not to dissect, optimise and standardise its elementary motions. Rather, with the aim of intellectual expansion, Carrel preached the foundational scientific tenets at the heart of his technique. It was the broader condition of mind rather than the elementary motions of a body that mattered for the propagation of antiseptic wound treatment. As surgeons debated the efficacy and practicability of a controversial idea -whether or not it counted truly as a moment of scientific medicine, and to what extent its elaborate demands on surgeons hindered its wider application -they espoused ideals of skill fundamentally at odds with those connected to scientific management. Hence skill came to the forefront both as a practical and a conceptual issue in the standardisation of wound treatment, enlivening questions about pedagogy and science in medicine, and steering the shape and fate of a surgical innovation. | 2016-05-12T22:15:10.714Z | 2015-06-19T00:00:00.000 | {
"year": 2015,
"sha1": "3ff9cab714d7b6e44954b44fb86e763f2b08b50e",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc4597249?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "85d9e344f36e057219517875393bf6339cfda4bc",
"s2fieldsofstudy": [
"History",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
229713603 | pes2o/s2orc | v3-fos-license | Phytoglycogen Nanoparticle Delivery System for Inorganic Selenium Reduces Cytotoxicity without Impairing Selenium Bioavailability
Purpose Selenium is an essential trace element that supports animal health through the antioxidant defense system by protecting cells from oxidative damage. Using inorganic selenium species, such as sodium selenite (Na Sel), as a food supplement is cost-effective; however, its limitation as a nutritional supplement is its cytotoxicity. One strategy to mitigate this problem is to deliver inorganic selenium using a nanoparticle delivery system (SeNP). Methods Rainbow trout intestinal epithelial cells, bovine turbinate cells and bovine intestinal myofibroblasts were treated with soluble Na Sel or SeNPs. Two SeNP formulations were tested: SeNP-Ionic, where inorganic selenium was ionically bound to cationic phytoglycogen (PhG) NPs, and SeNP-Covalent, where inorganic selenium was covalently bound to PhG NPs. Selenium-induced cytotoxicity and selenium bioavailability were measured. Results SeNPs (SeNP-Ionic or SeNP-Covalent) substantially reduced cytotoxicity in all cell types examined compared to similar doses of soluble inorganic selenium. The SeNP formulations did not affect selenium bioavailability, as selenium-induced glutathione peroxidase (GPx) activity and GPx1 transcript levels were similarly elevated whether cells were treated with soluble Na Sel or SeNPs. This was the case for all three cell types tested. Conclusion Nanoparticle-assisted inorganic selenium delivery, which demonstrated equal bioavailability without causing deleterious cytotoxic side effects, has potential applications for safely supplementing animal diets with inorganic selenium at what are usually toxic doses.
Introduction
Selenium is an essential micronutrient forming the active catalytic site of several selenium-dependent antioxidant enzymes such as glutathione peroxidases (GPx). GPx catalyzes the reduction of hydroperoxides and hydrogen peroxide by reduced glutathione to protect cells from oxidative damage. 1,2 When consumed at high doses, selenium initiates the induction of reactive oxygen species, causing oxidative stress in cells which results in cytotoxicity. 3 Toxicity of selenium for humans and animals depends not only on the quantity of the element consumed but also on its chemical form. [4][5][6] However, all forms of selenium species have a very narrow range between nutritional and tolerable upper intake levels. 7 Selenium at supranutritional doses has therapeutic value for treating several human health conditions including some cancers. [8][9][10] Conversely, when selenium levels drop below the recommended dose, selenium deficiency causes multiple organ pathologies. 7,11 Mostly, supplemental selenium is acquired through diet, 12 and supplementing the diet of animals with sodium selenite (Na Sel) and selenium yeast is common due to the variation of selenium content between feed ingredients. 8,12 Selenium can be delivered either (1) in its compound state, (2) self-assembled into nanoparticles with the help of proteins or reducing agents (nano-selenium), or (3) conjugated onto the surface of nanoparticles. [13][14][15][16] Na Sel is an inorganic form of selenium that can be used as a supplement, and although it is commonly used as a nutritional supplement over organic selenium species due to its low cost, its cytotoxic effects are well established. 17 Nano-selenium forms, such as zerovalent selenium nanoparticles and selenium polysaccharides, are selenium supplements that can be consumed with lower risk of toxicity and higher levels of bioavailability. 18,19 In animals including fish, dietary nano-selenium has been shown to improve the growth performance and antioxidant defense system as it is a more readily available source of selenium. 16,[20][21][22][23] One of the limitations of nano-selenium is that its synthesis requires proteins such as albumin as reducing agents, which increases cost, while size variability is a problem for nano-selenium synthesized in bacteria. 24 Finally, nanoparticles that have been used to deliver drugs can also be used to deliver micronutrients, including inorganic selenium, that otherwise are toxic. To this end, inorganic selenium encapsulated into chitosan nanoparticles has demonstrated lower cytotoxic effects, while maintaining selenium's antioxidant properties. 25,26 In the present study, phytoglycogen nanoparticles (PhG NPs), naturally occurring nanoparticles derived from sweet corn (commercialized as NanoDendrix™ by Glysantis™, Guelph, ON, Canada), were used to deliver inorganic selenium (either sodium selenite (Na Sel) or selenium dioxide) using two forms of chemistry, namely, ionic and covalent bonding. 27 Notably, the hydrodynamic diameter of the selenium-conjugated nanoparticles (SeNPs), 53-68 nm, is comparable to most nano-selenium produced by chemical technology. 24 PhG NPs are superior to polysaccharides such as chitosan because chitosan is synthesized mostly from the chitin shells of shrimp and other crustaceans, and its solubility in water and absolute purity are common concerns. Additionally, chitosan requires extensive chemical modifications to obtain high-grade chitosan derivatives. 28 PhG NPs are highly water soluble with high purity and are monodisperse in aqueous solutions.
The dendrimeric glucose structure of PhG NPs makes these NPs amenable to chemical and enzymatic modifications for incorporating various structural and functional groups to which a range of therapeutic molecules can be bound. 27 Additionally, PhG NPs can be degraded into D-glucose by intracellular enzymes and further metabolized by normal physiological glycolysis as an energy source for cells. 27 PhG NPs have been used in a previous study for efficient delivery of RNA molecules into fish cells. 29 The present study tested whether this biodegradable nanoparticle delivery method for inorganic selenium reduced selenium-mediated cytotoxicity without impairing bioavailability, thus allowing a higher tolerable intake level of selenium, a desirable feature when supplementing diets. Thus, the effects of SeNPs (SeNP-Ionic or SeNP-Covalent) were evaluated in intestinal cells of economically important farmed animals like rainbow trout and in cells derived from bovine species. In the current study, SeNPs showed reduced cytotoxicity but maintained selenium's bioavailability, as shown by upregulation of GPx1 expression and GPx activity.
Materials and Methods

Cells
RTgutGC, an epithelial cell line derived from the rainbow trout (Oncorhynchus mykiss) gut, was obtained from Dr. Niels Bols (University of Waterloo, ON). 30 RTgutGC was cultured in Leibovitz's L-15 medium (Hyclone) supplemented with 1% penicillin/streptomycin and 10% fetal bovine serum (FBS) (Seradigm Life Science). Cells were grown in 75 cm² plastic tissue culture flasks (BD Falcon, Bedford, MA) and sub-cultured every 10 days. BTC, a cell line derived from the turbinate of Bos taurus, was obtained from the American Type Culture Collection (ATCC® CRL-1390™). The bovine intestinal myofibroblast cell line (BT-IMF) was derived from the small intestine of a Bos taurus fetal calf delivered through cesarean section and was obtained from Dr. Lucy Lee (University of the Fraser Valley, BC). Both bovine cell lines were propagated in Dulbecco's Modified Eagle's Medium (DMEM) (Corning) supplemented with 1% penicillin/streptomycin, 10% heat-inactivated FBS, 1% nonessential amino acids and 25 mM HEPES, and subcultured every 7 days in 25 cm² Falcon tissue culture flasks. Unless otherwise indicated, in all experiments fish cells were incubated at 20℃ in the absence of CO₂, whereas bovine cells were incubated at 37°C in a 5% CO₂ humidified incubator.
Preparation of Phytoglycogen NPs (PhG NPs) and Inorganic Selenium Derivatives
Cationic PhG NPs prepared as previously described were used to bind inorganic selenium derivatives. 29 To form SeNP-Ionic, 2.5 g Na Sel (Na₂SeO₃; Sigma Aldrich) and 2.5 g cationic PhG NPs were mixed in 50 mL deionized water, stirred overnight, dialyzed, and lyophilized. To form SeNP-Covalent, 15 mg LiCl and 1.5 mL N,N-dimethylacetamide (DMAc) were added to 84.3 mg PhG NPs in a conical Wheaton vial equipped with a triangular stir bar. The white suspension was then heated to 80℃ for 2 hr with stirring. To the opaque reaction mixture, 55 mg selenium dioxide (Sigma Aldrich) was added; the mixture was cooled and left to stir overnight. Following the addition of 33 mg K₂CO₃ and stirring for an hour, the crude product was washed with 10 mL distilled Et₂O and centrifuged (for a total of four times) to recover a pellet, which was dried under vacuum at 45-50℃. The synthesis of the two selenium formulations is illustrated in Figure 1. In both the SeNP-Ionic and SeNP-Covalent modifications, the amount of selenium incorporated into the PhG NPs was determined by hydrolyzing the PhG NPs with 70% nitric acid and quantifying selenium by a UV-vis colorimetric assay using standard curves generated from known concentrations of Na Sel, as described previously. 31 The prepared SeNPs (SeNP-Ionic or SeNP-Covalent) were further characterized for particle size using previously described methods. 29
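Because the loading calculation reduces to interpolating sample absorbances against a linear Na Sel standard curve, a short computational sketch may clarify the arithmetic. The following Python example is illustrative only: the absorbance values, the assumption of a simple linear fit, and all variable names are hypothetical and are not taken from the cited protocol.

```python
import numpy as np

# Hypothetical calibration data: UV-vis absorbance of Na Sel standards
standard_conc_ug_ml = np.array([0.0, 25.0, 50.0, 100.0, 200.0])  # known Na Sel concentrations
standard_absorbance = np.array([0.02, 0.11, 0.21, 0.40, 0.79])   # measured absorbance

# Fit a linear standard curve: absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(standard_conc_ug_ml, standard_absorbance, deg=1)

def selenium_concentration(absorbance: float) -> float:
    """Interpolate selenium concentration (ug/mL) from a sample absorbance."""
    return (absorbance - intercept) / slope

# Example: absorbance of an acid-hydrolyzed SeNP sample, diluted into the curve's range
sample_abs = 0.35
conc = selenium_concentration(sample_abs)

# Loading = selenium mass per mg of PhG NPs, given the hydrolysate volume and NP mass used
hydrolysate_volume_ml = 10.0  # assumed
np_mass_mg = 5.0              # assumed
loading_ug_per_mg = conc * hydrolysate_volume_ml / np_mass_mg
print(f"Selenium loading: {loading_ug_per_mg:.1f} ug Se per mg PhG NPs")
```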
Cytotoxicity of SeNPs in Fish and Bovine Cells
The effects of soluble inorganic selenium alone and SeNPs (SeNP-Ionic or SeNP-Covalent) on cell metabolism and cell membrane integrity were evaluated using either RTgutGC, which was seeded at 3×10⁴ cells/well in a 96-well plate (BD Falcon) for 24 hr in L-15 medium and incubated at 20℃, or BTC and BT-IMF, which were seeded at 1×10⁴ cells/well in a 96-well plate in DMEM for 24 hr and incubated at 37°C. The cell monolayer was then washed twice with media and treated with a 10-fold serial dilution of soluble Na Sel, SeNP-Ionic or SeNP-Covalent. Na Sel was used as the inorganic selenium control in all experiments. Bovine cells were treated with Na Sel or SeNPs ranging from 10 to 0.0001 µM, while rainbow trout cells exhibited greater resistance to selenium-induced cytotoxicity and were treated with a range of Na Sel or SeNPs between 100 and 0.001 µM. The selenium concentration in the SeNP preparations was calculated based on the amount of selenium incorporated into the cationic PhG NPs, as described above, and matched with the concentration of the soluble Na Sel. Control groups received media alone (Media) or cationic PhG NPs without selenium (Mock NPs). After 24, 48, 72 and 96 hr of treatment, medium was removed, the cell monolayer was washed twice with 1x PBS and incubated with AlamarBlue (AB) and 5-carboxyfluorescein diacetate-acetoxymethyl ester (CFDA-AM) (Invitrogen) for 1 hr at 37℃ for bovine cells and at 20℃ for fish cells. Fluorescence was measured in a Synergy HT plate reader (BioTek, Winooski, VT) at the excitation/emission wavelengths of 530/590 nm and 485/528 nm for AB and CFDA-AM, respectively. All experiments were performed in three independent trials.

Figure 1 Illustration of the synthesis of the two selenium formulations. Na₂SeO₃ was bound by electrostatic interaction to PhG NPs (ionic formulation). In the second formulation, branched PhG NPs were linked with selenium dioxide (covalent formulation). The synthesized products were lyophilized or dried under vacuum. rt refers to room temperature. N,N-dimethylacetamide (DMAc). All other abbreviations are given in the Materials and Methods section.
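AB and CFDA-AM readouts are conventionally expressed as a percentage of the untreated (media) control after blank correction. The paper does not give its exact normalization formula, so the sketch below is an assumed but common convention; all fluorescence values are invented.

```python
import numpy as np

def percent_viability(treated: np.ndarray, control: np.ndarray, blank: float) -> np.ndarray:
    """Blank-corrected fluorescence of treated wells as a percentage of the
    media-only control wells (one common convention for AB / CFDA-AM data)."""
    return 100.0 * (treated - blank) / (np.mean(control) - blank)

# Illustrative fluorescence readings (arbitrary units)
control_wells = np.array([5200.0, 5100.0, 5350.0])  # media-only wells
treated_wells = np.array([2600.0, 2450.0, 2700.0])  # e.g. wells treated with a toxic dose
blank_signal = 150.0                                 # dye in medium, no cells

viability = percent_viability(treated_wells, control_wells, blank_signal)
print(viability.round(1))  # roughly 48%, 45% and 50% of control
```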
Determining the Expression of GPx1
RTgutGC, BTC and BT-IMF cultured under the same conditions as described above were treated with 0.01 µM Na Sel or SeNPs (SeNP-Ionic or SeNP-Covalent) for 48 hr. The 0.01 µM dose was chosen as it was a non-toxic dose for all three cell lines. Control groups were media alone and cationic PhG NPs alone (Mock NPs). Total RNA was extracted from the cells using TRIzol® Reagent (Invitrogen). 2.5 µg of RNA was treated with DNAse (DNA Free®, Ambion, Austin, TX) following the manufacturer's instructions, and cDNA was synthesized from 500 ng of RNA using the iScript™ cDNA Synthesis Kit System (Bio-Rad). The expression of GPx1, a ubiquitously expressed selenium-specific form of GPx in fish and bovine cells, was quantified by SYBR Green real-time PCR (Bio-Rad). The primer pairs for bovine β-actin (AY141970) were 5ʹ-GCCCATCTATGAGGGGTACG-3ʹ and 5ʹ-ATGTCACGGACGATTTCCGC-3ʹ, and for bovine GPx1 (BC149308) were 5ʹ-TTGGGCATCAGGAAAACGCC-3ʹ and 5ʹ-GCCATTCACCTCGCACTTTTC-3ʹ. Rainbow trout β-actin (NM_001124235) was amplified by the primer pairs 5ʹ-GTCACCAACTGGGACGACAT-3ʹ and 5ʹ-GTACATGGCAGGGGTGTTGA-3ʹ, and GPx1 (HE687023) by 5ʹ-AGTTCGGACATCAGGAGAACTG-3ʹ and 5ʹ-TCAAGGAGCTGGAACTTAGGC-3ʹ. The PCR reactions included 4 µL of diluted cDNA (1:10), 2x SsoFast EvaGreen Supermix (Bio-Rad), 0.2 µM forward primer, 0.2 µM reverse primer and nuclease-free water in a total volume of 10 µL. The qPCR conditions for all genes were 98℃ for 2 min, then 40 cycles of 98℃ for 5 sec, 55℃ for 10 sec and 95℃ for 10 sec. A melting curve was completed from 65℃ to 95℃ with a read every 5 sec. Gene expression levels were normalized to β-actin and expressed as relative fold changes compared to the media-treated group.
The primer pairs were designed using NCBI Primer-BLAST.
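Relative fold changes normalized to β-actin are commonly computed with the 2^(-ΔΔCt) method. The paper does not state which relative-quantification formula was applied, so the following sketch, with invented Ct values, should be read as one plausible implementation rather than the authors' actual calculation.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-delta-delta-Ct) method,
    normalizing the target gene (GPx1) to the reference gene (beta-actin)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values for one Na Sel-treated sample vs. a media-only control
fold = fold_change_ddct(ct_target_treated=24.1, ct_ref_treated=17.8,
                        ct_target_control=26.0, ct_ref_control=17.9)
print(f"GPx1 fold change vs. media control: {fold:.2f}")  # ~3.5-fold induction
```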
Determining GPx Activity
As a measure of selenium bioavailability, glutathione peroxidase (GPx) activity was determined from cell culture homogenates. For this purpose, RTgutGC was seeded at 1×10⁶ cells/well in a 6-well plate (BD Falcon) and incubated for 24 hr at 20℃ in RTgutGC culture medium. BTC and BT-IMF were seeded at 4×10⁵ cells/well in a 6-well plate in BT culture medium for 24 hr at 37℃ in a 5% CO₂ humidified incubator. The cell monolayer was washed twice with media and cells were treated with 0.01 µM Na Sel or SeNPs (SeNP-Ionic or SeNP-Covalent), which was the non-toxic concentration for all cell types, and incubated for 72 hr. Control groups included media alone and cationic PhG NPs alone (Mock NPs). Cells were washed twice with cold 1x PBS, scraped, collected and ultra-sonicated in a lysis buffer (50 mM Tris-HCl, pH 7.5, 5 mM EDTA and 1 mM DTT) for 30 sec. Cell homogenates were centrifuged at 15,000 ×g for 20 min at 4℃, the supernatant was collected, and the total protein concentration in the supernatant was determined by the Quick Start™ Bradford Protein Assay (Bio-Rad). GPx activity in the cell supernatant was determined using the Glutathione Peroxidase Assay Kit (Cayman Chemical, MI, USA). GPx activity was expressed as nmol/min/mL. Data were normalized to the total protein concentrations of the cell lysates. All experiments were performed in three independent trials.
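The protein normalization step is a simple ratio; a one-line sketch follows for completeness. The numbers and function name are illustrative assumptions.

```python
def specific_gpx_activity(activity_nmol_min_ml: float, protein_mg_ml: float) -> float:
    """Normalize GPx activity to total protein (nmol/min/mg protein)."""
    return activity_nmol_min_ml / protein_mg_ml

# Hypothetical values for one lysate supernatant
print(specific_gpx_activity(activity_nmol_min_ml=42.0, protein_mg_ml=1.8))  # ~23.3
```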
Statistics
Data were analyzed in GraphPad Prism (Version 7, GraphPad, La Jolla, CA). Results obtained from GPx assays and GPx1 expression were analyzed by one-way analysis of variance with Dunnett's test. Selenium-induced cytotoxicity data generated from the cell culture studies were analyzed with a Kruskal-Wallis (nonparametric) test. In all cases, comparisons were made between treatment groups (Media, Mock NPs, Na Sel, SeNP-Ionic and SeNP-Covalent) within a defined concentration for each time point. Results were given as mean values and standard errors. A value of P < 0.05 was considered significant.
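For readers reproducing this analysis outside GraphPad Prism, the same tests are available in open-source tools. The sketch below shows one way to run a one-way ANOVA with Dunnett's post hoc test and a Kruskal-Wallis test in Python; the GPx-activity values are invented, and scipy.stats.dunnett requires SciPy 1.11 or later.

```python
import numpy as np
from scipy import stats

# Invented GPx-activity values (nmol/min/mg protein) for one cell line
media    = np.array([12.1, 11.8, 12.5])
mock_np  = np.array([12.0, 12.3, 11.7])
na_sel   = np.array([21.4, 20.8, 22.0])
senp_ion = np.array([20.9, 21.6, 21.1])

# One-way ANOVA followed by Dunnett's test against the media control
f_stat, p_anova = stats.f_oneway(media, mock_np, na_sel, senp_ion)
dunnett = stats.dunnett(mock_np, na_sel, senp_ion, control=media)  # SciPy >= 1.11
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print("Dunnett p-values vs. media:", np.round(dunnett.pvalue, 4))

# Nonparametric Kruskal-Wallis, as used for the cytotoxicity data
h_stat, p_kw = stats.kruskal(media, mock_np, na_sel, senp_ion)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")
```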
Results

SeNPs Characterization
SeNPs (SeNP-Ionic or SeNP-Covalent) were first characterized for their hydrodynamic sizes. SeNP-Covalent and SeNP-Ionic had diameters of about 68.54 nm and 55.02 nm, respectively, which is larger than the original cationic PhG NPs (53.56 nm; Figure 2). The particles had a very low polydispersity index, ranging from 0.066 to 0.107, indicating a monodisperse solution. The amount of selenium incorporated into PhG NPs varied between the methods of modification. In the case of SeNP-Covalent, selenium was incorporated at 180 µg selenium/mg PhG NPs, whereas for SeNP-Ionic, it was 125 µg/mg PhG NPs.

SeNPs Have Reduced Cytotoxicity in Fish Cells

In RTgutGC, higher concentrations of Na Sel induced cytotoxicity as early as 24 hr post-treatment, while 1 µM Na Sel also induced significant toxicity but at later time points. Cytotoxicity was substantially reduced with SeNPs. In both assays, 1 µM SeNP-Ionic and SeNP-Covalent did not induce substantial toxicity at any of the time points investigated. Even at 10 µM, SeNP-Covalent demonstrated reduced toxicity compared to the other forms. At the lower concentration range (0.001-0.1 µM), no cytotoxicity was detected for any formulation at any time point. Concentration-matched cationic PhG NPs that delivered the highest concentrations of Na Sel, which were 100 and 10 µM, induced about 23% and 3-7% cytotoxicity, respectively, in fish intestinal epithelial cells. All other concentrations of the cationic PhG NPs did not cause any cytotoxicity (data not shown).
The lethal concentration 50 (LC50) of the three formulations was calculated for RTgutGC at 24 hr using the AlamarBlue (AB) data (Table 1). Data obtained from CFDA-AM were similar to those of AB (Figure 3B).
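The paper does not state how the LC50 values were derived from the AB data, but a common approach is to fit a four-parameter logistic (Hill) curve to viability versus concentration and read off the midpoint. The sketch below assumes that approach with invented viability data.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_curve(conc, top, bottom, lc50, slope):
    """Four-parameter logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / lc50) ** slope)

# Hypothetical 24 hr AlamarBlue viability (% of control) vs. concentration (uM)
conc = np.array([0.001, 0.01, 0.1, 1.0, 10.0, 100.0])
viab = np.array([99.0, 98.0, 95.0, 80.0, 35.0, 5.0])

params, _ = curve_fit(hill_curve, conc, viab,
                      p0=[100.0, 0.0, 1.0, 1.0], maxfev=10000)
top, bottom, lc50, slope = params
print(f"Estimated LC50 ~ {lc50:.2f} uM")
```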
SeNPs Have Reduced Cytotoxicity in Bovine Cells
The effects of SeNPs (SeNP-Ionic or SeNP-Covalent) and inorganic selenium alone on cytotoxicity were measured next in bovine cell lines (Figures 4 and 5). Na Sel and SeNPs produced striking differences in cellular toxicity. In both bovine-origin cell lines, SeNPs reduced selenium-induced toxicity severalfold compared to Na Sel alone. Bovine turbinate cells (BTC) were susceptible to the free form of Na Sel toxicity, with cytotoxicity beginning at 0.1 µM. At later time points, 0.1, 1 and 10 µM Na Sel were highly toxic to BTC as determined by both complementary assays (Figure 4A and B). Conversely, toxicity was substantially reduced in the SeNP groups. In both assays (AlamarBlue and CFDA-AM), even 1 µM SeNP-Ionic and SeNP-Covalent lacked cytotoxic effects at all time points tested. At 10 µM, however, the SeNP formulations were toxic. In BTC, the ionic and covalent forms exhibited almost identical cytotoxicity profiles. No formulation exhibited substantial toxicity at lower concentrations (0.01-0.0001 µM) at any time point. In BTC, concentration-matched cationic PhG NPs used to deliver the highest concentration of Na Sel, which was 10 µM, induced 5-9% cytotoxicity, whereas all other concentrations of cationic PhG NPs did not cause cytotoxicity. In BTC, the LC50 values for Na Sel, SeNP-Ionic and SeNP-Covalent were calculated as indicated in Table 1.
The effects of Na Sel and SeNP toxicity in bovine intestinal myofibroblast cells (BT-IMF) (Figure 5A and B) resembled those in BTC. BT-IMF was highly susceptible to Na Sel at concentrations ranging from 0.1 to 10 µM. The ionic and covalent SeNP forms had very similar protective profiles, except that at 24 and 96 hr, 1 µM SeNP-Ionic showed slightly higher toxicity than SeNP-Covalent. Lower concentrations of SeNPs were nontoxic. The cytotoxic effect of cationic PhG NPs on BT-IMF was similar to that on BTC. The LC50 values of Na Sel and the SeNPs for BT-IMF are summarized in Table 1.
In all three cell lines, the LC50 increased significantly in the SeNP-treated groups compared to the soluble Na Sel-treated cells (P < 0.05; Table 1). Data extracted from Table 1 indicated that in RTgutGC the LC50 increased 9.5-fold for SeNP-Ionic and 19.2-fold for SeNP-Covalent, while in the bovine cell lines the LC50 increased 64.5- and 59.4-fold for SeNP-Ionic and SeNP-Covalent, respectively, in BTC, and 26.9- and 88.43-fold for SeNP-Ionic and SeNP-Covalent, respectively, in BT-IMF.
PhG NPs Did Not Compromise Selenium Bioavailability
Next, the expression of GPx1 at the transcript level was measured in order to assess whether the reduction in cellular toxicity achieved by the SeNPs (SeNP-Ionic or SeNP-Covalent) affected the bioavailability of selenium. At a non-toxic dose (0.01 µM), GPx1 transcript levels were induced by Na Sel at 48 hr post-treatment, as expected. Interestingly, SeNP-Ionic and SeNP-Covalent induced levels of GPx1 transcript similar to Na Sel in all three cell lines tested: RTgutGC (Figure 6A), BTC (Figure 6B) and BT-IMF (Figure 6C), with the exception of BTC treated with SeNP-Covalent, which demonstrated higher levels of GPx1 transcript compared to Na Sel alone.
GPx activity was then measured at 72 hr post-treatment with Na Sel, SeNP-Ionic or SeNP-Covalent in RTgutGC (Figure 7A), BTC (Figure 7B) and BT-IMF (Figure 7C) cells. Na Sel and SeNP treatments resulted in comparable GPx activity, suggesting that the NP delivery system did not impair the bioavailability of selenium.
Discussion
In the present study, a phytoglycogen nanoparticle (PhG NP), which is a naturally occurring nanoparticle derived from sweet corn, 27 was used to deliver inorganic selenium.
Sodium selenite (Na Sel) was used as a model inorganic compound because of its wide usage as a nutritional supplement due to low cost, despite its well-established cytotoxicity. Na Sel was readily incorporated by electrostatic (ionic) bonding into the PhG NPs with the method described above; however, for the covalent formulation, a selenium dioxide polymer was used as the source of inorganic selenium. SeNP-Ionic and SeNP-Covalent demonstrated lower cytotoxicity compared to Na Sel alone in both fish and bovine cells. While the toxicity of Na Sel increased over time, cytotoxicity induced by the SeNPs remained similar over the 4 days of investigation. There are similarities between the present study with SeNPs and nano-selenium studies in the literature. In a short-term oral toxicity study in mice, nano-selenium made from Na Sel was found to be less toxic than Na Sel. 32 Indeed, further work in mice demonstrated that nano-selenium (20-60 nm) prepared as described 32 was less toxic compared to other forms of selenium. 33 This correlates with the present study, where the LC50 values from the three cell lines indicate that both SeNP formulations were significantly less toxic compared to Na Sel. Interestingly, the protective effect of nano-selenium was reported as 7-fold in mice, 32 while in the present study using SeNPs, the protection was in all cases higher than that reported in mice, demonstrating between 9.5- and 88.43-fold protection compared to Na Sel alone.
The mechanisms contributing to the reduction in cytotoxicity of SeNPs compared with soluble Na Sel were not explored in our study. However, Zhang et al (2001) showed that the pro-oxidative effects of nano-selenium were significantly lower than those of soluble Na Sel. 32 We speculate that SeNPs might have a balanced antioxidant and pro-oxidant property, as opposed to Na Sel, which always has a pro-oxidant property when used at a higher dose. 34 It is also unclear how much, if any, Se is released into the extracellular space prior to uptake by cells. The early cytotoxicity observed at 24 hr in RTgutGC and BT-IMF cells treated with SeNP-Ionic compared with SeNP-Covalent (at 10 µM for RTgutGC and 1 µM for BT-IMF; Figures 3 and 5) suggests that there may be an early release of Se to toxic levels with SeNP-Ionic that does not occur with SeNP-Covalent. Once within the cell, the selenium is likely released from either formulation by hydrolytic enzymatic digestion cleaving the glucose branches of the nanoparticle, releasing soluble selenium intracellularly. Thus, the present nanoparticle delivery system may facilitate a constant release of selenium in the cells over time that may reduce cellular cytotoxicity.
The abilities of selenium species to induce selenium-containing enzymes predict their bioavailability in vivo and in vitro. 17,35 The glutathione peroxidase (GPx) family is an important group of selenium-containing enzymes that detoxify hydroperoxides and lipid hydroperoxides at the cellular level. 1 In the current study, the SeNPs enhanced GPx activity to the same level as Na Sel when used at a non-toxic concentration (0.01 µM). The observed increase in enzyme activity correlated well with a corresponding increase in GPx1 transcript expression. These findings align well with studies of nano-selenium in mice, where GPx and thioredoxin reductase activity was similar to or higher than that of Na Sel-alone treatments. The mechanism of action for this bioavailability in the SeNPs may be comparable to nano-selenium; both may initiate the synthesis of selenomethionine, which would lead to selenocysteine formation and incorporation into GPx catalytic sites. 36 Alternatively, SeNPs may be inducing GPx transcripts and activity via formation of selenophosphate, an integral part of tRNA selenocysteine. 37 Previous studies suggest that the size of selenium-based nanoparticles may be important for selenium-induced cytotoxicity and bioavailability. With respect to cytotoxicity, in fish intestinal cells smaller nano-selenium (13 nm in diameter) was found to be more toxic than intermediate (42 nm) and larger (92 nm) nano-selenium, and such size-related cytotoxic effects may be cell or tissue dependent. 38 In mice, however, nano-selenium of 5 nm, 20 nm, 36 nm, 90 nm and 200 nm possessed similar cytotoxicity profiles in vivo and in vitro in hepatocytes; their toxicity was, however, by far lower than that of Na Sel or organic selenium species. 13,[32][33][34] With respect to bioavailability, elemental selenium nanostructures (300 nm in size) in anaerobic bacteria were not bioavailable. 39,40 In selenium-deficient mice, smaller sized (36 nm) nano-selenium caused elevated serum and liver selenium compared to larger nano-selenium (90 nm); however, differences in plasma GPx activity were not correlated with the size of the NPs. 41 Additionally, studies in primary cultured intestinal epithelial cells of crucian carp showed that intermediate (42 nm) and large (92 nm) sized nano-selenium were more potent enhancers of GPx activity compared to smaller sized nano-selenium (13 nm). 38 As the size of the SeNPs used in the present study falls between 55 and 68 nm in diameter, our SeNPs appear to be an ideal size to reduce selenium-induced cytotoxicity while maintaining selenium bioavailability.
Although the uptake and transport efficiency of Na Sel compared to SeNPs was not investigated in the current study, it has been shown in a human intestinal cell line (CaCo-2) that both nano-selenium and selenomethionine had higher transport efficiencies compared to Na Sel. 42 Interestingly, this difference in transport efficiency did not impair or enhance GPx activity in any of the formulations tested. 42 Moreover, the nanoparticle delivery system may facilitate a constant low-level release of selenium over time compared to a single exposure to Na Sel delivered in its soluble form.
Conclusion
Inorganic selenium delivered by PhG NPs demonstrated reduced cytotoxicity but maintained its bioavailability compared to Na Sel alone in both rainbow trout and bovine cell cultures. The present study suggests that SeNPs are likely a safer alternative to Na Sel, where increased doses of selenium delivered by the nanoparticle formulation can have reduced cytotoxicity while maintaining the reduction in oxidative stress, to ultimately increase the productivity of stress-ridden fish and livestock.
Acknowledgments
This work was supported by the Alberta-Ontario Innovation Program and an NSERC discovery grant, both awarded to SDO. We would also like to thank Niels Bols (University of Waterloo) and Lucy Lee (University of the Fraser Valley) for providing the cell lines for this study, and Glysantis Inc. for providing the cationic phytoglycogen nanoparticles.
Disclosure
The nanoparticles used in this study were supplied by Glysantis, free of charge. Jondavid deJong reports an unpaid association with Glysantis Inc; he was involved in experimental execution but not in nanoparticle development and production.
Emily Moore, an employee of Glysantis Inc., produced the selenium-conjugated nanoparticles but was not involved in experimental design or execution, and reports a patent pending: 55727555-16USPR. Stephanie DeWitte-Orr and Tamiru N Alkie report a patent pending: Compounds and compositions of selenium with reduced toxicity; and worked with Glysantis Inc, an industrial partner, on this project. The work was funded by a grant received through a competitive grant review process by federal and provincial funding agencies. Glysantis provided the nanoparticle as an in-kind contribution to the project. The money to fund the project came from the grant. The authors report no other potential conflicts of interest for this work. | 2020-12-24T09:04:28.063Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "5d8ebe3d9b62cafd369ff89f67e6bd6457188edb",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=65162",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "88ab5f11b142536ce08e476d9b236bb4c76f03a6",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
261829731 | pes2o/s2orc | v3-fos-license | A Prospective Comparative Study between Three Chemical Markers for Predicting Delayed Neurological Sequelae in Patients with Acute Carbon Monoxide Poisoning of Poison Control Center in Minia University Hospital
Carbon monoxide (CO) poisoning is a major public health problem. The brain is the most sensitive organ to hypoxia induced by CO poisoning. Delayed neurological sequelae (DNS) is a delayed onset of neuropsychiatric symptoms after apparent recovery from acute CO poisoning. Therefore, this study aimed to make a prospective comparison between three markers (serum glutathione reductase, S100b protein and serum neurone-specific enolase) for predicting the occurrence of DNS. This study was performed on 57 adult patients with acute CO poisoning. The markers were measured after arrival and the patients were divided into two groups: the DNS group (8 patients) and the non-DNS group (49 patients). There was a statistical difference between the two groups, with a significant increase in loss of consciousness, syncope, dizziness, ECG changes, pneumonia, carboxyhemoglobin level, creatine phosphokinase, creatine phosphokinase-MB, troponin I, S100b protein and neurone-specific enolase in DNS-group patients, and a significant decrease in Glasgow coma scale and glutathione reductase in the DNS group. The cut-off value of glutathione reductase was ≤ 30 U/L, with an accuracy of 94.74%. The cut-off value of S100b protein was > 18.94 pg/L, with 98.25% accuracy, while the cut-off value of neurone-specific enolase was > 30.49 ng/mL, with an accuracy of 96.49%. All these cut-off values predicted the occurrence of DNS. It is therefore concluded that serum S100b protein may represent the most reliable chemical marker for the prediction of DNS after acute CO poisoning by logistic regression analysis.
Introduction
Carbon monoxide (CO) poisoning is a major public health problem worldwide and is considered to be one of the most common causes of death in the world. It is the commonest cause of morbidity and mortality in the United Kingdom and the United States. CO is a toxic, colorless, odorless, tasteless and non-irritating gas (Durmaz et al., 1999; Satran et al., 2006).
Carbon monoxide is formed by incomplete combustion of organic materials due to an insufficient oxygen supply to enable complete oxidation to carbon dioxide. The atmospheric concentration of CO is generally below 0.001%, but it may be higher in urban areas or enclosed environments (Weaver, 2009). CO has a significant affinity for all iron- or copper-containing sites and competes with oxygen at these active sites. The affinity of hemoglobin for CO is 250 times higher than that for oxygen; the result is the formation of carboxyhemoglobin (CO-Hb), a molecule incapable of carrying O2 to tissue sites, resulting in tissue hypoxia (Suner and Jay, 2008).
The brain is the most sensitive organ to hypoxia induced by CO poisoning. The major neurological manifestation of CO toxicity is the delayed neurological syndrome (DNS), which includes many symptoms and signs such as mental deterioration, amnesia, gait disturbances, psychosis, depression and parkinsonism (Pang et al., 2013). Many prognostic factors have been suggested in previous studies to be associated with DNS in CO-poisoned patients, e.g. older age, prolonged coma, headache upon hospital admission, metabolic acidosis, high lactate levels, and globus pallidus or white matter lesions on early brain computed tomography or magnetic resonance imaging (Hu et al., 2011; Moon et al., 2011).
Many studies have been done to detect reliable plasma biomarkers that could be of great value in predicting the development of DNS, such as plasma copeptin, nitric oxide, serum S100b protein, serum Tau protein, carboxyhemoglobin level (CO-Hb), white blood cell (WBC) count, creatine phosphokinase (CPK), creatine kinase-MB (CK-MB) and others (Pang et al., 2013). Therefore, the present study aimed to assess the usefulness of serum S100b protein, neurone-specific enolase (NSE) and glutathione reductase (GSH) as biomarkers for the prediction of DNS in CO-poisoned patients, and to compare their accuracy, sensitivity and specificity to detect the best one by logistic regression analysis.
Subjects & methods
This prospective comparative study was conducted on 57 patients (aged 20-45 years) with acute CO poisoning, admitted to the Poison Control Center in Minia University Hospital (a tertiary-care hospital) in the period from November 2016 to March 2017. Diagnosis of CO poisoning was made according to medical history, clinical manifestations at the time of admission, a CO-Hb level > 5% in non-smokers (> 10% in smokers), and improvement on 100% high-flow oxygen therapy through a face mask, with hyperbaric oxygen if indicated (if CO-Hb > 25%, or in the presence of syncope, seizures, evidence of focal neurological deficits or acute myocardial infarction) (Brvar et al., 2004).
Exclusion criteria
1) A previous history of neuropsychiatric disease.
2) Pregnancy.
3) Concurrent head trauma or toxicity with another poison.
4) Refusal to participate in this study.
5) Administration of any medications or presence of any systemic diseases that can affect the CO-Hb level, such as hemolytic anemia, hemolytic jaundice, severe sepsis and pneumonia.
6) Any cause that can elevate S100b protein or NSE, such as status epilepticus, permanent neurological injury, current head trauma, dementia or parkinsonism; failure to follow up after discharge; and presentation more than 24 h after acute CO poisoning, because the half-life of serum NSE is 24 h (Rasmussen et al., 2004).

The protocol of this study was approved by the Medical Ethical Committee of Minia University Hospital, and the study was conducted according to the ethical guidelines of the Declaration of Helsinki. A written consent was taken from all patients, or from their relatives in cases of unconscious patients, including their agreement to participate in this study. Finally, patients were informed of the symptoms of DNS (delayed symptoms of gait disturbances, mental deterioration, urinary incontinence, psychosis, depression and parkinsonism (Hu et al., 2011)) at the time of hospital discharge and were encouraged to return to the hospital if they experienced one of these symptoms. Follow-up of discharged patients continued for at least 2 months, based on Choi's (1983) observation that the lucid interval for the development of DNS is generally from 2-40 days. Data, symptoms and signs of the development of DNS were investigated by reviewing the medical records of the patients or by completing a questionnaire containing simple yes/no questions. Patients' confidentiality was considered and ascertained in reviewing their records and questionnaires.
Clinical Assessment
Demographic data of patients were collected (age, sex, occupation, residence, and special habits such as smoking). Clinical assessment of patients recruited for the study was performed at the time of their admission. This assessment included symptoms, signs and investigations. Symptoms included headache, dizziness, nausea, vomiting, dyspnea, muscle weakness, blurred vision, confusion, palpitations, agitation and syncope. Clinical evaluation of patients was carried out regarding vital signs (temperature, pulse, blood pressure, and respiratory rate), conscious level, and Glasgow coma scale (GCS) score. Complications during admission were assessed, such as cardiac complications (e.g. myocardial infarction), rhabdomyolysis and renal problems.
Electrocardiography (ECG) and laboratory investigations were done, including the CO-Hb level; liver function tests including serum aspartate aminotransferase (AST), serum alanine aminotransferase (ALT) and alkaline phosphatase (ALP); renal function tests including blood urea and serum creatinine; random blood sugar (RBS); serum electrolytes (Na and K); pH; creatine phosphokinase (CPK); creatine kinase-MB (CK-MB); and troponin I. Also, serum GSH, S100b protein and NSE were assessed. Kits for GSH, S100b protein and NSE were obtained from Bio-diagnostic Company, Egypt. GSH was measured according to Goldberg and Spooner, 1983, while serum S100b protein was measured as described by Goncalves et al., 2008. NSE was measured according to Kirino et al., 1983. The previous three parameters were measured using ELISA (Humareader plus, Germany).
Statistical analysis
The collected data were statistically analyzed using SPSS program version 20. Descriptive statistics were done as follows: continuous (quantitative) data were presented as median and IQR (interquartile range), while categorical data were presented as number and percentage. Comparison between groups was done using the Mann-Whitney test for quantitative data, while Fisher's exact test was used for categorical data. Pearson's correlation was used. Logistic regression analysis was used to determine the predictors of DNS. Receiver operating characteristic (ROC) curve analysis was done to determine the sensitivity, specificity and accuracy of the predictors. Comparison between predictors and determination of the best one to predict DNS was done by the Z-statistic test. Significance was taken at P < 0.05.
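For illustration, the odds ratio reported by a simple logistic regression can be reproduced in Python with statsmodels, where the OR is the exponential of the fitted coefficient. The marker levels and outcomes below are invented for demonstration and are not the study's patient data.

```python
import numpy as np
import statsmodels.api as sm

# Invented data: a marker level and the DNS outcome (1 = developed DNS) for 12 patients
marker = np.array([10.2, 12.5, 15.1, 14.0, 19.8, 13.3,   # non-DNS
                   14.5, 22.1, 19.4, 25.0, 21.7, 23.8])  # DNS
dns = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

X = sm.add_constant(marker)            # intercept + marker level
model = sm.Logit(dns, X).fit(disp=0)   # simple (univariable) logistic regression

odds_ratio = np.exp(model.params[1])            # OR per unit increase of the marker
ci_low, ci_high = np.exp(model.conf_int()[1])   # 95% CI on the OR scale
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```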
Results
This study was conducted on 57 patients aged 20-45 years with acute CO poisoning. Acute CO poisoning was diagnosed according to medical history, physical examination, and laboratory investigations including a CO-Hb level > 5% in non-smokers (> 10% in smokers). The studied patients were classified according to the development of DNS into 8 patients with DNS (DNS group) and 49 patients without DNS (non-DNS group). The mortality rate within this study was zero. DNS developed in cases with a CO-Hb level > 40% who had not received hyperbaric oxygen.
Acute CO poisoning was more frequent in married, non-smoking males. It was also frequently found in students and in subjects living in urban areas (Table 1). Table 2 revealed a significant increase in respiratory rate, dizziness, syncope, loss of consciousness, pneumonia and cardiac affection in the form of an inverted T-wave in DNS-group patients. There was also a significant decrease in GCS in the same group.
There was a significant increase in CO-Hb level, CPK, CK-MB, troponin I, S100b protein and NSE, and a significant decrease in pH and GSH, in cases with DNS (Table 3). There was a significant correlation between serum GSH, S100b protein and NSE and certain significant numerical parameters (GCS, CO-Hb level, CK-MB, CPK, troponin I, pH and respiratory rate), and an insignificant correlation between the NSE level and the CK-MB level (Table 4).
Simple logistic regression analysis of serum GSH, S100b protein and NSE levels showed that the odds ratio (OR) of GSH was less than one (0.81), which means that an increased level of GSH in CO-intoxicated patients has a protective effect (an increased GSH level decreased the incidence of DNS). The odds ratios of S100b protein and NSE were more than one (1.51 and 2.3, respectively), indicating that as their levels increase, the incidence of DNS increases (Table 5).
Table 6 showed the multiple logistic regression analysis of GSH, S100b protein and NSE. The use of a combination of the previous parameters in the prediction of DNS revealed insignificant changes in the odds ratios. Table 7 indicated that there was one significant model to predict DNS by multiple stepwise logistic regression analysis, which was the use of S100b protein (odds ratio = 1.51): an increased level of S100b protein increased the incidence of DNS by about one and a half times. ROC curve analysis of the previous parameters revealed that the most accurate one in the prediction of DNS was S100b protein; its sensitivity and specificity were 87.5 and 100, respectively. If the cut-off value of S100b protein is > 18.94 pg/L, DNS is predicted to occur; likewise, DNS is predicted if the cut-off value of GSH is ≤ 30 U/L or that of NSE is > 30.49 ng/mL (Table 8). The results of the comparison between GSH, S100b protein and NSE by the Z-statistic test, to determine the best predictive value for the occurrence of DNS (Table 9), revealed no significant clear difference in AUC between them, which means that there is no superiority of one over the others. Finally, if we need to depend on one of them to predict DNS, we would use the multiple stepwise logistic regression analysis, which retained only one model, the S100b protein model.
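The paper reports a cut-off, sensitivity and specificity for each marker from ROC analysis but does not name the criterion used to select the cut-off; Youden's J statistic (sensitivity + specificity - 1) is a common choice and is sketched below on the same invented data as the previous example.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Invented marker levels and DNS outcomes (not the study's patient data)
marker = np.array([10.2, 12.5, 15.1, 14.0, 19.8, 13.3,
                   14.5, 22.1, 19.4, 25.0, 21.7, 23.8])
dns = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

fpr, tpr, thresholds = roc_curve(dns, marker)
auc = roc_auc_score(dns, marker)

# Optimal cut-off by Youden's J = sensitivity + specificity - 1
j = tpr - fpr
best = np.argmax(j)
print(f"AUC = {auc:.3f}")
print(f"Cut-off > {thresholds[best]:.2f}: "
      f"sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f}")
```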
Discussion
Carbon monoxide (CO) has been termed 'the unnoticed poison of the 21st century' as it lacks a unique clinical signature. CO poisoning is difficult to detect and can mimic other common disorders such as food poisoning. CO competes with oxygen for hemoglobin binding, leading to reduction of the delivery of oxygen to tissues and the occurrence of cellular hypoxia (Won and Jae, 2010). The severity of CO poisoning depends on several factors, such as the CO concentration, duration of exposure, individual susceptibility to CO effects, and general health status of the exposed individual. The brain and heart are the most susceptible organs to CO toxicity because of their high metabolic rate (Weaver, 2009). CO poisoning increases the release of nitric oxide and other reactive oxygen free radicals; the end result is lipid peroxidation and a variety of lesions in myelin basic protein (MBP), which constitutes about 30% of the myelin protein of the CNS, with an influx of macrophages and CD-4 lymphocytes. This mechanism can explain the delayed CO neurological sequelae (Yu et al., 2012).
Many studies have been done to detect reliable biomarkers for predicting the development of DNS, such as nitric oxide, serum Tau protein, serum GSH, S100b protein and NSE (Pang et al., 2013). NSE is one of the five isoenzymes of the glycolytic enzyme enolase. This enzyme is released into the CSF when neural tissue is injured. It is released from neuronal and glial tissue into the blood only when the axons are damaged. It can be used as a marker of neuronal cell damage in patients with certain tumors, e.g. neuroblastoma, medullary thyroid cancer, endocrine tumors of the pancreas and melanoma. It is also increased in traumatic and hypoxic brain damage, status epilepticus and cardiac arrest (Akelma et al., 2013).
The biomarker S100b protein is a calcium-binding protein that is produced mainly by the glial cells of the brain. Its secretion is increased in response to ischemic or oxidative stress injury, e.g. traumatic head injury, stroke, subarachnoid hemorrhage and cardiac arrest (Yang & Rosenberg, 2011). The present study aimed to assess the usefulness of the serum S100b protein, NSE and GSH biomarkers to predict DNS, and to determine the most accurate one by logistic regression analysis.
The results of this study revealed a significant increase in some symptoms, signs and laboratory parameters in the DNS group, including respiratory rate, dizziness, syncope, loss of consciousness, inverted T wave, pneumonia, CO-Hb level, CPK, CK-MB, troponin I, S100b protein and NSE. There was also a significant decrease in GCS, pH and GSH. These results agree with those of YS et al., 2017, whose study revealed a significant increase in loss of consciousness, CPK, troponin I, CO-Hb level, NSE, cardiac affection and pneumonia in DNS-group patients.
Contrary to these findings, YS et al., 2017 concluded that there is no significant difference in respiratory rate between DNS and non-DNS patients. Also, the study of Chou et al., 2000 showed that low temperature is highly associated with CO-poisoned patients who develop DNS, a finding that does not agree with the present study, which revealed no significant difference in temperature between the DNS and non-DNS groups. Eunjung et al., 2012 conducted a study indicating the usefulness of S100b protein for predicting DNS in acute CO poisoning; their study revealed significant increases in CO-Hb level, AST, creatinine, blood urea nitrogen (BUN), CPK and serum S100b protein.
The studies of Giuseppe et al., 2011 and Chan et al., 2016 found that a GCS score of 3 and loss of consciousness were possible prognostic factors for the development of DNS; their results agree with those of this study. The current study revealed a significant correlation between serum GSH, NSE and S100b protein and some parameters (e.g., GCS, CPK, CK-MB, troponin I, pH). These findings disagree with the results of YS et al., 2017, which showed no variables significantly correlated with the level of serum NSE.
Serum GSH, S100b protein and NSE were analyzed in this study by logistic regression to identify predictors related to the development of DNS. Multiple stepwise logistic regression analysis revealed one significant model, the use of S100b protein (OR = 1.51, 95% CI 1.15-1.98). These results disagree with those of YS et al., 2017, who reported two significant predictors by multivariate logistic regression analysis: GCS (OR = 3.336, 95% CI 0.130-0.867) and serum NSE (OR = 1.105, 95% CI 1.019-1.199).
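The odds ratios reported here come from a fitted logistic regression model. As a minimal sketch of how such an OR and its 95% CI are computed, the snippet below fits a model with statsmodels; all patient values are invented for illustration and are not the study's data.

```python
# Illustrative only: deriving odds ratios (exp of coefficients) and their
# 95% CIs from a logistic regression, as in the study's Table (6)/(7).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
s100b = rng.normal(15, 5, 57)        # serum S100b (pg/L), made-up values
gsh = rng.normal(35, 8, 57)          # serum GSH (U/L), made-up values
nse = rng.normal(25, 9, 57)          # serum NSE (ng/ml), made-up values
# DNS outcome loosely driven by S100b plus noise (purely hypothetical)
dns = (s100b + rng.normal(0, 8, 57) > 18).astype(int)

X = sm.add_constant(np.column_stack([gsh, s100b, nse]))
fit = sm.Logit(dns, X).fit(disp=0)

odds_ratios = np.exp(fit.params)     # exp(coefficient) = odds ratio
ci = np.exp(fit.conf_int())          # 95% CI for each odds ratio
print(odds_ratios)
print(ci)
```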
The results of the ROC curve analysis of GSH for predicting DNS in this study were: AUC 0.964, optimal cut-off value ≤ 30 U/L, sensitivity 87.5, specificity 95.95 and accuracy 94.74%. The values for S100b protein were: AUC 0.990, cut-off value > 18.94 pg/L, sensitivity 87.5, specificity 100 and accuracy 98.25%. The NSE values were: AUC 0.954, cut-off value > 30 ng/ml, specificity 100, sensitivity 75 and accuracy in predicting DNS of 96.49%. These results contrast with those reported by Eunjung et al., 2012, in which the cut-off point of S100b protein was 0.165, predicting the development of DNS after CO poisoning with 90% sensitivity and 87% specificity. The results of the present study also disagree with those of YS et al., 2017, whose study showed that NSE is a good predictor of DNS (OR = 1.105, 95% CI 1.019-1.199 and AUC = 0.836).
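As a sketch of how such an AUC, optimal cut-off, sensitivity and specificity can be obtained from raw marker values, the snippet below uses scikit-learn with Youden's J to pick the cut-off; the choice of Youden's J and the data are illustrative assumptions, not the study's method or data.

```python
# Sketch: ROC analysis of one marker against the DNS outcome (hypothetical data).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
dns = rng.integers(0, 2, 57)                    # outcome: DNS yes/no
s100b = 15 + 6 * dns + rng.normal(0, 3, 57)     # marker is higher when DNS = 1

fpr, tpr, thresholds = roc_curve(dns, s100b)
auc = roc_auc_score(dns, s100b)

j = tpr - fpr                                   # Youden's J statistic
best = int(np.argmax(j))                        # index of the best cut-off
print(f"AUC={auc:.3f}, cut-off>{thresholds[best]:.2f}, "
      f"sensitivity={100 * tpr[best]:.1f}%, specificity={100 * (1 - fpr[best]):.1f}%")
```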
This study revealed no significant differences between the areas under the ROC curves by Z-statistics when comparing GSH vs S100b protein (difference in AUC = 0.062), GSH vs NSE (difference in AUC = 0.010) and S100b protein vs NSE (difference in AUC = 0.036), which means that no marker is superior to the others; if one predictor of DNS must be chosen, we use the only significant model from the multiple stepwise logistic regression analysis, which was S100b protein. These results were not in accordance with those of YS et al., 2017, whose study indicated that the combination of initial GCS and NSE was better than the use of GCS or NSE alone.
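The Z-statistic comparison of AUCs used here is typically done with a test such as DeLong's, which standard scipy/scikit-learn do not ship. As a hedged alternative, the sketch below compares two markers' AUCs with a paired bootstrap; the data and the bootstrap substitution are assumptions, not the paper's exact procedure.

```python
# Paired-bootstrap comparison of two ROC AUCs (a stand-in for a DeLong Z-test).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
dns = rng.integers(0, 2, 57)
s100b = 15 + 6 * dns + rng.normal(0, 3, 57)     # hypothetical marker 1
nse = 22 + 8 * dns + rng.normal(0, 6, 57)       # hypothetical marker 2

diffs = []
idx = np.arange(len(dns))
for _ in range(2000):                           # bootstrap resamples of patients
    b = rng.choice(idx, size=len(idx), replace=True)
    if len(np.unique(dns[b])) < 2:              # AUC needs both classes present
        continue
    diffs.append(roc_auc_score(dns[b], s100b[b]) - roc_auc_score(dns[b], nse[b]))

lo, hi = np.percentile(diffs, [2.5, 97.5])
# A 95% CI containing 0 means no clear superiority of one marker over the other.
print(f"95% CI for AUC difference: [{lo:.3f}, {hi:.3f}]")
```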
Yang and Rosenberg, 2011 clarified that acute CO poisoning is associated with hypoxia and ischemia, which can activate two substances, gelatinase A (MMP-2) and gelatinase B (MMP-9), with subsequent degradation of the tight junctions of endothelial cells and the basal lamina, finally compromising the blood-brain barrier and allowing the passage of elevated S100b protein or NSE from the CSF to the blood. The decreased GSH and increased NSE and S100b protein in DNS-group patients with CO poisoning may be explained by neuronal cell injury, hypoxia, oxygen radical-mediated lipid peroxidation and nitric oxide liberation from platelets (Weaver, 2009).
Conclusions & recommendations
Based on the results of the current study, several clinical manifestations and laboratory parameters are significantly associated with the development of DNS, including respiratory rate, GCS, syncope, dizziness, loss of consciousness, CO-Hb level, CPK, CK-MB, pH, troponin I, GSH, NSE and S100b protein. These results also indicate that cut-off values of serum GSH ≤ 30 U/L, S100b protein > 18.94 pg/L and NSE > 30.49 ng/ml predict the development of DNS after acute CO poisoning. Finally, it is concluded from the multiple stepwise logistic regression analysis that serum S100b protein may represent a novel biomarker for predicting DNS after CO poisoning (its accuracy was 98.25%).
Further studies with large sample sizes of acutely CO-poisoned patients with severe neurological injury are advised to validate the results of the present study. The observation period for the development of DNS was relatively short (2 months), so the DNS incidence rate may have been underestimated; a longer observation period is therefore advised. It is recommended that an outpatient clinic be established to assess all possible delayed neurological manifestations of acute CO poisoning, and this clinic should be linked to the poison control center. Finally, it is recommended to seek new predictors for DNS and other complications of CO poisoning.
Figure (1): Receiver Operating Characteristic (ROC) curve analysis of GSH for prediction of DNS
Table (2): Mann Whitney & Fisher exact statistical analysis of some clinical parameters affecting the development of DNS.
DNS: delayed neurological sequelae, GCS: Glasgow coma scale. Continuous data presented as median and IQR, while categorical data presented as number and percentage; Mann Whitney test for quantitative data between the two groups, Fisher exact test for qualitative data between the two groups, *: significant difference at p value < 0.05 | 2018-12-27T03:51:08.815Z | 2018-07-01T00:00:00.000 | {
"year": 2018,
"sha1": "457a8114e8f63c8b54b0de5b473f1e906d22e1e0",
"oa_license": "CCBY",
"oa_url": "https://ajfm.journals.ekb.eg/article_15874_f931c9814d588d8dab902f6680501cfb.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "457a8114e8f63c8b54b0de5b473f1e906d22e1e0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
272039869 | pes2o/s2orc | v3-fos-license | The Effect of Moringa oleifera Flour Addition on The Quality of Chicken Meat Dimsum
This study examined the effect of Moringa oleifera flour addition on the protein, moisture content, ash, fat content, texture, water holding capacity, organoleptic properties and color (L*, a*, b*) of chicken meat dimsum. The treatments used were no added moringa flour as a control and the addition of moringa flour at 1%, 2%, and 3%. The variables measured were protein, moisture content, ash, fat content, texture, water holding capacity, organoleptic properties and color (L*, a*, b*). The results showed that the treatment had a significant effect (p < 0.05) on protein, moisture content and water holding capacity, a highly significant effect (p < 0.01) on ash content and organoleptic color, aroma, and taste, and no significant effect (p > 0.05) on fat content, texture and organoleptic texture. The color testing showed a highly significant effect (p < 0.01) on brightness (L*), a significant effect (p < 0.05) on redness (a*) and no significant effect (p > 0.05) on yellowness (b*). It can be concluded that the addition of 3% moringa leaf flour to chicken meat dimsum gives the best results in terms of protein content of 13.45%, moisture content of 60.20%, ash content of 1.53%, fat content of 1.66%, texture of 5.15 N, water holding capacity of 31%, organoleptic scores for color of 1.4, aroma of 1.56, taste of 2.05 and texture of 3.1, and color values of lightness (L*) 54.25, redness (a*) 0.97 and yellowness (b*) 17.62. Based on these results, moringa leaf flour at 3% is suggested for good chemical quality, and at 1% for good physical quality.
INTRODUCTION
The population of broiler chickens in Indonesia has increased in the last decade. According to the Central Statistics Agency (BPS), Indonesia produced around 3.76 million tons of chicken meat in 2022. Broilers are breeds of chicken whose main output is meat, produced in a relatively short time of around 5-7 weeks. Chicken meat is a food source of animal protein because it contains complete and balanced essential amino acids. In addition, meat is very popular with the public because of its delicious taste and high protein content (Taus et al., 2022). Along with developments in technology, chicken meat is not only processed into household food products but also into ready-to-eat foods to improve the quality and extend the shelf life of chicken meat.
Efforts to improve the quality and extend the shelf life of chicken meat can be made through processing. One technique that can be used to process chicken meat is the restructured meat technique. Restructured meat is a meat processing technique that uses irregular cuts of meat: the meat is ground and its texture is then restored by adding fillers and binders to form a product (Balia, 2018). This technique is useful for improving the quality of a product and adding flavor to a processed product. One product that uses this technique is dimsum.
Dimsum is a typical Chinese food originating from Cantonese cuisine; the name means steamed snacks, and it is usually served with chili sauce to add flavor. Dimsum, a snack with high nutritional value, is usually filled with meat, chicken, fish, shrimp, fruit or vegetables (Manik et al., 2020). Dimsum usually uses only tapioca flour as a filler and binder. One way to diversify dimsum is to add other fillers, such as moringa leaf flour, which can improve the quality of the dimsum.
Moringa, with the Latin name Moringa oleifera, is often called a miracle plant because every part of the plant has many benefits and much potential. The leaves of the moringa plant are most often used to meet human nutritional needs. Moringa leaves can be used as a vegetable or as an addition to processed food products. Moringa leaves contain abundant protein, minerals and antioxidants. Apart from being used directly in fresh form, moringa leaves can also be processed into flour, which can be added to food products such as biscuits, cakes, nuggets and other products. Moringa leaf flour contains a protein content of 26.02%, a water content of 9.57%, a fiber content of 4.03%, a carbohydrate content of 51.91%, and a fat content of 2.52% (Augustyn et al., 2017). The addition of moringa leaf flour will affect the nutritional and organoleptic content of dimsum. The more moringa leaf flour added to chicken meatballs, the more the protein content increases (Zulmy, 2018). The addition of moringa flour in making dimsum can reduce the water content, which determines the texture, appearance and durability of food ingredients (Kinanti, 2016). The presence of the lipoxidase enzyme, chlorophyll and tannin in moringa leaf flour will reduce the acceptability of the aroma, color, taste and texture of meatballs (Djawa et al., 2021). The ash content of moringa leaf flour is relatively high; this is because the water content can affect the increase in nutritional value, including minerals (Kurniawati et al., 2018). The fat content of moringa leaf flour is 2.3%, and this low fat content will not raise the fat content of a product without the addition of other ingredients; the protein content of moringa leaf flour, at 26%, can increase the protein of a product; texture is affected by the water content of the product; and organoleptic properties and L*a*b* color are important aspects of consumer acceptance of a food product. Therefore, research is needed to determine the effect of adding moringa leaf flour on the quality of chicken dimsum in terms of protein content, water content, ash content, fat content, texture, water holding capacity, organoleptic properties, and L*a*b* color.
The main ingredient in each analysis is moringa leaf flour. The supporting materials used include H2SO4, CuSO4, H3BO3, NaOH, HCl, BCG-MR indicator, distilled water, petroleum ether and boiling stones.
Research Method
The research method used was a laboratory experiment with a Completely Randomized Design (CRD) of 4 treatments and 4 replications. The treatments were no added moringa flour as a control and the addition of moringa flour at 1%, 2%, and 3%.
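The analysis implied by a CRD with 4 treatments and 4 replications is a one-way ANOVA per response variable. The sketch below shows how such a test could be run; the protein values and the use of scipy here are illustrative assumptions, not the paper's data or software.

```python
# Minimal sketch of a one-way ANOVA for a CRD: 4 treatments x 4 replications.
from scipy.stats import f_oneway

# Hypothetical protein content (%) for T0 (control), T1 (1%), T2 (2%), T3 (3%)
t0 = [11.7, 11.8, 11.6, 11.8]
t1 = [12.2, 12.4, 12.1, 12.3]
t2 = [12.9, 13.0, 12.8, 13.1]
t3 = [13.4, 13.5, 13.3, 13.6]

f, p = f_oneway(t0, t1, t2, t3)
print(f"F = {f:.2f}, p = {p:.4f}")   # p < 0.05 -> significant treatment effect
```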
Research Procedure
The process of making chicken dimsum with added moringa leaf flour follows Murdiasa et al. (2021). The process begins with preparing the tools and ingredients. The chicken meat is washed first and then ground using a chopper. Prepare the tapioca flour, moringa leaf flour, egg white, sugar, salt, onion, garlic and pepper powder according to the specified formulation. Mix all the ingredients until smooth, with moringa leaf flour added at the specified treatment level (0%, 1%, 2% or 3%), to form the dimsum dough. Take the dough, place it on the dimsum skin and wrap the dough in the skin. Steam the dimsum for 20 min at an initial temperature of 70°C using a stove.
Protein Content
Protein levels were measured following Bakhtra et al. (2017). The process starts by weighing 1 g of the sample and placing it in a Kjeldahl flask. Add 10 ml of concentrated H2SO4 to the Kjeldahl flask containing the sample. Then add 1 g of mixed selenium catalyst to speed up the digestion, and heat the Kjeldahl flask in a fume cupboard until it stops smoking. Heating continues until the mixture boils and the liquid turns clear. The heating is then stopped and the Kjeldahl flask is left to cool.
After cooling, the solution was diluted with distilled water in a 100 ml volumetric flask to the mark and homogenized. Pipette 10 ml of the resulting dilution into a Kjeldahl flask for distillation. Then slowly add 10 ml of 33% NaOH solution and heat until the two layers of liquid mix and the mixture boils. The distillate was collected in an Erlenmeyer flask containing 10 ml of 0.1 N HCl solution. Check the distillation results with litmus paper; if the results are not alkaline, the distillation can be stopped. At the titration stage, 4 drops of phenolphthalein indicator were added to the distillate, which was then titrated with 0.1 N NaOH solution until it turned pink. %N can be calculated using the formula:
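The printed %N formula did not survive extraction. The sketch below encodes the standard Kjeldahl back-titration calculation; the dilution factor of 10 (10 ml aliquot of a 100 ml digest) and the conventional 6.25 nitrogen-to-protein factor are assumptions based on the procedure described, not the paper's own equation.

```python
# Standard Kjeldahl back-titration: blank vs sample NaOH titration volumes.
def kjeldahl_protein(v_naoh_blank_ml, v_naoh_sample_ml, n_naoh,
                     sample_g, dilution_factor=10.0, conversion=6.25):
    """Percent protein from a Kjeldahl back-titration with NaOH."""
    # mmol of N captured = mmol of HCl neutralized by the distilled NH3
    mmol_n = (v_naoh_blank_ml - v_naoh_sample_ml) * n_naoh
    # mg N -> g, scale from the aliquot to the whole digest, express as %
    percent_n = mmol_n * 14.007 * dilution_factor / (sample_g * 1000) * 100
    return percent_n * conversion

print(kjeldahl_protein(10.0, 8.5, 0.1, 1.0))  # ~13.1% protein for these inputs
```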
Water Content
Water content measurements were carried out using the thermogravimetric method of AOAC (2005): the cup to be used is first placed in the oven for 1 h at 105°C, then cooled in a desiccator for 30 min and weighed (B1); 5 g of sample is then weighed into the cup (B2 being the weight of the cup plus sample before drying) and dried in an oven at 100-105°C for 6 h; finally, the sample is cooled in a desiccator for 30 min and weighed (B3). Water content was calculated using the following formula:
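The formula itself was lost in extraction; below is the standard thermogravimetric calculation, assuming B1-B3 as defined in the text above.

```python
# Standard AOAC thermogravimetric moisture calculation.
def moisture_percent(b1, b2, b3):
    """B1: dry cup; B2: cup + sample before oven; B3: cup + sample after oven."""
    return (b2 - b3) / (b2 - b1) * 100   # water lost / initial sample mass

print(moisture_percent(b1=20.00, b2=25.00, b3=21.99))  # ~60.2% water
```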
Ash Content
Ash content was measured using the dry ashing method of AOAC (2005): first, the cup to be used is dried for 30 min, or until a constant weight is obtained, in an oven at 105°C; the cup is cooled in a desiccator for 30 min and weighed (B1); then 2-3 g of sample is placed in the cup of known weight, transferred to an ashing furnace, and burned at 550°C until grayish-white ash is obtained or the sample reaches a constant weight; the sample is then cooled in a desiccator for 30 min and weighed (B2). The ash content calculation is as follows:
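Again, the equation was lost in extraction; this is the standard dry-ashing calculation, with the interpretation of B1 as the empty cup and B2 as cup plus ash being an assumption based on the procedure above.

```python
# Standard dry-ashing calculation.
def ash_percent(b1, b2, sample_g):
    """B1: empty cup; B2: cup + ash after the furnace; sample_g: initial mass."""
    return (b2 - b1) / sample_g * 100

print(ash_percent(b1=20.000, b2=20.038, sample_g=2.5))  # ~1.5% ash
```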
Fat Content
Fat content was measured using the Soxhlet extraction method of AOAC (2005): filter paper and cotton are placed in an oven at 105°C for 1 h, cooled in a desiccator for 15 min and weighed (Wa); 2-3 g of the ground sample from the water content test is then added, weighed (Wb), and wrapped in the filter paper lined with cotton to form a thimble. The extraction apparatus is assembled from a heating mantle, fat flask, Soxhlet extractor and condenser; petroleum ether is added, and extraction is carried out for about 6 h until the solvent falls back through the siphon into the fat flask. The sample is then taken out and oven-dried for 24 h at 105°C, cooled in a desiccator for 15 min and weighed (Wc). Fat content is calculated using the formula:
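The printed formula did not survive extraction; this sketch reconstructs the standard Soxhlet calculation from the Wa/Wb/Wc definitions given at the end of the article.

```python
# Standard Soxhlet fat calculation from the Wa/Wb/Wc weighings.
def fat_percent(wa, wb, wc):
    """wa: paper + cotton; wb: wa + dry sample; wc: wb after extraction/drying."""
    sample_g = wb - wa        # dry sample mass before extraction
    fat_g = wb - wc           # mass removed by the ether extraction
    return fat_g / sample_g * 100

print(fat_percent(wa=1.500, wb=4.000, wc=3.958))  # ~1.7% fat
```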
Texture
Texture was measured after sample preparation based on Aisya et al. (2021). The speed of the texture analyzer is set, and the sample is pressed with a cylindrical probe 35 mm in diameter. The sample is placed under the probe and pressed with the configured force until the levels of elasticity and stickiness are read on the texture analyzer monitor.
Water Holding Capacity
WHC was measured according to AOAC (2005). A 0.3 g sample is placed on Whatman No. 42 filter paper and pressed between 2 glass plates under a load of 35 kg for 5 min. After 5 min the filter paper and sample are removed, and the wet area and the pressed sample area are traced onto transparent plastic. The area of the sample circle and the area of the outer circle formed by the water are measured; the area of the ring formed by the water is the area of the outer circle minus the area of the inner circle, and WHC is computed using the following formula:
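The WHC formula did not survive extraction. As a placeholder sketch, the wetted ring area (outer circle minus pressed-sample circle) is computed below; how that area maps to a WHC percentage here follows one common convention and is an assumption, not the paper's own equation.

```python
# Filter-paper press method: estimate WHC from traced circle areas.
def wetted_ring_area(outer_area_cm2, inner_area_cm2):
    """Area of the ring wetted by expressed water."""
    return outer_area_cm2 - inner_area_cm2

def whc_percent(outer_area_cm2, inner_area_cm2):
    ring = wetted_ring_area(outer_area_cm2, inner_area_cm2)
    return (1 - ring / outer_area_cm2) * 100   # larger wet ring -> lower WHC

print(whc_percent(outer_area_cm2=8.0, inner_area_cm2=4.9))
```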
Organoleptic
The organoleptic procedure is carried out by providing dimsum samples to the panelists according to the test code (Putri and Mardesci, 2018). Panelists are given an organoleptic test form with a hedonic scale of 1-5 and rate the color, aroma, taste and texture of the product on this scale, from 1 for the lowest score up to 5 for the highest score.
L*a*b* Color
The L*a*b* color test uses a color reader: turn on the device, set the standardization value by measuring the porcelain standard, hold the color reader perpendicular to the surface and press the target button. The sample to be tested is then read in the same way, and the dE, dL, da and db values are displayed (Lindriati et al., 2020).
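As a short illustration of how the reader's dL, da and db outputs relate to the total color difference dE, the snippet below uses the CIE76 convention; that the instrument follows this convention (reporting differences against the porcelain standard) is an assumption.

```python
# CIE76 total color difference from component differences.
import math

def delta_e(dl, da, db):
    return math.sqrt(dl**2 + da**2 + db**2)

# e.g., T3 vs control using the Table 11 averages: dL=-6.6, da=-3.6, db=1.25
print(delta_e(dl=-6.6, da=-3.6, db=1.25))  # ~7.6
```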
Protein Content
The analysis of variance showed that the addition of moringa leaf flour had a significant effect (p < 0.05) on the protein content of dimsum. The average protein contents of chicken meat dimsum are presented in Table 1 below. The average protein content of the dimsum ranged from 11.73% to 13.45%. The highest average, 13.45%, was obtained with the addition of 3% moringa leaf flour, while the lowest, 11.73%, was found in the control treatment. The results show that increasing the percentage of moringa leaf flour in chicken meat dimsum affects the protein content of the dimsum. The increase in protein level is driven by the protein content of the moringa leaf flour itself: the percentage of added moringa leaf flour can affect the protein content of dimsum because moringa leaf flour contains 26.03% protein (Angelina et al., 2021). The protein content of dimsum is also influenced by its water content (Anjalani et al., 2023). The denaturation process is related to the water content, which affects the protein content of the product; the lower the water content of the food, the higher the protein content (Lindasari et al., 2021). Processing steps that reduce the water content will raise the percentage of the other organic components, including protein (Afiyah, 2022). This accords with the present finding that the dimsum showed a decrease in water content and an increase in protein content.
Water Content
The analysis of variance showed that the addition of moringa leaf flour at different percentages had a significant effect (p < 0.05) on the water content of chicken meat dimsum. The average water content of chicken meat dimsum with the addition of moringa leaf flour is presented in Table 2.
The highest average water content, 62.25%, was obtained at T0 (the control treatment), and the lowest, 60.20%, with the addition of 3% moringa leaf flour (T3). The standard for water content in dimsum set by the National Standardization Agency in the SNI is a maximum of 60%, and the water content of the chicken meat dimsum across the different treatment percentages remained close to this SNI standard. The results show that the addition of moringa leaf flour to chicken meat dimsum affects the water content. Gasperz (2018) states that the water content of food is affected by processing; steaming with steam heat tends to increase the moisture content of the food material. The water content of the chicken meat dimsum decreased as the percentage of added moringa leaf flour increased: the higher the concentration of moringa leaf flour added, the lower the resulting water content (Sinaga et al., 2023).
Ash Content
The analysis of variance showed that the addition of moringa leaf flour at different percentages had a highly significant effect (p < 0.01) on the ash content of chicken meat dimsum. The average ash content of chicken meat dimsum with the addition of moringa leaf flour is presented in Table 3. Explanation: a, b Different superscripts within a column indicate a highly significant difference (p < 0.01).
The highest ash content, 1.56%, was obtained with the addition of 2% moringa leaf flour (T2), while the lowest, 1.31%, was obtained in the control treatment (T0). The results show that the addition of moringa leaf flour had a highly significant effect on the ash content of chicken meat dimsum. This is supported by Augustyn et al. (2017), who note that a decrease in ash content can be caused by a relatively high water content in the raw material, which can dilute the ash fraction. The higher the addition of moringa leaf flour to dimsum, the more the ash content of the chicken meat dimsum tends to increase, because moringa leaf flour has a high ash content. This is in accordance with the statement by Yunita et al. (2022) that the ash content obtained from analysis of moringa leaf flour was 10.53%.
Fat Content
The analysis of variance showed no significant effect of moringa leaf flour on fat content; the average fat content of chicken meat dimsum with added moringa leaf flour is presented in Table 4. The highest fat content, 2.45%, was obtained in the control treatment (T0), while the lowest, 1.66%, was obtained with the addition of 3% moringa leaf flour (T3). These values comply with the Indonesian National Standard (SNI) 7756:2020, which sets a maximum fat content in dimsum of 20%. The results showed that the addition of moringa leaf flour had no significant effect on the fat content of chicken meat dimsum. This is because the fat content of the chicken meat dimsum is not affected by the moringa leaf flour, which itself has a low fat content of 7.28% (Yunita et al., 2022). The flavonoid and phenol content of moringa leaf flour can reduce fat content, and the steaming process used for dimsum can also keep the fat content relatively low (Salsabila and Ismawati, 2023).
Texture
The analysis of variance showed that the addition of moringa leaf flour had no effect on the texture of the dimsum. The average texture results for chicken meat dimsum are presented in Table 5 below. The texture (tenderness) values of the dimsum in this study ranged from 4.225 to 5.2 N. The texture of a product is influenced by its water content, and a high water holding capacity gives the product elasticity. According to Anggraini et al. (2023), a decrease in water holding capacity will affect the texture value of a product. According to (1994), the components that play a role in determining the texture of a product are the myofibril structure and its contraction status, the connective tissue content and degree of cross-linking, and the water-binding capacity of the protein and juiciness. Storage time also affects the texture value of dimsum, as it can result in protein damage that changes the texture of the product (Handayani et al., 2019).
Table 5. Average texture (N) in chicken meat with the addition of moringa leaf flour (Moringa oleifera)
Water Holding Capacity
The analysis of variance showed that the addition of moringa leaf flour had a significant effect (p < 0.05) on the WHC of the dimsum. The average WHC results for chicken meat dimsum are presented in Table 6 below.
The average WHC values of the dimsum ranged from 31% to 46%. The highest average WHC, 46%, was found in the control treatment, while the lowest, 31%, occurred with the addition of 3% moringa leaf flour. The results show that increasing the percentage of moringa leaf flour in chicken meat dimsum affects the WHC of the dimsum.
The decrease in the WHC of the dimsum is influenced by the water content of the product. According to Firsta et al. (2022), water content is linked to water holding capacity, which is closely related to the hydroxyl and sulfhydryl groups involved in hydrogen bonding. The average water content of the chicken meat dimsum with added moringa leaf flour was 60.21-62.75%.
The lowest average resulted from the addition of 3% moringa leaf flour. According to Lapase et al. (2016), as the water content of the product decreases, the protein's ability to bind water will also decrease.
The protein content of dimsum with added moringa leaf flour can affect the water holding capacity, which decreases when heating or boiling occurs. According to Kasri (2022), the decrease in WHC values is caused by protein denaturation and depolymerization during heating or boiling, which damage and alter the structure of muscle proteins, especially actin and myosin, so that the ability to bind water decreases.
Table 6. Average water holding capacity in chicken meat with the addition of moringa leaf flour (Moringa oleifera)
Explanation: a, b Different superscripts within a column indicate a significant difference (p < 0.05).
Organoleptic Color
The results showed a highly significant influence (p < 0.01) of the treatments on the color score of chicken meat dimsum. The average organoleptic color results for chicken meat dimsum are presented in Table 7 below. The highest average color score, 5, was found in the control treatment, while the lowest, 1.4, occurred with the addition of 3% moringa leaf flour. These scores show that the higher the concentration of moringa leaf flour added to the dimsum, the lower the panelists' preference score for the dimsum. The more moringa leaf flour added, the greener the resulting product. This is in accordance with the statement of Ardhanareswari (2019) that the higher the proportion of moringa leaves, the darker the color of the dimsum filling.
The darkening of the dimsum's color is what caused the panelists to dislike it. According to Ramadhani (2023), adding more moringa leaves causes the color of dumplings to become greener, so the percentage acceptance of the product's color decreases.
Table 7. Average color organoleptic score in chicken meat with the addition of moringa leaf flour (Moringa oleifera). Explanation: a, b, c Different superscripts within a column indicate a highly significant difference (p < 0.01).
Aroma Organoleptic
The results showed that the dimsum aroma score differed between treatments with a highly significant influence (p < 0.01). The average organoleptic aroma results for chicken dimsum are presented in Table 8 below.
The average aroma scores ranged from 1.56 to 4.45. The highest score, 4.45, was obtained in the control treatment, while the lowest, 1.56, was obtained with the addition of 3% moringa leaf flour. These values show that the greater the concentration of moringa leaf flour added to the dimsum, the lower the panelists' preference score.
The higher the concentration of moringa leaf flour added, the more unpleasant the dimsum's aroma becomes. Based on the research results of Sari and Ulilalbab (2020), adding moringa leaf flour at a concentration of 1% already gives the product an unpleasant aroma that is less acceptable in organoleptic terms. According to Viani et al. (2023), the unpleasant aroma of moringa leaf flour is caused by the lipoxidase enzyme, which breaks down fat into compounds of the hexanal and hexanol groups that produce the unpleasant smell.
Table 8. Average aroma organoleptic score in chicken meat with the addition of moringa leaf flour (Moringa oleifera). Explanation: a, b, c Different superscripts within a column indicate a highly significant difference (p < 0.01).
Aroma is a very important factor in organoleptic evaluation and influences how well a product is accepted by consumers. The aroma comes from the main ingredients of the dimsum product, namely moringa leaf flour, together with the other ingredients added to give the product a fragrant aroma. The panelists did not like the aroma produced by dimsum with added moringa leaf flour; the resulting unpleasant aroma is caused by the lipoxidase enzyme contained in moringa leaves (Sari and Ulilalbab, 2020).
Taste Organoleptic
The results showed that the dimsum taste score differed significantly between treatments (p < 0.05). The average organoleptic taste results for chicken meat dimsum are presented in Table 9 below. Table 9. Average taste organoleptic score in chicken meat with the addition of moringa leaf flour (Moringa oleifera). Explanation: a, b, c Different superscripts within a column indicate a significant difference (p < 0.05).
The average taste scores ranged from 2.05 to 4.45. The highest score, 4.45, was obtained in the control treatment, while the lowest, 2.05, was obtained with the addition of 3% moringa leaf flour. These values show that the higher the concentration of moringa leaf flour added, the lower the panelists' preference score: adding moringa leaf flour gives the dimsum a bitter taste, so the more moringa leaf flour is added, the more the panelists' preference for the taste of the dimsum decreases. Based on the research results of Sari and Ulilalbab (2020), the more moringa leaves are added, the more the dumplings taste slightly bitter, which is due to the amino acid content of moringa leaves as a component that forms taste and aroma.
Taste is the most important factor in evaluating a product, assessed by the sense of taste when consuming food or drink. The bitter taste produced by moringa leaf flour comes from its tannin content, which causes an astringent sensation when consumed. This is in accordance with the statement by Sari and Ulilalbab (2020) that, when consumed, tannin forms cross-links with protein in the oral cavity, resulting in a dry, wrinkled or astringent sensation. The addition of moringa leaf flour to dimsum therefore produces a bitter taste, leading the panelists to give low scores.
Texture Organoleptic
The results showed that the texture scores did not differ significantly between treatments (p > 0.05). The average organoleptic texture results for chicken meat dimsum are presented in Table 10 below. The average texture scores ranged from 3.1 to 4.35. The highest score, 4.35, was obtained in the control treatment, while the lowest, 3.1, was obtained with the addition of 3% moringa leaf flour. The panelists' level of preference decreased as the amount of moringa leaf flour added to the dimsum increased. According to Paramata et al. (2023), adding moringa leaf flour produces a hard product texture, because much of the water reacts with the flour and forms a gel; the more moringa leaf flour added to the dimsum mixture, the denser the resulting dough.

The average L*a*b* color of chicken meat dimsum with the addition of moringa leaf flour is presented in Table 11.
Lightness (L*)
The analysis of variance showed that the addition of moringa leaf flour at different percentages had a highly significant effect (p < 0.01) on the lightness (L*) of chicken meat dimsum. The lightness (L*) values ranged from 54.25 to 60.85.
The highest average lightness (L*) value, 60.85, was found in the control treatment, while the lowest, 54.25, was found in T3, the sample with the 3% treatment. This decrease was due to the increasing amount of moringa leaf flour used in each treatment. The results show that the addition of moringa leaf flour to chicken meat dimsum affects the lightness (L*) component of the L*a*b* color: the lightness of the chicken dimsum samples decreased as the percentage of moringa leaf flour increased, and the lower the lightness (L*) value obtained, the darker the color of the chicken meat dimsum.
Color changes in meat products can be caused by the pigment of the raw meat together with the additional ingredients used (Ayandipe et al., 2022). The color of an added ingredient helps determine the brightness of a product: dark-colored ingredients will reduce the brightness of a product, while bright-colored additional ingredients will increase it. This is supported by the research of Jonathan et al. (2016), which found that a lower amount of free water causes less light to be reflected, thereby reducing brightness.
Table 11. Average L*a*b* color in chicken meat with the addition of moringa leaf flour (Moringa oleifera). Explanation: a, b, c Different superscripts within a column indicate a highly significant difference (p < 0.01) in lightness; a, b, c different superscripts within a column indicate a significant difference (p < 0.05) in redness.
Redness (a*)
The analysis of variance showed that the addition of moringa leaf flour at different percentages had a significantly different effect (p < 0.05) on the redness (a*) of chicken meat dimsum. The redness (a*) values ranged from 0.97 to 4.57.
The highest average redness (a*) value, 4.57, was found in the control treatment, while the lowest, 0.97, was found in T3, the sample with the 3% treatment. The redness (a*) value decreased as more moringa leaf flour was added, owing to the increase in pigment (Munthe, 2022). Pigments have great potential as natural coloring agents in a product and can increase or decrease the redness (a*) value depending on the type of pigment contained in the ingredients (Khasanah and Pudji, 2019).
Yellowness (b*)
The analysis of variance showed that the addition of moringa leaf flour at different percentages did not have a significant effect (p > 0.05) on the yellowness (b*) of chicken meat dimsum. The yellowness (b*) values ranged from 16.37 to 17.62. The highest average yellowness (b*) value, 17.62, was found in T3 (3%), while the lowest, 16.37, was found in the control treatment (T0). The higher the addition of moringa leaf flour, the higher the yellowness (b*) value. This is because moringa leaf flour contains chlorophyll, a green pigment, so the yellowness (b*) value shifts toward yellowish. This study also shows that the resulting yellowness (b*) is brighter because steaming was carried out at a low temperature, namely 70°C. This is supported by Nilasari et al. (2017): the higher the temperature used during cooking, the lower the brightness value and the darker the product becomes.
CONCLUSION
Based on the results of the research that has been carried out, it can be concluded that the addition of 3% moringa leaf flour to chicken meat dimsum gives the best results in terms of protein content, water content, ash content, fat content, texture, WHC, organoleptic scores for color, aroma, taste and texture, and color lightness (L*), redness (a*) and yellowness (b*). This study suggests that moringa flour can be an effective dietary addition for improving the quality of chicken meat dimsum, the proposed mechanism of action being that moringa flour influences the texture and acceptability of the product.
B2 = Weight of cup + sample before oven; B3 = Weight of cup + sample after oven; Wa = Weight of filter paper and cotton wool before extraction; Wb = Weight of dry sample, filter paper and cotton wool before extraction; Wc = Weight of sample, filter paper and cotton wool after extraction and drying
Table 1.
Average protein content (%) in chicken meat with the addition of moringa leaf flour (Moringa oleifera). Explanation: a, b Differences in superscripts in columns indicate significant differences (p < 0.05).
Table 2.
Average water content (%) in chicken meat with the addition of moringa leaf flour (Moringa oleifera)
Table 3.
Average ash content (%) in chicken meat with the addition of moringa leaf flour (Moringa oleifera)
Table 10.
Average texture organoleptic in chicken meat with the addition of moringa leaf flour (Moringa oleifera) | 2024-08-29T18:09:14.055Z | 2024-07-30T00:00:00.000 | {
"year": 2024,
"sha1": "02f5056b64e49d89695950f3a110ff66ddd6212e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21776/ub.jitek.2024.019.02.2",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e87403c85da014b89bf01ce3ba971b3397a455cd",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
448534 | pes2o/s2orc | v3-fos-license | Subdivision by bisectors is dense in the space of all triangles
Starting with any nondegenerate triangle, we can use a well defined interior point of the triangle to subdivide it into six smaller triangles. We can repeat this process with each new triangle, and continue doing so over and over. We show that, starting with an arbitrary triangle and subdividing through the incenter, the resulting set of triangles contains triangles arbitrarily close (up to similarity) to any given triangle. We also show that after repeated subdivision for many generations, the smallest angle of a "typical" triangle does not go to zero.
Introduction
Given a triangle and an interior point of the triangle, we can divide the triangle into six smaller triangles, also called daughters, by drawing line segments (or cevians) from each vertex through the interior point to the opposite side (see Figure 1). The process can then be repeated with each new triangle and its own corresponding interior point, and so on over and over. When the interior point is the centroid, this corresponds to barycentric subdivision.

Figure 1: How to subdivide the triangle given an interior point P.
Stakhovskii asked whether repeated barycentric subdivision of a starting triangle is dense in the space of all triangles, i.e., the space of triangles up to similarity where two triangles are ε-close if the maximum difference between their corresponding angles is less than ε. This question was answered in the affirmative [2].
Theorem 1 (Bárány-Beardon-Carne). Successive subdivisions of a non-degenerate triangle using the centroid point contains triangles which approximate arbitrarily closely (up to similarity) any given triangle. p 0 A + p 1 B + p 2 C where A, B, C are the vertices with p 0 + p 1 + p 2 = 1 and p i > 0 (the centroid corresponds to p 0 = p 1 = p 2 = 1 3 ). Our main result is to show that a similar statement holds if we choose the interior point to be the incenter, which can be found by taking the intersection of the angle bisectors.
Theorem 2. Successive subdivisions of a non-degenerate triangle using the incenter point contain triangles which approximate arbitrarily closely (up to similarity) any given triangle.
We will see, however, that the behavior differs in one respect: unlike the centroid case, it is not true that almost all triangles become flat.
We will proceed as follows. In Section 2 we will give a quick sketch of Theorem 1, while in Section 3 we will give a proof of Theorem 2 and establish several properties about this subdivision. Finally, in Section 4 we will give some concluding remarks.
2 Subdividing using the centroid

In this section we give a quick sketch of the ideas behind Theorem 1. The method of Bárány et al. [2] was to first associate triangles with points in the hyperbolic half plane; namely, each triangle T is associated with (up to) six points z in the hyperbolic upper half plane H as shown in Figure 2. This is done by placing some edge of T with vertices at z = 0 and z = 1; the third vertex is then located at the complex coordinate z with positive imaginary part. Observe that reflecting z across the three circles Re(z) = 1/2, |z| = 1, and |z − 1| = 1 induces a natural action of S_3 on the hyperbolic half plane H in which all six orientations of T occur. Now, the centroid of a triangle with vertices at 0, 1 and z is the point (z + 1)/3, and so one of the corresponding daughters becomes 2(z + 1)/3 when normalized. The argument reduces to showing that the group of automorphisms of H generated by the map B(z) = 2(z + 1)/3 and the above S_3 action is dense in Aut(H); in particular, for any starting z (i.e., any initial triangle T) the set of all resulting points is dense in H (i.e., dense in the space of triangles).
Further, using results of Furstenberg [7], it follows that almost all random walks formed from products of B(z) and elements of S_3 tend to infinity (in the hyperbolic plane) as the length of the product increases. This then implies that almost all of the nth generation daughters have smallest angle tending to 0 as n increases. By different techniques, Robert Hough [8] was able to show that the largest angle approaches π and moreover was able to give asymptotic bounds for the proportion of triangles with angles near π.
3 Subdividing using the incenter

The important step in the proof of Theorem 1 was to find a way to associate triangles with points where the action of finding a daughter triangle was natural. The first step in proving Theorem 2 is to do the same. However, we will find it more convenient to associate each triangle with a point(s) in R^3 where the coordinates are the angles. The set of all possible triangles (including degenerate cases), denoted P, is the intersection of the hyperplane x + y + z = π with the first octant (see Figure 3). Note that P also corresponds to a two dimensional equilateral triangle (see [1,5,10,11] for previous applications involving P).
As noted in the introduction, the incenter is found by the intersection of the angle bisectors. So the angles of the new triangles created by subdivision are linear combinations of the angles of the original triangle (hence the reason it is more convenient to work with P). In particular, if we let t = (α, β, γ)* denote a triangle, then the six new triangles are found as M_i t for six fixed 3×3 matrices M_1, ..., M_6 realizing these linear combinations; this can be seen by examining Figure 4. Alternatively, this says that every point in P has a preimage in P under some M_i (Observation 1). For the case that t = (α, β, γ)* with α ≤ β ≤ γ the preimage can be written down explicitly; other possible arrangements for the ordering of α, β, γ can be handled with the remaining M_i.

Each M_i is a contraction on P with Lipschitz constant √3/2 (Observation 2); that is, d(M_i s, M_i t) ≤ (√3/2) d(s, t). To see this we can put P into R^2 by t = (α, β, γ)* → ((α + 2β)/√3, α)*; note that this map preserves distance. If we now let s = (α', β', γ')*, then a calculation shows that the distance between M_1 s and M_1 t is at most √3/2 times the distance between s and t. The result now follows for M_1, and similar calculations establish it for the remaining M_i.

We now prove Theorem 2. Let q be the initial triangle we apply subdivision to. We need to show that for any triangle t and ε > 0 there is some sequence of indices i_j so that the kth generation daughter M_{i_1} M_{i_2} · · · M_{i_k} q is within ε of t. Choose k sufficiently large so that π(√3/2)^k < ε. By Observation 1 we can successively find a kth generation preimage of t in P, which corresponds to multiplying by an appropriate (M_i)^{-1} at each step. Denote this preimage by s = (M_{i_k})^{-1} · · · (M_{i_1})^{-1} t (where the i_j are chosen according to how we construct the preimage). Repeatedly using Observation 2 we have

d(M_{i_1} · · · M_{i_k} q, t) = d(M_{i_1} · · · M_{i_k} q, M_{i_1} · · · M_{i_k} s) ≤ (√3/2)^k d(q, s) ≤ (√3/2)^k π < ε.

In the last step we used that points in P are at most distance π apart. This finishes the proof of Theorem 2.
The limiting distribution
In fact, more can be said about the iterated subdivision of triangles using the incenter. Namely, since the maps M_i are contracting with Lipschitz constant √3/2, it follows (see [6]) that there is a fixed limiting distribution on P to which the process converges; further, the convergence is exponential.
To get some sense of what this limiting distribution looks like we can simply start with any triangle (in our case we will use an equilateral triangle) and plot all of the nth generation daughters for some n in P. This is done for n = 5 in Figure 5a.
Examining Figure 5a we see that the daughters seem to fill in most of P (agreeing with Theorem 2). However, a patient count will reveal that there are far fewer than 6^5 triangles in Figure 5a. This is because some points have been mapped onto several times (a consequence of starting with such a symmetric triangle). So to get a better sense of the limiting distribution, instead of plotting the individual triangles in P it is better to look at a histogram. We divide P into a large number of small regions and then shade each region according to the number of triangles that fall into it; the darker a region is, the more triangles fall into that region. In Figure 5b we give the histogram for n = 12 generations starting with the equilateral triangle. Very little is known about the limiting distribution. Experimentally, it appears that the densest point on the limiting distribution (i.e., the darkest region in Figure 5b) corresponds to the triangle (π/5, 2π/5, 2π/5)*. This is likely because this triangle corresponds to an eigenvector of eigenvalue 1 of two of the M_i. In other words, under subdivision using the incenter this triangle has two daughters which are similar to it (see Figure 6). No other triangle has this property, and the triangle (2π/9, 3π/9, 4π/9)* is the only other triangle with one of its daughters similar to itself, but this triangle does not appear to play a significant role in the distribution. Figure 6: The triangle (π/5, 2π/5, 2π/5)* subdivided using the incenter (the shaded triangles are similar to the original).
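Since the explicit matrices M_i did not survive extraction, the sketch below uses a daughter-angle rule one can derive from elementary bisector geometry: the daughter at the vertex with angle a, lying toward the vertex with angle b and cut off by the cevian from the vertex with angle c, has angles (a/2, b + c/2, (a + c)/2). This rule is my reconstruction, not the paper's own listing; it does reproduce the self-similarity counts stated above.

```python
# Reconstruction (not the paper's code): enumerate the six incenter-subdivision
# daughters of a triangle in angle space and count those similar to the parent.
from itertools import permutations
from math import pi, isclose

def daughters(t):
    """Six daughter angle-triples of triangle t = (alpha, beta, gamma)."""
    # Each ordered labeling (a, b, c) picks a vertex and a cevian, so the six
    # permutations of the angle triple give the six daughters.
    return [(a / 2, b + c / 2, (a + c) / 2) for a, b, c in permutations(t)]

def similar(t, s):
    return all(isclose(x, y, abs_tol=1e-12) for x, y in zip(sorted(t), sorted(s)))

t = (pi / 5, 2 * pi / 5, 2 * pi / 5)
print(sum(similar(t, d) for d in daughters(t)))   # -> 2 self-similar daughters

t = (2 * pi / 9, 3 * pi / 9, 4 * pi / 9)
print(sum(similar(t, d) for d in daughters(t)))   # -> 1 self-similar daughter
```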
If we were to draw the histogram for n = 20 or n = 50 and compare it to Figure 5b, we would see almost no perceptible difference between them. This is because, as we noted above, the convergence to the limiting distribution is exponential. Put another way, if we look at what happens when we map a triangle under n applications of the M_i in P, then knowing the last few steps that were applied gives us a good handle on where we are in P. This is essentially the heart of the proof of Theorem 2.
For example, the region M_1 M_2 M_3 P corresponds to a triangle with vertices in P at (π/4, π/4, π/2)*, (π/8, π/4, 5π/8)* and (π/8, π/8, 3π/4)*. In particular, looking at all the daughters for n large, at least 1/6^3 of the daughters must lie in this subregion of P (i.e., 1/6^3 of the possible products of the M_i will have M_1 M_2 M_3 as the leading term). Since points inside this subregion of P must have minimum angle at least π/8, we have that at least 1/6^3 of the daughters in the nth generation must have minimum angle at least π/8.
Of course, by looking at larger products and looking over more prefixes we can say a lot more about what happens with the minimum angle; an example is shown in Figure 7a. We can now bound the limiting cumulative distribution function (CDF) for the smallest angle in the limiting distribution of triangles. This is done by considering all 6^n resulting images of P in the nth generation. Then, for any angle θ, a lower bound for the number of triangles with minimum angle θ (or less) is found by counting the number of images of P whose largest minimum angle is at most θ. Similarly, an upper bound for the number of triangles with minimum angle at most θ is found by counting the number of images of P which contain a triangle with minimum angle at most θ. Doing this for n = 11 gives Figure 8. (By comparison, the limiting CDF under subdivision using centroids is the constant function 1, showing that these two methods of subdividing are fundamentally different.)
Conclusion
We have seen that, as with the centroid, subdivision using the incenter produces triangles that are dense in the space of all triangles. However, unlike the centroid, the smallest angle in a typical triangle does not tend to 0. This is important since certain methods can fail when the subdivision creates a large number of triangles with minimal angles going to 0 as n gets large (see [3,13,14]). One interesting question is to understand the limiting distribution for repeated subdivision using the incenter, an approximation of which is shown in Figure 5b.
One can also consider what happens for subdivision using other interior points. For example, the Gergonne point is found by taking the inscribed circle in the triangle and connecting a vertex to the point of tangency on the opposite edge; these three lines intersect at the Gergonne point. When using the Gergonne point to subdivide it is known [4] that the triangles are not dense in the space of all triangles. In Figure 9a we have given a histogram of P for the tenth generation of subdividing using the Gergonne point (notice the large white spaces where there are no triangles).
An interesting point for which little is known about what happens after repeated subdivision is the Lemoine point, which is found by taking the lines from each vertex to the midpoint of the opposite edge (the medians) and then reflecting them across the angle bisectors; these three lines intersect at the Lemoine point. In Figure 9b we have given a histogram of P for the eleventh generation of subdividing using the Lemoine point. It is currently unknown whether this method of subdivision is dense in the space of all triangles and what the limiting behavior is (there is some experimental evidence that the triangles become flat, but the convergence seems to be relatively slow).
More information about the Gergonne and Lemoine points, as well as a large number of other interesting points available to investigate, can be found online (see [9]). More information about what happens under repeated subdivision using a central point can be found in [5]. | 2010-07-14T11:44:27.000Z | 2010-07-14T00:00:00.000 | {
"year": 2010,
"sha1": "d461ed6ef236861cff2422317f94900a1d183966",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d461ed6ef236861cff2422317f94900a1d183966",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
45263006 | pes2o/s2orc | v3-fos-license | The Art of Protein Purification
Describing, in words, the details of protein purification to a relative novice in the field is not unlike explaining on paper the steps required to turn a set of colored oils into a beautiful pastoral scene on a sheet of stretched canvas. Playing the oboe in a sophisticated metropolitan orchestra or performing a solo aria in a Gilbert & Sullivan operetta are accepted artistic endeavors that command great mastery of technique. Each of these art forms requires years of experience and endless experimentation and refinement of technique. Protein purification is no different. It is an art form. Like all other art forms, perfecting the art of protein purification requires a long apprenticeship. But, like all other art forms, protein purification is aesthetically rewarding to the practitioner. Every day brings new challenges, new insights, new hurdles, and new successes. Art is a process, not a destination. Protein purification fits the same definition.
Introduction
Describing, in words, the details of protein purification to a relative novice in the field is not unlike explaining on paper the steps required to turn a set of colored oils into a beautiful pastoral scene on a sheet of stretched canvas. Playing the oboe in a sophisticated metropolitan orchestra or performing a solo aria in a Gilbert & Sullivan operetta are accepted artistic endeavors that command great mastery of technique. Each of these art forms requires years of experience and endless experimentation and refinement of technique. Protein purification is no different. It is an art form. Like all other art forms, perfecting the art of protein purification requires a long apprenticeship. But, like all other art forms, protein purification is aesthetically rewarding to the practitioner. Every day brings new challenges, new insights, new hurdles, and new successes. Art is a process, not a destination. Protein purification fits the same definition.
Perfecting the skills of protein purification can take many years of hands-on experience as well as periodic upgrading of those skills. Perhaps the most important part of protein purification is the set of pre-column steps that precede column chromatography. Pre-column steps are not covered as much in the protein purification literature as column chromatography, HPLC, and electrophoresis. So, I have chosen to focus much of my attention on the earlier stages of protein purification. More than column chromatography, pre-column steps are highly diverse and highly creative. Here the artistic aspects of protein purification are most apparent. But, still, there are basic guiding principles that can be communicated fairly effectively in written form. The purpose of this chapter is to outline many of these principles and techniques such that a relatively inexperienced biochemist can get started. Getting started is never easy. Inertia always seems to get in the way. When I think of the problem of overcoming inertia, I am reminded of the words of my first graduate school mentor. He chose to explain overcoming inertia with a metaphor based upon physical chemistry: "The function of education is to help others overcome their own energy barriers." In part, overcoming energy barriers is what I hope to accomplish in this chapter.
Protein purification in the analytical field
The words in my introductory paragraphs are more relevant to preparative techniques of protein purification than they are to analytical methods. Most of my research career has been focused upon preparative methods, the approach I liken to other art forms. Analytical methods of protein purification are less likely to encompass the artistic range I ascribe to preparative methods.
The focus of analytical methods is usually to make a large number of precise measurements in a short period of time. One version of analytical methodology used extensively in the biopharmaceutical industry is called high throughput screening (HTS). Most commonly, HTS is used in drug screening, but HTS and other high throughput methods are applicable to analytical protein purification as well. Because HTS is, by its very definition, a very rapid process, extensive protein purification is not possible by this method. Complex, multistep processes are almost always precluded. To meet time demands, just one simple and rapid purification step may be all that is permitted. Often this means that fast "sample cleanup" is the major goal of analytical processes. This "cleanup" may require nothing more than the removal of a particular interfering substance, an endogenous enzyme inhibitor, for example. External effector molecules may give falsely high assay values or, more commonly, may inhibit enzyme activity, lowering an assay value significantly. If, for example, one has a large number of relatively impure samples for which accurate values of glucose oxidase activity are needed, it may be necessary to separate all other oxidoreductases from glucose oxidase. Alternatively, it may be sufficient to remove all endogenous sources of glucose. These types of separations are done routinely in clinical, medical, and pharmaceutical diagnostics laboratories. Sometimes, microliter samples are robotically introduced into small HPLC (high performance liquid chromatography) columns, followed by on-line analysis of the protein of interest. On other occasions, machine-processed samples are introduced robotically into multi-well microtitre plates. Then, built-in robotic components introduce enzyme substrates and cofactors as the plates are stacked up by the thousands to be measured after a precise incubation period.
In such analytical operations, the art is in the design of robust sample handling methods including electronic, mechanical, and robotic components. Optimization of protein separation may be an integral part of system design, but once the entire system is on-line, only routine validation tests along with periodic trouble-shooting of the overall system are required. Once the creative aspects of system design have been completed, everything devolves into system maintenance.
General Strategy
The greatest differences between analytical-scale and preparative-scale protein purification processes are that preparative methods (1) usually involve much larger volumes of starting material, (2) generally take much longer to carry out (days, weeks, or months), (3) usually require a variety of different purification methods or techniques (sometimes repeated), and (4) almost always have, as the primary goal, achieving very high purity (rather than high throughput). Sometimes, the amount of desired protein is so small, and the amount of macromolecular contaminant is so high, that one needs to employ nearly every "trick of the trade" to achieve high purity. Imagine wanting to isolate milligrams of a precious protein from thousands of liters of crude jellyfish extract. Our research group has done this for almost three decades (Roth, 1985; Johnson and Shimomura, 1972; Blinks et al., 1976; and others). Sometimes, purifying a protein to homogeneity, from such large volumes of highly viscous starting material, may involve separating one milligram of the protein-of-interest (POI) from 100 mg of initial total protein. This is called a 100-fold purification. In other cases the required purification factor may be on the order of 1000-fold or 10,000-fold. My most difficult purification project was to isolate microgram amounts of green-fluorescent protein (GFP) from the homogenates of whole sea pens. In this instance, not only was the GFP present at about 1 part in 100,000 of total protein, but the proteoglycan-derived viscosity in the crude extract was so great that a magnetic stir bar failed to rotate (Ward and Cormier, 1978). So the issues facing a scientist working on a difficult protein purification project are many. Among these issues are those shown in Table 1.
1. Choosing or developing a sensitive, reproducible, and selective assay for the protein-of-interest (POI).
2. Establishing conditions under which the POI is stable and biologically active.
3. Finding conditions under which the POI can be stored safely between steps.
4. Choosing the best biological starting material (natural source or recombinant).
5. Finding the substrate(s), inhibitors, activators, allosteric effectors, etc., if the protein-of-interest is an enzyme.
Table 1. Early steps in designing protein purification strategies

Some very useful information can be acquired, unambiguously, if a small sample of pure protein can be obtained. A former professor of mine said to our group of graduate students, "Don't waste clean thinking on a dirty enzyme." It is so easy to make major errors if you try to over-analyze a crude sample. Acquiring a pure sample of the protein-of-interest may be difficult (if the specific purification methods have not been optimized). But, obtaining a small amount of pure protein can be very useful for future optimization of purification. Table 2 lists a few of the characteristics of a pure POI that can be used to design a more effective purification strategy. Unless the protein-of-interest is pure, data on its characteristics can be very misleading (Karkhanis and Cormier, 1971).
Where to Begin
It is difficult to suggest a logical order of steps leading to a successful protein purification project. Proteins are very different from each other (and so are the mixtures of other components in which the protein-of-interest is found). So there is no common approach. Perhaps the best way to introduce protein purification is by example. I will do this by showing some of the intimate details of how one protein, Aequorea victoria GFP, has been purified in our academic lab at Rutgers University (Roth, 1985; Ward and Swiatek, 2009). In parallel, I will discuss the similarities and differences that accompany purification of another protein, soybean hull peroxidase. The latter has been purified in our Rutgers spinoff, start-up company, Brighter Ideas, Inc. (Holman, C., manuscript in preparation; Ward, 2012). I will not discuss, in detail, purification methods employed with recombinant proteins. These methods are much simpler and much more straightforward (requiring considerably less "art" once the molecular biology has been completed).
The Assay
Before a protein purification process can begin, there must be a way to identify the protein-of-interest (POI). The means for identification is called an assay. For enzymes, the assay is usually a measure of enzyme activity. For proteins with distinctive chromophores, spectroscopic measurements of the chromophore help to distinguish the POI from other proteins. Sometimes all that one knows about the protein-of-interest is its molecular weight.
In such cases the POI can be followed by SDS gel electrophoresis. Sometimes a protein is assayed by its immune response. Sometimes immune response is all that the scientist knows in the beginning. The protein, calmodulin, was discovered in brain tissue solely on the basis of its ability to bind radioactive calcium (Cheung, 1971). Binding calcium was all that was known about calmodulin in the earliest stages of its purification. But, the more one knows about alternate ways to detect the protein of interest, the easier the chore is likely to be.
GFP is not an enzyme, so there is no enzymatic assay. But, it has a spectroscopically measurable, covalently-bound chromophore (Fig. 1) that absorbs light maximally at 397 nm (Ward, 2005). GFP fluoresces brilliantly (emission peak at 509 nm) when excited in the UV.
A hand-held, 365 nm, mercury vapor lamp ("black light") becomes a convenient, portable detector. The molar extinction coefficient at 397 nm is 27,300 M^-1 cm^-1, but that value varies 5-10% depending upon the degree of dimerization of the protein (Ward, 2005; Ward et al., 1982). The fluorescence quantum yield is 80%. With all proteins, measurements by absorbance or fluorescence require samples with VERY low turbidity (light scatter). Even partially clarified crude extracts have far too much scatter to measure any protein accurately by UV/Vis spectrophotometry (Fig. 2). Sometimes it takes a few purification steps before the level of GFP, for example, can be measured with any reliability.
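For quantitation, the numbers above plug directly into the Beer-Lambert law. The short Python sketch below is a minimal illustration using the extinction coefficient quoted above; the 27 kDa molecular weight assumed for native GFP is my round number for illustration, not a value from this chapter, and the absorbance reading is assumed to be free of turbidity for the reasons just given:

# Minimal sketch: estimating GFP concentration from A397 via Beer-Lambert.
EPSILON_397 = 27_300   # M^-1 cm^-1, from the text (varies 5-10% with dimerization)
MW_GFP = 27_000        # g/mol, assumed approximate value for native Aequorea GFP
PATH_CM = 1.0          # standard cuvette path length

def gfp_concentration(a397: float) -> tuple[float, float]:
    """Return (molar concentration, mg/ml) from a turbidity-free A397 reading."""
    molar = a397 / (EPSILON_397 * PATH_CM)   # Beer-Lambert: A = epsilon * c * l
    mg_per_ml = molar * MW_GFP               # g/L is the same as mg/ml
    return molar, mg_per_ml

molar, mg_ml = gfp_concentration(0.55)
print(f"{molar * 1e6:.1f} uM, {mg_ml:.3f} mg/ml")   # ~20.1 uM, ~0.544 mg/ml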
Soybean peroxidase (SBP), like GFP, has a chromophore - a heme group that absorbs maximally at 403 nm. Absorbance at this wavelength can be used to quantitate the enzyme. But many other substances in crude soybean hull extracts absorb strongly at the same wavelength. So, the enzyme needs to be highly purified before this measurement is useful. Another assay is needed. Peroxidases, in general, bind to hydrogen peroxide, creating an active oxygen species that can then attack another molecule. In our case, the other molecule is ABTS (2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid)), available from the Sigma Chemical Co. ABTS, dissolved in a pH 5 buffer with added hydrogen peroxide, has only a very slight visible absorbance. But in the presence of peroxidase, the active oxygen attacks the ABTS, producing a teal-colored solution. As with many other colorimetric assays, attention must be paid to the stability of the assay solution and the kinetics of the reaction.
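The same spectrophotometric logic converts a raw ABTS rate into enzyme units (1 unit = 1 micromole of substrate oxidized per minute). Here is a minimal sketch; the extinction coefficient of roughly 36,000 M^-1 cm^-1 for oxidized ABTS at 420 nm is a commonly cited literature value, not a number given in this chapter, so verify it for your own assay conditions:

# Minimal sketch: converting a delta-A-per-minute ABTS rate into enzyme units.
EPS_ABTS = 36_000      # M^-1 cm^-1 at 420 nm (assumed literature value; verify)
PATH_CM = 1.0

def units_per_ml(dA_per_min: float, assay_vol_ml: float, sample_vol_ml: float) -> float:
    """Peroxidase activity, in U per ml of enzyme sample added to the cuvette."""
    rate_molar = dA_per_min / (EPS_ABTS * PATH_CM)           # mol L^-1 min^-1
    umol_per_min = rate_molar * 1e6 * (assay_vol_ml / 1000)  # umol/min in the cuvette
    return umol_per_min / sample_vol_ml

# Example: dA/min of 0.30 in a 3.0 ml assay containing 0.05 ml of crude extract.
print(f"{units_per_ml(0.30, 3.0, 0.05):.2f} U/ml")   # ~0.50 U/ml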
Stability
Probably the second most important characteristic for an effective protein purification scheme is the protein's stability, especially stability to heat and pH. But, just determining conditions of high stability at the outset of purification is seldom sufficient. Some proteins are more stable in the crude form and others more stable when pure. So, at each step along the way, stability needs to be checked.
GFP and SBP are both thermally stable, up to 65 °C for GFP (Bokman and Ward, 1981; Ward and Bokman, 1982) and nearly 90 °C for SBP (Holman, C., manuscript in preparation). GFP is stable to proteases and aqueous alcohol solutions (Roth, 1985). The C-terminal 8 amino acid tail of native jellyfish GFP is protease labile, so we usually keep the crude extracts cold. We use sodium azide to inhibit microbial growth and phenylmethylsulfonyl fluoride (PMSF) to inhibit the activity of serine proteases (Ward, 2005). Circular dichroism measurements confirm that the native secondary structure of GFP (predominantly beta-pleated sheet and just a small amount of alpha helix) is directly proportional to the protein's fluorescence (Ward et al., 1982). GFP retains its fluorescence and its secondary and tertiary structure at elevated pH (up to 12.2) but loses fluorescence at pH 12.3 (and simultaneously loses its CD signature) (Ward, 2005). Under acidic conditions (pH 6 and below) GFP fluorescence also fades, as does the CD signal. Under the right conditions, GFP will recover most of its fluorescence after denaturation in acid, base, and guanidine hydrochloride (Bokman and Ward, 1981). The only known detergent to destroy GFP fluorescence permanently is sodium dodecyl sulfate (SDS).
Soybean peroxidase is stable over a wider range of pH and a wider range of temperature than GFP. But, its activity is inhibited by sodium azide and other agents that react with heme proteins. Instead of sodium azide, we use 10% ethanol as a preservative for SBP. However, not all enzymes are stable in the presence of alcohol.
Storage Conditions
It is usually necessary, in multi-step purification protocols, to store the POI between steps. Generally, this is accomplished by freezing the protein solution. Freezing and cold storage work for both GFP and SBP, but not for all proteins. Some multisubunit proteins are cold labile. In such cases, the subunits are held together by hydrophobic interactions. Such hydrophobic bonding can be entropy driven, as structured water (surrounding the monomers in an ordered way) becomes released (and more disordered) when subunits bind to each other. In the equation ΔG = ΔH - TΔS, the favorable -TΔS term grows in magnitude with increasing temperature, so entropy-driven hydrophobic association weakens as the temperature drops. GFP, SBP, and most monomeric proteins are not cold sensitive. In addition, based upon its long-term retention of fluorescence, GFP appears to be stable for months at room temperature (Roth, 1985). But, isoelectric focusing of GFP may show extensive microheterogeneity after prolonged room temperature storage. The highly protease-sensitive eight amino acid C-terminal segment of native jellyfish GFP (that extends from a protease-resistant beta barrel) is easily clipped by proteases - often in different places (Roth, 1985). When the recombinant protein is C-terminally tagged with hexa-histidine (for eventual immobilized metal affinity chromatography, IMAC), both the naturally occurring octapeptide and the added hexapeptide are susceptible to cleavage at many sites by a variety of proteases.
Starting Material
In some cases, one has a choice of starting material. Luciferase, for example, can be isolated from a variety of fireflies and beetles. But, some firefly luciferases are very hard to purify while others are much easier. The sea pansy, Renilla reniformis (Wampler et al., 1971; Matthews et al., 1977; Prendergast and Mann, 1978; Ward and Cormier, 1979), and the jellyfish, Aequorea victoria (Morise et al., 1974; Roth, 1985; Ward, 2005), were chosen as the starting materials for isolating and purifying GFP. In part, the selection of organisms was based upon their geographical locations, the availability of nearby laboratory facilities, and the means for collecting the animals. The shallow waters off the coast of Georgia proved to be a good location for collecting sea pansies, and there was a local shrimper only too willing to do the collecting before the shrimp season began. The University of Georgia had a primitive laboratory on Sapelo Island, but early stage processing did not require sophisticated facilities. Aequorea jellyfish were abundant for decades at the University of Washington's Friday Harbor Labs (FHL), and the lab facilities were excellent. Excellent facilities were essential, as extensive floating docks were needed to provide close access to the water (so that the jellyfish could be scooped up with pool skimming nets). Processing involved holding the jellyfish (sometimes 10,000 per collection day) in large, circulating sea water aquaria. The FHL facilities include many circulating sea water aquaria, a walk-in coldroom, and a Sorvall centrifuge for further sample processing. The FHL staff was particularly supportive and encouraging.
While peroxidases can be isolated from many sources including horseradish, potatoes, sweet potatoes, and other plants, we chose soybean hulls as our starting material. The choice was based primarily upon easy access and low price. Perdue Farms processes huge quantities of soybeans for chicken feed. The hulls, a byproduct of their processing of the more valuable soybean oil and soybean meal, are usually shipped to multi-grain bread manufacturers. To reduce storage and shipping volume, the hulls are crushed, on the Perdue site, into finer particles ranging down to the micrometer range. The bread producers apparently pay very little for an otherwise "throw-away" byproduct of the soybean. We, for example, ordered 2000 lbs of hulls for $400, a price that included seven 55-gal barrels plus shipping. While access, ease of acquisition, and facilities were more than adequate for early, on-site processing of sea pansies, jellyfish, and soybean hulls, later laboratory processing was VERY demanding. This leads us into the next section, "Extraction".
Extraction
In the case of the sea pansy, extraction of GFP was accomplished by first anesthetizing the animals in a bath of the calcium-chelating agent EGTA plus magnesium sulfate. This was to preserve a luciferin binding protein, easily triggered to luminesce with calcium ions.
Grinding the sea pansies with protein-saturating levels of ammonium sulfate came next, followed by acetone precipitation and rapid drying of the organic solvent. The powder that resulted, largely ammonium sulfate, was stored in chest freezers until processing time (Matthews et al., 1977; Ward and Cormier, 1979).
GFP isolation from the jellyfish was entirely different. A single jellyfish has a volume of about 35 ml. On days when we collected 10,000 animals, the volume we needed to process reached 350 liters. However, all of the luminescent tissue is found in a very narrow strip along the margin of the "bell" (Fig. 3). Special dissecting tables were constructed, allowing a small team of workers to dissect up to 10,000 animals in one collecting day. Dissection reduced the volume to about 15 liters. Next, the tissue was shaken vigorously, 500 ml at a time, in 3 liters of sea water (in a 4-liter flask). Seventy-five shakes released most of the photocytes into suspension. After crude filtration, the photocyte suspension was trapped in a large cake of celite (diatomaceous earth) held in a large Buchner funnel. After a wash with 75% saturated ammonium sulfate solution (containing EDTA to chelate calcium), the photocytes were lysed with dilute EDTA solution. A gentle vacuum applied to the suction flask released an amazingly bright stream of fluorescence that was captured in the 4-liter vacuum flask. The extract was precipitated with solid ammonium sulfate - the precipitated protein being trapped on a smaller cake of celite or collected by centrifugation. These procedures were developed by Dr. John Blinks (Blinks et al., 1976). Soybean peroxidase extraction, by contrast, just requires that the pulverized hulls be stirred in five volumes of distilled water for one hour.
Viscosity Reduction and Particle Removal
As one might imagine, extracts of whole coelenterates or coelenterate tissues (jellyfish or sea pansies) present a huge problem with viscosity. Aside from water, the animals are almost entirely composed of connective tissue and very high molecular weight proteoglycans. For 17 seasons, we solved the viscosity problem by passing crude extracts of jellyfish photocytes (and surrounding tissues) through an 8-liter gel filtration column of P-100 BioGel (our next step after ammonium sulfate precipitation). The void volume fraction (calibrated to have a molecular weight of 40 million Daltons or greater) contained most of the viscosity and none of the GFP. But, while this 3-day procedure worked quite well as a viscosity reduction method, each gel filtration run could handle, one at a time, only 5% of a season's collection. Larger amounts of extract invariably fouled the column. If one includes the frequent column washes required to maintain reasonable flow, it takes 5-6 months to pass a season's worth of jellyfish extract through the column. It was not without trying many alternative methods that we settled on this highly unusual first chromatography step (Fig. 4). Gel filtration is generally reserved as a late-stage polishing step. Much later in our work, we discovered that simple passage through a column of Celite easily solved the viscosity problem (W. Ward, unpublished). Diatomaceous earth is so inexpensive that the column contents could be discarded after the desired protein easily passed through.

The above example illustrates one of the great dilemmas in selecting steps for a protein purification protocol. When do you decide that you have spent enough time searching for a better way to do things? When do you give up trying to search for a better procedure by settling on a brute force method? The expression, "Are you going to fish or cut bait?" seems appropriate here. After trying everything we could imagine and after investing money in a variety of expensive filter devices (G. Swiatek and M. Browning, personal communication), we suspended this project for several years. Then we happened upon an ion exchange method normally applied to water purification. We found a company called ResinTech that provides, at very low cost, a high capacity polystyrene-based anion exchanger. The beads are large (1 mm) and dense, so, after stirring, they quickly settle to the bottom of a large container. Binding kinetics, however, are slow, because of the large size of the beads and the relatively small pore size (access to the interior is slow and limited to proteins of MW 50 kDa or lower). So, notwithstanding the slow kinetics of binding and elution, these beads are useful for batch ion exchange applications - in our case, to trap the highly anionic soybean peroxidase (C. Holman, manuscript in progress; Ward, 2012). A provisional patent for our unique SBP purification method has been filed with Rutgers University. The fine particles of soybean hull extract (much too fine to settle on their own) are, however, too large to enter the ResinTech pores. So the bound SBP can be separated from these fine particles. But, much to our surprise, we found that the fine particles, as soon as stirring ceases, immediately aggregate into a dense gelatinous mass that settles above the beads. By aspiration, this gelatinous mass is easily separated from the beads that now contain nearly all of the SBP.
Volume Reduction
In a typical academic or start-up corporate laboratory, the starting sample of crude protein might range in volume from a few milliliters to tens or hundreds of liters. In commercial operations, liquid volumes may reach thousands or hundreds of thousands of liters. Here, I focus on moderately large volumes that require much more effort than smaller volumes. The volume of starting sample dictates, in a sense, the methods that are appropriate for early stages of purification. Large aqueous volumes require an early stage trapping step - a step that eliminates large quantities of water while binding (or otherwise retaining) the protein-of-interest. The focus is not on separating a variety of macromolecules from each other. The focus is to reduce aqueous volume to a more reasonable level. Higher resolution methods can come later. Generic trapping can be accomplished by tangential flow ultrafiltration (Scopes, 1994), so long as the feed stock is not so viscous as to plug the membrane pores with large particles, colloidal materials, or slimy DNA or polysaccharides. Such membrane fouling will slow down (or even halt altogether) the trans-membrane penetration of water, salts, and small molecules.
Alternative methods include ion exchange or hydrophobic interaction. If ion exchange is chosen, the adsorbent should have a relatively large particle size (several hundred micrometers to 1 millimeter in diameter). Large ion exchange beads or fibers are preferable when trapping proteins from large volumes of dirty samples. It is advisable to save, for later, the higher resolution ion exchange materials (such as positively charged DEAE Sepharose Fast Flow or negatively charged CM Sepharose Fast Flow; GE Healthcare). It is only after viscosity and the presence of particulates have been greatly reduced that high resolution ion exchangers can be expected to deliver superior flow with relatively little fouling. Crude starting materials are best processed in batch mode rather than by axial flow chromatography. Radial flow columns offer much greater surface area, but even these columns can clog if the feedstock has high viscosity (from DNA, polysaccharides, or lipid micelles). Turbid samples containing small particles or colloidal suspensions can be as troublesome as samples with high viscosity. Frequent stirring in batch mode overcomes this problem by allowing the POI to bind to the matrix without the problems of column fouling. However, highly acidic DNA and sulfonated or carboxylated polysaccharides will also bind to anion exchange materials, such as DEAE. While batch adsorption to DEAE can work well, the viscosity problems may return if the POI and the highly acidic biopolymers come off the anion exchanger together. But, DNA and acidic polysaccharides generally bind to DEAE, or other anion exchangers, much more tightly than the POI. When this is the case, the desired protein will elute from the anion exchanger at much lower salt concentrations than the highly acidic biopolymers. DNA and anionic polysaccharides will remain bound to the anion exchange material, while the protein-of-interest elutes with greatly reduced viscosity.
Hydrophobic protein-binding materials, like Phenyl Sepharose (GE Healthcare), are excellent trapping agents for most proteins. This method is called hydrophobic interaction chromatography (HIC). Just a few exposed hydrophobic amino acid R-groups are needed for binding to the phenyl group. The amino acids having R-groups that are strongly attracted to an HIC matrix include phenylalanine, tyrosine, tryptophan, methionine, leucine, isoleucine, valine, proline, and lysine. It may be surprising that lysine is included as a very hydrophobic amino acid, because lysine carries a positive charge at all pH values below 10, and hydrophobic interaction is not favored when charged residues are present. There is an exception when oppositely charged groups, within hydrophobic patches, are close enough to each other to bond electrostatically. Under these conditions, the electrostatic bond is exceedingly strong. Independent of electrostatic bonds, in which lysine could participate, the R-group of lysine is frequently exposed to the exterior (lysine has the greatest exposure of all amino acids, as its long string of methylene groups extends far into the aqueous medium). Hydrophobic interaction occurs not with the epsilon amine of lysine at the end of this string, but with the four methylene groups, themselves, to which the amine is attached.

HIC and IEX media are available as very soft beads made of cross-linked dextran polymers or polyacrylamide, or they come in a more rigid form that is agarose-based. An agarose-based HIC medium, such as Phenyl Sepharose, is more pressure-tolerant and more robust than the older style, softer beads. Additionally, the agarose pores are larger, allowing very large proteins to enter the internal spaces. Although some nucleic acids and some anionic polysaccharides can enter agarose beads, this causes no trouble with HIC media. With ion exchange trapping chemistry, DNA and other acidic biopolymers may compete with, or displace, an anionic protein-of-interest. But, highly charged nucleic acids, as well as acidic and neutral polysaccharides, are not sufficiently hydrophobic to bind tightly to Phenyl Sepharose and related HIC materials. So, they easily separate from a protein-of-interest having a few exposed, hydrophobic amino acid side chains. On the downside, HIC as a trapping step can become very expensive if the volume of crude extract is large. HIC gels are expensive, and there is an additional economic downside when large volumes must be processed: highly purified ammonium sulfate is fairly expensive, and the cost of disposal may be even higher. Many kilograms of ammonium sulfate may be required to trap proteins by HIC, especially if the protein of interest is fairly hydrophilic (highly water soluble), since very hydrophilic proteins require a great deal of ammonium sulfate to induce binding to HIC resins.
For a protein that is very stable at its isoelectric point (pI), isoelectric precipitation can provide an excellent, inexpensive trapping step (Scopes, 1994). Almost always this method requires a very low salt concentration, as electrostatically-driven protein-protein interaction is the mechanism that promotes precipitation. The flocculated protein may settle to the bottom of the container. If not, it may be pelleted in a centrifuge or collected by simple filtration on beds of Celite. Resolubilization is accomplished by raising or lowering the pH or by adding salt. For proteins that remain soluble at their pI values, addition of a water-soluble organic solvent (generally a small aliphatic alcohol) may be used to promote isoelectric precipitation. Addition of a somewhat non-polar solvent lowers the dielectric constant of water, promoting charge-charge interactions among protein molecules. If this does not work, lowering the pH below the protein pI with simple addition of acetic acid, phosphoric acid, or HCl may cause precipitation. Occasionally, one finds that diatomaceous earth, alone, will bind certain proteins quite selectively. Because Celite is so inexpensive (available in 50 lb bags at pool supply stores), it makes sense to try Celite as a trapping agent.
With native Aequorea GFP, we never encountered a huge volume reduction problem because the dissection step and the trapping of whole photocytes on Celite greatly reduced the volume. But, soybean peroxidase is a different matter.
Chromatographic Methods
Tables 1 and 2 show the categories of basic information generally needed to facilitate early stages of protein purification. The properties of a POI that should be known are listed there in no particular order of importance. In fact, almost never is the order of information discovery the same for any two proteins. In the course of developing a start-to-finish protocol for any given protein, unexpected information is uncovered along the way.
Long after developing a working protocol, one may discover, for example, that the POI is glycosylated. Following this discovery, one might want to experiment with affinity chromatography using an immobilized lectin or may wish to try a boronate column that binds vicinal hydroxyl groups on sugar residues (Scopes, 1994). The message is that no purification protocol is ever final. There are always alternate ways that could improve or streamline an earlier protocol. This is one of many places that the artistry of protein purification comes into play.
Ion Exchange Chromatography (IEX)
Once viscosity has been largely eliminated and once the crude protein sample is particle free, it may be time to use ion exchange chromatography (IEX) - the most frequently employed chromatographic method for proteins. Early, small-scale testing with a relatively salt-free sample is advised. There are simple, syringe-operated ion exchange columns available from Pall Corporation or GE Healthcare - both anion exchange columns and cation exchange columns. These columns can be used to determine (within one-half of a pH unit) the isoelectric point of the protein. This is accomplished by equilibrating the two columns with low ionic strength buffers of varying pH values.

The most common cation exchange functional group is carboxymethyl, abbreviated CM. CM is essentially immobilized acetic acid and, like acetic acid, CM takes on a negative charge at pH values of 4 and above. CM is designated a weak cation exchanger as it has little binding capacity below pH 4. Sulfonated or phosphorylated exchangers are called strong cation exchangers because they can be used at pH 2. For the POI to bind to CM, the protein must be positively charged (below its isoelectric point). CM is not satisfactory for GFP purification, as GFP is unstable below its pI of 5.3. When GFP takes on a positive charge (below pH 5.3) the protein slowly denatures, losing its fluorescence. So, it is not possible to use CM with GFP in any slow process like column chromatography. But, if GFP exposure time is kept at a minimum, the pI of GFP can be estimated by its binding to CM at pH values below 5.3.

Diethylaminoethyl (DEAE) is the most commonly used anion exchanger. The DEAE functional group is a tertiary amine, protonated (and positively charged) at pH values below 10. DEAE is designated a weak anion exchanger as it cannot be used effectively above pH 10. But, a bead-bound quaternary amine extends the range of anion exchange to pH 12. So any medium designated Q (or QAE, for quaternary aminoethyl) is called a strong anion exchanger. All four of these types of media (weak and strong cation exchangers and weak and strong anion exchangers) are available in small, syringe-operated columns. If one of these DEAE columns is equilibrated at a variety of pH values (10, 9, 8, 7, 6, 5, and 4), GFP will bind from pH 10 down to pH 5.5, but not at pH 5, indicating that the pI of GFP lies between 5.0 and 5.5.
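The logic of that syringe-column pI estimate is simple enough to mechanize. The Python sketch below brackets the pI from a table of bind/no-bind observations on an anion exchanger; the helper function and data layout are my own illustration, with the GFP series taken from the text:

# Minimal sketch: bracketing a pI from anion exchange bind / no-bind tests.
# On an anion exchanger, a protein binds only above its pI, so the pI lies
# between the highest non-binding pH and the lowest binding pH tested.

def bracket_pI(binding: dict[float, bool]) -> tuple[float, float]:
    """binding maps buffer pH -> True if the protein bound the anion exchanger."""
    bound = [ph for ph, b in binding.items() if b]
    unbound = [ph for ph, b in binding.items() if not b]
    return max(unbound), min(bound)   # assumes at least one entry of each kind

gfp = {10: True, 9: True, 8: True, 7: True, 6: True, 5.5: True, 5: False}
lo, hi = bracket_pI(gfp)
print(f"pI between {lo} and {hi}")    # pI between 5 and 5.5 (text: pI = 5.3)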
Once the pI has been determined and the anion exchanger has been chosen, a preparative column can now be poured. Most ion exchangers can bind 30 to 50 mg of protein per 1 ml of swollen gel. One can estimate the total amount of protein in the sample by absorbance at 280 nm, ascribing one absorbance unit to one mg/ml of protein. But, high levels of DNA and moderate turbidity (Fig. 2) will artificially elevate this absorbance number (sometimes greatly). It is good practice to test, experimentally, the capacity of an ion exchange material in a small trial. Using 1 ml of swollen gel, add crude extract in successive 100 microliter volumes until the gel becomes saturated with protein. The saturation limit can be determined by taking POI activity measurements after each incremental addition of extract.
For enzymatic measurement, remove just a few microliters of the supernatant after the gel settles (so the aqueous volume remains about the same). When the activity appears in the supernatant, you will have determined the saturation point in terms of mg of extract per ml of gel. Now fill a chromatography column with at least five times as much gel as your preliminary testing indicates you will need for total binding. Short, stout columns are usually better than long thin ones. Resolution comes not from column dimensions, but from the rate at which the eluting strength of the salt (usually sodium chloride) is raised in the elution phase. Take note of the fact that an ion exchanger is an excellent buffer, so pH equilibration of the gel requires many column volumes of dilute buffer solution.
Alternatively, a very high concentration of buffer may be used to titrate the column first. But, after titration, at least one column volume of the dilute (low ionic strength) buffer must be passed through the column. It is also necessary to use a buffering salt that has the same charge as the ion exchange gel. When using positively charged DEAE columns, positively charged tris(hydroxymethyl)aminomethane buffer in the chloride form (generally abbreviated as Tris) is commonly used. For negatively charged CM, negatively charged sodium phosphate buffers are recommended. The protein of interest should be equilibrated in the same dilute buffer. For best resolution, a shallow, continuous gradient (50 column volumes or greater) from 0.0 M NaCl to 0.5 M NaCl is recommended. To achieve near-baseline resolution of 5 GFP isoforms (differing from each other by one or two amino acids), I have eluted a 100 ml DEAE column with 80 column volumes (8 liters) of sodium chloride solution from 0.05 to 0.25 M (Ward, 2009). In this case (and in all other cases) the salt solutions need to be prepared in the same buffer used to equilibrate the column.
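The sizing and gradient arithmetic of the last two paragraphs is worth making explicit. Below is a minimal Python sketch; the A280 reading, the sample volume, and the 30 mg/ml capacity (the low end of the 30-50 mg/ml range quoted above) are illustrative numbers, not measurements:

# Minimal sketch: sizing an ion exchange column and a shallow salt gradient.
A280 = 12.0                 # absorbance of the (clarified) extract, illustrative
SAMPLE_ML = 500.0           # volume of extract to load, illustrative
CAPACITY_MG_PER_ML = 30.0   # mg protein per ml of swollen gel (low end of range)

total_protein_mg = A280 * SAMPLE_ML            # rough; DNA and turbidity inflate it
min_gel_ml = total_protein_mg / CAPACITY_MG_PER_ML  # gel just sufficient to bind all
column_gel_ml = 5 * min_gel_ml                 # five-fold excess, as recommended
print(f"~{total_protein_mg:.0f} mg protein -> pour ~{column_gel_ml:.0f} ml of gel")

# Gradient volume for the GFP-isoform example in the text: 80 column volumes
# on a 100 ml DEAE column, 0.05 -> 0.25 M NaCl.
gradient_liters = 80 * 100 / 1000              # 8 liters
slope_mM_per_cv = (250 - 50) / 80              # 2.5 mM NaCl per column volume
print(f"{gradient_liters:.0f} L gradient, {slope_mM_per_cv:.1f} mM NaCl per column volume")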
Hydrophobic Interaction Chromatography (HIC)
HIC media are available in several strengths. The hydrophobic ligands are usually attached to the porous hydrophilic gels via a 3-carbon spacer based on epichlorohydrin chemistry (Scopes, 1994). From strongest binding to weakest binding ligands, the order is Phenyl > Octyl > Butyl > Methyl. Strongly hydrophobic ligands are appropriate for weakly hydrophobic proteins, and weakly hydrophobic ligands for strongly hydrophobic proteins. Early testing, calculation of gel volume, and choice of column dimensions are carried out in a fashion similar to the protocols used for ion exchange. Hydrophobic binding is favored by very high salt concentration (up to 3 molar ammonium sulfate, in some cases). Elution is accomplished by lowering the salt concentration in increments (step gradient) or by applying a continuous linear gradient of decreasing salt concentration. Be aware that gradients of ammonium sulfate produce gradients of refractive index, which a spectrophotometer can easily misread as a higher or lower UV absorbance value. If precise 280 nm absorbance measurements are desired following gradient elution of proteins from an HIC column, it is necessary to have a continuously changing blank that closely matches the salt concentrations of the samples. An advantage to having HIC follow IEX is that one need not remove the NaCl in the fractions eluted from the IEX column. NaCl neither favors nor inhibits hydrophobic interaction, nor does it interfere with spectroscopic measurements as much as ammonium sulfate. If the two steps are reversed, ammonium sulfate must be removed entirely before going on to IEX.
Affinity Chromatography
Some prefer to use affinity chromatography very early in a protein purification process - as a "one-step purification method" (Scopes, 1994). I use quotation marks because, despite frequent claims, affinity chromatography is seldom a one-step method. Often contaminants remain in affinity-purified proteins. Commonly, those contaminants are large protein aggregates that result from the almost inevitable leaching of "bound" ligand. That released ligand then forms a high molecular weight complex with the protein-of-interest. When we purify anti-GFP antibodies on an immobilized GFP affinity column, we almost always detect, by SEC-HPLC, a high molecular weight aggregate that is distinctly fluorescent, suggesting that an antigen (GFP)-antibody complex has formed. Because most affinity columns are quite expensive and could be plugged by crude starting samples, I prefer to use affinity chromatography late in a protocol.

The principle is easy. Take, for example, a ligand, recognized by an enzyme, that is covalently bound to the matrix (usually agarose). That ligand may be a pseudo-substrate, a cofactor, an inhibitor, or an antibody. Binding is easy, but elution may be difficult. It is preferable to use, as the eluting solvent, a solution containing a competing ligand (the pseudo-substrate, cofactor, inhibitor, or antibody). But, sometimes the competing ligand is very expensive, unavailable, or irreversibly bound to the enzyme. In such cases, other eluting solvents must be used. Dilute solutions of ethylene glycol in buffer are sometimes used. So are buffers of low pH, a variety of salts, metal chelators, etc. Many other forms of affinity chromatography exist. We purify anti-GFP antibodies on a column to which GFP is covalently immobilized. We normally elute with a concentrated pH 3.0 solution of sodium citrate. The pH 3 buffer temporarily denatures both the antibody and the GFP. Both the column-bound GFP and the eluted antibody are rapidly renatured with a strong pH 8 buffer. Based upon analytical techniques (including size exclusion HPLC (SEC), SDS gel electrophoresis, UV absorption spectroscopy, and western blotting), the purity of GFP-specific antibody can approach 99% (see Fig. 5a-e).
However, if purity greater than 99% is desired, affinity chromatography requires a follow-up step. Most commonly we use preparative gel filtration to remove protein aggregates that may form when a small quantity of bound ligand leaches from the column.
For recombinant proteins, the favorite affinity column is an immobilized (chelated) metal ion column (abbreviated IMAC, for immobilized metal ion affinity chromatography) (Scopes, 1994). In IMAC columns, nickel ions or cobalt ions are bound to the column in a chelation complex. The column-bound chelator is usually nitrilotriacetic acid. The metal ion, chelated to the IMAC column, can be co-chelated, non-specifically, by the R-groups of histidine, cysteine, and tryptophan. Binding may occur if one or more of these amino acids are exposed on the surface of the protein-of-interest (or any protein contaminant in the mixture). Almost universally, recombinant proteins that are subjected to generic affinity chromatography are processed by IMAC. But to achieve specificity (and tight binding), the recombinant proteins are genetically modified by the addition of a string of six histidine residues, sometimes on the C-terminus, sometimes on the N-terminus, and sometimes within exposed loop regions. The string of six histidines (the His-tag) is a strong co-chelator, and the tag is sufficiently exposed that it almost always out-competes any naturally occurring co-chelators found in high abundance on the surface of a protein contaminant. The method is carried out at pH 8 or higher, and it must be performed in the absence of other metal chelators such as EDTA, citrate, oxalate, ammonium ion, etc. Concentrated solutions of imidazole are usually used for elution.
In my experience, all affinity chromatography columns, each time they are used, leach a bit of their covalently bound ligand, often as high molecular weight complexes with the POI. That ligand winds up in the fractions that have eluted from the column. So, in every case in which IMAC is used, it is wise to follow this step with a gel filtration run.
Gel Filtration Chromatography
Low pressure gel filtration is the easiest chromatographic method in principle, but it is the hardest method to administer properly. Because gel filtration seems so straightforward, liberties are sometimes taken in utilizing the method. For best results, attention to detail is essential. Gel filtration (or size exclusion, as the method is called in HPLC) separates macromolecules by size. Size exclusion chromatography (SEC) is generally used as an analytical HPLC method, while gel filtration is used primarily in preparative protein separations. Size exclusion HPLC utilizes small, rigid, uniform, spherical beads of 5 micrometer or 10 micrometer diameter. The small, porous, silica beads used in SEC provide higher resolution than low pressure gel filtration. But, the price per ml of HPLC column packing material is much higher than that of any soft gel used in low pressure applications.
For further discussion of HPLC, refer to the HPLC section later in this chapter.
Low pressure gels are comprised of small (20 to 300 micrometer) porous beads which, unlike Fast Flow adsorption beads, have blind cul-de-sacs that provide differential flow paths through the column. The largest molecules are unable to enter any pores, so they must travel around the beads. This means that large molecules exit first, while smaller molecules spend some time inside the beads, so they exit later. The volume in which the very large molecules exit (DNA, proteoglycans, ribosomes, lipid micelles, and protein complexes) is called the void volume. The void volume, usually 25% of the total column volume, is often measured by the elution position of Blue Dextran (GE Healthcare), a covalently-dyed sugar polymer having a molecular weight of 2 million Daltons. So, if the column volume (πr²h) is 200 cubic centimeters (200 ml), the center of the Blue Dextran-calibrated void volume peak will appear close to the 50 ml mark. The next 25% of the column volume (the second 50 ml in this example) is the resolving zone, accessible to moderate size proteins. The final 50% of the column volume (100 ml) is the zone in which peptides, very small proteins, oligonucleotides, other small molecules, and salt ions will elute. The total liquid volume in the column (the salt volume) is taken either as the total volume of the column, calculated from πr²h, or as the volume measured by adding a detectable salt to the applied sample. The salt can be sodium chloride, detectable by conductivity, or sodium nitrite, detected by its fairly strong absorbance at 280 nm.
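The volume bookkeeping above reduces to a few lines of arithmetic. A minimal Python sketch, using the 25%/25%/50% rule of thumb from the text; the radius and height are my invented pair that reproduces the 200 ml example:

# Minimal sketch: gel filtration volume zones from column dimensions.
import math

def gf_zones(radius_cm: float, height_cm: float) -> dict[str, float]:
    vt = math.pi * radius_cm ** 2 * height_cm   # total column volume (pi r^2 h), ml
    return {
        "total column volume (ml)": vt,
        "void volume, ~25% (ml)": 0.25 * vt,     # Blue Dextran elutes here
        "end of resolving zone, ~50% (ml)": 0.50 * vt,  # moderate-size proteins
        "salt volume, 100% (ml)": vt,            # small molecules and salt ions
    }

# Approximately the 200 ml column from the text (r ~ 1.26 cm, h ~ 40 cm).
for name, ml in gf_zones(1.26, 40.0).items():
    print(f"{name}: {ml:.0f}")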
Gel filtration is, intrinsically, a low resolution separation method for proteins, yet it is frequently used in protein purification. Gel filtration is gentle to the sample, and it is the best preparative method for fractionating native proteins by size and shape. Passage through the partially accessible pores in the beads generates broad elution bands, each band lying within just 25 percent of the total column volume - thus the intrinsically low resolution of the method. Generally, the highest resolving columns, containing very small beads of soft gel materials, like Sephadex G-100 Superfine (GE Healthcare) or BioGel P-100 minus 400 mesh (BioRad Laboratories), operate under low gravitational force fields (50 cm pressure head, or smaller). Beads used for relatively large proteins must have low degrees of cross-linking, making the gels soft and highly compressible. For the most compressible beads (G-200 Sephadex, for example), pressure heads may need to be as small as 15 cm. In general, gel filtration columns are able to give baseline resolution for no more than 4 proteins, each differing in molecular weight by a factor of 2. So, under the best of conditions, a mixture of globular proteins of MW 200,000, 100,000, 50,000, and 25,000 Daltons can be baseline resolved.
Listed in Table 3 is a set of "best conditions" - those that give maximum resolution by gel filtration.
1. Sample volume divided by column volume must be in the range of 1-2%.
2. Sample must be applied very carefully to avoid channeling.
3. Beads must be very small (20-50 micrometer size range).
4. Flow rate must be very low (<2 ml/cm² per hour), a rate which requires >47 hours for 121 cm x 1 cm columns.
5. The biological sample applied to the column must be low in viscosity.
6. At least 100 fractions should be collected, preferably in the protein resolving zone.
7. The pressure must be low, so as not to collapse the beads.

Table 3. Conditions that give maximum resolution in gel filtration
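Condition 4 can be sanity-checked with simple arithmetic: a volumetric flux of 2 ml/cm² per hour is a linear velocity of 2 cm per hour, independent of column diameter. For the 121 cm column in the table (taking the 1 cm figure as the diameter, which is my reading of the entry), one full column volume then takes about 60 hours, comfortably above the >47 hours quoted:

# Minimal sketch: run time at the Table 3 flow-rate limit.
import math

length_cm, diameter_cm = 121.0, 1.0
flux_ml_per_cm2_hr = 2.0                        # numerically equal to 2 cm/hour
area_cm2 = math.pi * (diameter_cm / 2) ** 2     # cross-section of the column

volumetric_flow = flux_ml_per_cm2_hr * area_cm2      # ~1.6 ml/hour
hours_per_column_volume = length_cm / flux_ml_per_cm2_hr   # length / linear velocity
print(f"{volumetric_flow:.2f} ml/hour; one column volume in {hours_per_column_volume:.1f} hours")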
Physical Set-ups in Column Chromatography
Those new to column chromatography often ask, "What size column should I use, and what are the most appropriate dimensions of length and width?" Clearly there is no one correct answer. But there are some appropriate generalities that can help with column selection in adsorption chromatography. Adsorption chromatography includes ion exchange, hydrophobic interaction, affinity chromatography, and all other forms of chromatography in which the analyte binds to the stationary phase (all methods other than gel filtration and SEC). Protein resolution in adsorption chromatography depends upon the rate of change of the eluting solvent, not upon the length or width of the column. Better resolution results from gradual change in the strength of the eluting solvent. The limit of "slow rate of change" is no change at all. In chromatography, we call "no change at all" isocratic elution. Isocratic elution, at the right solvent concentration, generates the highest possible resolution, but peak spreading will be greater in isocratic elution than in gradient elution. In general, the amount of adsorbent in a column should have the capacity to bind three to five times the amount of protein being loaded. The length and width of the column are not critical. It is not unreasonable to use a short, stout column for adsorption chromatography - a column with length 2 to 3 times the column diameter, for example. Such columns allow very high flow rates, so a large volume of eluent can be used in a fairly short period of time.

But, one should not greatly extend column width at the expense of length (e.g., the dimensions of a cake pan are problematic). In a wide diameter column, where the eluent exits through a port at the center of the cylinder, protein that happens to migrate down the center of the column elutes early. Protein (of the same type) that migrates near the circumference of the column will exit significantly later. This differential elution (side vs. center of the cylinder) produces smearing of a band that might otherwise be sharper (if the column had more "normal" dimensions).

An exceedingly long and thin column is not desirable either. Flow rates will be very slow, especially if the gel is soft. If, in an attempt to speed up flow, pressure is increased, the gel may compress and flow will slow down. In extreme cases, flow may stop altogether. Even if the adsorbent is rigid and non-compressible (as in size exclusion HPLC), a column with a small cross-sectional area may over-pressure if particles collect on the surface. It is common to use long, narrow columns for gel filtration, but here as well, columns need not have such extreme dimensions. The problem of particles collecting on the surface of the column may still occur. But even if the sample is particle free, the flow rate may be much slower than desired. A 50 cm column, with a diameter of 2.5 cm, can give excellent resolution in gel filtration as well as in adsorption chromatography. But, to achieve maximum resolution in gel filtration (in columns of such dimensions), the conditions listed in Table 3 must still be observed.
HPLC
The term HPLC stands for high performance liquid chromatography. Those with limited budgets prefer to substitute the word "price" for "performance." Some use the word "pressure." But, pressure is not what distinguishes HPLC from other forms of column chromatography. The fundamental difference between HPLC and columns containing relatively soft gels (Sephadex, BioGel, agarose, cellulose, etc.) is that the beads in HPLC columns are considerably smaller. HPLC beads are usually 5 micrometers in diameter. Columns with such small beads will not flow by gravitational pressure, nor will they flow with the pressure generated by a peristaltic pump. So, as a consequence of small beads, mechanical pumps capable of pressures as high as 7000 psi are needed. But, in practice, pressures greater than 2000 psi are seldom used. Even pressures of 2000 psi require very strong columns, usually of stainless steel. Tubing down-stream from the pump must also tolerate very high pressures. Very rigid gels are required, or the beads will collapse under the high pressures generated in HPLC. The most common of the rigid HPLC beads are made of porous silica.
More than 90% of HPLC columns in use are reverse phase columns (RPC). Reverse phase media are made of porous silica, but are functionally similar to low pressure hydrophobic interaction media made from soft gels. The greatest difference between RPC and HIC (other than tolerance for high pressure) is that reverse phase beads are much more hydrophobic than HIC beads. RPC beads have long aliphatic chains or aromatic groups bonded to the silicon dioxide media. The relative hydrophobicity of RPC columns is related to both carbon chain length and carbon load. Carbon load (which can reach 20%) reflects the density of hydrocarbon substitution on the silica beads. The reported percentage is the ratio of the weight of bound hydrocarbon to the weight of silica. So, not only are the bonded phase hydrocarbons more hydrophobic than the ligands in HIC, but the density of hydrophobic ligands is also greater in RPC. The name "reverse phase" comes from the fact that the polarity of the mobile phase (the solvent) and that of the stationary phase (the beads) have been reversed. The original silica-based columns used unmodified silica, which is polar and highly charged. So, this "normal phase" chromatographic method used a polar stationary phase and a non-polar mobile phase. RPC reverses the phases.
Typically, samples are introduced into an HPLC column through an injection valve that maintains atmospheric pressure on the outside and high pressure down-stream. So, a standard syringe can be used to load a sample while solvent continues flowing at very high pressure. RPC is more appropriate for small polar molecules (amino acids, peptides, oligonucleotides, and polar lipids) than for native proteins. Most proteins bind too strongly and may bind irreversibly or become denatured.
Batch Methods for Protein Purification
Occasionally one finds a batch method that works as well in purifying a particular protein as a variety of chromatographic methods. Batch methods are particularly useful in early stages when a sample is highly viscous or full of fine colloidal material. Such batch methods include ammonium sulfate precipitation and precipitation from other salt solutions, from aqueous solutions at low pH, or from organic solvents (usually acetone, ethanol, ethylene glycol, or polyethylene glycol). In some cases, recrystallization from salt solutions is possible. Even if crystals do not form, differential precipitation can be an effective purification method. A particularly effective batch method is isoelectric precipitation, in which the pH of a dilute aqueous buffer is adjusted to the isoelectric point (pI) of the POI. The protein-of-interest, or contaminants in the POI mixture, can be adsorbed to Celite, alumina gels, calcium phosphate gels (hydroxyapatite), and other media. If antibodies are available, the protein of interest can be selectively bound to those antibodies. If aggregates form upon such treatments, the aggregate can be collected by centrifugation and then dissociated into free antibody and free POI by a variety of methods, including application of low pH buffers.
A-Free IgG
Recently, I have been exploring, with repeated rounds of ammonium sulfate precipitation, the purification of rabbit-derived antibodies, goat anti-rabbit IgG, and chicken IgY. Because this process does not utilize Protein-A, I call the method "A-Free." For rabbit-derived antibodies, the "A-Free IgG" procedure works at least as well as chromatography on columns of Protein-A. Goat-derived antibodies, which are not as amenable to purification on Protein-A columns, and chicken-derived IgY, which cannot be purified on Protein-A at all, respond equally well to the "A-Free" method. Although very commonly used in purifying therapeutic monoclonal antibodies, Protein-A is quite expensive. Despite its being covalently bound to the affinity column matrix, Protein-A is able to leach from the column matrix during the elution phase. Traces of Protein-A in therapeutic monoclonals could present a health hazard, as Protein-A may bind to other essential antibodies in the patient. We have not found formal regulations limiting the use of Protein-A in purifying therapeutic monoclonals, but manufacturers might prefer a safer, more cost-effective method. We have a satisfactory replacement for Protein-A in the method we call "A-Free IgG." This method has been submitted, through Rutgers University, as a provisional patent application.
As often occurs in experimental science, the "A-Free IgG" method of antibody purification arose from an accident. I am primarily a bench scientist. But, with all the other things I must do, I get too little bench time to satisfy my urges to discover and create. As a consequence, when I have a bit of research time, I tend to rush through projects, sometimes binging well into the night. Often this means cutting corners to save time. Such was the case with developing the "A-Free IgG" method - an accident created by my hasty experimentation.
In the course of purifying IgG from rabbit serum by a traditional single round of ammonium sulfate fractionation, I made a mistake that was picked up by size exclusion HPLC. The SEC profile showed more contaminants than I had seen previously. So, to remove those additional contaminants, I repeated the entire process. To my surprise, the second round of ammonium sulfate precipitation produced a cleaner IgG sample than I had previously seen with just one round of precipitation. But the redissolved pellet was still slightly pink (not all the transferrin was removed), and the HPLC profile still showed a tiny shoulder of albumin. So, I did the same precipitation process a third time. This time, the HPLC profile showed 99% pure IgG - virtually no high molecular weight contaminant and no indication of any albumin (Fig. 5a-d). The SDS gel profile showed strongly stained heavy chain, more weakly stained light chain (normal for IgG), and a very weakly staining contaminant or two (Fig. 5e). These side-by-side experiments show that the "A-Free IgG" method actually outperforms Protein-A affinity chromatography. The time involvement is similar and the price is much lower. On occasion we perform a fourth ammonium sulfate precipitation, obtaining a sample marginally cleaner than that resulting from three rounds of precipitation.
The method works equally well with goat anti-rabbit IgG, an antibody less amenable to Protein-A purification. We have a large supply of chicken egg yolk containing anti-GFP antibodies (IgY) for which Protein-A is totally ineffective. The A-Free method is suitable with IgY so long as the large amount of lipid has been removed by a freeze-thaw method.
Three-Phase Partitioning (TPP)
The most exciting method we have used for protein purification is three-phase partitioning (TPP). TPP was developed in the 1990s (Dennison and Lovrien, 1997) and entered our Rutgers University lab in 1998. We happened upon this method by accident in 1998 - not by reading the paper, but by experiencing the method ourselves. The process is so elegant that we can purify recombinant or native jellyfish GFP to 80% purity in less than half a day. In the early years of our research, we purified GFP from jellyfish extracts by traditional methods, spending 6 months to reach 80% purity. TPP provides about a 3000-fold savings in time and significant savings in equipment use and materials expenses. What is the magic?
Our adaptation of the TPP method for purifying recombinant GFP begins with whole, unlysed E. coli cells transformed with the gene for GFP. Three-phase partitioning works very well with GFP-containing cell extracts, but it works even better if the process begins with unlysed cells. Entire companies are built around releasing recombinant proteins from whole E. coli cells (Glens Mills, for example).
Huge French presses, sonication baths, day-long, repeated freeze-thaw cycles, treatment with lysozyme, or use of a bead mill are some of the standard methods for rupturing E. coli cells (Scopes, 1994). Fig. 6 shows an SDS gel electrophoretic profile for a sample prepared by TPP as compared to identical samples extracted by three other standard methods. TPP accomplishes the release of recombinant proteins in seconds, using the simplest of standard equipment. Described below are the three stages in the process.

Fig. 6. SDS-PAGE gel showing the released E. coli proteins resulting from routine lysis methods compared with the non-lysis Three-Phase Partitioning method as applied to the same amount of starting material.
Stage I. To release GFP and to perform the first stage of TPP purification, we treat whole E. coli cells with 1.6 M ammonium sulfate, with shaking. Then we add one volume of tertiary butanol (t-butanol). If we do this in a 50 ml Falcon tube, we pour a suspension of the cells (in 25 ml of 1.6 M ammonium sulfate, pH 8.0) into the tube and then we add 25 ml of t-butanol. After about 1 minute of vigorous shaking, the Falcon tube is centrifuged in a moderate speed (3000 rpm) table-top centrifuge for 15 minutes at room temperature. Although t-butanol is completely miscible with water, it is quite insoluble in aqueous solutions having high concentrations of salt, especially ammonium sulfate solutions. Three phases separate during centrifugation (Fig. 7) (or, for large scale operations, by settling in a tank by gravity alone).
The upper phase contains the t-butanol, which expands to 30 ml, having taken up 5 ml of water.
Release of 5 ml of water from the lower aqueous phase raises the ammonium sulfate concentration from 1.6 M to 2.0 M. Meanwhile, membrane phospholipids, triacylglycerols, pigments, dyes, cholesterol and other steroids, fats, oils, and miscellaneous lipids become dissolved in the upper layer. Exposure of a complex macromolecular mixture to both t-butanol and the now higher salt concentration causes massive precipitation. The precipitate settles below the organic layer as a thick "pancake" of congealed protein, nucleic acids, cell walls, and other unwanted materials (Fig. 8). While the t-butanol, under the influence of high salt, has dissolved the cell membrane, it has not affected the cell wall. Normally, with the membrane dissolved, nearly every macromolecule in the cell can escape to the outside through the cell wall. But, "stressed" by the high concentration of ammonium sulfate, t-butanol binds to anything that is even slightly hydrophobic, causing massive precipitation of most of the proteins and virtually all chromosomal DNA. These aggregates are too large to exit through the cell wall pores, so they remain entombed inside the cell, behind the cell wall barrier. The binding of t-butanol to these macromolecules (whether they have remained within the cell or escaped to the outside) lowers the density of the precipitated macromolecules to such a point that the still intact cells, with their entombed macromolecules, easily float above the ammonium sulfate solution below. The whole cell mass forms a thick rubbery mat that floats above the salt solution (Fig. 8). As centrifugation simply speeds up the formation of the three layers, the process can be scaled up to almost any volume by simple gravitational settling in a large tank. There is no limit to the scale-up potential. After loss of 20% of the water to the overlying alcohol layer, GFP is still soluble in the aqueous ammonium sulfate (now at a concentration of 2.0 M). Because GFP remains soluble, it escapes easily through the pores in the cell wall and enters the aqueous layer.

Stage II. The alcohol layer is removed by aspiration and the floating disk of precipitated cells and macromolecules is also removed (almost as easily as flipping a pancake with a spatula).
To the 20 ml of aqueous solution remaining in the tube, we add 30 ml of fresh t-butanol (with vigorous shaking once again). Fresh t-butanol causes further dehydration of the lower liquid phase, creating a saltier aqueous solution. The saltier solution now favors precipitation of the GFP, already coated with many molecules of t-butanol. So, like the aggregated molecules in Stage I, the now precipitated GFP (with its bound cage of t-butanol) moves into the organic-aqueous interface as a fine disk, compressed by centrifugal force between the alcohol layer and the aqueous ammonium sulfate layer. Both liquid layers are then carefully removed.
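The concentration change in Stage I is simple conservation-of-solute arithmetic: the ammonium sulfate stays in the aqueous phase while part of the water partitions into the alcohol layer. A minimal sketch of that mass balance, using only the tube volumes quoted above:

```python
# Stage I mass balance: salt stays in the aqueous phase, water leaves it.
# Values taken from the protocol above (25 ml of 1.6 M ammonium sulfate,
# 5 ml of water absorbed by the overlying t-butanol layer).

initial_volume_ml = 25.0      # aqueous phase loaded into the Falcon tube
initial_molarity = 1.6        # M ammonium sulfate
water_lost_ml = 5.0           # taken up by the t-butanol phase

moles_salt = initial_molarity * initial_volume_ml / 1000.0   # mol, unchanged
final_volume_ml = initial_volume_ml - water_lost_ml          # 20 ml remain

final_molarity = moles_salt / (final_volume_ml / 1000.0)
print(f"final ammonium sulfate concentration: {final_molarity:.1f} M")  # -> 2.0 M

# Fraction of water lost, matching the "loss of 20% of the water" above:
print(f"water lost: {water_lost_ml / initial_volume_ml:.0%}")           # -> 20%
```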
Stage III. The GFP disk, remaining after the liquid phases have been removed, is taken up in a series of very small volumes of 25% saturated ammonium sulfate which, when added to the very salty GFP disk, raises the ammonium sulfate concentration to 1.6 M. One at a time, these suspensions of GFP are serially transferred into one or more microfuge tubes. Serial transfer allows virtually 100% of the GFP to be transferred in a minimum volume. Volume is kept at a minimum because GFP is incredibly soluble, even in 1.6 M ammonium sulfate.
When the GFP has just barely gone into solution, the tube(s) is spun in a microcentrifuge. Those remaining contaminants, having lower solubility than GFP, now collect as a pellet at the bottom of the tube(s). There is usually a tiny floating disk of contaminant and a small volume of overlying alcohol. In a sense, Stage I of TPP has been repeated in Stage III. The final GFP product of TPP is pipetted from the microfuge tube(s) as a bright, crystal-clear green liquid. On average, the GFP has been purified from its original milieu by a factor of 100-fold and concentrated by a factor of 50. The amazing effectiveness of TPP is also shown in the before and after absorption spectra seen in Fig. 9. A Rutgers University patent was issued in 2008 for a protein mini-prep kit (based upon our work with TPP). The patent calls for a mixture of two organic solvents (t-butanol and isopropanol). Included in the patent description is the use of a microbiological dye, previously added to a very concentrated ammonium sulfate stock solution. The purpose of the dye is to facilitate detection of the solvent interface, as all of the dye leaves the aqueous layer and travels into the alcohol layer. We have explored sixteen water-soluble microbiological dyes, each of which partitions effectively into the organic phase. The boundary between the colored organic layer and the colorless aqueous layer provides a visible means for separation of the two layers. Visualization of this boundary is especially useful when tiny quantities of protein are being prepared. Surprisingly, TPP works almost equally well when small concentrations of protein are processed. Such very dilute protein solutions are almost never amenable to ordinary ammonium sulfate precipitation. What's more, TPP works not only on crude extracts and whole E. coli cells, but also as a polishing step. Even 90% pure protein can be taken to near homogeneity by a second round of TPP.
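The 100-fold purification and 50-fold concentration quoted above follow from the usual bookkeeping of total protein, target protein, and volume across a prep. A minimal sketch of that bookkeeping; the input numbers are hypothetical and chosen only to illustrate the calculation:

```python
# Illustrative purification bookkeeping; the input numbers are hypothetical,
# picked to show how fold-purification and concentration factors are derived.

start = {"volume_ml": 25.0, "total_protein_mg": 50.0, "gfp_mg": 0.40}
final = {"volume_ml": 0.5,  "total_protein_mg": 0.45, "gfp_mg": 0.36}

purity_start = start["gfp_mg"] / start["total_protein_mg"]   # 0.8 % GFP
purity_final = final["gfp_mg"] / final["total_protein_mg"]   # 80 % GFP

fold_purification = purity_final / purity_start
concentration_factor = (final["gfp_mg"] / final["volume_ml"]) / (
    start["gfp_mg"] / start["volume_ml"]
)
recovery = final["gfp_mg"] / start["gfp_mg"]

print(f"fold purification:    {fold_purification:.0f}x")     # ~100x
print(f"concentration factor: {concentration_factor:.0f}x")  # ~45x, of order the quoted 50x
print(f"GFP recovery:         {recovery:.0%}")
```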
Other proteins may be purified by TPP, but often the initial ammonium sulfate concentration must be adjusted on a protein-by-protein basis so as to maximize both recovery and purity of the protein-of-interest. The salt concentration over which TPP is effective ranges from about 0.6 M to 2.6 M ammonium sulfate. Below 0.6 M, the two solvents are miscible. Above 2.6 M, the salt begins to precipitate.
Criteria for Protein Purity
Demonstrating purity of a given protein is not an easy task. But, without achieving protein homogeneity, serious errors and experimental artifacts may arise. Even a 1% contaminant may contribute to erroneous observations. A minor contaminant (protein or otherwise) could significantly raise or lower an enzyme's apparent activity level. If an impure protein-of-interest (POI) is used to generate antibodies, a very immunogenic contaminant could induce more antibody than the POI. Some biochemists and some journals will accept, as the sole criterion of purity, a photograph or a densitometry trace of a Coomassie-stained SDS polyacrylamide gel that shows one stained band. But, I know of a case in which a "single band" on an SDS gel, accepted by a prominent journal as proof of purity, turned out to be a 97% contaminant of the protein-of-interest. The actual POI represented only 1% of the total "pure protein" (Karkanis and Cormier, 1971). Errors of this magnitude can be avoided by using a variety of different criteria for evaluating protein purity.
1. Constant specific activity across a broad portion of the peak in the final preparative chromatography column.
2. Single, symmetric band by size exclusion HPLC.
3. Single band, in the correct MW region, on an SDS gel (or, for hetero-oligomers, the appropriate number of bands in the correct positions).
4. Unambiguous, single amino acid detected in N-terminal amino acid analysis. Inability to detect an N-terminal amino acid may also be taken as evidence of purity, though this is not a very strong criterion, as many other proteins have blocked N-terminal amino acids.
5. Single band on a native polyacrylamide gradient gel (or the appropriate number of bands of correct MW for heterodimers, heterotetramers, etc.).
6. Single, sharp band by isoelectric focusing in an acrylamide gel or in a capillary isoelectric focusing system (or the appropriate number of bands for hetero-oligomeric proteins).
7. Unambiguous N-terminal peptide sequence by Edman degradation.
8. Single band by Western blot, if antibodies are available.
9. Single MW form by MALDI-TOF (matrix-assisted laser desorption time-of-flight mass spectrometry).
Table 4. Criteria of purity
Acknowledgment
The author would like to acknowledge Ms. Sujata Charuvu for her technical assistance.
Fig. 3. Underwater photograph of the jellyfish Aequorea victoria. Photograph is courtesy of R. Shimek of the University of Washington's Friday Harbor Laboratories.
Fig. 4. P-100 Biogel profile of crude jellyfish extract. P marks the absorbance profile of total protein at 280 nm. A marks the activity of the aequorin protein. G marks the GFP fluorescence.

Soybean peroxidase crude extracts are fairly low in viscosity, but the hull extracts present a very significant problem with particulates. The crude extracts include large particles (millimeter size) as well as tiny particles in the micrometer range, some as colloidal suspensions. Large fragments of hulls are easily filtered away with fine mesh nylon nets, but this leaves a very cloudy suspension of fine to very fine particles. Centrifugation has been ruled out because of the large volumes of extract produced and the high centrifugal forces needed to pellet the finest particles. Even continuous flow centrifugation trials have failed, repeatedly, because most of the particulates, including colloidal materials, have failed to sediment during the short interval of time it takes for liquid to traverse the centrifugation path. After trying everything we could imagine, and after investing money in a variety of expensive filter devices (G. Swiatek and M. Browning, personal communication), we suspended this project for several years. Then we happened upon an ion exchange method normally applied to water purification. We found a company called ResinTech that provides, at very low cost, a high capacity polystyrene-based anion exchanger. The beads are large (1 mm) and dense, so, after stirring, they quickly settle to the bottom of a large container. Binding kinetics, however, are slow, because of the large size of the beads and the relatively small pore size (access to the interior is slow and limited to proteins of MW 50 kDa or lower). So, notwithstanding the slow kinetics of binding and elution, these beads are useful for batch ion exchange applications; in our case, to trap the highly anionic soybean peroxidase (C. Holman, manuscript in progress; Ward, 2012). A provisional patent for our unique SBP purification method has been filed with Rutgers University. The fine particles of soybean hull extract (much too fine to settle on their own) are, however, too large to enter the pores of the beads.
Fig. 7. Tube showing Stage I of Three-Phase Partitioning. Three phases are formed after centrifugation. Layer A contains t-butanol, B is a thick "pancake" layer of precipitated material, and C is aqueous ammonium sulfate solution containing GFP.
Fig. 8. Recovered "plugs" of precipitated material from Stage I of Three-Phase Partitioning.
Table 2. Physical and chemical properties of a pure sample that may be needed to effectively design a purification strategy.

See the section "Viscosity Reduction and Particle Removal." We found that one volume of soybean hull powder requires 5 volumes of water for efficient extraction. For 2000 lbs. of hulls, the amount of water required for extraction has been determined to be 16,000 liters (G. Swiatek, personal communication). Even if scaled down to 20 lbs. of hulls per batch, 160 liters of water would be required. Volume reduction is accomplished very effectively by trapping the SBP on ResinTech anion exchange beads. When we compared the binding capacity of ResinTech beads with that of DEAE Sepharose Fast Flow, both exchangers bound the same amount of pure GFP (38 mg of protein per milliliter of swollen gel). Binding capacity of ResinTech beads with larger proteins, such as rabbit IgG, is considerably lower, as the ResinTech pores are much smaller than those of DEAE Sepharose.
"year": 2012,
"sha1": "2d76e2e593cbac0418a21df671c5bc7413918181",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/26594",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "2d76e2e593cbac0418a21df671c5bc7413918181",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": [
"Art"
]
} |
Is very high energy emission from the BL Lac 1ES 0806+524 centrifugally driven?
We investigate the role of centrifugal acceleration of electrons in producing the very high energy (VHE) radiation from the BL Lac object 1ES 0806+524, recently detected by VERITAS. The efficiency of the inverse Compton scattering (ICS) of the accretion disk thermal photons against rotationally accelerated electrons is examined. By studying the dynamics of centrifugally induced outflows and by taking into account a cooling process due to the ICS, we estimate the maximum attainable Lorentz factors of particles and derive the corresponding energetic characteristics of the emission. Examining physically reasonable parameters, by considering the narrow interval of inclination angles (0.7°-0.95°) of magnetic field lines with respect to the rotation axis, it is shown that the centrifugally accelerated electrons may lead to the observational pattern of the VHE emission, if the density of electrons is in a certain interval.
Introduction
In the physics of active galactic nuclei (AGNs), one of the major problems is related to understanding the origin of the high energy radiation. One prominent class of AGNs is the so-called BL Lac objects: supermassive black holes characterized by rapid and large-amplitude flux variability. By using the radio observations from the Green Bank 91-m telescope [1], the AGN 1ES 0806+524 was identified as a BL Lac object [2].
Recently, VERITAS found that the blazar 1ES 0806+524 reveals a VHE spectrum in the TeV domain [3]. According to the standard model of BL Lacs, VHE radiation originates from the ICS of soft photons against ultrarelativistic electrons [4,5]. However, the origin of the efficient acceleration of particles up to highly relativistic energies still remains uncertain. Proposed mechanisms based on the Fermi-type acceleration process [6] may be applied successfully to the TeV emission only if the initial Lorentz factors of electrons are considerably high (γ ≥ 10²) [7].
It is clear that in rotating magnetospheres (the innermost region of AGN jets and pulsar magnetospheres) the centrifugal effect should play a significant role in the overall dynamics of the corresponding plasmas. For example, the rotationally driven parametric plasma instabilities have been studied for pulsars [8,9] and AGNs [10,11], respectively, and it was shown that, under certain conditions, the relativistic effects of rotation may efficiently induce plasma instabilities, parametrically pumping the rotational energy directly into the plasma waves. The centrifugally induced outflows have been discussed in a series of works. Blandford & Payne, in the pioneering paper [12], considered the angular momentum and energy pumping process from the accretion disk, emphasizing a special role of the centrifugal force in the dynamical processes governing the acceleration of plasmas. It was shown that outflows from accretion disks occur if the magnetic field lines are inclined at a certain angle to the equatorial plane of the disk. In the context of studying the nonthermal radiation from pulsars, the centrifugal effect has been examined in [13,14,15], where the curvature emission of accelerated particles was studied. By applying a similar approach, Gangadhara & Lesch considered the role of centrifugal acceleration in the energetics of electrons moving along the magnetic field lines of spinning AGNs [16]. This work was reconsidered in a series of papers [7,17,18] and the method was applied to a special class of AGNs, the TeV AGNs. It was shown that the consideration of straight field lines is a good approximation, and it was found that the centrifugal force may accelerate electrons up to very high Lorentz factors (∼10⁸), providing the TeV energy emission via the ICS.
In the present paper we investigate the role of rotational effects in the VHE flare from the blazar 1ES 0806+524, by applying the method of centrifugal outflows developed in [7,17,18,19]. We show that, for a certain set of parameters, due to the ICS in the Thomson regime, photons, when upscattered against centrifugally accelerated ultra-relativistic electrons, produce the VHE radiation in the TeV domain. We show that the resulting luminosity output is in good agreement with the observed data.
The paper is arranged as follows. In §2 we consider our model and derive expressions for the luminosity output and the energy of photons, respectively. In §3 we present the results for the blazar 1ES 0806+524, and in §4 we summarize our results.
Main consideration
Let us consider the typical parameters of 1ES 0806+524: the black hole mass, M_BH ≈ 5 × 10⁸ M_⊙ [21] (M_⊙ is the solar mass), and the bolometric luminosity, L ≈ 7 × 10⁴⁴ erg/s [22]. We examine particles originating from the accretion disc at a distance ∼10 × R_g from the central object, where R_g ≡ 2GM_BH/c² is the gravitational radius of the black hole. Then, by taking the value of the equipartition magnetic field into account, one can show that for typical parameters, r ≈ 10 × R_g, n ∈ (0.0001−1) cm⁻³, γ₀ ≈ 1, the value of the ratio B²/(γ₀mnc²) lies in the interval ∼10⁹−10¹³ (γ₀, n and m are the electrons' initial Lorentz factor, density and rest mass, respectively). Therefore, the magnetic field energy density exceeds the plasma energy density by many orders of magnitude, which indicates that the plasma co-rotates with the angular velocity corresponding to the Keplerian motion at r₀ ≈ 10 × R_g. We see that due to the frozen-in condition the particles follow the co-rotating magnetic field lines and accelerate centrifugally. Therefore, it is reasonable to consider the dynamics of an electron sliding along the rotating magnetic field lines. We apply the method developed for AGNs in [17,18] and assume that the straight field lines co-rotate. Then, if we take into account an angle α between the magnetic field, B, and the angular velocity of rotation, ω, after the transformation of coordinates x = r sin α cos ωt, y = r sin α sin ωt and z = r cos α of the Minkowskian metric ds² = −c²dt² + dx² + dy² + dz², the metric in the co-moving frame of reference is given by [17,19]

ds² = −(1 − Ω²r²/c²) c²dt² + dr².

For the equation of motion we get

d²x^μ/dχ² + Γ^μ_νσ (dx^ν/dχ)(dx^σ/dχ) = 0,   (4)

where Ω = ω sin α and x^μ ≡ (ct; r).
Then, by taking the four-velocity identity, ḡ_αβ (dx^α/dχ)(dx^β/dχ) = −1, into account, one can derive from Eq. (4) the radial equation of motion [19]

d²r/dt² = [Ω²r/(1 − Ω²r²/c²)] [1 − Ω²r²/c² − 2(dr/dt)²/c²].   (7)

Solving Eq. (7), it is straightforward to show that the Lorentz factor of the particle changes radially as [7]

γ(r) = (1 − r₀²/R_lc² − υ₀²/c²)^(−1/2) (1 − r₀²/R_lc²)/(1 − r²/R_lc²),   (8)

where r₀ and υ₀ are the initial position and the initial radial velocity of the particle, respectively, and R_lc ≡ c/Ω is the radius of the light cylinder, a hypothetical zone where the linear velocity of rigid rotation exactly equals the speed of light, c.
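As a quick plausibility check of Eq. (8) as reconstructed above, the sketch below evaluates γ(r) as the particle approaches the light cylinder, using the parameter values quoted later for Fig. 1 (υ₀ = 0.4c, r₀ = 10 × R_g, ω = 4.5 × 10⁻⁶ s⁻¹) and an inclination angle of 0.8°, chosen here as an assumed representative value inside the 0.7°-0.95° window:

```python
import numpy as np

c = 3.0e10                       # speed of light, cm/s
G = 6.674e-8                     # gravitational constant, cgs
M_sun = 1.989e33                 # solar mass, g

M_bh = 5.0e8 * M_sun             # black hole mass of 1ES 0806+524
R_g = 2.0 * G * M_bh / c**2      # gravitational radius, ~1.5e14 cm
omega = 4.5e-6                   # angular velocity, s^-1 (Fig. 1 value)
alpha = np.deg2rad(0.8)          # assumed inclination angle, deg -> rad
Omega = omega * np.sin(alpha)
R_lc = c / Omega                 # light-cylinder radius, ~5e17 cm

r0, v0 = 10.0 * R_g, 0.4 * c     # initial position and radial velocity
A0 = 1.0 - (r0 / R_lc) ** 2      # ~1, since r0 << R_lc
prefactor = A0 / np.sqrt(A0 - (v0 / c) ** 2)   # Eq. (8) without 1/(1 - r^2/R_lc^2)

for x in (0.9, 0.999, 1.0 - 2.0e-6):           # r / R_lc
    gamma = prefactor / (1.0 - x**2)
    print(f"r/R_lc = {x:.6f}  ->  gamma ~ {gamma:.3g}")

# gamma formally diverges as r -> R_lc; the ICS losses discussed next
# cut the growth off at gamma_max of a few x 1e5.
```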
As is clear from Eq. (8), in due course of time the Lorentz factors of electrons become very high in the vicinity of the light cylinder (r ∼ R_lc). On the other hand, the acceleration lasts only until the electron encounters a photon, which in turn inevitably limits the Lorentz factor of the particle. During the ICS an electron loses energy, whereas a photon gains energy. This mechanism is characterized by the so-called cooling timescale [20]

t_cool ≈ 3mc/(4σ_T γ U_rad),   (9)

where U_rad = L/(4πcr²) is the energy density of the radiation. The acceleration process is characterized by the acceleration timescale, t_acc ≡ γ/(dγ/dt), which after applying Eq. (8) can be presented as

t_acc ≈ R_lc²(1 − r²/R_lc²)/(2r dr/dt).   (10)

Generally speaking, initially the electrons accelerate, but in due course of time the role of the inverse Compton losses increases and the acceleration becomes less efficient. The maximum energy attainable by electrons is achieved at the moment when the energy gain is balanced by the energy losses due to the ICS. Mathematically this means that the condition t_acc ≈ t_cool has to be satisfied. After applying Eqs. (9,10), this condition leads to the expression for the maximum Lorentz factor, γ_max [7] (Eq. (11)), in which R_l ≈ R_lc/sin α. If electrons with such high kinetic energies encounter soft photons having energy ε_s, then the photon energy after scattering is given by

ε ≈ (4/3)γ²ε_s.   (12)

As we have already mentioned, the particles reach their maximum kinetic energy almost on the LC surface. Let us assume that the layer where the ICS takes place and the high energy photons are produced has a thickness Δr. Then, for the corresponding infinitesimal volume of a cylindrical layer we get

dV ≈ πR_lc(2R_lc + Δr)Δr sin α dα.   (13)
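A back-of-the-envelope evaluation of the cooling timescale, Eq. (9), at r ≈ R_lc helps fix the scales involved; this is an order-of-magnitude sketch using only the numbers quoted in the text (and the same assumed 0.8° inclination), not a reproduction of the paper's calculation:

```python
import numpy as np

# Order-of-magnitude evaluation of Eq. (9) at the light cylinder.
c = 3.0e10            # cm/s
m_e = 9.11e-28        # electron mass, g
sigma_T = 6.65e-25    # Thomson cross-section, cm^2
L = 7.0e44            # bolometric luminosity, erg/s

omega, alpha = 4.5e-6, np.deg2rad(0.8)     # alpha is an assumed value
R_lc = c / (omega * np.sin(alpha))          # ~5e17 cm

U_rad = L / (4.0 * np.pi * c * R_lc**2)     # radiation energy density, erg/cm^3

for gamma in (1.0e4, 2.8e5):
    t_cool = 3.0 * m_e * c / (4.0 * sigma_T * gamma * U_rad)   # Eq. (9)
    print(f"gamma = {gamma:.1e}:  t_cool ~ {t_cool:.2e} s (~{t_cool/3600.0:.1f} h)")

# Larger gamma means faster cooling, which is what terminates the
# centrifugal acceleration and sets gamma_max via t_acc ~ t_cool.
```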
By taking the single-particle Thomson power into account, the total power emitted from the radiation zone can be expressed as

P ≈ ∫ (4/3)σ_T c γ² U_rad n dV,

where σ_T ≈ 6.65 × 10⁻²⁵ cm² is the Thomson cross-section and n is the electron number density.
The corresponding luminosity above 0.3 TeV and up to 1 TeV can be estimated by integrating this power over the range of inclination angles α₁ ≤ α ≤ α₂ for which the scattered photon energies (Eq. (12)) span ε₁ = 0.3 TeV to ε₂ = 1 TeV (Eqs. (16,17)). We assume that the high energy emission originates from the jet, having an opening angle 2α_m (where α_m ≥ α₂); ΔS ≈ πD²(sin²α₂ − sin²α₁) and D ≈ 630 Mpc is the distance to the blazar. By taking the parameters into account, one can see from Eq. (17) that the luminosity in the energy interval (0.3−1) TeV is of the order of 10³⁹ erg/s (Eq. (18)).

According to the standard theory, it is well known that accretion disks radiate thermally, and the corresponding temperature is expressed [24] through the dimensionless mass accretion rate, ṁ, M₈ ≡ M_BH/10⁸M_⊙ and d ≡ r*/3R_g (Eq. (19)). For the given luminosity, the mass accretion rate can be estimated (Eqs. (20,21)); then, combining Eqs. (19,21), one can show that the energy, ε_s = kT, of the accretion disc's thermal photons emitted in the area from r* = 15 × R_g to r* = R_lc is of order ∼10 eV. Therefore, as we see from Eq. (12), for producing energies from hundreds of GeV up to the TeV domain, one requires very high Lorentz factors, (1−3) × 10⁵.
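The quoted Lorentz factors follow directly from inverting Eq. (12) for ε_s ∼ 10 eV; a minimal numerical check (the 4/3 mean-energy factor is the standard Thomson-regime value adopted in the reconstruction above):

```python
# Invert Eq. (12), eps ~ (4/3) gamma^2 eps_s, for disc seed photons
# of energy eps_s ~ 10 eV, as quoted in the text.

eps_s = 10.0                       # seed photon energy, eV
for eps_TeV in (0.3, 1.0):
    eps = eps_TeV * 1.0e12         # scattered photon energy, eV
    gamma = (3.0 * eps / (4.0 * eps_s)) ** 0.5
    print(f"{eps_TeV:.1f} TeV  ->  gamma ~ {gamma:.2e}")

# -> roughly 1.5e5 for 0.3 TeV and 2.7e5 for 1 TeV, i.e. the
#    (1-3) x 1e5 interval quoted in the text.
```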
One can see that the aforementioned values of the Lorentz factor are achieved for very low inclination angles. In Fig. 1 we show γ_max as a function of the inclination angle. The set of parameters is υ₀ = 0.4c, r₀ = 10 × R_g, ω = 4.5 × 10⁻⁶ s⁻¹ and L = 7 × 10⁴⁴ erg/s. As is clear from the figure, the electrons reach high values of the Lorentz factor for small angles, 0.7°−0.95°. This is a natural result: one can straightforwardly show from Eq. (11) that γ_max(α) behaves as 1/sin²α and therefore provides higher kinetic energies for lower inclinations.
The present model is based on the assumption that the maximum kinetic energy of particles is determined by the balance of the energy gain due to the acceleration and the energy losses due to the ICS. Generally speaking, this approach is valid only if the energy losses are dominated by the ICS. On the other hand, apart from the inverse Compton scattering, the curvature radiation could also impose significant limitations [25]. The centrifugal acceleration mainly happens close to the light cylinder, and since the power of single-particle curvature radiation behaves as ∼γ⁴, one has to check the constraint imposed by this mechanism on the relativistic particle dynamics. The total power radiated by a single particle is given by

P_c = (2e²c/3R_c²)γ⁴,   (22)

where R_c denotes the curvature radius. Then, the timescale of curvature emission can be defined in the following way:

t_c ≡ γmc²/P_c = 3mcR_c²/(2e²γ³).   (23)

To find the limitation imposed on the maximum Lorentz factor, let us note that the electrons initially accelerate efficiently, and this process lasts until the energy gain is balanced by the curvature losses. This happens when t_acc ≈ t_c. By taking Eqs. (10,23) into account and assuming R_c ∼ R_lc, it is straightforward to derive the corresponding limit, γ^c_max (Eq. (24)). From Eqs. (11,24) we see that for γ₀ ∼ 1 one has the inequality γ^c_max ≫ γ_max. This indicates that the curvature radiation does not impose a significant limitation on the maximum attainable Lorentz factors. Therefore we conclude that the maximum attainable kinetic energies are determined only by the ICS.
In Fig. 2 we show the behavior of the emission energy versus the inclination angle. As is clear from the figure, acceleration of electrons inside the region 0.7° ≤ α ≤ 0.95° provides photon energies from 0.3 TeV up to 1 TeV.
For plotting our graphs and obtaining the results we used the best-fitting parameters, although values from certain physically reasonable ranges are also applicable. On the other hand, the high energy photons may undergo γγ absorption. It is well known that gamma rays interact most effectively with background photons of energy [26]

ε_b = 4(mc²)²/ε ≈ (1 TeV/ε) eV,   (25)

and the corresponding cross-section has a peak at σ₀ ≈ σ_T/5. The optical depth of high energy photons then becomes τ ≈ R/λ (Eq. (26)), where λ is the mean free path of the infrared photons, set by the corresponding photon density n_b ≈ L(ε_b)/(4πR²cε_b) (Eq. (27)), and L(ε_b) is the infrared luminosity. After substituting Eq. (27) into Eq. (26), one can derive an expression for the optical depth of high energy photons (Eq. (28)) [27], where L_Edd ≈ 6.5 × 10⁴⁶ erg/s is the Eddington luminosity of 1ES 0806+524.
As we see from the figures, for producing radiation in the 1 TeV domain, one has to accelerate the electrons up to γ ≈ 2.8 × 10⁵. For 1ES 0806+524 no infrared data have been published so far; on the other hand, since the TeV emission is detected, the infrared luminosity of 1ES 0806+524 must be less than 5.6 × 10⁻⁷ L_Edd ≈ 3.6 × 10⁴⁰ erg/s (see Eq. (28)). One has to note that the centrifugal acceleration leads to a TeV variability timescale of the order of ∼(1−2) days [18]. It is worth noting that TeV blazars exhibit variability on hour-to-minute timescales, but this particular feature has not been detected for 1ES 0806+524.
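Both the threshold energy of Eq. (25) and the quoted infrared-luminosity bound are easy to verify numerically; a sketch using only the numbers already given in the text:

```python
# Check Eq. (25) and the quoted infrared-luminosity bound.

mc2 = 0.511e6                    # electron rest energy, eV

eps = 1.0e12                     # a 1 TeV gamma-ray, in eV
eps_b = 4.0 * mc2**2 / eps       # target photon energy, Eq. (25)
print(f"eps_b ~ {eps_b:.2f} eV")                     # ~1 eV, i.e. infrared

L_edd = 6.5e46                   # Eddington luminosity of 1ES 0806+524, erg/s
L_ir_max = 5.6e-7 * L_edd        # transparency condition from Eq. (28)
print(f"L_IR upper bound ~ {L_ir_max:.2e} erg/s")    # ~3.6e40 erg/s
```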
As our investigation shows, the centrifugally accelerated outflows may provide the detected VHE emission of 1ES 0806+524 via the ICS if the parameters are chosen appropriately. From the aforementioned set of parameters, a very important one is the density of relativistic electrons, which has to lie in the interval [0.7−1.4] × 10⁻³ cm⁻³ in order for the centrifugal acceleration to explain the detected high energy emission. We consider this a definite test of whether the mentioned mechanism is feasible.
Summary
(i) For explaining the observed TeV energy radiation from 1ES 0806+524 detected by VERITAS, we have considered the inverse Compton scattering of disk thermal photons against centrifugally accelerated ultra-high energy electrons.

(ii) We have shown that, due to the very strong magnetic field, the electrons are in the frozen-in condition, which leads to the co-rotation of particles. Due to the co-rotation, electrons accelerate centrifugally almost up to the light cylinder surface, and in its nearby zone the electrons upscatter soft thermal photons, causing the limitation of the particles' kinetic energy. We have also shown that the role of the curvature radiation in limiting the maximum kinetic energy is negligible with respect to the inverse Compton losses.

(iii) We considered the γγ absorption of high energy photons in the background field of soft infrared photons. From the observationally evident fact that the TeV radiation escapes the central source, we estimated the maximum value of the infrared luminosity, which has not been detected so far.

(iv) We have found that for physically reasonable parameters the ICS occurs in the Thomson regime. It has been shown that for the interval of inclination angles 0.7°−0.95°, with the best-fitting parameters, the resulting emission energies (0.3 TeV−1 TeV) and the luminosity output, 10³⁹ erg/s, are in good agreement with the observed data. On the other hand, if the parameters are chosen in certain physically reasonable intervals, the high energy emission is also possible. Therefore we offer the following test: if one indirectly measures the density of the relativistic electrons and finds its value to be in the range [0.7−1.4] × 10⁻³ cm⁻³, then the centrifugal acceleration is a feasible mechanism for producing the TeV photons via the ICS.

In the paper we have made several approximations. The first limitation is that we studied straight magnetic field lines, although, especially in the very vicinity of the light cylinder, the curvature of the field lines becomes significant. Therefore, the generalization of the present approach will be the next objective of our future work.
The next approximation concerns the fact that, according to our model, the magnetic field is not influenced by the plasma kinematics. On the other hand, for real astrophysical scenarios, it is obvious that plasmas may affect the overall configuration of the magnetic field. For this reason, it is very important to generalize the approach presented here and see how the collective phenomena change the results.
"year": 2010,
"sha1": "87b624773691e543397f21d3aa5ab0b59eace9c5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0901.1235",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "87b624773691e543397f21d3aa5ab0b59eace9c5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Seasonal Changes in Essential Oil Constituents of Cystoseira compressa: First Report
Marine macroalgae are well known to release a wide spectrum of volatile organic components, the release of which is affected by environmental factors. This paper aimed to identify the essential oil (EO) compounds of the brown alga Cystoseira compressa collected in the Adriatic Sea monthly, from May until August. EOs were isolated by hydrodistillation using a Clevenger-type apparatus and analyzed by gas chromatography coupled with mass spectrometry (GC-MS). One hundred four compounds were identified in the volatile fraction of C. compressa, accounting for 84.37-89.43% of the total oil. Samples from May, June, and July were characterized by a high share of fatty acids (56, 69, and 34%, respectively), with palmitic acid being the dominant one, while in the August sample a high content of alcohols (mainly phytol and oleyl alcohol) was found. Changes in the other minor components, which could be important for the overall aroma and biological activities of the algal samples, have also been noted during the vegetation periods. The results of this paper contribute to studies of algal EOs and present the first report on C. compressa EOs.
Introduction
More than 70% of the Earth's surface is covered with oceans and seas, so it is not surprising that marine ecosystems are extremely complex, with tremendous biodiversity. Recently, there has been a growing trend in the investigation of new, inexpensive, and valuable sources of biologically active compounds, and products of marine origin, like algae, are among the most interesting sources, due to their production of a great variety of unique secondary metabolites [1]. Algae are vegetative organisms widely distributed throughout the world. Although many of them are of commercial importance in some parts of the world due to their nutritional, biological, and functional properties, only a small number of species are currently exploited for industrial food applications [2]. Studies on marine algae are usually focused on the isolation of structurally different bioactive compounds like polysaccharides (e.g., fucoidan, alginate, and laminarin), photosynthetic pigments (carotenoids, chlorophylls, and phycobilins), sterols, polyphenolics, etc. [3-9]. In comparison to the research on these non-volatile compounds, studies on volatiles of marine origin are still scarce.
Essential oils (EOs), as a special chemical group of algal metabolites, play an important role in communication in marine ecosystems, both interspecies and intraspecies, as well as in interactions with the surrounding environment. These compounds are involved in various algal ecological functions: they serve as defenses against predators and herbivores; they act as pheromones and allelochemicals; they take part in the adaptation to abiotic stresses; and they are important for the inhibition of bacterial and/or fungal fouling [1,10-12]. The essential oil metabolites present in marine algae species comprise a mixture of different chemical classes such as hydrocarbons, fatty acids, esters, alcohols, carboxylic acids, aldehydes, ketones, terpenes, polyphenols, furans, pyrazines, pyridines, halogenated amines, and sulphur compounds [1,2]. The production of algal EOs is closely related to the physiology of the species [11,12]. Studies on EOs of green and red algae mainly report the presence of monoterpenoids, halogenated compounds, and sulphur compounds that have a low impact on their aroma perception. In contrast to those species, brown algae are responsible for strong and pleasant marine odors (the so-called "beach note"), which is usually related to the presence of C11 hydrocarbons. Among other aroma compounds, these species contain a wide range of monoterpenoids and sesquiterpenoids [13]. Although the functions of algal EOs are similar to those in terrestrial plants, studies dealing with algal EOs and their role are still at a primary stage, and there is a lack of reports on this subject [12]. EO profiles differ between species, but they are also influenced by various factors such as age, geographical origin, growth and nutrition conditions, season, temperature, light, salinity, and processing/extraction parameters [2,12].
There are about 40 species of algae in the genus Cystoseira (Phaeophyta), which are widely distributed along the Eastern Atlantic and Mediterranean coasts [14], and C. compressa is one of the most widespread brown algae in the Adriatic Sea. C. compressa is attached to the substratum by a small disc and its thallus shows morphological plasticity. Changes are most evident in the spring/summer period, when the winter rosette shape of the branches shifts to dense and ramified branches with aerocysts [15]. These changes might be related to the length of the photoperiod and the sea temperature, and their effect on the EOs or other chemical components of the algae (phenolic profile, pigments, etc.) is unknown.
Compounds from C. compressa have been characterized from extracts and associated with various biological activities, e.g., polysaccharides and phlorotannins with antioxidant activity [16,17], phlorotannins with antidiabetic activity [17], and phenolic compounds with antibacterial activity [5]. Furthermore, a connection between total phenolic content and seawater temperature has been observed, showing that the amount of phenolics is influenced by the temperature [18]. However, while characterization of EO components has been done for C. sedoides [13], C. barbata [19,20], C. crinita [19], and C. tamariscifolia [21], to our knowledge there are no reports on the volatile compounds of C. compressa or their comparison over the spring/summer period, when the algae are under the influence of the thallus change, a rise in sea temperature, and an intensive photoperiod. For these reasons, this work aimed to study the EO profiles of C. compressa, collected in the Adriatic Sea monthly from May until August, to identify the molecules characterizing this species.
Results and Discussion
Seaweeds are widespread around the world and are of commercial importance in some regions, where they are consumed fresh, dried, or as an ingredient. Although in some regions they are widely used in the human diet, only a small number of species are currently exploited for food applications. One of the main limitations on the use of algal materials in the food industry is their flavor, which is the main parameter of quality directly related to consumers' acceptance of food [2]. In comparison to terrestrial odoriferous plants, only some algae possess an attractive, pleasant odor and characteristic marine flavor, and, therefore, great potential to be used in various food and cosmetic preparations [1,13].
Different extraction methods like hydrodistillation, solvent extraction, microwave-assisted extraction, supercritical fluid extraction, headspace extraction, etc., are commonly used for the isolation of volatile analytes from algal materials. In recent times, the conventional extraction procedures are increasingly being replaced by novel techniques that are less time-consuming, often (fully) automated, more environmentally friendly, require less solvent, and are more efficient [8]. However, despite all its disadvantages (duration, high temperatures, low efficiency, potential degradation of compounds, etc.), hydrodistillation is still the most used method. On the other hand, identification of the EO components is usually performed using capillary gas chromatography coupled with mass spectrometry (GC-MS), as this method of characterization covers a wide spectrum of compounds, from non-polar to polar ones [11,13].
The chemical profile of the volatile fractions and the relative content of detected components obtained by hydrodistillation of C. compressa are reported in Table 1. One hundred four compounds were identified, accounting for 84-89% of the total chemical composition. Figure 1 presents the relative share of the sum of compounds from the same chemical class, to give better insight into the algal EO profile. The GC-MS chromatograms of the essential oils obtained from C. compressa collected in different months are shown in Figure 2. Samples from May and June were characterized by a high share of fatty acids, while in the July and August samples the dominant chemical class of compounds was alcohols (34 and 48%, respectively). EOs from May and June were characterized by an extremely high content of fatty acids, 56 and 69%, respectively, while an almost two-fold lower result was obtained for the July extract. The major acid in all samples was palmitic acid (C16:0), with the highest amount found in the May extract (40.15%), and shares of 31.92%, 26.81%, and 18.62% in the June, July, and August samples, respectively. It is interesting to note that this saturated fatty acid was present in high amounts in all samples and followed a regular trend characterized by a continuous decrease in content over the collection months. In comparison to the May sample, a more than two-fold lower amount was detected in the August sample. The May extract also contained the highest share of eicosanoic acid (2.58%). Significant amounts of this acid were also found in June (0.51%) and July (1.14%), while it was not detected in the August sample. The content of all other fatty acids was the highest in the June fraction: palmitoleic acid (11.94%) > myristic acid (7.60%) > lauric acid (3.78%) > (Z)-dodec-5-enoic acid (2.79%) > oleic acid (1.64%) > arachidonic acid (1.45%) > stearic acid (0.36%). It is well known that fatty acids with >12 carbon atoms are odorless, so although present in high amounts they do not significantly affect the flavor of the samples [2].
Among monounsaturated fatty acids, the presence of (Z)-dodec-5-enoic acid was confirmed only in the June sample, where the content of oleic acid was also the highest in comparison with the other samples. Arachidonic acid was the only detected polyunsaturated acid, with the highest amount again found in the June sample, though significant amounts were also detected in May (0.96%). Cvitković et al. [8] reported the domination of total unsaturated fatty acids in the lipid fraction of different Adriatic brown algae species, including two Cystoseira species, C. barbata and C. compressa. These authors also reported the domination of oleic acid among unsaturated fatty acids, as well as the presence of arachidonic acid in high amounts in brown algae samples. Similar results were also reported by Oucif et al. [4]. Kord et al. [22] also identified fatty acids (14 to 20 carbon atoms), of which palmitic acid was the major compound in C. sauvageauana lipid fractions, while among polyunsaturated fatty acids, arachidonic acid was found in the highest concentration.
Compounds from the chemical class of hydrocarbons, alkanes and alkenes, are common in the majority of marine macroalgae EOs [1]. Although unsaturated hydrocarbons from C8 to C19 with 1 to 4 degrees of unsaturation are common, our study mainly found compounds with one double bond. From the class of hydrocarbons, the straight-chain saturated hydrocarbon 11-(pentan-3-yl)henicosane was found in high amounts (from 0.43% to 1.41%), as was hexadecane (from 0.08% to 1.30%). Both of these compounds followed similar trends, with the lowest concentrations found in the July sample, while their content significantly increased in the following collection month, with the highest concentration in August. Also, pentadec-1-ene was found in July (0.14%) and in an even higher amount in August (2.62%), while in the first two collection months this compound was not detected. The presence of squalene, which is the biosynthetic precursor of triterpenes and steroids, was confirmed in all samples, with the highest amounts detected in July.
Previous studies on volatile components from Cystoseira species confirmed the domination of hydrocarbons in C. barbata, while this class of compounds was found only in traces in C. crinita, where the majority of compounds were monoterpenoids [19]. The domination of hydrocarbons in the volatile oil of C. barbata was also reported by Ozdemir et al. [20], while Bouzidi et al. [13] reported that the most important class of VOCs obtained by hydrodistillation of C. sedoides was fatty acids and derivatives, with a content of 53.1%. Gressler et al. [11] reported the identification of hexadecane in different algae, among which were two Cystoseira species: C. barbata and C. mediterranea. Furthermore, heneicosane was also detected in C. barbata [20]. In their study, Bouzidi et al. [13] confirmed the presence of hexadecane and pentadec-1-ene in samples of the Algerian endemic alga C. sedoides. It is interesting to note that these compounds were found in samples obtained by hydrodistillation, while they were not present in fractions obtained by focused microwave hydrodistillation and supercritical fluid extraction, which could be confirmation that aggressive isolation conditions (e.g., high temperature, long extraction duration, oxidation, and contact with water) cause the degradation of volatiles.
Samples from July and August contained high percentages of alcohols, 34% and 48%, respectively. Phytol, an acyclic diterpene alcohol, also known as a precursor of vitamin E and a degradation product of chlorophyll, was found in all samples at the highest percentage among alcohols, especially in the August sample, where its content was 14.20% of all detected compounds [1]. This compound was detected at the lowest concentration in the June sample (2.9%), but in the next two months its content was almost 2- and 5-fold greater. El Amrani Zerrifi et al. [21] confirmed the domination of phytol in C. tamariscifolia, as did Bouzidi et al. [13] in C. sedoides. Other dominant components from the chemical class of alcohols were oleyl alcohol and n-nonadecan-1-ol, for which a regular increase in amount over the collection months was recorded. The presence of oleyl alcohol in the May sample was not confirmed, while its content in June was 0.68%, in July 5.76%, and in August almost 6%. On the other hand, the share of n-nonadecan-1-ol was 1.67% in May, 3.13% in June, 4.13% in July, and 4.34% in August, and an increase in its concentration over the collection period could be noted. The great impact of unsaturated alcohols on the overall aroma and sensory perception of food has been previously reported [2].
The share of ketones was 9% in the May sample, 13% in the August sample and 17% in the July sample, while the lowest amount was found in the June sample (only 2%). Among the detected compounds, (E)-4-(2,6,6-trimethyl-1-cyclohexen-1-yl)-3-buten-2-one (ranging from 0.53% to 5.41%) and 6,10,14-trimethylpentadecan-2-one (ranging from 0.75% to 5.98%) were found in the highest amounts. It is interesting to note that the amounts and the variations in their content among samples followed the same trend for both compounds: July > August (5.72% and 5.41%, respectively) > May (2.76% and 2.58%, respectively) > June. Bouzidi et al. [13] also reported the identification of 6,10,14-trimethylpentadecan-2-one in C. sedoides. Among the other detected ketones, significant amounts of tridecan-2-one and dec-1-en-3-one were found. The first compound was detected in the highest amount in the July sample (0.67%), while the second was found in the May sample (0.42%). The July sample was also rich in the monoterpene ketone geranyl acetone (0.70%).
Among all detected compounds, aldehydes, which are important odor compounds, were detected at the lowest percentages in all samples (1-2%), with only a few compounds present at a percentage above 0.10%. Aldehydes of low molecular weight are associated with unpleasant aromas, while those of higher molecular weight are responsible for sweet and fruity notes [2]. Tridecanal was dominant in all samples, ranging from 0.37% in June to 0.81% in July. Tetradecanal was found in the highest amount in August (0.20%), while its presence in June was not confirmed. On the other hand, (Z)-undec-4-enal was found in the May sample at a percentage of 0.36%, while in the other samples it was not detected.
The share of esters in the first two collection months was equal (10%), while in July and August it was significantly lower, at 6% and 4%, respectively. The dominant ester was methyl arachidonate, with the highest amount found in the May sample (4%). Its content was significantly lower in June (2.49%), July (1.55%), and August (1.74%). Benzoic acid esters were also found in high amounts in all samples, especially the tetradecyl ester in the June sample (4.37%). The highest contents of the other esters, namely pentadecyl and tridecyl benzoate, were also detected in the samples harvested in June. Finally, it is interesting to note that all of these compounds, the tri-, tetra-, and pentadecyl esters, showed a similar trend across the collection months: June > July > May > August.
Terpenes are a class of compounds that play an important role as chemical defense agents, but they are also involved in metabolic processes and functions such as the stability of cell membranes and photosynthesis [1]. It has been reported that terpenes are responsible for the distinctive ocean smell of algae, particularly acyclic and cyclic non-isoprenoid C11 hydrocarbons, while disagreeable odors are related to amines and halogenated, sulphurous, and other specific compounds [1]. However, for the detection of polycyclic aromatic hydrocarbons, substituted phenols, and sulphur compounds, liquid chromatography is required, as they are semi-volatile [11]. From the group of terpenes, the terpene ketone farnesyl acetone (6,10,14-trimethylpentadeca-5,9,13-trien-2-one) was found in the highest amount in all samples (from 0.57% in June to 1.28% in July). The joint FAO/WHO Expert Committee on Food Additives has put this compound on its list of flavoring agents, as it is characterized by an intensely sweet and floral odor, which makes it interesting for further applications [23]. Among the others, alpha-cadinol was dominant in the May sample at 1.24%; its content was significantly lower in June, while in the samples from the other two collection months it was not detected. Bouzidi et al. [13] also reported the presence of this compound in their study, though again, only in samples prepared by hydrodistillation.
Previous studies on Cystoseira species have confirmed the potential health benefits of algal extracts and of individual compounds present in them. Bruno de Sousa et al. (2017), in their review paper, reported various biological activities of Cystoseira algae samples, among which antioxidant, antimicrobial (antibacterial, antifungal, antiviral), cytotoxic, antiproliferative, anticancer, antifouling, anti-inflammatory, antileishmanial, cholinesterase inhibitory, antidiabetic, anti-obesity, and hepatoprotective properties were confirmed by different studies.
Among recent studies, Hentati et al. [16] detected good antioxidant activity of water-soluble polysaccharides (a fucoidan and a sodium alginate), while antidiabetic and antioxidant activity of phlorotannins extracted from C. compressa was reported by Gheda et al. [17]. Abu-Khudir et al. [24] investigated and confirmed the good free radical scavenging activity of C. crinita extracts, their antimicrobial activity against various pathogenic microorganisms, and their strong cytotoxic effects against a panel of cancer cells. The authors, using GC-MS analysis, also confirmed the presence of a vast array of medicinally valuable phytochemical compounds belonging to various classes. Ahmed et al. [25] investigated the antimicrobial and cytotoxic activity of the extract, fractions, and pure compounds from C. trinodis, and their results pointed to the good activity of the samples.
Although the yield of EOs obtained from algal samples is low, C. compressa could be an interesting subject for further analyses of algal biological activities, given the results of previous studies and the interesting chemical profiles of the isolates (the EOs reported here and the extracts from our other study).
Algal Material
The wild-growing populations of C. compressa (Phaeophyceae) were collected monthly from May to August 2020 on the coast of Čiovo Island, Central Dalmatia, Croatia (43.493389° N, 16.272505° E). Samples were collected throughout a lagoon at 25 points, at depths ranging from 20 to 120 cm. During every sampling, the sea parameters (temperature in °C and salinity in Practical Salinity Units, PSU) were measured using a YSI Pro2030 probe (YSI Inc., Yellow Springs, OH, USA), and the obtained results are shown in Figure 3. The sea temperature rose over the months of sampling, while the salinity changed under the influence of freshwater springs (typical only in periods with sufficient rainfall, while in periods of drought the springs cease to flow). Pre-treatment of the algal material involved removal of sand, epiphytes, and other organisms from the surface by washing with tap water. The algal material was air-dried (for 7 days at room temperature in a shaded and aerated place) and the dried algal material was used for the isolation of the volatile organic compounds.
Extraction of Essential Oils
C. compressa essential oils were obtained by hydrodistillation of dried algal material (100 g) immersed in a flask with distilled water (1000 mL). The extraction process was performed in a Clevenger apparatus (Deotto Lab, Zagreb, Croatia) for 3 h. Pentane and diethyl ether (1:1, v/v) in the inner tube of the apparatus were used for trapping the volatile compounds carried through the system by the vapor. Finally, after hydrodistillation, the distillate was dried over anhydrous sodium sulphate, while nitrogen was used to evaporate the organic solvent. The samples of essential oils were stored at +4 °C in the dark until analysis [21,26,27].
GC-MS Analysis of Volatiles
The seaweed EOs were analyzed by GC-MS (Shimadzu QP2010, Shimadzu, Kyoto, Japan) using an autosampler and a DB-5 60 m × 0.25 mm × 0.25 µm column (Agilent Technologies Italia SpA, Milano, Italy). The EOs were resuspended in hexane and 1 µL was injected under the following gas chromatographic conditions: injection temperature 260 °C, interface temperature 280 °C, ion source 220 °C, carrier gas (He) linear velocity 30 cm/s, split ratio 1:10. The oven temperature was programmed as follows: 40 °C for 4 min, from 40 °C to 175 °C at a rate of 3 °C/min, from 175 °C to 300 °C at 7 °C/min, then holding for 10 min. EO constituents were identified by comparing their mass spectra with those reported in the literature and in the NIST Mass Spectral Database (NIST 08, National Institute of Standards and Technology, Gaithersburg, MD, USA). For each sample, the volatile profile composition was expressed as the relative percentage of each single peak area with respect to the total peak area.
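For readers reproducing the method, the oven program duration and the relative-percentage quantification are straightforward to compute; a minimal sketch (the peak areas below are illustrative placeholders, not measured values):

```python
# Total GC run time implied by the oven program described above.
segments = [
    ("hold 40 C", 4.0),                          # min
    ("40 -> 175 C at 3 C/min", (175 - 40) / 3.0),
    ("175 -> 300 C at 7 C/min", (300 - 175) / 7.0),
    ("hold 300 C", 10.0),
]
total = sum(t for _, t in segments)
print(f"total run time ~ {total:.1f} min")       # ~76.9 min

# Relative-percentage quantification: each peak area divided by the
# total area of all integrated peaks (illustrative areas only).
peak_areas = {"palmitic acid": 812_000, "phytol": 288_000, "squalene": 45_000}
total_area = sum(peak_areas.values())
for name, area in peak_areas.items():
    print(f"{name}: {100.0 * area / total_area:.2f}%")
```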
Conclusions
This paper is the first report providing information about the influence of the harvest period on the essential oil aroma compounds of C. compressa, and it offers insight into the impact of individual components on the general sensory perception of the alga. According to the results obtained, C. compressa could be considered a source of novel chemical entities with great potential to be used as an ingredient in different industrial applications such as functional foods, pharmaceuticals, and/or cosmeceuticals. An increase in the content of some of the key aroma compounds during the vegetation period has been noted, while some detected compounds are probably products of degradation or modifications caused by aggressive isolation conditions. As new extraction methods have developed greatly in the last few years and have been widely used in the field of natural compounds due to their numerous benefits in comparison to conventional ones, this scientific research is still ongoing and opens a wide spectrum of possibilities for future research.
"year": 2021,
"sha1": "498769d0c1e32359e3091613b8d2106771cf957d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/26/21/6649/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e5792742b91dabee3f2bcbeb11a17063c0352dbb",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Concentrated Phosphorus Recovery from Food Grade Animal Bones
Disrupted nutrient recycling is a significant problem for Europe: phosphorus and nitrogen are wasted instead of being used for plant nutrition. Mineral phosphate is a critical raw material, which may contain environmentally hazardous elements such as cadmium and uranium. Therefore, phosphorus recovery from agricultural and food industrial by-product streams is a critically important key priority. Phosphorus recovery from food grade animal bone by-products has been researched since 2002, and a specific zero-emission autothermal carbonization system, called 3R, has been developed at an economical industrial scale, providing the animal bone char (ABC) product as output. Different animal bone by-products were tested under different conditions at 400 kg/h throughput capacity in the continuously operated 3R system. Different material core treatment temperatures (between >300 °C and <850 °C) were combined with different residence times under industrial productive processing conditions. It was demonstrated that a material core treatment temperature <850 °C with 20 min residence time is necessary to achieve high quality ABC with useful agronomic value. The output ABC product has a concentrated phosphorus pentoxide (P2O5) content of >30%, making it a high quality innovative fertilizer.
Introduction
Disrupted nutrient recycling is a serious problem for Europe and all over the world. Phosphorus (P) and nitrogen (N) are lost across environmental media during food production or are wasted instead of being used for plant nutrition [1].
Phosphorus occurs in many minerals, of which apatite, Ca5(F,Cl,OH)(PO4)3, is the most abundant and by far the most important group [2]. Apatite, a group of phosphate minerals, has two major natural forms with concentrated P content: mined mineral phosphate and animal bones of biological origin. The term phosphate rock (PR) refers to rock containing phosphate minerals, usually apatite, which can be commercially exploited, either directly or after processing, for commercial applications [2]. Phosphate rocks of sedimentary origin typically have 30-35% phosphorus pentoxide (P2O5), whereas those of igneous origin contain marginally higher P2O5, typically 35-40% [3].
Phosphate rocks, by their geological and mineralogical nature, contain a host of environmentally hazardous chemical elements such as cadmium (Cd), uranium (U), lead (Pb), mercury (Hg) and arsenic (As), among others.
Superphosphate fertilizers are particularly abundant in these hazardous elements, and they contaminate agricultural soils when used as fertilizer [4].
U is an accompanying element of PR, particularly that of sedimentary origin. Depending on the geographical and biogenic origin, the uranium concentration of PR may be as high as 150 mg kg−1 in sedimentary and 220 mg kg−1 in igneous PR [5]. In Germany, the use of P fertilizer from 1951 to 2011 has resulted in a cumulative application of approximately 14,000 t of U on agricultural land, corresponding to an average cumulative loading of 1 kg U per hectare [6].
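The quoted per-hectare loading can be sanity-checked against the size of Germany's agricultural area; the area figure used below (~16.7 million ha) is an assumed round value, not a number from the cited study:

```python
# Sanity check of the cumulative uranium loading quoted above.
# The agricultural area of Germany (~16.7 Mha) is an assumed round figure.

u_applied_t = 14_000.0                 # t of U applied 1951-2011 [6]
agri_area_ha = 16.7e6                  # ha, assumed German agricultural land

loading_kg_per_ha = u_applied_t * 1000.0 / agri_area_ha
print(f"average loading ~ {loading_kg_per_ha:.2f} kg U/ha")  # ~0.84, i.e. ~1 kg/ha
```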
Reserves of PR used to make such fertilizers are finite, especially those with low Cd and U content, and concerns have been raised that they are in danger of exhaustion. Long term global food security requires the sustainable supply of P, a key resource for soil fertilization that cannot be substituted [7]. Phosphate rock, which is the main fertilizer constituent, was identified by the European Commission (EC) as a critical raw material in 2014, and the listing was updated in 2017 [8].
The estimated yearly consumption of manufactured phosphorus mineral fertilizers in the European Union (EU) 27 member states (MS) was 1.11 Mt P in 2014, based on data provided by Fertiliser Europe [9]. This is equivalent to 2.55 Mt/year of mineral phosphorus fertilizer expressed as phosphorus pentoxide (P2O5).
For phosphate fertilizers, the EU is currently almost entirely dependent on import of PR mined outside of the EU (more than 90% of the phosphate rock used in the EU is imported, mainly from Morocco, Tunisia and Russia) [10]. The concentration of phosphorus mines and gas fields outside the EU makes the EU fertilizing product industry and European society dependent on, and vulnerable to, imports, high raw material prices and the political situation in supplying countries [1]. Therefore, P recycling is one of the key priorities of sustainable agricultural systems. Trends and developments on the global PR market are putting the EU's security of supply of PR under increasing pressure [11].
The environmental, economic, and social implications of food waste are of increasing public concern worldwide [12]. The EU alone wastes 90 million tons of food every year, or 180 kg per person. Much of this is food that is still suitable for human consumption [7]. Losses from food processing mainly originate from the slaughtering of animals and the subsequent removal of P-rich waste materials (e.g., animal bones) from the biogeochemical P cycles. This loss flow equals 294 kt P year−1 [13]. The cattle, fish, and poultry industries are the largest sources of animal food industry waste [14]. Animal-derived food waste contains rather high amounts of protein and cannot be disposed into the environment without proper treatment [14].
According to the Eurostat databases, more than 51 million tons of carcass weight animals (bovine, poultry and pigs) are slaughtered every year in the EU 28 countries [9]. According to Meeker and Hamilton [15], approximately 49% of the live weight of cattle, 44% of the live weight of pigs, and 37% of the live weight of broilers are materials not consumed by humans. According to the European Fat Processors and Renderers Association (EFPRA), the proportion of each animal that is not used for human consumption and is rendered is highest for bovine animals (42%), followed by pigs (34%) and poultry (25%) [16]. The European rendering industry (35 EFPRA members, 26 EU countries) processed more than 17 million t of raw materials in 2014, of which category 3 processed products were 12 million t. EFPRA members process the majority of the total animal by-products in the EU, and a significant additional amount of material streams is produced by non-member organizations [17]. The skeletal system can be up to 20% of the carcass weight, which means that over 4 million tons of animal bone biomass are produced in the EU annually.
Biological apatite is an inorganic calcium phosphate salt. It is also a main inorganic component of biological hard tissues such as bones [18]. The majority of P (85-88%) exists as bone P in the body of vertebrates [19]. Animal bone by-product is characterized by very high P content compared to other animal waste. The P content of bovine and poultry bone is >10.5% on a dry weight basis [20,21]. Other animal by-products have far lower phosphorus content than bone grist. For example, the phosphorus content of liquid pig manure, with 2-10% dry matter content, is 0.20-1.25%, while solid pig manure with 20-30% dry matter content has 1.6-5.08% P content [22].
Since the technological revolution beginning around 1870 and through the 21st century, carbon-related technologies and products have been among the most comprehensively researched sectors for energetic, steel industrial, activated carbon adsorbent, pharmaceutical, biotechnological and other applications. However, in the modern age, new environmental, climate protection and output product safety requirements demand significantly improved and advanced pyrolysis technology performance to better protect the environment and human health. In this context, pyrolysis technology opens new technical, economic, environmental and legal opportunities for advanced production and use of safe Animal Bone Char (ABC) materials.
Pyrolysis (or the carbonization process under true value reductive processing conditions) is the chemical decomposition of an organic substance by heating in the absence of oxygen. The process of pyrolysis transforms organic materials into three different components, i.e., solid, gas and liquid, in different proportions depending upon both the feedstock and the pyrolysis conditions used [23].
The key objective of the pyrolysis process is to produce different types of carbon products. The organic carbon content of pyrolyzed chars fluctuates between 5% and 95% of the dry mass, depending on the feedstock and process temperature used. For instance, the carbon (C) content of pyrolyzed beech wood is around 85%, while that of poultry manure is around 25% and that of bone is less than 10% [24]. Different pyrolysis technology designs have highly varying quality performance in carbonizing organic materials with different heat transfer efficiencies under reductive processing conditions, which is directly reflected in the residual organic toxic content of the output char product, most importantly polycyclic aromatic hydrocarbons (PAHs).
Pyrolysis materials are different types of reductively processed stable carbon materials made in designed quality for specific functional applications: a chemically modified substance is produced from eligible input biomass materials via a carbonization thermochemical treatment production process that fully meets the EU quality, safety, environmental and climate protection requirements. Biochar products are stable carbon pyrolysis materials originating from plant biomass or animal bones, with specific quality and safety parameters for explicit soil functional applications. The nutrient content of biochar mainly depends on the source: plant biochar is a high carbon composition soil improver with no or limited nutrient content, while ABC (Animal Bone Char) is a high phosphorus and calcium concentrated innovative organic fertilizer with high agronomic efficiency and low carbon content.
ABC is an innovative natural phosphorus fertilizer, also known as Bio-Phosphate, made of food grade (category 3) animal bones, with a concentrated >30% P2O5 content and a specific quality for agronomically efficient organic and low input farming applications.
Thus far, bone char has proven to be efficient in the remediation of heavy metal-contaminated soil and water [25,26] and to be suitable for agronomical applications. In previous studies, bone char (15% P, 28% Ca, 0.7% Mg) provided sufficient P and was also able to immobilize Cd in moderately contaminated soils [27]. Meat and bone meal biochar showed potential for soil amendment, as a liming agent, and for the remediation of Pb in contaminated waters [28]. In highly Cd-contaminated soil with sufficient P supply, bone char could increase the yields of lettuce, wheat and potatoes, and at the same time decrease Cd contamination of potato [29]. ABC is also suitable as a carrier for microorganisms, mainly P-solubilizing, acting as plant beneficial and biocontrol agents [30,31]. However, these studies used lab-scale pyrolysis processes, while an industrial scale pyrolysis system processing all types of category 3 and category 2 animal bones and converting them into ABC has only recently been developed.
Directive 2008/105/EC [32] lists PAHs as identified priority hazardous substances and persistent organic pollutants, which are generated by natural or anthropogenic processes such as carbonization. The occurrence of contaminants such as polycyclic aromatic hydrocarbons (PAHs) and potential toxic elements (PTEs) in pyrolysis may derive either from contaminated feedstocks or from pyrolysis conditions which favor their production [23]. Limits for these contaminants in biochar are under discussion and planned to enter into force in EU regulations and voluntary standards [33]. It has been indicated that low temperatures are unable to remove micro pollutants that were originally present in contaminated feedstocks or created during the thermal process [34][35][36]. During an industrialized pyrolysis process, PAHs are the key target and performance indicator contaminants. Generally, it is considered that adequate pyrolysis methods allow a significant reduction of PAH contamination and that high PAH levels indicate substandard production conditions [37]. For example, if the process conditions do not separate solid residues and volatile tar components during cooling phases, a high PAH content may eventually result [38].
For slow-pyrolysis processes (at least 20 min reaction time), most of the weight loss in plant based pyrolysis materials derived from contaminated input materials occurs over the temperature range from 250 °C to 550 °C due to burning out of organics [39][40][41], at least under laboratory conditions. At 500 °C, the pyrolysis reaction time to remove >90% of the organic micro pollutants was less than 5 min [36]. However, animal bone based pyrolysis materials, due to their specific character, require far higher processing temperatures, up to 850 °C material core temperature, and longer residence time under true value industrial production conditions. For all types of pyrolysis materials, it is important to highlight that there is a significant difference between the processing results obtained from laboratory tests and those from true value industrial, market competitive production conditions.
There is a substantial risk of accumulation of non-volatile pollutants such as inorganic metals and metalloids in pyrolysis materials, as these mostly remain in the solid phase and become concentrated during the production process.
When pyrolysis material is irrevocably applied to open and complex soil ecological systems, there is also a direct interlink to subsurface water systems. Therefore, only qualified and safe biochar products can be applied to avoid both soil and water pollution. Currently, there is a lack of harmonized quality and safety standards at European level for pyrolysis material products. However, the complex and strict criteria for safety, quality and functional application efficiency under open environmental and ecological conditions are already unconditionally valid for all types of biochar products according to the Member State regulations. Nevertheless, since there is not yet a harmonized law at EU level, there are Member State differences. Industrial pyrolysis technology, pyrolysis material production and commercial applications above 1 ton/year capacity require Member State Authority permits that conform to the European Union regulations. Less than 1 ton/year pyrolysis processing capacity is counted as research quantity.
The list of 16 polycyclic aromatic hydrocarbons (PAHs), issued by the U.S. Environmental Protection Agency (EPA) in 1976 with a view to using chemical analysis for assessing risks to human health from drinking water, has gained a tremendous role as a standardized set of compounds to be analyzed, especially in environmental studies [42,43]. Although not mandated by law in most countries, it appears that the list has attained the authority of a legal document and that the 16 priority PAH compounds are routinely investigated in many environmental situations [43].
New scientific recognition and newly developed analytical methods have expanded the list to PAH19, which might be further expanded in the future. As an example, some Member States, such as Hungary, have since 2005 required Authority accredited long term agronomic efficiency tests and a maximum potential organic contamination level of <1 mg kg−1 for the sum of 19 PAH congeners for soil improvers [44], while other Member States do not perform agronomic efficiency testing for novel soil improver products and apply limits up to <6 mg kg−1 for the sum of the US EPA 16 PAH congeners. For example, the German Federal Soil Protection and Contaminated Sites Ordinance gives precautionary values for soil with low (≤8%) and high humus content (>8%) regarding the total content of the 16 priority PAHs as defined by the Environmental Protection Agency of the United States (EPA 16 PAHs), namely 3 mg kg−1 soil and 10 mg kg−1, respectively [33].
The Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) regulation, applied in the EU from 1 June 2018, is mandatory for all types of pyrolysis materials, i.e., all products that are thermochemically modified substances, for import, manufacturing and placing on the market, on their own or in preparations, above 1 ton/year capacity.
The conditions for access to the fertilizer market are only partially harmonized at EU level. The fragmentation of the non-harmonized part of the market is seriously hindering trade opportunities [45]. Around 50% of the fertilizers currently on the market, however, are left out of the scope of the Regulation. This is true for a few inorganic fertilizers and for virtually all fertilizers produced from organic materials, such as animal or other agricultural by-products, or recycled bio-waste from the food chain [10].
Many Member States have detailed national rules and standards in place for such non-harmonized fertilizers, with environmental requirements, such as potential toxic element (PTE) contaminant limits, that do not apply to EC-fertilizers [10]. The EU Member States have long regulated the agricultural use of soil improvers and organic fertilizers, such as ABC and other organic products. However, the regulations differ in each EU Member State, so the Mutual Recognition concept is difficult to apply in practice. Therefore, the new EU Fertilizers Regulation revision under the Circular Economy incentive will soon, hopefully by 2019-2020, open a full, EU-wide law harmonization opportunity for many agricultural, food, and industrial by-products and organic material streams, including biochar as well as its formulated products.
The recent initiative on EU fertilizing products (COM (2016) 157 final) is expected to create a level playing field for all fertilizing products at EU level, thereby increasing the industry's opportunities to access the Internal Market while maintaining the national regulations in place for products limited to national markets, hence avoiding any market disruption [45].
Improved and safe output pyrolysis material products enhance the environmental, ecological and economic sustainability of food crop production, while reducing the negative footprint and contributing to climate change mitigation. Terra Humana Ltd. has been the science and technology coordinator and key technology designer for EU Commission co-financed biochar applied research projects since 2002, with prime specialization in ABC recovered bio-phosphate production, full industrial engineering, economic field applications and market uptake evaluations. The core competence of Terra Humana Ltd. is zero emission pyrolysis and carbon refinery science and technology development, oriented to the added value recycling and recovery of phosphorus and other valuable nutrient materials [46]. In this context, Terra Humana Ltd. is the EU and international knowledge center for ABC Bio-Phosphate matured research, science, technology and industrial engineering.
The recently closed EU project of Terra Humana Ltd. is REFERTIL (EU contract number 289785, contracted in 2011, www.refertil.info), the complex development work of which covers the fields of applied biochar (most importantly ABC) science, economical full scale industrialization and commercialization. REFERTIL is a biochar policy support specific project for the conversion of biochar applied science into economical industrial practice, for which a comprehensive biochar law harmonization proposal has been reported to the Commission.
The objective of the present paper is to describe the Recycle-Reduce-Reuse (3R) zero emission pyrolysis technology designed for phosphorus recovery within the REFERTIL project as a case study for industrial scale production of animal bone char.
Origin of the Pyrolysis Materials
All ABC pyrolysis materials were processed by Terra Humana Ltd. Different pyrolysis treatment conditions (treatment temperature and residence time) were used in material treatability tests on different food grade animal bone meal and bone grist by-products, in the continuously operated 3R zero emission industrial pyrolysis equipment with 400 kg/h (3200 tons/year) throughput capacity.
The category 3 food grade bone materials (cattle, chicken and pig bones) originated from the local animal by-product rendering and fat processing industrial factory. The rendering factory processes fresh raw material animal by-products from the meat and livestock industry into usable materials under heat treatment at 133 °C for 20 min at 3 bars of pressure, operating according to the EU animal by-product regulations 1069/2009 and 142/2011.
Different material core treatment temperatures (300, 450, 600 and 850 °C) were combined with different residence times (15, 20, 30, 40, 50 and 60 min) under industrial productive processing conditions. The temperature instruments were calibrated by an accredited calibration laboratory specialized in measuring temperatures up to +1200 °C of different solid materials, liquids, gases and air under ISO 17025 standards. Standard Honeywell ceramic thermocouples with IP67 heads were used for measurements up to +1200 °C.
Representative plant based pyrolysis material samples were received from the UK, Italy, France and Denmark and comparatively tested for PAH16 and PAH19.
Table 1 describes the pyrolysis conditions (treatment temperature (T) and residence time (tres)) and sample IDs of the different EU industrial reference plant based pyrolysis material samples which were collected and analyzed. Careful material specific consideration is needed for all analytical items, as well as for which standards should be applied to investigate the quality and safety of the pyrolysis materials, especially when open ecological soil applications are targeted. The Environmental Testing Laboratory of the WESSLING Group is the first laboratory in Europe to have obtained accredited status for the different analyses of plant based and animal based pyrolysis materials. The accredited analysis of the different samples was done by WESSLING Hungary Ltd. For sampling, the EN-12079 standard was used, while, for sample pre-treatment, method CEN/TC400-EN 16179:2012 was used.
Pyrolysis Material Yield
The yield of ABC was calculated as the proportion of the weight of the pyrolysis product to the weight of the original material.
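As a minimal sketch of this calculation (the function name and the sample masses below are illustrative, not measured data):

```python
def abc_yield_pct(product_mass_kg: float, feed_mass_kg: float) -> float:
    """Yield of ABC: weight of pyrolysis product over weight of the feed, in w/w%."""
    return 100.0 * product_mass_kg / feed_mass_kg

# Example: 184 kg of ABC recovered from 400 kg of bone grist gives 46 w/w%,
# matching the lowest yield reported later (850 °C material core, 20 min tres).
print(f"{abc_yield_pct(184, 400):.0f} w/w%")
```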
Determination of Total Carbon and Total Organic Carbon
Total Carbon was determined according to EN 13137:2001 standard, while the total organic carbon was measured according to EN 13039:2012 standard.
Determination of Total Nutrient Content of Pyrolysis Material
Total N was measured according to the ISO 13878:1998-11 standard. For sample preparation for the determination of total P, K, Ca, Mg, Na and S, the EN 13650:2002 standard was applied, with extraction of aqua regia soluble elements. Total P, K and S were measured by EPA Method 6010C (ICP-OES), while total Mg, Ca and Na were measured by EPA Method 6020A (ICP-MS).
Calculation of Nutrient Content of Pyrolysis Material Expressed in Oxide Form:
P2O5 (phosphorus pentoxide) was calculated from the directly measured total P using the chemical conversion factor P2O5 = Total P/0.436. K2O was calculated from the directly measured total K as K2O = Total K/0.83. MgO was calculated from the directly measured total Mg as MgO = Total Mg/0.603. CaO was calculated from the directly measured total Ca as CaO = Total Ca/0.715. SO3 was calculated from the directly measured total S as SO3 = Total S/0.4. Na2O was calculated from the directly measured total Na as Na2O = Total Na/0.742.
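The following minimal sketch collects these element-to-oxide conversions; the divisors are the stated factors (the elemental mass fraction of each oxide), and the example values are illustrative:

```python
# Divisors from the text: the elemental mass fraction of each oxide,
# so oxide content = elemental content / factor.
ELEMENT_FRACTION_IN_OXIDE = {
    "P2O5": 0.436,  # P in P2O5
    "K2O": 0.83,    # K in K2O
    "MgO": 0.603,   # Mg in MgO
    "CaO": 0.715,   # Ca in CaO
    "SO3": 0.4,     # S in SO3
    "Na2O": 0.742,  # Na in Na2O
}

def oxide_equivalent(element_content: float, oxide: str) -> float:
    """Convert a measured elemental content to its oxide-form equivalent."""
    return element_content / ELEMENT_FRACTION_IN_OXIDE[oxide]

# A hypothetical ABC sample with 13.7% total P corresponds to ~31.4% P2O5,
# consistent with the >30% P2O5 reported for ABC.
print(f"{oxide_equivalent(13.7, 'P2O5'):.1f}% P2O5")

# Consistency check with the Introduction: 1.11 Mt P/year is equivalent
# to 1.11 / 0.436 ≈ 2.55 Mt P2O5/year.
print(f"{oxide_equivalent(1.11, 'P2O5'):.2f} Mt P2O5")
```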
Determination of Phosphorus Soluble in 2% Citric Acid
The EN 15920:2012 standard was used for sample preparation. Phosphorus soluble in 2% citric acid was measured according to EPA Method 6010C (ICP-OES).
Determination of Organic Contaminants
PAH16 and PAH19 were measured according to CEN/TS 16181:2013 standard by gas chromatography (GC).
PCDD/F were measured according to CEN/TS 16190:2012 by gas chromatography with high resolution mass selective detection (HR GC-MS).
Measurement of Potential Toxic Elements
For Hg, sample preparation was done according to EN 13650:2002, with extraction of aqua regia soluble elements. Analysis was done with EPA Method 6020A using ICP-MS.
Cr(VI) was measured by CEN/TS 16318:2012. The determination was done by ion chromatography with spectrophotometric detection (method B).
As, Cd, total Cr, Cu, Pb, Ni and Zn were measured according to EN 13650:2002, by extraction of aqua regia soluble elements and ICP-MS (EPA Method 6020A).
ABC Yields
Table 2 shows the percentage amounts of ABC product and gas/vapor phase when food grade bone grist (category 3, pig origin) was treated at different material core temperatures (T; 850, 600, 450 and 300 °C), with 20, 50 and 60 min residence times (tres), in a continuously operated reductive environment under industrial conditions, with pressure (P) below −50 Pa. The total processing time was longer than the stated residence time (tres) at elevated temperature; for example, 850 °C/20 min denotes the final achieved material core temperature with its associated tres. All these factors are key performance design quality parameters and are specific to each pyrolysis technology design.
Under industrial production conditions, both the material core temperature of the thermal treatment and the residence time significantly affected the yield of solid ABC product and, in parallel, the amount of gas/vapor phase. The lowest yield (46 w/w%) of solid ABC product was achieved at a material core treatment temperature of 850 °C with tres 20 min. The highest yield (71 w/w%) was achieved at low treatment temperature (300 °C), even with a residence time as long as 60 min. The yield of the solid char phase decreased with increasing treatment temperature, while the gas/vapor phase increased. As the majority of ABC is produced from cattle bones, which have a compact and dense character, it was demonstrated for the animal bone feed stream case that a high temperature and tres, such as a material core temperature of 850 °C for at least tres 20 min, is needed under industrial conditions. The results indicated that lower material core treatment temperatures (around 450 °C) generally favor ABC yield, but were still insufficient to obtain high quality products. Higher material core temperatures (600-850 °C) produced lower amounts of ABC; in other words, ABC yield decreases with increasing pyrolysis temperature. The material core temperature strongly affects product quality. Choosing the optimal final material core process temperature under industrial production conditions is highly dependent on the pyrolysis processing design quality and performance, and is ultimately reflected in the economic viability of the commercial production operations.
Total Carbon and Total Organic Carbon Content
Table 3 shows the total carbon and the total organic carbon content of different ABC materials. The total carbon and total organic carbon content of ABC materials produced from different animal bone feedstocks was below 10%.
Total Primary and Secondary Nutrient Content of Different ABC Bio-Phosphate Products
The nutrient content of ABC recovered bio-phosphate products, and its availability, can be used for the evaluation of their agronomic properties. The quality parameters and the agronomic value of all types of ABC products that characterize their usefulness in agricultural applications (such as the nutrient content) should be declared as total values. The information concerning nutrient content should also be communicated with the product. In all cases, the nutrient specification should be considered according to the characteristics and the application performance of the product. The mineral nutrient content of the feedstock is largely retained in the resulting ABC, where it concentrates due to the gradual loss of C, hydrogen (H) and oxygen (O) during processing.
Table 4 shows the primary nutrient (N, P, K) content of different category 3 animal by-products and pyrolyzed ABC samples. The phosphorus content of ABC recovered bio-phosphate materials is expressed both in elemental form and in oxide form (phosphorus pentoxide percentage by weight, P2O5%). In all cases, the total phosphorus content of the output ABC recovered bio-phosphate products was higher than that of the relevant feed materials. The phosphorus content of animal bone varied between 19.5% and 23.9% P2O5, while the final ABC product was more concentrated, with 28-31.9% P2O5 content. The total nitrogen (N) content is expressed as a percentage of dry weight. The potassium (K) content of all ABC samples is expressed both in elemental form and in oxide form (potassium oxide percentage by weight, K2O%). The low nitrogen content of ABC (below 1.5%) results from the nitrogen loss during the pyrolysis process.
Table 5 shows the citric acid soluble P2O5 content of different ABC products (pyrolysis conditions: T = 850 °C, tres = 20 min, P = −50 Pa) compared to an NPK 15:15:15 mineral EC-fertilizer. In the case of ABC products, 39-43% of the total phosphorus content was citric acid soluble, compared to the rapid release mineral fertilizer in which 70% of the total P was citric acid soluble. In this context, ABC is a controlled and/or slow release fertilizer, with a solubility intermediate between phosphate rock and triple superphosphate [47]. In all cases, the total potassium content of the output ABC recovered bio-phosphate products was also higher than that of the relevant feed materials. While the volatile organic compounds were removed during the reductive thermal pyrolysis process under negative pressure conditions, the inorganic elements (having higher boiling points) were enriched in the final ABC products.
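A minimal sketch of how this relative solubility can be expressed (the absolute values below are illustrative, chosen to fall within the reported ranges):

```python
def citric_soluble_fraction(soluble_p2o5_pct: float, total_p2o5_pct: float) -> float:
    """Fraction of the total P2O5 that is soluble in 2% citric acid."""
    return soluble_p2o5_pct / total_p2o5_pct

# Hypothetical ABC sample: 12.6% citric acid soluble P2O5 out of 30% total
# gives a 0.42 fraction, within the 39-43% range reported for ABC; the
# NPK 15:15:15 reference reached about 0.70.
print(f"{citric_soluble_fraction(12.6, 30.0):.2f}")
```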
Table 6 shows the secondary nutrient content of different ABC samples: calcium (Ca), magnesium (Mg), sodium (Na) and sulfur (S). The results also demonstrated that ABC recovered P has a valuable calcium content, expressed as calcium oxide (CaO) (38.7-43.6% CaO).
PAHs Content of Animal Bone Chars and Different Industrially Available Plant Based Pyrolysis Materials
The PAH16 and PAH19 contents of 41 different ABC samples were carefully investigated and compared. The most probable components were naphthalenes (in 24% of the samples, all PAHs present were naphthalenes), most dominantly naphthalene itself (present in 95% of the samples at an average concentration of 1.93 mg kg−1). 1- and 2-methylnaphthalenes, which are not listed under the US EPA PAH16, were present in 70% of the samples at average concentrations of 0.7-0.8 mg kg−1. Phenanthrene showed similar values. Anthracene, fluoranthene and pyrene were present in 36-38% of the samples (over the benchmark), but only at an average concentration of 3 mg kg−1. A summary of these most probable PAHs is shown in Table 7. Other PAHs were negligible. Summarizing the PAH contents of all pyrolysis material cases, the results fully support that analysis of PAH19 as a key target for contamination compounds is important and justified, as 1- and 2-methylnaphthalenes (measured only under PAH19) are very common. In both the ABC and the plant based pyrolysis material cases, naphthalenes were the target PAH contaminations. Naphthalenes were present in 83% of the plant based samples at an average concentration of 1.2 mg kg−1, while the occurrence of 1- and 2-methylnaphthalenes was 55-66%. The PAH19 concentration can sometimes be double the PAH16 concentration, which is also an important point to be considered when defining limit values, especially in environmentally sensitive areas; a PAH16 limit value can be exceeded when measuring PAH19. Table 8 shows examples of the different PAH16 and PAH19 results in different industrially available plant based pyrolysis materials. Despite medium temperature processing and long residence times, the PAH content of the output products from these different pyrolysis technologies was still too high. The results clearly indicate that industrial production technology performance design is one of the most important and critical factors that ultimately impacts the quality of all types of pyrolysis material products, including low end product quality, in relation to biochar soil applications. It is general experience that carbonization of plant materials at industrial scale is faster and less energy transfer demanding than carbonization of animal bone materials, due to the significant difference in the character of the organic content.
Table 9 shows the sum of EPA PAH16 and of PAH19 contents of the 3R technology produced ABC samples, produced at low, medium and high temperature at equal tres conditions, in an industrial scale carbonization process at 400 kg/h throughput capacity. The results clearly indicate that both major types of economically interesting animal bones, but especially cattle bone, require higher processing temperatures down to the material core. The PAH16 and PAH19 concentrations showed a decreasing tendency in all ABC samples produced from 300 °C to 850 °C material core temperature at the same tres. It was demonstrated that the high heat transfer efficiency and thermodynamics of the 3R pyrolysis process do not support the formation of PAHs, while the targeted rapid tres at higher material core temperature is a safe and economically productive solution for processing ABC. The PAH content of any biochar primarily depends on the carbonization processing technology performance design quality, which ultimately defines the processing conditions. Within the REFERTIL project, 41 ABC recovered P samples were investigated, together with many different types of plant based pyrolysis materials. The results clearly confirmed that all high quality ABC contains less than 1 mg kg−1 PAH19. In this context, it has been demonstrated that the advanced thermodynamics of a modern, high quality designed pyrolysis process do not support the formation of PAHs and dioxins.
The 1 mg kg−1 maximum allowable limit of PAH19 is a key performance indicator which, under commercial production driven industrial processing conditions, can be reached only at high material core temperatures, especially in the cattle bone case. The advanced processing condition requirements for plant based pyrolysis materials are far less demanding than for the animal bone case. Therefore, manufacturing and application of ABC Animal Bone Char recovered phosphorus fertilizer require a far higher technological level than plant based biochar soil improver. For all pyrolysis materials under industrial production conditions, the analytical characteristics of a biochar product's quality performance are the identified fingerprint of the pyrolysis/carbonization processing technology engineering design quality performance, and also reflect the feed material characteristics.
PCBs Content of Different Animal Bone Chars
Table 9 shows the PCB7 content of different ABC samples. PCBs were not detected in any ABC case, although a high chlorine content of the input material was also not expected. As dioxins were likewise not detected in any case, we concluded that PCB measurement is a good and, under any circumstances, safe indicator of these persistent and bio-accumulative chemicals.
PTEs Content of Different Animal Bone Chars
Certain potential toxic elements (PTEs), such as mercury, cadmium, nickel and lead, are included in the list of priority substances. Directive 2008/105/EC lists cadmium and mercury as priority hazardous substances.
Measuring PTEs in pyrolysis materials is very important because of the 3-5 times re-concentration tendencies during the phase separated processing; an even higher re-concentration of the PTEs in the final products compared to the feed material is therefore common. This results in a much higher PTE concentration in the solid output products than in the original input average. The PTE content of 41 different ABC samples was carefully investigated. Table 10 shows the potential toxic element (PTE) contents of three different ABC samples. All 41 different ABC samples were well below the strict Member State regulations and the REFERTIL recommended safety limit values.
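As a rough illustration of why non-volatile elements concentrate, here is a minimal mass-balance sketch assuming the element is fully retained in the solid phase (a lower bound; the reported 3-5 times tendencies also reflect the phase separated processing). The feed concentration and yield below are hypothetical:

```python
def char_concentration(feed_mg_per_kg: float, char_yield_fraction: float) -> float:
    """Estimate the PTE concentration in the char, assuming full retention of
    the element in the solid phase while volatile mass is driven off."""
    return feed_mg_per_kg / char_yield_fraction

# Hypothetical feed with 0.5 mg/kg Cd carbonized at the lowest reported solid
# yield (46 w/w%, at 850 °C / 20 min) concentrates to roughly 1.1 mg/kg.
print(f"{char_concentration(0.5, 0.46):.2f} mg/kg Cd in char")
```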
Discussion
For each type of pyrolysis (carbonization) processing technology at full industrial production scale, the engineering design quality and efficiency performance is a critically important element. The pyrolysis technology design performance and quality is reflected in all cases as a unique and recognizable fingerprint in the quality and safety performance characteristics of the output pyrolysis product. In this context, the application of low quality pyrolysis production technology under market competitive production conditions results in low quality and safety pyrolysis material output products with low market value, if any at all. Another important impact factor is the input material characteristics, which are also reflected in the output product characteristics.
The residence time is an important factor for maintaining economical industrial productivity with a short processing time, while it is also unconditionally important for assuring equal quality of the processed carbon products. The Extended Producer Responsibility certification, the product quality and safety labeling documentation as specified by EU regulations, and the customer's "right to know" information are all important parts of the commercialization of biochar products.
All biological materials may vary in their natural composition and character, which is diverse by nature. The advanced carbonization processing must be able to fully compensate for these variations and assure equal quality of the output ABC products. The animal by-product rendering pre-processing sterilization of the input animal bone by-products at 133 °C for 20 min at 3 bars is upgraded by the 3R carbonization final processing at 850 °C for 20 min into a safe performance. This system provides a safe and constant quality ABC product stream, while excluding any biological re- and trans-contamination risks in later agricultural applications, under any varying climatic and soil conditions.
The rendering industry origin, food grade (category 3) and industrial grade (category 2) animal bone grist is processed into ABC Bio-Phosphate. ABC is a macro-porous bio-based fertilizer, with as much as 92% pure calcium phosphate, 8% carbon content, and high nutrient density (>30% P2O5).
ABC provides multiple product functionalities in the organic and low input farming sectors, such as organic fertilizer, soil improver, growing medium and/or fertilizing product blends. The substitution of mineral phosphate imports by recovered phosphorus is an important goal for European agriculture already in the short term, for which ABC is a highly efficient and safe alternative to a large extent at the European industrial scale. The fully safe ABC is used at low doses (100-600 kg/ha, on average 300 kg/ha) and, in a few justified cases, even up to 1000 kg/ha. The ABC bio-phosphate Phosphorus Fertilizer Replacement Value (PFRV) substitution potential at the European scale is estimated already at >5% (>125,000 t/year P) in the short term (<2025) for all agricultural applications. For the organic farming and low input farming sectors, the ABC PFRV is estimated at 100% in the medium term (<2030). The overall European agriculture ABC PFRV in the long term (>2030) is estimated at over 20% (>500,000 t/year P).
The REFERTIL consortium integrated pyrolysis applied scientific and high maturity research, industrial engineering, legal and market competitive economic aspects, and user demands from the horticultural sector. Harmonized and standardized analytical measurements have been developed for the determination of the physico-chemical properties, potential toxic element content and organic pollutants in all types of pyrolysis materials. A proposed quality and safety criteria system has also been set up, which sets maximum inorganic and organic pollutant contents for safe application (Tables 11 and 12). The most important PTEs are Cd, Cr (total Cr and/or Cr(VI)), Cu, Zn, Hg, Ni, Pb and As, while the key organic parameters are polychlorinated dibenzodioxins and furans (PCDD/Fs), the sum of seven polychlorinated biphenyls (PCB7) and the sum of the 16 US EPA priority PAH (PAH16) congeners.
PCB7 is the sum of seven PCBs: PCB 28, 52, 101, 118, 138, 153 and 180. PAH16 is the sum of the following 16 US EPA congeners: naphthalene, acenaphthylene, acenaphthene, fluorene, phenanthrene, anthracene, fluoranthene, pyrene, benzo[a]anthracene, chrysene, benzo[b]fluoranthene, benzo[k]fluoranthene, benzo[a]pyrene, indeno[1,2,3-cd]pyrene, dibenz[a,h]anthracene and benzo[g,h,i]perylene. All proposed parameters are maximum allowable limits at EU level, which Member States may amend to lower limits in justified environmental cases. PAHs are key performance indicators, while PCDD/F and PCBs are not potential contamination risks. In some Member States, 1 mg kg−1 for the sum of the 19 PAH congeners has been applied since 2005 as the maximum limit for soil improvers. This low limit value requirement is already applied with special concern in environmentally sensitive regions. In general, a 4 mg kg−1 PAH16 limit value is proposed. With various pyrolysis processing conditions, it has been verified that the technology critically influences the quality of the product.
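A minimal sketch of such sum-of-congeners benchmarks: the EPA16 name set follows the list above, the extended sum simply totals every measured congener (the third PAH19 addition beyond the two methylnaphthalenes is not named in the text), and the sample concentrations are illustrative:

```python
EPA16 = {
    "naphthalene", "acenaphthylene", "acenaphthene", "fluorene",
    "phenanthrene", "anthracene", "fluoranthene", "pyrene",
    "benzo[a]anthracene", "chrysene", "benzo[b]fluoranthene",
    "benzo[k]fluoranthene", "benzo[a]pyrene", "indeno[1,2,3-cd]pyrene",
    "dibenz[a,h]anthracene", "benzo[g,h,i]perylene",
}

def pah_sums(measured_mg_per_kg: dict[str, float]) -> tuple[float, float]:
    """Return (sum over EPA16 congeners, sum over all measured congeners)."""
    pah16 = sum(c for name, c in measured_mg_per_kg.items() if name in EPA16)
    return pah16, sum(measured_mg_per_kg.values())

# The methylnaphthalenes count only towards the extended (PAH19-style) sum,
# so that total can exceed the PAH16 total - and even a PAH16 limit value.
sample = {"naphthalene": 1.93, "1-methylnaphthalene": 0.8,
          "2-methylnaphthalene": 0.7}
p16, p_all = pah_sums(sample)
print(f"PAH16 = {p16:.2f} mg/kg, extended sum = {p_all:.2f} mg/kg")
```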
Extended producers' responsibility and liability for product safety are to be applied for all types of pyrolysis material cases.
Table 11 summarizes the proposed safety criteria for organic pollutants and Table 12 for potential toxic elements by REFERTIL and the ongoing EU Fertilizers Regulation revision law harmonization.
Conclusions
Disrupted nutrient recycling is a significant problem for Europe, while phosphorus and nitrogen are wasted instead of being used for plant nutrition. Mineral phosphate is a critical raw material, in particular for Europe, which may contain environmentally hazardous elements such as cadmium and uranium. Therefore, phosphorus recovery from agricultural and food industrial by-product streams is a critically important key priority.
A specific zero emission autothermal carbonization system, called 3R, has been developed at economical industrial scale within the EU project REFERTIL, providing the animal bone char product (ABC) as output. This system is the first industrial scale pyrolysis process for phosphorus recovery from food grade animal bone by-products. It has been demonstrated that a material core treatment temperature of 850 °C with 20 min residence time is necessary to achieve high quality and safe ABC with useful agronomic value. At the same time, PAHs have been identified as key performance indicators, and a limit of 1 mg kg−1 is recommended for all types of pyrolysis material cases.
Table 1 .
List of industrial reference plant based pyrolysis material samples.
Table 2 .
Percentage amounts of ABC product and gas/vapor phase of food grade bone grist (category 3, pig origin) treated at different material core temperatures and residence times.
Table 3 .
Total carbon and total organic carbon content of different Animal Bone Char samples.
Table 4 .
Comparison of primary nutrient contents of different animal by-products and ABC samples.
Table 5 .
Comparison of total and P soluble in 2% citric acid content of different ABC samples.
Table 6 .
Comparison of secondary nutrient contents of different Animal Bone Char samples.
Table 7 .
Average concentration and occurrence of PAHs compounds in ABC (PAH19 components marked with bold).
1 Average concentration of 41 different ABC samples.
Table 8 .
Examples of the difference between PAH16 and PAH19 results in different industrially available plant based pyrolysis materials.
Table 9 .
PAH16, PAH19 and PCB7 contents of different Animal Bone Char samples.
Table 10 .
Potential toxic elements (PTEs) contents of different Animal Bone Char samples.
Table 11 .
Proposed safety criteria for organic pollutants by the ongoing EU Fertilizers Regulation revision law harmonization.
Table 12 .
Proposed potential toxic elements (PTEs) safety criteria for pyrolysis material by the ongoing EU Fertilizers Regulation revision law harmonization. | 2019-01-01T18:25:59.628Z | 2018-07-06T00:00:00.000 | {
"year": 2018,
"sha1": "8e6722d2746811cf6201d718c227dd5394c24432",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/10/7/2349/pdf?version=1530863113",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "8e6722d2746811cf6201d718c227dd5394c24432",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Economics"
]
} |
8208445 | pes2o/s2orc | v3-fos-license | Localisation of monoclonal antibodies reacting with different epitopes on carcinoembryonic antigen (CEA)--implications for targeted therapy.
Antibody targeting has potential for selective delivery of cancer therapy. However, there is a wide variation in the degree of antibody localisation in individual patients with colorectal adenocarcinoma. Colorectal adenocarcinomas are composed of glandular structures separated from fibrovascular stroma by a basal lamina which may represent a significant barrier to extravasated antibody. Basement membrane-associated CEA epitopes may be more accessible to antibodies than those which are cytoplasmic or lumenal. We have investigated by immunohistochemistry and in vivo localisation, the extent to which distribution of antigen epitopes influences targeting. Two monoclonal antibodies (A5B7 and EA77) recognising non-overlapping CEA epitopes were reacted immunohistochemically with samples of 39 tumours. Intensity and site of reaction were assessed for basement membrane, cytoplasmic or lumenal surface association. 125I-labelled antibodies were injected into nude mice bearing LS174T tumour. Per cent injected activity per gram was measured in tumour and normal tissues, 24, 72 and 168 h later. Tissues reacted immunohistochemically for CEA were autoradiographed to assess the relationship of injected antibody to target antigen. Immunohistochemistry showed that A5B7 antibody favours basement membrane aspects of malignant glands; in contrast, EA77 concentrated generally on lumenal surfaces. In vivo localisation showed that per cent inj.act g-1 in tumour for A5B7 reached 36.5% at 24 h. EA77 localised to a lesser extent (9.1% at 24 h), despite a longer circulatory half-life. Autoradiography combined with immunohistochemistry showed A5B7 reacting with antigen close to vasculature after 24 h, slowly penetrating deeper parts of the tumour by 72 h. In contrast, EA77 was confined mainly to fibrovascular stroma, showing little labelling of antigen-positive tumour cells. Localisation differences between A5B7 and EA77 may partly be due to accessibility of epitopes on tumour cells.
The administration of radiolabelled antibodies against tumour antigens is of value in the management of colorectal adenocarcinomas, both for tumour localisation in diagnosis using external scintigraphy (Begent, 1985) and for treatment using radioimmunoguided surgery (Blair et al., 1990). The rationale of these techniques depends upon the selectivity of antibody for the target antigen expressed at the tumour site. However, the success of antibody-targeted therapy depends not only upon the specificity of targeting but also on the ability to deliver tumoricidal amounts of therapy to the whole tumour (Humm & Cobb, 1990). Although partial and complete responses have been reported with antibodies directed against lymphomas (Grossbard et al., 1992), reports of responses in colorectal cancer patients are limited (Begent et al., 1989). This has been attributed, in part, to the heterogeneity of antigen distribution (Edwards, 1985) and is illustrated by the wide variation between patients in the amount of antibody (per cent injected activity per kg) localising in tumour. Attempts to investigate which parameters are responsible for this variation have implicated a number of factors (Shockley et al., 1991). There is some evidence that the cellular organisation in colonic adenocarcinomas may influence the efficiency with which antibody penetrates and is retained at the tumour site (Boxer et al., 1992) and that a critically important factor is the inaccessibility of tumour antigen sites (Pervez et al., 1988). Furthermore, Pervez et al. (1989) demonstrated that two antibodies directed against different antigens on colonic adenocarcinoma cells (one present lumenally and the other basolaterally associated) have different distributions in vivo. Colorectal adenocarcinomas are composed of complex glandular structures separated from fibrovascular stroma by a basal lamina. Although this basal lamina can be thin, interrupted or almost absent (Ghadhially, 1985), it may still represent a significant barrier to extravasated antibody molecules (Poznansky & Juliano, 1984; Dvorak et al., 1991). Tight junctions separating apical and basolateral surfaces (Farquhar & Palade, 1963) are thought to play an important role in the maintenance of cell polarity (Herzlinger & Ojakian, 1984), and desmosomal intercellular junctions are present on lateral membranes. In malignant epithelium both these structures may hinder the passage of antibody molecules. Ahnen et al. (1982) demonstrated the localisation of CEA in normal intestine and colon cancer at the ultrastructural level using polyclonal antibodies. The results showed the association of CEA with basement membranes and basolateral surfaces of malignant colonic epithelium in tumours, in contrast to the apical distribution in normal colon, suggesting that the polarity of surface membrane components is disturbed in neoplasia. Antibodies which bind epitopes that are preferentially expressed on the lumenal surfaces of malignant acini, or cytoplasmically, may not be capable of reaching their target in vivo. Antibodies which bind to the basal and basolateral aspects of these glandular structures (Abassi, 1993) may have an advantage, since the target is readily accessible to molecules diffusing through fibrovascular stroma after extravasation from the blood vessels. This paper reports differences in the immunohistochemical distribution of two intact mouse monoclonal antibodies directed against non-overlapping epitopes on CEA.
We have compared the relative efficiency of localisation (per cent injected activity in tumour and tumour to normal tissue ratios) of these antibodies, both as single agents and as a mixture, in the human tumour xenograft model LS174T. Their microdistribution has been studied autoradiographically.
A5B7 and EA.77 are IgG antibodies which react with non-overlapping epitopes on CEA; A5B7 (group 4) and EA.77 (group 2) have been characterised under the Gold classification by Nap et al. (1992). Antibody IDIO (obtained from the CRC Targeting Group) is directed against fetal microvillous membrane antigen and has been used clinically by Blair et al. (1990). B72.3 antibody to TAG-72 antigen (Nuti et al., 1982) and an anti-colon carcinoma antigen antibody, A33 (Welt et al., 1990), were obtained from Celltech. Control sections were processed without the primary antibody and with substitution of the primary antibody with mouse IgG.
Immunofluorescence
Studies were performed on samples of colonic adenocarcinoma and normal flanking tissue from ten patients (eight primary colonic adenocarcinomas and two metastatic adenocarcinomas). A case of squamous carcinoma from the anal margin was used as a control. Tissues were taken fresh from resection specimens and snap frozen in isopentane cooled in liquid nitrogen. Cryostat sections (6 µm) were cut, air dried and then fixed in cold acetone for 5 min. Preliminary immunofluorescence studies were carried out using intact anti-CEA antibodies A5B7 and EA.77. Sections were reacted for 30 min with primary antibody (15 µg ml−1) and washed in Tris-buffered saline (pH 7.4). They were then incubated for 30 min with FITC-labelled rabbit anti-mouse immunoglobulins (Vector Laboratories) diluted in 10% normal human serum, washed as before and then mounted in aqueous media (Vectashield). Sections were examined under fluorescence using the Zeiss Axiophot microscope and photographed.
Immunohistochemistry
A subsequent, more detailed characterisation of antibody binding was performed in an immunohistochemical study of a series of 39 additional samples of primary colorectal adenocarcinoma and five samples of non-neoplastic mucosa. Samples were snap frozen as before and cryostat sections prepared. An avidin-biotin-peroxidase technique was used as previously described. Sections were incubated with primary antibody at a concentration of 15 µg ml−1.
Assessment of immunohistochemical reactivity
Antibody reactivity was scored on an arbitrary scale of intensity ranging from + (weak) and ++ (moderate) to +++ (intense). Equivocal reactions were scored as ±. Immunohistochemical reactivity was assessed by recording the intensity of reaction on both the lumenal surface and the basement membrane aspect of malignant glands. The intensity of reaction of the cytoplasm of the cells was noted (data not shown). For each antibody, at both the basal and lumenal aspects, the number of cases which showed +++ reactions was recorded. Similar data were generated for ++, +, ± and negative reactions.
For each antibody preparation, the number of tumours in which antibody binding showed a significant increase or gradient of reactivity from the basement membrane aspect to the lumenal surface was recorded. The number of cases where the polarisation of binding was in the opposite direction, towards the basement membrane aspect, was also recorded.
Statistical analysis
Using the scoring system of + to +++, values were attributed to the immunohistochemical reaction at the basement membrane aspect and at the lumenal surface of glandular structures, for each antibody. Negative or equivocal reactions scored 0, + scored 1, ++ scored 2 and +++ scored 3. From these data we compared reactions at the basement membrane and lumenal surface using a Mann-Whitney U-test, in the 39 cases, to assess whether there was any significant difference for each antibody. Differences in the intensity of reaction between EA77 and A5B7, both at the basement membrane aspect and at the lumenal surface, were tested for significance using the same statistical test.
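A minimal sketch of this scoring and comparison, assuming per-tumour grades are held in simple lists (the grades below are hypothetical, not the study data):

```python
from scipy.stats import mannwhitneyu

# Numeric scores for the semiquantitative grades, as described in the text:
# negative/equivocal -> 0, + -> 1, ++ -> 2, +++ -> 3.
GRADE_SCORE = {"-": 0, "+/-": 0, "+": 1, "++": 2, "+++": 3}

def to_scores(grades: list[str]) -> list[int]:
    return [GRADE_SCORE[g] for g in grades]

# Hypothetical grades for one antibody across a handful of tumours:
basal = to_scores(["+", "-", "++", "+", "+/-", "++"])
lumenal = to_scores(["+++", "++", "+++", "+++", "++", "+++"])

# Two-sided Mann-Whitney U-test comparing basal vs lumenal reactivity scores.
stat, p = mannwhitneyu(basal, lumenal, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
```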
Antibody localisation
A human colon adenocarcinoma cell line LS174T (Tom et al., 1976) was used to develop a xenograft model in female nude (nu/nu) mice by subcutaneous cell inoculation into the flank. Subsequent passaging was by continuous subcutaneous implantation of 1 mm3 xenograft fragments. All mice used were 2-3 months old, and weighed between 20 and 25 g at the initiation of experiments. Nude mice (nu/nu) were implanted with human colonic adenocarcinoma xenograft LS174T (Pedley et al., 1991) and used 3 weeks after passaging when the mean tumour volume was approximately 1 cm3.
Mice were injected i.v. with 10 µg of either 125I-labelled A5B7, EA.77 or a mixture of both. Antibodies were radiolabelled by the chloramine-T method over ice, to a specific activity of 1 µCi µg−1. After radioiodination, the anti-CEA antibodies A5B7 and EA77 bound CEA antigen in a solid-phase radioimmunoassay. LS174T tumour is a moderately differentiated adenocarcinoma which grows as sheets of malignant cells within which numerous small acini are formed. In most tumours there are central areas of necrosis. The viable tumour is supported by fibrovascular stroma and there are some larger vascular spaces containing red cells. The connective tissue fibrovascular stroma is of mouse origin.
Gamma counting of radioactivity
Animals were sacrificed at 24, 72 and 168 h after injection (four animals per time point) and samples of tumour, blood, liver, lung, kidney, spleen, colon and muscle were taken. Samples were weighed, dissolved in 2 ml of 7 M potassium hydroxide and counted for gamma radioactivity in a gamma counter (Pharmacia Wizard). Percentage injected activity per gram (per cent inj.act g−1) of tissue was calculated as a mean of the values in four mice (Pedley et al., 1987). Adjacent pieces of tissue were fixed in 10% formalin and processed for routine histology. Tumour to blood ratios were calculated.
Differences in tumour to blood ratio between EA77 and A5B7 antibodies, and between A5B7 and the mixture of antibodies (A5B7 + EA77), at each time point, were tested for statistical significance using the Mann-Whitney U-test.
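A minimal sketch of these biodistribution calculations, assuming raw gamma counts and sample weights are available (the counts below are hypothetical, chosen to reproduce the 36.5% and 3.2:1 figures reported for A5B7 at 24 h); significance between groups of four mice can then be assessed with the same Mann-Whitney U-test as in the scoring sketch above:

```python
def pct_inj_act_per_g(sample_counts: float, sample_weight_g: float,
                      injected_counts: float) -> float:
    """Per cent injected activity per gram of tissue."""
    return 100.0 * (sample_counts / sample_weight_g) / injected_counts

# Hypothetical counts for one mouse at 24 h (total injected = 1,000,000 counts):
tumour = pct_inj_act_per_g(73_000, 0.2, 1_000_000)  # 36.5 %/g
blood = pct_inj_act_per_g(22_800, 0.2, 1_000_000)   # 11.4 %/g
print(f"tumour = {tumour:.1f} %/g, tumour:blood = {tumour / blood:.1f}:1")
```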
Autoradiography
Five-micron sections of tumour and normal tissues were cut, mounted on glass slides pretreated with a 2% solution of 3-aminopropyltriethoxysilane, and air dried overnight at 37°C. After dewaxing in Inhibisol they were taken through graded alcohols to distilled water and covered with autoradiographic film. Briefly, in a darkroom, slides were dipped for 8 s in a nuclear emulsion (K5, Ilford) diluted 1:1 in 2% glycerol (preheated to 42°C). They were air dried for 1 h and then placed in a darkbox with silica gel and left overnight. Slides were then transferred to lightproof darkboxes containing silica gel and exposed at 4°C for 4 weeks. Serial sections were first reacted immunohistochemically with A5B7 and EA.77 antibodies to CEA and then covered with autoradiographic emulsion. Autoradiographs were developed as previously described (Pedley et al., 1990) and counterstained with haematoxylin and eosin. Immunohistochemically stained sections were counterstained with haematoxylin only.
Immunofluorescence
In normal colonic mucosa, both A5B7 and EA.77 antibodies were reactive with the apical region of epithelial cells in the upper third of the colonic crypt. Some CEA reactivity was observed at the lumenal surfaces of the middle and lower parts of the crypt, but the intensity of binding was weaker than that observed in the upper part of the crypt. Immunofluorescence binding of anti-CEA antibodies A5B7 and EA.77 in the ten adenocarcinomas showed three different distributions: (a) strong lumenal staining, (b) strong basal staining and (c) strong cytoplasmic reactions. The last was only observed in poorly differentiated tumours. The intensity of binding was variable. The binding of EA.77 was predominantly, and in some cases exclusively, confined to the lumenal surface of glandular acini, while A5B7 reacted primarily at the basement membrane aspect of glands.
Immunohistochemistry
Immunohistochemical reactivity in sections of non-neoplastic colonic mucosa demonstrated differences in the distribution of antibody binding, which could be categorised into two groups. EA.77 (anti-CEA), B72.3 and IDIO all showed strong reactions at the lumenal aspect of the surface epithelium and with the lumenal surfaces of goblet cells lining the crypts; cytoplasmic reactivity was weak. Figure 2a shows the reaction of anti-CEA antibody EA.77. A5B7 (anti-CEA) and A33 showed similar reactions but in addition bound strongly to basal and basolateral cell membranes throughout the crypt epithelium. Figure 2b (A5B7) and c (A33) show the additional reactions of antibody with basal aspects of the crypt epithelium. Cytoplasmic reactions were stronger, especially with A33, which also showed strong reactions with the basal surface epithelium.
Figure 2. Photomicrographs showing immunohistochemical reactivity of a, EA77, b, A5B7 and c, A33 antibodies with normal colonic mucosa (×130).
All of the 39 tumours studied contained areas of moderately or well-differentiated tumour with malignant glandular acini. Table I shows the intensity of immunohistochemical binding in each tumour, for each antibody, at the basement membrane aspect and at the lumenal surface of malignant glandular epithelium. There were significant differences in the intensity of binding of different antibodies. In all but three tumours, A5B7 reacted moderately (++) or intensely (+++) with both the basal aspects and lumenal surfaces of the malignant glands, and there was no significant difference in reaction at each site (P = 0.83); an example is shown in Figure 3a. EA.77 was more heterogeneous in its reactivity at the basal aspect, being moderately or intensely reactive in only 17/39 tumours, with weak or negative binding in the remainder (Figure 3b). In 38/39 tumours there was moderate or intense positivity at lumenal surfaces. There was a significant difference between the reaction of EA77 at the lumenal surface and that at the basement membrane aspect (P = 0.001). Significantly heterogeneous reactions similar to those seen with EA.77 were exhibited by B72.3 (P = 0.001) (Figure 3c) and IDIO (P = 0.001) antibodies (Figure 3d), with 11/39 and 24/39 tumours respectively showing moderate to intense reactivity at the basal aspect. In contrast, A33 showed a more uniform distribution of reaction, similar to that of A5B7 anti-CEA, with all tumours except one moderately or intensely positive at their basal aspects (Figure 3e). There was no significant difference between the reaction at basal or lumenal aspects (P = 0.71). These data, showing differences in immunohistochemical binding, imply that antibody reactivity with malignant glandular structures can be significantly polarised across the epithelium. Table II shows the direction of polarisation of reactivity for each antibody in the 39 tumours. The number of tumours with significant polarisation of binding towards the lumenal surface was much higher for EA.77 than for A5B7. The values for the other intact antibodies screened were also recorded. B72.3 and IDIO, like EA.77, showed many tumours with polarisation towards the lumenal surface. The polarisation of A33 was similar to A5B7, with only a few tumours showing a preference for antibody binding at the lumenal aspect.
The number of tumours in which polarisation was towards the basement membrane aspect was more limited, A5B7, and to a lesser extent A33, showing this trend in a few cases only. In none of the 39 tumours did EA.77, B72.3 or 1D10 show any polarisation of reactivity towards the basal aspect. Statistical analysis shows that the binding of EA.77 and A5B7 at the basal aspect of the glands was significantly different (P = 0.001), but there was no difference in antibody reaction at the lumenal surface (P = 0.28). These results show that the polarisation of binding of EA.77 was in the direction of the lumenal surface. In many cases binding to basal and basolateral margins was either absent or only very weak. There were only three tumours in which A5B7 showed polarisation of binding towards the basement membrane. This was because lumenal surfaces were also reactive in the other 36 cases, so there was homogeneous reactivity throughout malignant glandular epithelium, and this is reflected in the lack of significance using the Mann-Whitney U-test. However, in all but three cases there was moderate or intense reactivity at the basal aspect. Similar observations were made in the analysis of A33 binding (only one tumour showing polarisation to the basal aspect).
Of the antibodies studied, A33 antibody and A5B7 anti-CEA showed the strongest binding with tumour cell cytoplasm. However, in all but two tumours (for A33) or in all tumours (for A5B7), reactions were always equivalent to or weaker than those observed at the basal aspect and lumenally. In contrast, B72.3 antibody, 1D10 antibody and EA.77 anti-CEA antibody all showed cytoplasmic reactions which varied in intensity from tumour to tumour. However, for B72.3 and EA.77, in all 39 cases the intensity of cytoplasmic reactions was always equivalent to or stronger than that observed at the basal aspect.
Xenograft localisation
Figure 4 shows the biodistribution of anti-CEA antibodies, A5B7 and EA.77, in the nude mouse xenograft model at (a) 24 h, (b) 72 h and (c) 168 h after injection. A5B7 gave consistently higher concentrations in the tumour than EA.77 in spite of being cleared more rapidly from the blood and therefore being less available for tumour binding. The mean tumour to blood ratios for EA.77 at 24, 72 and 168 h after injection were 0.36:1, 0.43:1 and 0.76:1 respectively. A5B7 ratios were higher at 3.2:1, 4.97:1 and 9:1. At each time point, analysis of the tumour to blood ratios for individual mice showed that they were significantly increased for A5B7 at 24 h (P = 0.02) and at 168 h (P = 0.02). The tumour to blood ratios were also higher for A5B7 at 72 h. These results demonstrate the superior localisation of A5B7 antibody compared with EA.77 in terms of both absolute dose to tumour and tumour to blood ratio.
The per cent injected activity per gram (% inj. act. g⁻¹) measured in the tumour for the mixture at the three time points was 18.3%, 14.88% and 7.31%, with corresponding levels in the bloodstream of 14.4%, 9.6% and 5.09%. Combining A5B7 and EA.77 antibodies decreased the activity in tumour compared with A5B7 alone and, by prolonging the half-life of radioactivity in the bloodstream, the tumour to blood ratios at 24, 72 and 168 h (1.27:1, 1.55:1 and 1.43:1 respectively) were significantly decreased (P < 0.05).
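As a quick arithmetic check, the tumour to blood ratios quoted above are simply the tumour activity divided by the blood activity at each time point; the short Python snippet below (our own illustration, using only the values from the text) reproduces them.

```python
# Tumour-to-blood ratios for the antibody mixture, computed from the
# per cent injected activity per gram values quoted in the text.
tumour = [18.3, 14.88, 7.31]  # % inj. act. g^-1 at 24, 72 and 168 h
blood = [14.4, 9.6, 5.09]
for hours, t, b in zip([24, 72, 168], tumour, blood):
    print(f"{hours} h: {t / b:.2f}:1")
# prints 1.27:1, 1.55:1 and 1.44:1 (the text rounds the last to 1.43:1)
```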
Autoradiography
Sections of LS174T, from animals receiving 125I-labelled antibody, were reacted immunohistochemically with A5B7 and EA.77 prior to autoradiography, to show the relationship between the site of target epitope and injected radiolabelled antibody. A5B7 immunohistochemistry showed that reactivity in the human tumour xenograft was confined mainly to the surfaces close to vascular spaces and blood vessels, with some binding to cytoplasm of tumour cells and, to a much lesser extent, at the lumenal surfaces of small glandular acini. At 24 h after injection of A5B7 antibody, accumulations of grains indicative of bound radiolabelled antibody were strongly associated with areas of antigen positivity close to blood vessels and vascular spaces, and could also be seen at less dense concentrations in adjacent cells (Figure 5a). Further away from vascular spaces there were very few grains. By 72 h there were still grains associated with areas of antigen, but antibody could be detected further from blood vessels in underlying tumour cells. By 168 h overall grain density was reduced, consistent with the lower per cent inj. act. g⁻¹ measured in tumour, although there was still evidence of localisation in cells away from vessels. By comparison, few grains were observed overlying tumour cells in any of the autoradiographed tumour sections prestained with EA.77, even in regions which were reactive immunohistochemically.
Figure 3 High-power photomicrographs showing immunohistochemical reactivity of a, A5B7, b, EA.77, c, B72.3, d, 1D10 and e, A33 antibodies with serial sections from an adenocarcinoma of the colon (x 260). In a, with A5B7 and e, with A33, note the brown reaction product at the basement membrane aspect of the malignant gland. LS, lumenal surface; BM, basement membrane.
The immunohistochemical reactivity of EA.77 was heterogeneous and mainly confined to the cytoplasm of tumour cells and small glandular acini, and is shown in Figure 5b. Only occasional reactivity could be demonstrated at the basal aspect of the tumour, associated with the interface between the fibrovascular stroma and the tumour cells. Where EA.77 antigen epitope was demonstrable adjacent to vessels there were accumulations of grains, but this was rare, and there was little evidence of any labelling of deeper tumour cells at 24, 72 or 168 h after injection. Autoradiographs from mice that received radiolabelled EA.77 demonstrated, at the 24 h and 72 h (Figure 5c) time points, that most grains were overlying vascular spaces and in some areas were associated with red cells or areas of haemorrhage. Grains were also evident in the fibrous stromal compartment. By 168 h there was little or no labelling of grains in sections.
Serial autoradiograph sections which had not been pretreated immunohistochemically were counterstained with haematoxylin and eosin. These showed similar grain distributions to those seen in sections that had been reacted with antibody prior to autoradiography and confirmed the results reported above.
Discussion
This paper demonstrates that the immunohistochemical distribution of A5B7 antibody is strongly associated with the basement membrane aspect of malignant glands within adenocarcinomas of the colon and rectum. The reactivity of EA.77 in general is concentrated on the lumenal surface of the acini. These polarised distributions, while not mutually exclusive, show a consistent trend over the majority of carcinoma samples investigated as well as in the non-neoplastic samples of colonic mucosa. CEA on the basement membrane aspect of malignant glandular structures may represent a more accessible target for antibodies administered into the circulation than that present cytoplasmically or on lumenal surfaces. Intense immunohistochemical reactivity at the lumenal surface of normal colonic epithelium by antibodies to CEA is not mirrored in localisation studies in patients. Tumour to normal bowel ratios are invariably higher than tumour to organ ratios for some other organs (Boxer et al., 1992). Also, the microdistribution of radiolabelled antibodies to CEA in patients suggests that antibodies do not always penetrate malignant glands, with isolated cells often targeted, while there is heterogeneous or no uptake by more complex epithelial structures (Boxer et al., 1992). The relatively superior localisation of A5B7 antibody compared with EA.77 and many other antibodies to colorectal tumour antigens may in part be due to the location and accessibility of the antigen on tumour cells in vivo. Whether A5B7 binds to basement membrane aspects of malignant glands solely because of the presence of antigen, or whether there are additional factors influencing binding, is unclear.
Recently, Yokota et al. (1992) compared the relative efficiency of localisation of genetically engineered antibody fragments (Fab' and scFv of CC-49) with their intact relatives in the LS174T tumour model and have found different penetration rates, the scFv molecules having the fastest rate but also the lowest percentage injected activity in the tumour. However, the absolute depth of penetration into tumour xenografts was similar for all antibody types if enough time was allowed. This suggests that there is a limit to the penetration of these molecules in tumours. Such molecules are too large even as Fab' (50,000 Da) and scFv (27,000 Da) fragments to pass through intact cellular junctions, which will exclude molecules above a molecular weight of 2,000 Da (Jain, 1989). Kyriakos et al. (1992) have demonstrated that binding to the surface of viable tumour cells by intact antibody is irreversible and suggest that the concept of affinity may not be applicable. They have postulated that intact immunoglobulin bound to the surface of tumour cells may be gradually internalised as a result of non-clathrin-dependent endocytosis during the normal turnover of cell-surface molecules.
Our autoradiographic results show that A5B7 localises to antigen which is accessible to extravasated antibody and can be shown to penetrate to more distant cells. These may have been reached and targeted via internalisation. In contrast, EA.77 antibody is observed in the vascular spaces and fibrovascular stroma, yet is either not present or only detected at low levels in association with malignant tissues. Whether this is simply because of the absence of the CEA epitope recognised by EA.77 on the basal aspect of tumour cells or whether there are physical barriers associated with basement membrane structures is unclear. Several groups have shown that significant amounts of radiolabelled antibody accumulate in necrotic areas of tumour (Steis et al., 1990). Where glandular structures have necrosed the basal and basolateral epithelial membranes will be breached, thus facilitating the diffusion of antibody to tumour cells and CEA antigen which would otherwise be inaccessible.
In this study the immunohistochemical reactivity of antibodies 1D10 and B72.3, like that of EA.77, has been shown to be polarised towards the lumenal surface aspect of malignant glands. The relative success of these antibodies in the clinic has been limited compared with that of A5B7 (Blair et al., 1990; D.M. Lane et al., in preparation). In contrast, A33 antibody, which shows strong reactions at the basal aspect of malignant glandular epithelium, is well localised in patients (Welt et al., 1990) and gives similar tumour uptake to A5B7 in the human tumour xenograft LS174T (R.B. Pedley, personal observation).
In the LS174T xenograft, A5B7 immunohistochemistry shows strong reactivity at the basal surface of tumour cells adjacent to the fibrovasculature. In addition, there is cytoplasmic reactivity with some tumour cells and lumenal surfaces of some acini. In contrast, EA.77 reacts heterogeneously, with little evidence of binding to CEA epitopes at the basal aspect of the tumour masses; much of the immunohistochemical reactivity is cytoplasmic and lumenal. While the histological structure of the LS174T xenograft does not exactly model that of most colorectal adenocarcinomas in patients, it is sufficiently differentiated to demonstrate epithelial polarisation. Differences in epitope distribution recognised by A5B7 and EA.77 are shown by immunohistochemistry in both the tumour specimens and the xenograft model. These differences may, but do not necessarily, account for the difference in in vivo localisation. The question of whether EA.77 has a lower binding affinity for its epitope than A5B7 has for its own epitope has yet to be answered. Evidence from antibody affinity column chromatography demonstrates differences in epitope specificity between EA.77 and A5B7. A5B7 reacts with an epitope on all or most CEA molecules, whereas EA.77 binds to an epitope available on only a minority of molecules. It has been shown that EA.77 binds with greater affinity to EA.77-purified CEA than to A5B7-purified CEA (P. Keep, personal communication).
Whatever the reasons for the poor localisation of EA.77 in this human tumour xenograft, these experiments highlight the need to investigate critically the reactivity of antibodies immunohistochemically. EA.77 has been shown to be highly specific for CEA with no cross-reactions, and A5B7 has some cross-reactivity with NCA (non-specific cross-reacting antigen). These studies demonstrate that highly selective antibodies with better specificity need not be superior targeting agents.
Our observations suggest that the immunohistochemical distribution of antibodies against colorectal tumour antigens may give an indication of their potential for efficient localisation in patients. | 2014-10-01T00:00:00.000Z | 1994-02-01T00:00:00.000 | {
"year": 1994,
"sha1": "979a9e644c6fcbbf65ab1c1ec52552c9c8fa6ade",
"oa_license": null,
"oa_url": "https://www.nature.com/articles/bjc199456.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "979a9e644c6fcbbf65ab1c1ec52552c9c8fa6ade",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
216388372 | pes2o/s2orc | v3-fos-license | Washout period for pregnancy post isotretinoin therapy
Isotretinoin is an oral derivative of vitamin A. Oral isotretinoin (13-cis-retinoic acid) was first approved by the US Food and Drug Administration (FDA) for treatment of severe acne in 1982.[1] Isotretinoin is a pro-drug that is converted intracellularly to metabolites that are agonists for the retinoic acid receptor (RAR) and retinoid X receptor (RXR) nuclear receptors.[2-5] Isotretinoin influences all of the major etiological factors implicated in acne by affecting cellular differentiation, cell-cycle progression, cell survival, and apoptosis.[2-8] As a consequence there is a remarkable reduction in sebum production, comedogenesis, surface and ductal Propionibacterium acnes population, and inflammation. A dose of 0.5-1.0 mg/kg/day dramatically reduces sebum excretion by 90% within a period of 6 weeks. The average course of treatment is 4-6 months.[1] Isotretinoin is a category X drug, and one of its most well-established and potentially serious adverse effects is teratogenicity if it is not taken under proper guidance.[9] If used in the first trimester, it may lead to increased fetal loss and specific malformations like cleft palate, stenosis of the external ear canal, microtia, and hydrocephalus. Cardiac outflow tract defects may occur when it is consumed later in pregnancy.
Introduction
Isotretinoin is an oral derivative of vitamin A. Oral isotretinoin (13-cis-retinoic acid) was first approved by the US Food and Drug Administration (FDA) for treatment of severe acne in 1982. [1] Isotretinoin is a pro-drug that is converted intracellularly to metabolites that are agonists for the retinoic acid receptor (RAR) and retinoid X receptor (RXR) nuclear receptors. [2][3][4][5] Isotretinoin influences all of the major etiological factors implicated in acne by affecting cellular differentiation, cell-cycle progression, cell survival, and apoptosis. [2][3][4][5][6][7][8] As a consequence there is a remarkable reduction in sebum production, comedogenesis, surface and ductal Propionibacterium acnes population, and inflammation. A dose of 0.5-1.0 mg/kg/day dramatically reduces sebum excretion by 90% within a period of 6 weeks. The average course of treatment is 4-6 months. [1] Isotretinoin is a category X drug, and one of its most well-established and potentially serious adverse effects is teratogenicity if it is not taken under proper guidance. [9] If used in the first trimester, it may lead to increased fetal loss and specific malformations like cleft palate, stenosis of the external ear canal, microtia, and hydrocephalus. Cardiac outflow tract defects may occur when it is consumed later in pregnancy. [10] Therefore, it is contraindicated during pregnancy or in patients who are trying to conceive. [11] For this reason, strict contraception is advised to all sexually active female patients. The routinely followed recommendation for contraception is for 1 month prior to initiation of isotretinoin therapy, during the treatment period, and 1 month after discontinuation of the treatment. [1] However, the recommendation regarding the contraception period post isotretinoin therapy is a topic of debate. This issue is discussed further in detail below.
Methods
A PubMed search was executed to gather the relevant data using the keywords: isotretinoin, pregnancy, contraception, pharmacokinetics, and guidelines. Suitable information regarding isotretinoin was taken from all the publications and guidelines available, as well as from dermatological textbooks. A total of 23 publications on the subject could be found on PubMed, and these were analysed.
Analysis of data
In the US, isotretinoin-based drugs are sold through a special restricted distribution programme approved by the US Food and Drug Administration (FDA). In 2005 the FDA introduced 'iPLEDGE', a risk-management distribution program. The program was initiated to prevent pregnant women from being prescribed or exposed to the medication. It mandated that both male and female users of oral isotretinoin enrol in the National Registry. If this is not achieved, patients will no longer be able to receive the drug. Women of childbearing age have to provide two negative pregnancy tests before their initial prescription and show evidence of another negative pregnancy test before each monthly repeat prescription. Unless continuously abstinent, the patient has to comply with the iPLEDGE requirement to use two forms of contraception 1 month before, during, and for 1 month after completion of treatment. [1,12] The European Directive concerned with the prescribing of oral isotretinoin and the European FDA have also implemented a pregnancy prevention programme for females on isotretinoin. According to this programme, female patients are advised to use at least one but ideally two methods of contraception for 1 month before starting treatment, including a barrier method, and to continue to use effective contraception throughout the treatment period and for at least 1 month after cessation of treatment. Mandatory pregnancy testing is performed pre-therapy, during therapy and 5 weeks post-therapy. [13] Kanelleas et al. suggest that patients should agree to at least one, and preferably two, complementary methods of contraception, including a barrier method, before the initiation of therapy, during treatment and for 5 weeks after its conclusion. [14] Boucher and Beaulac-Baillargeon conclude that patients should be advised to begin using two effective contraceptive methods 1 month before starting isotretinoin and continue using them during the treatment and for 1 month after the last dose. [15] According to Abroms et al., pregnancy prevention is needed for a 1-month interval after isotretinoin therapy ends, as 32% of isotretinoin-exposed pregnancies occurred during this post-therapy period. [16,17] Thus most recommendations are for a 1-month period after stopping the drug. However, there is one paper, by Choi et al., [18] which suggested contraception for longer than one month, taking into account the variability in the pharmacokinetics of isotretinoin. It recommends a 3-month window for the use of contraception post isotretinoin therapy to provide an adequate safety margin to prevent fetal exposure, based on more than five elimination half-lives of the drug. [19] There was also a report of suspected isotretinoin-induced ear malformations in a newborn whose mother had taken isotretinoin for 2 years, until one month prior to the time when she became pregnant. [20] As stated by several pharmacokinetic studies of isotretinoin and its metabolites, the harmonic mean elimination half-lives of isotretinoin and 4-oxo-isotretinoin following oral administration of isotretinoin range from 10 to 20 hours and 24 to 29 hours, respectively. [21][22][23] There is thus evidence of variability in the isotretinoin elimination half-life (from 5.3 hours to 7 days), and such variability in the pharmacokinetics may also lead to exposure during pregnancy. Hence, 1 month may not be sufficient for clearing of the drug in all women. [24]
According to a study by Nulman et al., the t1/2 of isotretinoin and its metabolite 4-oxo-isotretinoin is generally short, but they observed a prolonged t1/2 in two female patients. This may have happened because of hepatic recirculation. Therefore, in the worst-case scenario with a t1/2 of approximately 1 week, 5 half-lives will be needed to allow levels to return to baseline. [24] In the United Kingdom, the Pharmacovigilance Risk Assessment Committee of the European Medicines Agency has adopted recommendations for isotretinoin based on pharmacovigilance data. It suggests the requirement for two forms of contraception during the treatment and for 1 month following the end of treatment, which is determined based on the metabolism of the product. [25] Another paper on the pharmacokinetics of oral isotretinoin by Wiegand and Chou stated that the metabolite of isotretinoin with the longest elimination half-life (4-oxo-isotretinoin) returns to nonteratogenic plasma (endogenous) retinoid concentrations within 2 weeks after the end of isotretinoin treatment. Also, retinoic acid, which is thought to be partly responsible for the teratogenic effect of isotretinoin, takes only 2 days to return to physiologic levels. Hence, the post-therapy contraceptive period of 1 month has been experimentally verified as an adequate safety margin for isotretinoin. [19] Dai et al. have done an analysis of 88 case reports from pregnant patients in whom conception occurred after discontinuing isotretinoin treatment. Among these, 90% of pregnancies occurred within 2 months and 64% occurred within 1 month of discontinuing isotretinoin. Moreover, three women obtained their last dose of isotretinoin within 2 days before the estimated date of conception and eventually delivered normal full-term infants. In these 88 case reports, it was found that there was no increased risk of congenital malformations or spontaneous abortions among these women who completed or discontinued isotretinoin therapy before conception. Also, the incidence rates of congenital malformations and spontaneous or missed abortions in these patients did not vary from the incidence rates reported in normal females of reproductive age who had not been exposed to isotretinoin. [26]
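As a rough illustration of the five-half-lives rule discussed above, the following Python sketch (our own, purely illustrative; the half-life values are the ranges quoted in the cited pharmacokinetic studies, and the function names are ours) computes the residual drug fraction after a given washout period and the washout time implied by five elimination half-lives.

```python
# Illustrative sketch of the "five half-lives" washout rule discussed above.
# Half-life values are taken from the ranges quoted in the text.

def residual_fraction(t_half_days: float, washout_days: float) -> float:
    """Fraction of drug remaining after washout, assuming first-order elimination."""
    return 0.5 ** (washout_days / t_half_days)

def washout_period(t_half_days: float, n_half_lives: int = 5) -> float:
    """Washout period (days) after which only 0.5**n_half_lives of the drug remains."""
    return n_half_lives * t_half_days

# Worst-case half-life of ~1 week (Nulman et al.) implies a 35-day washout:
print(washout_period(7))            # 35.0 days
print(residual_fraction(7, 35))     # ~0.03, i.e. about 3% remaining

# A typical half-life of 10-20 hours implies a much shorter elimination time:
print(washout_period(20 / 24))      # ~4.2 days
```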
Discussion
As can be seen from the above analysis, isotretinoin should be prescribed with caution to women of childbearing age, and patients should be provided with detailed information regarding the teratogenic effects of isotretinoin on the fetus and regarding contraception while the drug is being prescribed. Prescribers should counsel sexually active women to select and use two forms of effective contraception simultaneously for at least 1 month prior to initiation of isotretinoin therapy, during therapy, and for 1 month following discontinuation of therapy. Effective contraception should consist of concurrently using both a primary method (tubal ligation, partner's vasectomy, intrauterine device, estrogen-containing birth control pills, or topical, injectable, implantable, or insertable hormonal birth control products) and a secondary method of birth control (diaphragm, latex condom, or cervical cap, each to be used with spermicide). [27] Recently the Central Drugs Standard Control Organisation (CDSCO) of India has issued safety guidelines and labelling rules for isotretinoin, citing its harmful side effects and adverse reactions. These guidelines advocate avoiding pregnancy for 6 months after stopping the treatment. [28] However, all the recommendations quoted above and the standard dermatological textbooks suggest only a 1-month period of contraception post isotretinoin therapy, [29][30][31] with European guidelines recommending a 35-day period. [13] No textbook, guideline or publication has mentioned contraception for more than 3 months after conclusion of treatment. In view of the above-stated studies, we feel the Indian recommendation is not justified.
Conclusion
Considering the variability of isotretinoin pharmacokinetics and taking into account a maximum t1/2 of approximately 1 week, the elimination period (five times the t1/2) would be a maximum of 35 days. This means that the time needed to allow levels to return to baseline would be a 35-day period before safe conception. [20,25] We therefore feel the 35-day recommendation, as stated in the European Directive and by Kanelleas et al., would be suitable. [13,14] Thus, based on pharmacokinetic studies of isotretinoin, a recommendation for a washout period of 35 days post isotretinoin therapy would be adequate and appropriate.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2020-04-27T20:40:51.893Z | 2020-03-09T00:00:00.000 | {
"year": 2020,
"sha1": "e6a432fcefa4d9b7132ea991d0e65041767aaee8",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/idoj.idoj_101_19",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "36eb98776ea801f86aaa6c8805238f11498be140",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
228064220 | pes2o/s2orc | v3-fos-license | A Discrete Model of Collective Marching on Rings
We study the collective motion of autonomous mobile agents on a ringlike environment. The agents' dynamics is inspired by known laboratory experiments on the dynamics of locust swarms. In these experiments, locusts placed at arbitrary locations and initial orientations on a ring-shaped arena are observed to eventually all march in the same direction. In this work we ask whether, and how fast, a similar phenomenon occurs in a stochastic swarm of simple agents whose goal is to maintain the same direction of motion for as long as possible. The agents are randomly initiated as marching either clockwise or counterclockwise on a wide ring-shaped region, which we model as $k$ "narrow" concentric tracks on a cylinder. Collisions cause agents to change their direction of motion. To avoid this, agents may decide to switch tracks so as to merge with platoons of agents marching in their direction. We prove that such agents must eventually converge to a local consensus about their direction of motion, meaning that all agents on each narrow track must eventually march in the same direction. We give asymptotic bounds for the expected amount of time it takes for such convergence or "stabilization" to occur, which depends on the number of agents, the length of the tracks, and the number of tracks. We show that when agents also have a small probability of "erratic", random track-jumping behaviour, a global consensus on the direction of motion across all tracks will eventually be reached. Finally, we verify our theoretical findings in numerical simulations.
Introduction
Birds, locusts, human crowds and swarm-robotic systems exhibit interesting collective motion patterns. The underlying autonomous agentic behaviours from which these patterns emerge have attracted a great deal of academic interest over the last several decades [2,5,17,18]. In particular, the formal analysis of models of swarm dynamics has led to varied and deep mathematical results [7,9,14,28]. Rigorous mathematical results are necessary for understanding swarms and for designing predictable and provably effective swarm-robotic systems. However, multi-agent swarms have a uniquely complex and "mesoscopic" nature [11], and relatively few standard techniques for the analysis of such systems have been established. Consequently, the analysis of new models of swarm dynamics is important for advancing our understanding of the subject.
In this work, we study the dynamics of "locust-like" agents moving on a discrete ringlike surface. The model we study is inspired by the following well-documented experiment [3]: place many locusts on a ringlike arena at random positions and orientations. They start to move around and bump into the arena's walls and into each other, and as they do so, remarkably, over time, they begin to collectively march in the same direction-either clockwise or counterclockwise (see Figure 1). Inspired by observing these experiments, we asked the following question: what are simple and reasonable myopic rules of behaviour that might lead to this phenomenon? Our goal is to study this question from an algorithmic perspective, by considering a model of discretized mobile agents that act upon a local algorithm. As with much of the literature on swarm dynamics [10,7,4], our goal is not to study an exact mathematical model of locusts in particular (the precise mechanisms underlying locusts' behaviours are very complex and subject to intense ongoing research, e.g. [3,5]), but to study the kinds of algorithmic local interactions that lead to collective marching and related phenomena. The resulting model is idealized and simple to describe, but the patterns of motion that emerge while the locusts progress towards a "stabilized" state of collective marching are surprisingly complex.
The starting point for this work is the following postulated "rationalization" of what a locust-like agent wants to do: it wants to keep moving in the same direction of motion (clockwise or counterclockwise) for as long as possible. We can therefore consider a model of locust-like agents that never change their heading unless they collide, head-on, with agents marching in the opposite direction, and are forced to do so due to the pressure which is exerted on them. When possible, these agents prefer to bypass agents that are headed towards them, rather than collide with those agents. This is done by changing lanes: moving in an orthogonal manner between concentric narrow tracks which partition the ringlike arena. The formal description of this "rationalized" model is given in Section 2, and will be our subject of study.
Contribution. We describe and study a stochastic model of locust-like agents in a discretized ringlike arena which is modelled as multiple tracks that wrap around a cylinder. We show that our agents eventually reach a "local consensus" about the direction of marching, meaning that all agents on the same track will march in the same direction. We give asymptotic bounds for the amount of time this takes based on the number of agents and the physical dimensions of the arena. Because of the idealized "precise" nature of our model, a global consensus where all locusts walk in the same direction is not guaranteed, since locusts in different tracks might never meet. However, we show that, when a small probability of "erratic", random behaviour is added to the model, such a global consensus must occur. We verify our claims via simulations and make some further empirical observations that may inspire future investigations into the model.
Despite the model being simple to describe, analyzing it proved tricky in several respects. Our analysis strategy is to show that the model repeatedly passes between two phases: one in which it is "chaotic", such that locusts are arbitrarily moving about, and one in which it is "orderly", such that all locusts are in a kind of dense deadlock situation and collisions are frequent. We derive our asymptotic bounds from studying the well-behaved phase while bounding the amount of time the locusts can spend in the chaotic phase.
Related work. The experiments inspiring our work are discussed in [5,3]. The mathematical modelling of the collective motion of natural organisms such as birds, locusts and ants, and the convergence of such systems of agents to stable formations, has been discussed in numerous works including [14,18,28,29].
The central focus of this work regards consensus: do the agents eventually converge to the same direction of motion, and how long does it take? Similar questions are often asked in the field of opinion dynamics. Mathematically, if the agents' direction of motion (clockwise or counterclockwise) is considered an "opinion", we can compare our model to models in this field. When there are no empty locations at all in the environment, our model is fairly close to the voter model on a ring network with two distinct opinions, the main difference being that, unlike in the voter model, our agents' direction of motion determines which agents' opinions can influence them (an excellent survey on this topic is [15]). The comparison to the voter model breaks when we introduce empty locations and multiple ringlike tracks, at which point we must take into account the physical location of every agent when considering which agents can influence its opinion. Several works have explored models of opinion dynamics in a ring environment where the agents' physical location is taken into account [8,20]. Our model is distinct from these in several respects: first, in our model, an agent's internal state-its direction of motion-plays an active part in the algorithm that determines which locations an agent may move to. Second, we partition our ring topology into several narrow rings ("tracks") that agents may switch between, and an agent's decision to switch tracks is influenced by the presence of platoons of agents moving in its direction in the track that it wants to switch to. In other words, we model agents that actively attempt to "swarm" together with agents moving in their direction of motion. Protocols for achieving consensus about a value, location or the collective direction of motion have also been investigated in swarm robotics and distributed algorithms [6,13,26,27]. However, in this work, we are not searching for a protocol that is designed to efficiently bring about consensus; we are investigating a protocol that is inspired by natural phenomena and want to see whether it leads to consensus and how long this takes on average.
Broadly speaking, some mathematical similarities may be drawn between our model and interacting particle systems such as the simple exclusion process, which have been used to understand biological transport and traffic phenomena [12]. Such particle systems have been studied on rings [21]. In these discrete models, as in our model, agents possess a physical dimension, which constrains the locations they might move to in their environment. These are not typically multi-agent models where agents have an internal state (such as a persistent direction of motion), but rather models of particle motion and diffusion, and the research focus is quite different; the main point of similarity to our model is in the way that a given discrete location can only be occupied by a single agent, and in the random occurrence of "traffic shocks" wherein agents line up one after the other and are prevented from moving for a long time.
Model and definitions
We postulate a locust-inspired model of marching in a wide ringlike arena which is divided into narrow concentric rings. For simplicity, we map the arena to the surface of a discretized cylinder of height k partitioned into k narrow rings of length n, which are called tracks. For example, the environment of Figure 2 corresponds to k = 3, n = 8 (3 tracks of length 8). The coordinate (x, y) refers to the xth location on the yth track (which can also be seen as the xth location of a ring of length n wrapped around the cylinder at height y). Since we are on a cylinder, we have that ∀x, (x + n, y) ≡ (x, y).
A swarm of m identical agents, or "locusts," which we label A 1 , . . . , A m , are dispersed at arbitrary locations on the cylinder and move autonomously at discrete time steps t = 0, 1, . . .. A given location (x, y) on the cylinder can contain at most one locust. Each locust A i is initiated with either a "clockwise" or "counterclockwise" heading, which determines their present direction of motion. We define b(A i ) = 1 when A i has clockwise heading, and b(A i ) = −1 when A i has counterclockwise heading.
The locusts move synchronously at discrete time steps t = 0, 1, . . .. At every time step, locusts try to take a step in their direction of motion: if a locust A is at (x, y), it will attempt to move to (x + b(A), y). A clockwise movement corresponds to adding 1 to x, and a counterclockwise movement corresponds to subtracting 1. The locusts have physical dimension, so if the location a locust attempts to move to already contains another locust at the beginning of the time step, the locust instead stays put. If A i and A j are both attempting to move to the same location, one of them is chosen uniformly at random to move to the location and the other stays put.
Locusts that are adjacent exert pressure on each other to change their heading: if A i has a clockwise heading and A j has a counterclockwise heading, and they lie on the coordinates (x, y) and (x + 1, y) respectively, then at the end of the current time step, one locust (chosen uniformly at random) will flip its heading to the other locust's heading. Such an event is called a conflict between A i and A j . A conflict is "won" by the locust that successfully converts the other locust to their heading.
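To make the horizontal-movement and conflict rules concrete, here is a minimal Python sketch of one synchronous time step on a single track. This is our own illustrative rendering, not code from the paper: headings are +1 (clockwise) and -1 (counterclockwise), and a track is a dictionary mapping occupied positions to headings.

```python
import random

def step_track(track, n):
    """One synchronous time step of the model on a single ring track of
    length n. `track` maps position -> heading (+1 or -1). Returns the
    updated track. Illustrative sketch only."""
    # Horizontal movement: each locust attempts one step in its heading.
    # Moves are evaluated against start-of-step occupancy; two locusts
    # contending for the same empty cell are tie-broken uniformly.
    attempts = {}  # empty target cell -> list of contending source cells
    for x, b in track.items():
        target = (x + b) % n
        if target not in track:
            attempts.setdefault(target, []).append(x)
    new_track = dict(track)
    for target, sources in attempts.items():
        winner = random.choice(sources)
        new_track[target] = new_track.pop(winner)
    # Conflicts: for each clockwise locust directly followed (clockwise) by
    # a counterclockwise locust, one of the two flips its heading at random.
    pairs = [(x, (x + 1) % n) for x in new_track
             if new_track[x] == 1 and new_track.get((x + 1) % n) == -1]
    for x, y in pairs:
        loser = random.choice([x, y])
        new_track[loser] = -new_track[loser]
    return new_track
```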
Let A be a locust at (x, y). If the locust A has clockwise heading, then the front of A is the first locust after A in the clockwise direction, and the back of A is the first locust in the counterclockwise direction. The reverse is true when A has counterclockwise heading. Formally, let i > 0 be the smallest positive integer such that (x + b(A)i, y) contains a locust, and let j > 0 be the smallest positive integer such that (x − b(A)j, y) contains a locust. The front of A is the locust in (x + b(A)i, y) and the back of A is the locust in (x − b(A)j, y). The locusts in the front and back of A are denoted A→ and A← respectively, and are called A's neighbours; these are the locusts that are directly in front of and behind A. Note that when a track has two or fewer locusts, A→ = A←. When a track has one locust, i = j = n and so A = A→ = A←. At any given time step, besides moving in the direction of their heading within their track, a locust A at (x, y) can switch tracks, moving vertically from (x, y) to (x, y + 1) or (x, y − 1) (unless this would cause it to go above track k or below track 1). Such vertical movements occur after the horizontal movements of locusts along the tracks, but on the same time step where those horizontal movements took place. Locusts are incentivized to move vertically when this enables them to avoid changing their heading ("inertia"). Specifically, A may move to the location E = (x, y ± 1) at time t when: 1. At the beginning of time t, A and A→ are not adjacent to each other and b(A) ≠ b(A→). 2. Once A moves to E, the updated A← and A→ in the new track will have heading b(A). 3. No locust will attempt to move horizontally to E at time t + 1.
Condition (1) states that there is an imminent conflict between A and A → which is bound to occur. Condition (2) guarantees that, by changing tracks to avoid this conflict, A is not immediately advancing towards another collision; A's new neighbours will have the same heading as A. Condition (3) guarantees that the location A wants to move to on the new track isn't being contested by another locust already on that track. Together, these conditions mean that locusts only change tracks if this results in avoiding collisions and in "swarming" together with other locusts marching in the same direction of motion. If a locust cannot sense that all three conditions (1), (2) and (3) are fulfilled, it does not switch tracks.
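Conditions (1)-(3) translate directly into a predicate. The sketch below is our own rendering (the names, data layout, and the use of current positions to approximate the look-ahead in condition (3) are our assumptions); it checks whether a locust with heading b at position x on track_from may move sideways into track_to, both given as position -> heading dictionaries on a ring of length n.

```python
def can_switch(track_from, track_to, x, n, b):
    """Check track-switching conditions (1)-(3) for a locust at position x
    with heading b on track_from, wanting to move sideways into track_to.
    Illustrative sketch only."""
    def neighbour(track, pos, direction):
        # First occupied cell from `pos` in the given direction (+1 / -1).
        for i in range(1, n + 1):
            q = (pos + direction * i) % n
            if q in track:
                return q
        return None

    front = neighbour(track_from, x, b)
    # (1) Imminent conflict: the front locust heads towards us and is not
    #     yet adjacent (so the conflict has not already happened).
    if front is None or track_from[front] == b or front == (x + b) % n:
        return False
    # The destination cell on the new track must be empty.
    if x in track_to:
        return False
    # (2) The new front and back neighbours must both share our heading.
    for d in (+1, -1):
        nb = neighbour(track_to, x, d)
        if nb is not None and track_to[nb] != b:
            return False
    # (3) No locust on the new track is about to move horizontally into x
    #     (approximated here with the current positions and headings).
    for d in (+1, -1):
        src = (x - d) % n
        if src in track_to and track_to[src] == d:
            return False
    return True
```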
Besides these conditions, we make no assumptions about when locusts move vertically. In other words, locusts do not always need to change tracks when they are allowed to by rules (1)-(3); they may do so arbitrarily, say with some probability q or according to any internal scheduler or algorithm. We do not determine in any sense the times when locusts move tracks-but only determine the preconditions required for such movements; our results in the following sections remain true regardless. This makes our results general in the sense that they hold for many different track-switching "swarming" rules, so long as those rules do not break the conditions (1)-(3). Figure 2 illustrates one time step of the model, split into horizontal and vertical movement phases. In order to slightly simplify our analysis of the model, we assume that every track has at least 2 locusts at all times, although our results remain true without this assumption.
Everywhere in this work, the beginning of a time step refers to the configuration of the swarm at that time step before any locusts moved, and the end of a time step refers to the configuration at that time step after all locust movements are complete. By default and unless stated otherwise, the words "time step t" refer to the beginning of that time step.
Stabilization analysis
We will mainly be interested in studying the stability of the headings of the locusts over time. Does the model reach a point where the locusts stabilize and stop changing their heading? If so, are their headings all identical? How long does it take?
In the case of a single track (k = 1), we shall see that the locusts all eventually stabilize with identical heading, and bound the expected time for this to happen in terms of m and n. In the multi-track case, we shall see that the locusts stabilize and agree on a heading locally (i.e., all locusts on the same track eventually have identical heading and thereafter never change their heading), and bound the expected time to stabilization in terms of m, n, k. In the multi-track case, we show further that adding a small probability of "erratic" track-switching behaviour to the model induces global consensus: all locusts across all tracks eventually have identical heading.
Locusts on narrow ringlike arenas (k = 1)
We start by studying the case k = 1, that is, we study a swarm of m locusts marching on a single track of length n. Throughout this section, we assume this is the case, except in Definition 2, which is also used in later sections.
For the rest of this section, let us call the swarm non-stable at time t if there are two locusts A_i and A_j such that b(A_i) ≠ b(A_j); otherwise, the swarm is stable. A swarm which is stable at time t remains stable thereafter. We wish to bound the number of time steps it takes for the system to become stable, which we denote T_stable. Our goal is to prove the following:
Theorem 1 For any initial configuration of m locusts on a ring of track length n, E[T_stable] ≤ m² + 2(n − m); moreover, there exist initial configurations for which E[T_stable] = Ω(m² + n).
Theorem 1 tells us that the expected time to stabilization grows quadratically in the number of locusts m, and linearly in the track length n. In particular, Theorem 1 tells us that all locusts must have identical heading within finite expected time. This fact in isolation (without the time bounds in the statement of the theorem) is relatively straightforward to prove, by noting that the evolution of the locusts' headings and locations can be modelled as a finite Markov chain, and the only absorbing classes in this Markov chain are ones in which all locusts have the same heading (see [19]).
Next we define segments: sets of consecutive locusts on the same track which all have the same heading. This will allow us to partition the swarm into segments, such that every locust belongs to a unique segment (see Figure 3). Although this section focuses on the case of a single track (and claims in this section are made under the assumption that there is only a single track), the definition is general, and we will use it in subsequent sections.
Definition 2. Let A be a locust such that b(A←) ≠ b(A), and define B_0 = A and B_i = B_{i−1}→ for i ≥ 1. Let q ≥ 1 be the largest integer such that b(B_i) = b(A) for all i < q. The set of locusts {B_0, B_1, . . . , B_{q−1}} is called a segment.
The locust B_{q−1} is called the segment head, and A is called the segment tail of this segment. Only locusts which are segment heads at the beginning of a time step can change their heading by the end of that time step. When the heads of two segments are adjacent to each other, the resulting conflict causes one to change its heading, leave its previous segment, and instead become part of the other segment. If the head of a segment is also the tail of that segment (i.e., the segment consists of a single locust), the segment is eliminated when it changes heading. Two segments separated by a segment of opposite heading merge if the opposite-heading segment is eliminated, which decreases the number of segments by 2. No other action by a locust can change the segments. Hence, the number of segments and segment tails can only decrease.
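For concreteness, the following Python helper (our own, hypothetical naming) partitions one track into its segments per Definition 2.

```python
def segments(track):
    """Partition one ring track into maximal same-heading segments
    (Definition 2). `track` maps position -> heading (+1 / -1). Returns a
    list of segments, each a list of positions ordered from tail to head.
    Illustrative helper, not code from the paper."""
    pos = sorted(track)  # occupied cells in clockwise order
    m = len(pos)
    if m == 0:
        return []
    def heading(i):
        return track[pos[i]]
    def ahead(i):     # index of the locust in front of locust i
        return (i + heading(i)) % m
    def behind(i):    # index of the locust behind locust i
        return (i - heading(i)) % m
    # A locust is a segment tail iff the locust behind it differs in heading.
    tails = [i for i in range(m) if heading(behind(i)) != heading(i)]
    if not tails:
        return [pos]  # every locust shares one heading: a single segment
    segs = []
    for t in tails:
        seg, i = [pos[t]], t
        while heading(ahead(i)) == heading(t):
            i = ahead(i)
            seg.append(pos[i])
        segs.append(seg)
    return segs
```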
Since our model is stochastic, different sequences of events may occur and result in different segments. However, by the above argument we can conclude that in any such sequence of events, there must always exist at least one locust which remains a segment tail at all times t < T_stable and never changes its heading (since at least one segment must exist as long as t < T_stable). Arbitrarily denote one such segment tail A_W. Definition 3. The segment of A_W at the beginning of time t is called the winning segment at time t, and is denoted SW(t). The head of SW(t) is labelled H_W(t). For convenience, if at time t_0 the swarm is stable (i.e. t_0 ≥ T_stable), then we define SW(t_0) as the set that contains all m locusts.
Lemma 4 The expected number of time steps t < T_stable in which |SW(t)| changes is bounded by m².
Proof. Let C_m denote the number of changes to the size of SW(t) that occur before time T_stable. Note that T_stable is the first time step where |SW(t)| = m. |SW(t)| can only decrease, by 1 locust at a time, if H_W(t) conflicts with another locust and loses. |SW(t)| can increase in several ways, for example when it merges with other segments. In particular, |SW(t)| increases by at least 1 whenever H_W(t) conflicts with a locust and wins, which happens with probability at least 1/2. Hence, whenever SW(t) changes in size, it is more likely to grow than to shrink. We can bound E[C_m] by comparing the growth of |SW(t)| to a random walk with absorbing boundaries at 0 and m: consider a random walk on the integers which starts at |SW(0)|. At any time step t, the walker takes a step left with probability 1/2, otherwise it takes a step right. If the walker reaches either 0 or m, the walk ends. Denote by C*_m the time it takes the walk to end. Using coupling (cf. [25]), we see that E[C_m] ≤ E[C*_m | the walker never reaches 0], since per the previous paragraph, |SW(t)| clearly grows at least as fast as the position of the random walker (note that |SW(t)| > 0 is always true, which is analogous to the walker never reaching 0).
Let us show how to bound E[C*_m | the walker never reaches 0]. Since the walk is memoryless, we can think of this quantity as the number of steps the random walker takes to get to m, assuming it must move right when it is at 0, and assuming the step count restarts whenever it moves from 0 to 1. If we count the steps without resetting the count, we get that this is simply the expected number of steps it takes a random walker with a reflecting barrier ("wall") at 0 to reach position m, which is at most m² (cf. [1]). Hence E[C*_m | the walker never reaches 0] ≤ m².
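The m² bound for the reflected walk is easy to check numerically. The short Monte Carlo sketch below (our own test harness, with arbitrary parameters) estimates the expected number of steps for a fair walk reflected at 0 to first reach m.

```python
import random

def reflected_walk_steps(m: int) -> int:
    """Steps for a fair ±1 walk with a reflecting barrier at 0, started at
    0, to first reach m. The expected value is m**2."""
    position, steps = 0, 0
    while position < m:
        position += 1 if (position == 0 or random.random() < 0.5) else -1
        steps += 1
    return steps

m, trials = 20, 2000
mean = sum(reflected_walk_steps(m) for _ in range(trials)) / trials
print(f"empirical mean: {mean:.1f}, m^2 = {m * m}")  # mean should be near 400
```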
Lemma 5 The expected number of time steps t < T_stable in which |SW(t)| does not change is bounded by 2(n − m).
Proving Lemma 5 will require some other lemmas and new definitions.
Definition 6. Let A and B be two locusts or two locations which lie on the same track. The clockwise distance from A to B at time t is the number of clockwise steps required to get from A's location to B's location, and is denoted dist_c(A, B); the counterclockwise distance from A to B, denoted dist_cc(A, B), is defined analogously. For the rest of this section, let us assume without loss of generality that the winning segment's tail A_W has clockwise heading. Label the empty locations in the ring at time t = 0 (i.e., the locations not containing locusts at time t = 0) as E_1, E_2, . . . , E_{n−m}, sorted by their counterclockwise distance to A_W at time t = 0, such that E_1 minimizes dist_cc(E_i, A_W), E_2 has the second smallest distance, and so on. We will treat these empty locations as having persistent identities: whenever a locust A moves from its current location to E_i, we will instead say that A and E_i swapped, and so E_i's new location is A's old location.
We say a location E_i is inside the segment SW(t) at time t if the two locusts which have the smallest clockwise and counterclockwise distance to E_i, respectively, are both in SW(t). Otherwise, we say that E_i is outside SW(t). A locust or location A is said to be between E_i and E_j if A lies on the clockwise path from E_i to E_j.
Lemma 8 There is some time step t* ≤ n − m such that: 1. every blocked empty location E is outside SW(t*) (if any exist); 2. at least t* empty locations are unblocked.
Proof. If E_1 is outside SW(0), then the same must be true for all other empty locations, so t* = 0 and we are done. Otherwise, E_1 becomes unblocked at time t = 1. If E_i becomes unblocked at time t, then at time t it cannot be adjacent to E_{i+1}, since the locust that swapped with E_i in the previous time step is now between E_i and E_{i+1}. By definition, there are no empty locations E_j between E_i and E_{i+1}. Consequently, if E_{i+1} is inside SW(t) at time t, it will swap with a locust of SW(t) at time t, and become unblocked at time t + 1. If E_{i+1} is outside the segment at time t, it will become unblocked at the first time step t′ > t that begins with E_{i+1} inside SW(t′). Hence, if E_i becomes unblocked at time t, then E_{i+1} becomes unblocked at time t + 1 or E_{i+1} is outside SW(t + 1) at time t + 1.
Let t * be the smallest time where there are no blocked empty locations inside SW (t * ). By the above, at every time step t ≤ t * an empty location becomes unblocked, hence there are at least t * unblocked empty locations at time t * . Also, since there are n − m empty locations, this implies t * ≤ n − m.
Lemma 9 There is no time t < T stable where an unblocked location is clockwise-adjacent to H W (t) (i.e., there is no time t where an unblocked empty location E is located one step clockwise from H W (t)).
Proof. First consider what happens when E_1 becomes unblocked: it swaps its location with a locust in SW(t), and since E_1 is the clockwise-closest empty location to A_W, the entire counterclockwise path from E_1 to A_W consists only of locusts from SW(t). Hence E_1 will move counterclockwise at every time step, until it swaps with A_W. Once it swaps with A_W, E_1 will not swap with another locust at any time t < T_stable, since for that to occur we must have that b(A_W←) = b(A_W), which is impossible since by definition A_W remains a segment tail until t = T_stable. E_1 does not swap with H_W(t) while E_1 moves counterclockwise towards A_W, nor after E_1 and A_W swap as long as the swarm is unstable; hence there is no time step t < T_stable when E_1 is unblocked and swaps with H_W(t).
Now consider E 2 . E 2 becomes unblocked at least one time step after E 1 , and there is at least one locust in SW (t) which is between E 1 and E 2 at the time step E 1 becomes unblocked (in particular, the locust in SW (t) that swapped with E 1 must be between E 1 and E 2 at that time). Since E 1 subsequently moves towards A W at every time step until they swap, E 2 cannot become adjacent to E 1 until they both swap with A W . Hence the location one step counterclockwise to E 2 must always be a locust until E 2 swaps with A W , meaning that similar to E 1 , E 2 also moves counterclockwise towards A W at every time step after E 2 becomes unblocked until they swap locations. Consequently, just like E 1 , there is no time step t < T stable when E 2 is unblocked and swaps with H W (t).
More generally, by a straightforward inductive argument, the exact same thing is true of E i : once it becomes unblocked, it moves counterclockwise towards A W at every time step until it swaps with A W . Thus, upon becoming unblocked, E i does not swap with H W (t) as long as t < T stable .
Proof (of Lemma 5). If, at the beginning of time step t, H_W(t) is adjacent to a locust from a different segment, then |SW(t)| will change at the end of this time step due to the locusts' conflict. Hence, to prove Lemma 5, it suffices to show that, out of all the time steps before time T_stable, H_W(t) is not adjacent to the head of a different segment in at most 2(n − m) steps in expectation.
If all empty locations are unblocked at time n − m, then by Lemma 9, H W (t) conflicts with the head of another segment at all times t ≥ n−m. Therefore, |SW (t)| will change at every time step n−m < t < T stable , which is what we wanted to prove.
If all empty locations are not unblocked by time n − m, then by Lemma 8, there must be some time t* ≤ n − m where at least t* empty locations are unblocked and all blocked empty locations are outside SW(t*). Let E_j be the minimal-index blocked location which is outside SW(t*) at time t*. Since there are no blocked empty locations inside SW(t*), all locations E_i with i < j are unblocked. Hence, E_j will become unblocked as soon as it swaps with the head of the winning segment. Since (by the clockwise sorting order of E_1, E_2, . . .) E_{j+1} cannot swap with the winning segment head before E_j is unblocked, E_{j+1} will also become unblocked after the first time step where it swaps with the winning segment head. The same is true for E_{j+2}, . . . , E_{n−m}. Hence, every empty location that H_W(t) swaps with after time t* becomes unblocked in the subsequent time step. By Lemma 9, the total number of swaps H_W(t) could have made before time T_stable is thus at most t* + (n − m − j) ≤ n − m. Whenever an empty location is one step clockwise from H_W(t), they will swap with probability at least 0.5 (the swap is not guaranteed, since it is possible the location is also adjacent to the head of another segment, and hence a tiebreaker will occur in regard to which segment head occupies the empty location in the next time step). Consequently, the expected number of time steps in which H_W(t) is not adjacent to the head of another segment is bounded by 2(n − m).
The proof of Theorem 1 now follows.
Proof. Lemma 5 tells us that before time T_stable, |SW(t)| does not change in at most 2(n − m) time steps in expectation, whereas Lemma 4 tells us that the expected number of changes to |SW(t)| before time T_stable is at most m². Hence, for any configuration of m locusts on a ring of track length n, E[T_stable] ≤ m² + 2(n − m).
Let us now show a locust configuration for which E[T_stable] = Ω(m² + n), so as to asymptotically match the upper bound we found. Consider a ring with k = 1, m divisible by 2, and an initial locust configuration where locusts are found at coordinates (0, 1), (1, 1), . . . , (m/2 − 1, 1) with clockwise heading and at (−1, 1), (−2, 1), . . . , (−m/2, 1) with counterclockwise heading, and the rest of the ring is empty. This is a ring with exactly two segments, each of size m/2. Since after every conflict the segment sizes are offset by 1 in either direction, the expected number of conflicts between the heads of the segments that is necessary for stabilization is equal to the expected number of steps a random walk with absorbing boundaries at m/2 and −m/2 takes to end, which is m²/4 (see [16]). Since the heads of the segments start at distance n − m from each other, it takes Ω(n − m) steps for them to reach each other. Hence the expected time for this ring to stabilize is Ω(m² + n − m).
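Theorem 1 can also be checked numerically by simulating the single-track model directly. The harness below is our own test code (it assumes the step_track sketch from Section 2 is in scope, and the parameters are arbitrary); it estimates E[T_stable] over random initial configurations and compares it with the m² + 2(n − m) bound.

```python
import random

def t_stable(n: int, m: int) -> int:
    """Steps until all m locusts on a ring of length n share one heading.
    Uses step_track() from the sketch in Section 2."""
    cells = random.sample(range(n), m)
    track = {x: random.choice((1, -1)) for x in cells}
    t = 0
    while len(set(track.values())) > 1:
        track = step_track(track, n)
        t += 1
    return t

n, m, trials = 60, 20, 200
avg = sum(t_stable(n, m) for _ in range(trials)) / trials
print(f"mean T_stable: {avg:.0f}  vs  bound m^2 + 2(n - m) = {m*m + 2*(n - m)}")
```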
Locusts on wide ringlike arenas (k > 1)
Let us now investigate the case where m locusts are marching on a cylinder of height k > 1 partitioned into k tracks of length n. The first question we should ask is whether, just as in the case of the k = 1 setting, there exists some time T where all locusts have identical heading. The answer is "not necessarily": consider for example the case k = 2 where on the k = 1 track, all locusts march clockwise, and on the k = 2 track, all locusts march counterclockwise. According to the track-switching conditions (Section 2), no locust will ever switch tracks in this configuration, hence the locusts will perpetually have opposing headings. As we shall prove in this section, on the cylinder, swarms stabilize locally-meaning that eventually, all locusts on the same track have identical heading, but this heading may be different between tracks.
Let us say that the yth track is stable if all locusts whose location is (·, y) have identical heading. Note that once a track becomes stable, it remains this way forever, since by the model, the only locusts that may move into the track must have the same heading as its locusts. Let T_stable be the first time when all k tracks are stable. Our goal will be to prove the following asymptotic bounds on T_stable:

Theorem 10
For any configuration of m locusts on a cylinder with k tracks of length n, E[T_stable] = O(min(mn + m^2, log(k)n^2)).

The bound O(mn + m^2) is more accurate when m is small (m ≪ log(k)n), and the bound O(log(k)n^2) is more accurate when m is large.
Recalling Definition 2, each locust in the system belongs to some segment. Each track has its own segments. Locusts leave and join segments due to conflicts, or when they pass from their current segment to a track on a different segment. In this section, we will treat segments as having persistent identities, similar to SW in the previous section. We introduce the following notation: Definition 11. Let S be a segment whose tail is A at some time t 0 . We define S(t) to be the segment whose tail is A at the beginning of time t. If A is not a segment tail at time t, then we will say S(t) = ∅ (this can happen once A changes its heading or moves to another track, or due to another segment merging with S(t) which might cause b(A ← ) to equal b(A), thus making A no longer the tail).
Furthermore, define S 1 to be the segment tail of S and S i+1 = S → i .
Let us give a few examples of the notation in Definition 11. Suppose at time t 1 we have some segment S. Then the tail of S is S 1 , and the head is S |S| . S(t) is the segment whose tail is S 1 at time t, hence S(t 1 ) = S. Finally, S(t) |S(t)| is the head of the segment S(t).
In the k > 1 setting, locusts can frequently move between tracks, which complicates our study of T stable . Crucially, however, the number of segments on any individual track is non-increasing. This is because, first, as shown in the previous section, locusts moving and conflicting on the same track can never create new segments. Second, by the locust model, locusts can only move into another track when this places them between two locusts that already belong to some (clockwise or counterclockwise) segment.
That being said, locusts moving in and out of a given track make the technique we used in the previous section infeasible. In the following definitions of compact and deadlocked locust sets, our goal is to identify configurations of locusts on a given track which locusts cannot enter from another track. Such configurations can be studied locally, focusing only on the track they are in. In the next several lemmas, we will bound the amount of time that can pass without either the number of segments decreasing, or all segments entering into deadlock.
Definition 12.
We call a sequence of locusts X 1 , X 2 , . . . compact if X i+1 = X → i and either: 1. every locust in X has clockwise heading and for every i < |X|, dist c (X i , X i+1 ) ≤ 2, or 2. every locust in X has counterclockwise heading and for every i < |X|, dist cc (X i , X i+1 ) ≤ 2.
An unordered set of locusts is called compact if there exists an ordering of all its locusts that forms a compact sequence.
Definition 13. Let X = {X_1, . . . , X_j} and Y = {Y_1, . . . , Y_k} be two compact sets, such that the locusts of X have clockwise heading and the locusts of Y have counterclockwise heading. X and Y are in deadlock if dist_c(X_j, Y_k) = 1. (See Figure 4.) A compact set of locusts X is essentially a platoon of locusts, all on the same track, which are heading in one direction and are all jammed together with at most one empty space between each consecutive pair. As long as X remains compact, no new locusts can enter the track between any two locusts of X, because the model states that locusts do not move vertically into empty locations to which a locust is attempting to move horizontally, and the locusts in a compact set are always attempting to move horizontally to the empty location in front of them.
Definition 14.
A maximal compact set is a set X such that for any locust A / ∈ X, X ∪ A is not compact.
A straightforward observation is that locusts can only belong to one maximal compact set: Observation 15 Let A be a locust. If X and Y are maximal compact sets containing A, then X = Y .
Lemma 16 Let X and Y be two sets of locusts in deadlock at the beginning of time t. Then at every subsequent time step, the locusts in X ∪ Y can be separated into sets X′ and Y′ that are in deadlock, or the locusts in X ∪ Y all have identical heading.
Proof. It suffices to show that if X and Y are in deadlock at time t, they will remain that way at time t + 1, unless X ∪ Y's locusts all have identical heading. Let us assume without loss of generality ("w.l.o.g.") that X has clockwise heading, and therefore Y has counterclockwise heading. By the definition of deadlock, at time t, X_j and Y_k conflict, and the locust that loses joins the other set. Suppose w.l.o.g. that X_j is the locust that lost. If |X| = 1, then the locusts all have identical heading, and we are done. Otherwise, set X′ = X \ {X_j} and Y′ = Y ∪ {X_j}. Note that since X and Y are compact at time t, no locust could have moved vertically into the empty spaces between pairs of locusts in X ∪ Y. Furthermore, the locusts of X and Y all march towards X_j and Y_k respectively, hence the distance between any consecutive pair X_i, X_{i+1} or Y_i, Y_{i+1} could not have increased. Thus X′ and Y′ are compact.
To show that X′ and Y′ are deadlocked at time t + 1, we need only show that dist_c(X_{j−1}, X_j) is 1 at time t + 1. Since the distances do not increase, if dist_c(X_{j−1}, X_j) was 1 at time t, we are done. Otherwise dist_c(X_{j−1}, X_j) = 2 at time t, and since X_j did not move (it was in a conflict with Y_k), X_{j−1} decreased the distance in the last time step, hence it is now 1.
Lemma 17
Suppose P and Q are the only segments on track K at time t 0 , and P 's locusts have clockwise heading. Let d = dist c (P 1 , Q 1 ). After at most 3d time steps, P (t 0 + 3d) and Q(t 0 + 3d) are in deadlock, or the track is stable.
Proof. The track K consists of locations of the form (x, y) for some fixed y and 1 ≤ x ≤ n. For brevity, in this proof we will denote the location (x, y) simply by its horizontal coordinate, i.e., x, by writing (x) = (x, y).
We may assume w.l.o.g. that t 0 = 0, and that P 1 is initially at (0). Note that this means Q 1 is at (d) at time 0. If at any time t ≤ 3d, the track is stable, then we are done, so we assume for contradiction that this is not the case. This means that P 1 and Q 1 do not change their headings before time 3d. This being the case, we get that dist c (P 1 , Q 1 ) is non-increasing before time 3d. As the segments P (t) and Q(t) move towards each other at every time step t ≤ 3d, we can consider only the interval of locations [0, d], i.e., the locations (0), (1), . . . (d). We then define the distance dist(·, ·) between two locusts in this interval whose x-coordinates are x 1 and x 2 as |x 1 − x 2 |.
At any time t ≤ 3d, we may partition the locusts in [0, d] into maximal compact sets of locusts. This partition is unique, by Observation 15. Let us label the maximal compact sets of locusts that belong to P(t) as C^t_1, C^t_2, . . . , C^t_{c_t}, where the sets are indexed from 1 to c_t, sorted by increasing x-coordinates, such that C^t_1 contains the locusts closest to (0). Analogously, we label the maximal compact sets that belong to Q(t) as W^t_1, W^t_2, . . . , W^t_{w_t}, with indices running from 1 to w_t, sorted by decreasing x-coordinates, such that W^t_1 contains the locusts that are closest to (d) (see Figure 5). In this proof, the distance between two sets of locusts X, Y, denoted dist(X, Y), is defined simply as the minimal distance between two locusts A ∈ X, B ∈ Y. Our proof will utilise the functions

L_1(t) = Σ_{i=1}^{c_t − 1} dist(C^t_i, C^t_{i+1}),   L_2(t) = Σ_{i=1}^{w_t − 1} dist(W^t_i, W^t_{i+1}),   L_3(t) = dist(C^t_{c_t}, W^t_{w_t}),   L(t) = L_1(t) + L_2(t) + L_3(t).

L_1(t) is the sum of distances between consecutive clockwise-facing sets in the partition at time t. L_2(t) is the sum of distances between the counterclockwise sets. L_3(t) is the distance between the two closest clockwise- and counterclockwise-facing sets. The function L(t) is the sum of distances between consecutive compact sets in the partition. When L(t) = 1, there is necessarily only one clockwise-facing and one counterclockwise-facing set in the partition, which must equal P(t) and Q(t) respectively. Furthermore, L(t) = 1 implies that the distance between P(t) and Q(t) is 1. Hence when L(t) = 1, P(t) and Q(t) are in deadlock. The converse is true as well, hence L(t) = 1 if and only if P(t), Q(t) are in deadlock. We will use L(t) as a potential or "Lyapunov" function [22] and show it must decrease to 1 within 3d time steps. By Lemma 16, once P and Q are in deadlock they will remain in deadlock until one of them is eliminated, which completes the proof.
Let us denote by max(X) the locust with maximum x-coordinate in X, and by min(X) the locust with minimal x-coordinate. We may also use max(X) and min(X) to denote the x-coordinate of said locust. Note that dist(C^t_i, C^t_{i+1}) is the distance between max(C^t_i) and min(C^t_{i+1}). Recall that in the locust model, every time step is divided into a phase where locusts move horizontally (on their respective tracks), and a phase where they move vertically. First, let us show that the sum of distances L_1(t) does not increase due to changes in either the horizontal or the vertical phase. Since L_1(t) is the sum of distances between compact partition sets whose locusts move clockwise, and for all C^t_i except perhaps C^t_{c_t}, max(C^t_i) always moves clockwise, the distance dist(C^t_i, C^t_{i+1}) does not increase as a result of locust movements (note that clockwise movements of max(C^t_i) do not result in a new compact set because the rest of the locusts in C^t_i follow it). Furthermore, since conflicts cannot result in a new maximal compact set in the partition, conflicts do not increase L_1(t). Hence, L_1(t) does not increase in the horizontal phase. In the vertical phase, clockwise-heading locusts entering the track either create a new set in the partition, which does not affect the sum of distances (as they then merely form a "mid-point" between two other maximal compact sets), or they join an existing compact set, which can never increase L_1(t). By the locust model, the only locusts that can move tracks are max(C^t_{c_t}) and min(W^t_{w_t}), since these are the only locusts for which the condition b(A) = b(A^→) is true, so locusts moving tracks cannot increase L_1(t) either. In conclusion, L_1(t) is non-increasing at any time step. By analogy, L_2(t) is non-increasing.
Similar to L_1 and L_2, the distance L_3(t) cannot increase as a result of locusts entering the track. It can increase as a result of a locust conflict which eliminates either W^t_{w_t} or C^t_{c_t}, but such an increase is compensated for by a comparable decrease in either L_1(t) or L_2(t). It is also simple to check that, since P(t) and Q(t) are always moving towards each other when they are not in deadlock (i.e., when L(t) > 1), there will be at least two compact sets in the partition that decrease their distance to each other, hence L_1, L_2 or L_3 must decrease by at least 1 in the horizontal phase.
To conclude: L_1(t) and L_2(t) are non-increasing. L_3(t) is non-increasing during the horizontal phase and as a result of new locusts entering K. If L(t) > 1, L(t) decreases during each horizontal phase. Hence, L(t) decreases in every time step where L(t) > 1 and no locusts in K move to another track.
What happens when locusts in K do move to another track? As proven, L_1(t) and L_2(t) do not increase. However, the distance L_3(t) may increase, since the only locusts that can move tracks are max(C^t_{c_t}) and min(W^t_{w_t}). It is straightforward to check that when C^t_{c_t} contains more than one locust, L_3(t) will increase by at most 2 as a result of max(C^t_{c_t}) moving tracks. When C^t_{c_t} contains exactly one locust, L_3(t) can increase significantly (as L_3(t) then becomes the distance between C^t_{c_t−1} and W^t_{w_t}), but any increase is matched by the decrease in L_1(t) as a result of C^t_{c_t} being eliminated. Analogous statements hold for W^t_{w_t}, and hence L_3(t) can increase by at most 2 as a result of one locust moving out of the track. We need to bound, then, the number of locusts in K that move tracks before time 3d. We define the potential function

F(t) = L_1(t) + L_2(t) − c_t − w_t + |P(t) ∪ Q(t)|.

Up to an additive constant, F(t) is the sum of the empty locations between consecutive compact sets in the partition whose locusts have the same heading, plus the number of locusts in K. Note that F(t) ≥ 0 at all times t. We will show F(t) is non-increasing, and that it decreases whenever a locust leaves the track. Hence, at most F(0) locusts can leave the track.
Let us show that F(t) is non-increasing. We already know L_1 and L_2 are non-increasing. In the horizontal phase, |P(t) ∪ Q(t)| is of course unaffected. c_t and w_t can decrease as a result of maximal compact sets merging, hence increasing F, but this can only happen when the distance between two such sets has decreased, hence the resulting increase to F is undone by a decrease in L_1 and L_2. Hence, F(t) does not increase because of locusts' actions during the horizontal phase.
Likewise, locusts leaving K can decrease c_t or w_t when they cause a maximal compact set to be eliminated, but this is matched by a comparable decrease in L_1 or L_2, which means that F does not increase due to locusts moving out of the track. Furthermore, |P(t) ∪ Q(t)| decreases when this happens. Hence, a locust moving out of the track decreases F(t) by at least 1. Finally, let us show that locusts entering the track do not increase F(t).
At time t, locusts can only enter the track at empty locations that are found in intervals of the form [max(C^t_{i−1}), min(C^t_i)] for some i (or the analogous intervals between consecutive W^t sets). In particular, locusts cannot enter empty locations that are between two locusts belonging to the same compact set (because a locust in that set will always be attempting to move to that location in the next time step, and the model disallows vertical movements to such locations), nor can they enter the track on the empty locations between max(C^t_{c_t}) and min(W^t_{w_t}). Thus, locusts entering the track at time t decrease the number of empty locations between two clockwise or counterclockwise compact partition sets (and perhaps cause the sets between which they enter to merge into a single compact set). This will always decrease L_1(t) + L_2(t) − c_t − w_t by at least 1 and increase |P(t) ∪ Q(t)| by 1. On net, we see that new locusts entering K either decrease or do not affect F.
In conclusion, F (t) is non-increasing, and any time a locust moves to another track, F (t) decreases by 1. Thus, at most F (0) locusts can move from K to another track. Recall that locusts moving out of the track can increase L(t) by at most 2. Hence after at most L(0) + 2F (0) ≤ d + 2d = 3d time steps, L(t) = 1.
Lemma 18
Let seg(t) denote the set of segments in all tracks at time t. At time t+3n, either every segment is in deadlock with some other segment, or |seg(t + 3n)| < |seg(t)|.
Proof. Consider some track K and a segment P which is in that track at time t. Let us assume that |seg(t + 3n)| = |seg(t)|, and show that P(t + 3n) must be in deadlock with another segment. At any time t′ ≥ t, as long as the number of segments on K does not decrease, the locusts of P(t′) will be marching towards locusts of another segment, which we will label Q(t′). They cannot collide or conflict with locusts belonging to any segment other than Q(t′). Hence, other segments in K do not affect the evolution of P(t) and Q(t) before time t + 3n, and we can assume w.l.o.g. that P(t) and Q(t) are the only segments in K at time t. Let d be as in the statement of Lemma 17. Since n ≥ d, Lemma 17 tells us that at some time t ≤ t* ≤ t + 3n, P(t*) and Q(t*) must be in deadlock. Since by Lemma 16, P and Q must remain in deadlock until one of them is eliminated, we see that at time t + 3n they must still be in deadlock, since we assumed |seg(t)| = |seg(t + 3n)|.

Theorem 19
E[T_stable] = O(mn + m^2).

Proof. For i ≥ 1, let T_{2i} denote the total time during which |seg(t)| = 2i. Let us estimate E[T_{2i}]. Suppose that at time t, the number of segments is 2i. Then after at most 3n steps, either the number of segments has decreased, or all segments are in deadlock. There are in total i pairs of segments in deadlock, and as there are m locusts, there must be a pair P(t + 3n), Q(t + 3n) that contains at most m/i locusts. By Lemma 16, P(t + 3n), Q(t + 3n) remain in deadlock until either P or Q is eliminated. We can compute precisely how long this takes, since at every time step after time t + 3n, the heads of P and Q conflict, resulting in one of the segments increasing in size and the other decreasing. Hence, the expected time it takes P or Q to be eliminated is precisely the expected time it takes a symmetric random walk starting at 0 to reach either |P(t + 3n)| or −|Q(t + 3n)|, which is |P(t + 3n)| · |Q(t + 3n)| ≤ (m/2i)^2. Hence, E[T_{2i}] ≤ 3n + (m/2i)^2. Consequently:

E[T_stable] ≤ Σ_{i=1}^{⌊m/2⌋} E[T_{2i}] ≤ Σ_{i=1}^{⌊m/2⌋} (3n + (m/2i)^2) ≤ (3/2)mn + (m^2/4)(π^2/6) = O(mn + m^2),

where we used the inequality |seg(0)| ≤ m and the identity Σ_{i≥1} 1/i^2 = π^2/6. We prove next that E[T_stable] = O(log(k)n^2). For this, we require the following result:

Lemma 20
Consider k independent random walks with absorbing barriers at 0 and 2n, i.e., random walks that end once they reach 0 or 2n. The expected time until all k walks end is O(n^2 log(k)).
Proof. First, let us set k = 1 and estimate the probability that the one walk has not ended by time t. Let P be the transition probability matrix of the random walk, and let v be the vector describing the initial probability distribution of the location of the random walker. Then vP^t is the probability distribution of its location after t time steps [24]. The evolution of vP^t is well-studied and relates to "the discrete heat equation" [23]. The probability that the walk has not ended at time t is the sum of the entries of vP^t over the non-absorbing states. Asymptotically, this sum is bounded by O(λ^t), where λ = cos(π/2n) is the 2nd largest eigenvalue of P (cf. [23]). Returning to general k, let T_k be a random variable denoting the time when all k walks end. By looking at the series expansion of cos(1/x), we may verify that for n > 1, cos(π/2n) < 1 − 1/n^2. From the previous paragraph, and because the walks are independent, we therefore see that the probability any given walk has not ended by time t is at most C(1 − 1/n^2)^t for some constant C. Consequently, for t ≫ n^2, the following asymptotics hold for some constant C:

Pr(a given walk has not ended by time t) ≤ C(1 − 1/n^2)^t ≤ C · e^{−t/n^2},

where we used the fact that (1 + x/n)^n → e^x as n → ∞. Note that Pr(T_k ≥ t + n^2 log(C)) < 1 − (1 − e^{−t/n^2})^k. Hence:

E[T_k] = Σ_{t≥0} Pr(T_k ≥ t) ≤ n^2 log(C) + Σ_{t≥0} (1 − (1 − e^{−t/n^2})^k) = O(n^2 log(k)),

where we used the equality ∫_0^∞ (1 − (1 − e^{−x})^k) dx = 1 + 1/2 + . . . + 1/k = O(log(k)).

Theorem 21
E[T_stable] = O(log(k)n^2).

Proof. Let M_t denote the number of segments on a given track at time t that still face a segment of the opposite heading on their track, so that T_stable is reached once M_t = 0 on every track. Let us assume n is even for simplicity (the computation will hold regardless, up to rounding). We have that M_0 ≤ n, and M_t decreases in leaps of 2 or more (since segments can only be eliminated in pairs). Hence, T_stable is bounded by the amount of time it takes M_t to decrease at most n/2 times. By linearity of expectation and the previous paragraph, this can be bounded by summing 3n + c·log(k)·(2n/M_t)^2 over M_t = n, n − 2, n − 4, . . . , 2:

E[T_stable] ≤ Σ_{M = 2, 4, ..., n} (3n + c·log(k)·(2n/M)^2) = O(n^2) + O(log(k)n^2) = O(log(k)n^2).

As claimed.
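Lemma 20 is easy to check numerically. The sketch below (our own illustration, with each walk started from the midpoint n, which the lemma does not fix) estimates the expected time until all k walks are absorbed and compares it to the n^2 log(k) scale:

```python
import math
import random

def walk_end_time(n, rng=random):
    """Symmetric +/-1 walk started at n, absorbed at 0 or 2n."""
    pos, t = n, 0
    while 0 < pos < 2 * n:
        pos += rng.choice((1, -1))
        t += 1
    return t

def all_walks_end(n, k, rng=random):
    # Independent walks, so the time until all end is the max of k samples.
    return max(walk_end_time(n, rng) for _ in range(k))

n, k = 20, 8
est = sum(all_walks_end(n, k) for _ in range(300)) / 300
print(est, "vs n^2 * log(k) =", n * n * math.log(k))
```

The ratio of the two printed quantities stays bounded as k grows, consistent with the O(n^2 log(k)) bound (the hidden constant is not pinned down by the lemma).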
The proof of Theorem 10 follows immediately from Theorems 19 and 21, by taking the minimum.
Erratic track switching and global consensus

Theorem 10 shows that, after finite expected time, all locusts on a track have identical heading. This is a stable local consensus, in the sense that two different tracks may have locusts marching in opposite directions forever. We might ask what modifications to the model would force a global consensus, i.e., make it so that stabilization occurs only when all locusts across all tracks have identical heading. There is in fact a simple change that would force this to occur: let us assume that at time step t any locust has some probability of acting "erratically" in either the vertical or horizontal phase: 1. With probability r, a locust may behave erratically in the horizontal phase, staying in place instead of attempting to move according to its heading. 2. With probability p, a locust may behave erratically in the vertical phase, meaning that even if the vertical movement conditions (1)-(3) of the model (see Section 2) are not fulfilled, the locust attempts to move vertically to an adjacent empty space on the track above or below it (if such an empty space exists).
These behaviours are independent, and so a locust may behave erratically in both the vertical and horizontal phases, in just one of them, or in neither.
The next theorem shows that the existence of erratic behaviour forces a global consensus of locust headings. The goal is to prove that there is some finite time after which all locusts must have the same heading. Note that the bound we find for this time is crude, and is not intended to approximate T stable . We study the question of how p affects T stable empirically in the next section.
Theorem 22
Assuming there is at least one empty space (i.e., m < nk), and the probability of erratic track switching is 0 < r, p < 1, the locusts all have identical heading in finite expected time.
Proof. Our goal is to show that all locusts must have identical heading in finite expected time. We will find a crude upper bound for this time. It suffices to show that as long as there are two locusts with different headings in the system (perhaps not on the same track), there is a probability q bounded away from 0 that within some constant, finite number of time steps C (we will show C = O(log(k)n^2 + nk)), the number of locusts with clockwise heading will increase. This amounts to showing that there is a sequence of events, each individual event happening with non-zero probability, that culminates in a conflict between two locusts (since any conflict has probability 0.5 of increasing the number of clockwise locusts). Since q > 0, the only stable state of locust headings is the state where all locusts have identical heading, as otherwise there is always some probability that all locusts will have clockwise heading after m · C time steps; this completes the proof.
Let us show such a sequence of events. First let us consider the case where there is a track in which two locusts have non-identical headings. In this case, assuming no locusts behave erratically for O(log(k)n^2) steps (which occurs with a small but positive probability since p, r > 0), Theorem 10 tells us that in expected O(log(k)n^2) steps, locusts on the same track will have identical heading. Hence, there is a sequence of events that happens with non-zero probability which leads to local consensus in the tracks.
If any conflict occurs during this sequence, we are done. Otherwise, we need to show a sequence of events that leads to a conflict, assuming all tracks are stable. The only thing that causes locusts in local consensus to move tracks is erratic behaviour. If two adjacent tracks have locusts with non-identical headings, and there is at least one empty space in one of them, then (since r > 0) with some probability, within at most n time steps, an empty space in one track will be vertically adjacent to a locust in the other track. At this point, with probability p, that locust will move from one track to the other. This creates a situation where one track again contains locusts of different headings. If the erratic locust moves tracks at the right time, upon moving it will be adjacent to another locust in its new track whose heading is different. Hence, the erratic locust will enter a conflict in the next time step, which will increase the number of clockwise locusts with probability 0.5. Now let us consider a pair of adjacent tracks with locusts of different headings such that there is no empty space in either of them. We note that since there is at least one empty location in some track, erratic behaviour can cause that empty location to move vertically in an arbitrary fashion until, after at most k movements, it enters a track from the pair. With non-zero probability, this takes at most nk time steps, after which we are reduced to the situation in the previous paragraph.
A pair of adjacent tracks that have locusts with different headings must exist unless there is global consensus. Hence, in every O(log(k)n^2 + nk) time steps where there is no global consensus, there is some probability q > 0 that the number of clockwise-heading locusts will increase.
Simulation and empirical evaluation

Let us explore some questions about the expected value of T_stable through numerical simulations. Certain aspects of the locusts' dynamics were not studied in our formal analysis, the most interesting of which is the helpful effect of track switching on T_stable. Recall that our model allows locusts to switch tracks if this would enable them to avoid a conflict and join a track where, locally, locusts are marching in their same direction. At least in principle, this should help our locusts achieve local stability faster, and hence decrease T_stable. However, recall also that we do not specify when locusts switch tracks, which means that some locusts might never switch tracks, or might choose to do so at the worst possible moments. Hence, the positive effect track-switching usually has on T_stable cannot be reflected in the bounds we found for E[T_stable], since these bounds must hold for all possible locust behaviours. Under ordinary circumstances, however, it seems as though frequent track switching should noticeably decrease the time to local stabilization. As we shall see numerically, this is indeed the case. This justifies the track-switching behaviour as a mechanism that, despite being highly local, enables the locusts to come to local consensus about the direction of motion sooner.
In Figure 6, (a) and (b), we measure T stable as it varies with n and k, assuming the probabilities of erratic behaviour are 0 (i.e., r = p = 0). We simulate two different locust configurations: a "dense" configuration, and a "sparse" configuration. In the dense configuration, 50% of locations are initiated with a locust, with the locations chosen at random. In the sparse configuration, 10% of locations are initiated with a locust (or slightly more, to guarantee all tracks start with 2 locusts). The locusts are initiated with random heading. We measure the effect of track switching on T stable : the opaque lines measure T stable when locusts switch tracks as often as they can (while still obeying the rules of the model), and the dotted lines measure T stable when locusts never switch tracks. For every value of n, k, we ran the simulation 1000 to 3000 times and averaged T stable over all simulations.
As we can see, in the sparse configuration, track-switching has a significantly positive effect on time to stabilization. For example, with k = 30, n = 30, T stable is approximately 13.5 when locusts switch tracks as soon as they can, and approximately 25 when they never switch tracks-nearly double. In the dense configuration, we see that enabling locusts to move tracks has little to no effect, since the locust model rarely allows them to do so due to the tracks being overcrowded.
In column (c) of Figure 6, we measure how a non-zero probability p of erratic behaviour affects T_stable. We set r = 0. As we proved in the previous section, whenever p > 0, stabilization requires global rather than local consensus. Hence, we cannot directly compare the T_stable of these graphs with columns (a) and (b), where T_stable measures the time to local consensus. We see that E[T_stable] approaches ∞ as p goes to 0, as one would expect, since when p = 0, global stability can never occur in some initial configurations. E[T_stable] decreases sharply as p approaches a critical point around 0.1, and decreases at a slower rate afterwards. It is interesting to note that a low probability of erratic behaviour affects E[T_stable] significantly more in the sparse configuration: for p = 0.02, if locusts also switch tracks whenever the model allows them to, E[T_stable] was measured as approximately 1974, as opposed to 669 in the dense configuration. One of the core reasons for this seems to be that, in the sparse configuration, when a locust erratically moves to a track with many locusts not sharing its heading, it will often be able to non-erratically move back to its former track, thus preventing locust interactions between tracks of different headings. When we disabled the locusts' ability to switch tracks non-erratically, T_stable was significantly smaller in the sparse configuration (E[T_stable] ≈ 232 for p = 0.02).
Based on the above, we make the curious observation that, while non-erratic track switching accelerates local consensus, for some track-switching behaviours, it will in fact decelerate the attainment of global consensus. This is seen by the fact that frequent non-erratic track-switching was helpful in Columns (a) and (b) of Figure 6, but increased time to stabilization in Column (c). This is perhaps a very natural observation, because agents that aggressively switch tracks will attempt to avoid conflict as often as possible, whereas conflict is necessary to create global consensus.
Concluding remarks
We studied collective motion in a model of discrete locust-inspired swarms, and bounded the expected time to stabilization in terms of the number of agents m, the number of tracks k, and the length of the tracks n. We showed that when the swarm stabilizes, there must be a local consensus about the direction of motion. We also showed that, when the model is extended to allow a small probability of erratic behaviour to perturb the system, global consensus eventually occurs.
A direct continuation of our work would be to find upper bounds on the time to stabilization when there is some probability of erratic behaviour. Furthermore, our empirical simulations suggest several curious phenomena related to erratic behaviour: first, there seems to be a clash between "erratic" and non-erratic, "rational" track-switching, as when locusts switch tracks non-erratically in order to avoid collisions, this seems to accelerate the attainment of local consensus, but mostly hinder the attainment of global consensus. Second, increasing the probability p of erratic track-switching behaviour was helpful in accelerating global consensus up to a point, but in simulations, its impact seemed to fall off past a small critical value of p. In future work, it would be interesting to investigate these aspects of the model. Although our dynamics model is inspired by experiments on locusts, it can be understood in more abstract terms as a model that describes a situation where many agents that wish to maintain a direction of motion are confined to a small space where they exert pressure on each other. It is natural to ask what kinds of collective dynamics, if any, we should expect when this small space has a different topology; rather than a ringlike arena, we might consider, e.g., a square arena. We believe that rich models of swarm dynamics can be discovered through observing natural organisms exert pressure on each other in such environments. In the introduction, we mentioned points of similarity between our model and models of opinion dynamics. We suspect that these points of similarity will remain in settings with non-ringlike arenas, and might provide a starting point for formally modelling and analysing them. | 2020-12-10T02:15:51.994Z | 2020-12-09T00:00:00.000 | {
"year": 2020,
"sha1": "5af15d82e686db4e232f6dae808ed07adb10dfe4",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2012.04980",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0d35ba7ada67509039f4fda3ce54f60f6a88ceaa",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
258466428 | pes2o/s2orc | v3-fos-license | Chronic Suppurative Otitis Media Patient Presenting With Hyperhomocysteinemia in Granulomatosis With Polyangiitis
Granulomatosis with Polyangiitis (GPA) can present with Cerebral Venous Sinus Thrombosis (CVST), Chronic Suppurative Otitis Media, and Lower Motor Neuron (LMN) Facial Palsy. However, an association between CVST and Hyperhomocysteinemia in GPA has not previously been reported. Here, we report a case of CVST and Hyperhomocysteinemia in Proteinase 3 anti-neutrophil cytoplasmic antibody (PR3-ANCA) positive GPA without renal involvement.
Introduction
Granulomatosis with Polyangiitis (GPA), formerly known as Wegener's granulomatosis, is characterized by necrotizing granulomatous inflammation, usually involving the upper and lower respiratory tracts with nodules, alveolar hemorrhage, and necrotizing glomerulonephritis. However, any organ system can be affected during disease progression [1][2][3]. Renal involvement is the most common (18-77%), Central Nervous System (CNS) involvement occurs in 1-8%, and Otitis Media in up to 25-44% of GPA patients [4]. A recent case report has revealed an association between Cerebral Venous Sinus Thrombosis (CVST) and Hypertrophic Pachymeningitis (HP) in patients with Proteinase 3 anti-neutrophil cytoplasmic antibody (PR3-ANCA) positive GPA [5]. There is a clear relationship between CVST and Hyperhomocysteinemia [6]. A few studies have reported deep venous thrombosis (DVT) as venous involvement in GPA [7].
Case Presentation
A 34-year-old male presented to the General Medicine Outpatient Department with complaints of low-grade fever without chills for five months, gradually culminating in a persistent severe dry cough over the last three months, with blood in the sputum for the last two weeks. He took antibiotics and antipyretics, but there was no relief. He had a history of hospitalization two years ago for severe headaches and a single seizure episode. He was diagnosed with Cerebral Venous Sinus Thrombosis on MRI Brain Venogram (Figures 1-3), and serum homocysteine was 234.9 µmol/L (4.7-14.8). After taking Acenocoumarol 2 mg every alternate day, he remained asymptomatic and stopped taking it after four weeks as he felt he was completely alright.
FIGURE 1: Contrast Enhanced MRI Brain -Cerebral Venous Sinus Thrombosis
A non-invasive diagnostic procedure that uses a combination of a large magnet, radio frequencies, and a computer to produce detailed images of organs and structures within the body without the use of damaging ionizing radiation.
There are filling defects within the superior sagittal sinus, left transverse & sigmoid sinuses, and the left jugular bulb. Filling defects are also seen in the right transverse and straight sinus and the Galen region's vein. Cortical veins draining into these sinuses appear engorged, suggesting thrombosis.
FIGURE 2: Contrast Enhanced MRI Brain -Cerebral Venous Sinus Thrombosis
FIGURE 3: MRI Brain Venogram -Cerebral Venous Sinus Thrombosis
The mid and posterior parts of the superior sagittal sinus show no flow on venography. There is non-visualization of flow in the right transverse sinus, sigmoid sinus, and jugular bulb. The proximal aspect of the left transverse sinus also shows partial loss of flow. No obvious flow was seen in the straight sinus and vein of Galen. These findings suggest cerebral venous sinus thrombosis.
Three months ago, he complained of pain in his right ear with hearing loss and drooping of the right angle of his mouth. He was treated for Right Chronic Suppurative Otitis Media and Lower Motor Neuron Facial Palsy (Figure 4) with antibiotics and Acyclovir for three weeks. Now he presented with difficulty breathing; on examination, there was pallor, right infra-scapular crepitations with bronchial breath sounds, and drooping of the right side of the mouth with inability to close the right eye completely. Chest X-ray (Figure 5) showed right lower zone consolidation, and blood investigations revealed neutrophilic leukocytosis and mild liver dysfunction; urinalysis and renal function were normal. Serum homocysteine was 43.6 µmol/L (4.7-14.8), erythrocyte sedimentation rate (ESR) 120 mm/hr (< 15), Vitamin B12 192 pg/mL (239-931), C-reactive protein (CRP) 16 mg/L (< 10), folic acid 8 ng/mL (3-17), and D-dimer 2170 ng/mL (< 250); the Mantoux test was negative. Empirical antibiotic therapy and Vitamin B12 supplementation were started within three days of admission. The chest X-ray appearance deteriorated (Figure 6), and the oxygen requirement increased. Considering the clinical and radiological deterioration, CECT Thorax (Figures 7, 8) was done, which suggested a large area of consolidation with cavitation in the right lower lobe and multiple cavitary nodules in bilateral lung fields in the mid and lower lobes.
CECT -Contrast Enhanced Computerized Tomography
It is a diagnostic imaging tool used to create detailed images of internal organs, bones, soft tissue, and blood vessels. Intravenous contrast dye is injected into the body, which helps provide a detailed view of the blood vessels.
A large area of consolidation with internal areas of breakdown and cavitation is seen in the right lower lobe with surrounding confluent nodular densities.
On day five of the presentation, our patient complained of hoarseness of voice. We suspected Pulmonary Tuberculosis, Autoimmune Disease, Systemic Vasculitis, Sarcoidosis, Bronchial Carcinoma, Lyme disease, and Nocardiosis. So, bronchoscopy was performed, which showed severe inflammation of the mucosa, and the right vocal cord was thickened. TB PCR was negative, and a Bronchoalveolar Lavage (BAL) cell block revealed no hemosiderin-laden macrophages and no malignant cells. Bronchial biopsy suggested chronic inflammatory pathology (Figure 9). BAL Nocardia PCR, serum angiotensin-converting enzyme (ACE) levels, and Lyme Borrelia burgdorferi IgG were negative. X-ray of the paranasal sinuses (PNS) suggested Maxillary Sinusitis with a Deviated Nasal Septum (Figure 10). Antinuclear antibody (ANA) and the perinuclear form of antineutrophil cytoplasmic antibody (P-ANCA) were also negative, while serum PR3 (c-ANCA) by EIA was 200 U/mL (positive if ≥ 5). Considering our patient's clinical and radiological improvement (Figure 11) with symptomatic treatment and a BVAS (Birmingham Vasculitis Activity Score) of 38±3/63, he was initiated on oral Cyclophosphamide 2 mg/kg with oral steroids. The patient has been on follow-up for the last five months and will be tested for methylenetetrahydrofolate reductase (MTHFR) mutation at the next visit.
GPA and its thrombotic complications remain underrecognized diseases, and early diagnosis leads to a favorable prognosis. At this point, it can only be hypothesized that Hyperhomocysteinemia and the etiopathogenesis of GPA are directly related, as no other cases have been reported yet. Further research is needed to study this relationship.
BVAS -Birmingham Vasculitis Activity Score
It is designed to document new or worsening clinically active vasculitis that would be likely to require treatment, after exclusion of other causes such as infection, hypertension, etc.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2023-05-04T15:09:42.393Z | 2023-05-01T00:00:00.000 | {
"year": 2023,
"sha1": "269f18b6481c589540f96ad9ad015b4698eae8e7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7759/cureus.38412",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "15c32d2cf4a868c8ee76b357cc489150d37c2559",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
225244087 | pes2o/s2orc | v3-fos-license | Locked Platting for Distal Femur Fractures, Is it a Good Option?
Internal fixation with a blade plate was the standard recommendation in the 1970s by the AO/ASIF (Association for the Study of Internal Fixation). During the following years, other implants were developed, such as the dynamic condylar screw (DCS) with a 95-degree side plate, the condylar buttress plate, and intramedullary nails. In the presence of comminution and/or osteoporosis, the goals of stable fixation and early mobilization can be difficult to achieve [6][7][8]. In recent decades, new technologies were introduced for fixation of distal femoral fractures, such as the less invasive stabilization system (LISS) and the anatomical distal femoral locked plate (DFLP).
These implants provide multiple points of fixed-angle fixation between the plate and the screws. In theory, this should reduce the tendency of varus collapse and failure of fixation [9,10].
Aim of the Study
This study was aimed at evaluating the use of the laterally applied distal femoral locked plate in treating various patterns of distal femoral fractures.
Methods
The study was done in KHUH, Kingdom of Bahrain, to review the results of distal femur fracture fixation by DFLP, after approval from the ethical committee of the hospital. Inclusion criteria included all adults with distal femoral fractures of AO/OTA classification types 3.2 and 3.3 [11], all closed fractures, and open fractures grade 1 and 2 according to the Gustilo-Anderson classification [12]. Twenty-four patients had abnormal bone mineral density (BMD) at the time of injury: 16 (66.6%) were osteoporotic and 8 patients (33.4%) were osteopenic.
Exclusion criteria were femoral fractures in locations other than the distal third, open fractures grade 3, and femoral fractures in skeletally immature patients. This study included 41 patients with an average age of 62.9 years (18-94 years). Ten patients had AO/OTA classification type 3.2 fractures, 29 patients had type 3.3, and 2 cases had both type 3.2 and type 3.3 fractures. Five patients had Gustilo-Anderson type-I open fractures and 2 cases had type-II open fractures. Internal fixation by an open technique using a direct lateral approach to the distal femur was used in 35 cases, while a minimally invasive plate osteosynthesis (MIPO) technique was applied in the fixation of 6 fractures. Twenty-seven patients (65.8%) had associated comorbidities, the most common being diabetes mellitus, ischaemic heart disease, chronic kidney disease, and Alzheimer disease.
After fracture reduction and restoration of length, comminution at the fracture site was evident in 21 cases (51.2%). Bone substitute in the form of calcium phosphate granules or cancellous bone allograft was added to 10 of these fractures (47.6%).
Plate lengths of 9 and 11 holes were the most commonly used, applied to 25 fractures (60.9%). A five-hole plate was used to fix 6 uni-condylar fractures (type 3.3.B), a 13-hole plate was used on two occasions, and a 7-hole plate was applied to eight fractures. This study included 12 peri-prosthetic distal femur fractures (29.2%) that were fixed by the same technique.
All the patients followed the same postoperative protocol: the suction drain was removed 48 hours after surgery, and range-of-knee-motion exercises, both passive and active as tolerated, were initiated on the second postoperative day. Partial weight bearing using a Zimmer frame was initiated three weeks after the operation, except when the patient's neurological or cognitive condition did not permit safe ambulation; those patients were kept in bed and their activity was limited to bed-to-chair assisted transfer. In otherwise neurologically normal patients, full weight bearing was allowed only with radiologically evident callus formation. IBM SPSS 25.0 statistics software was used for all statistical analysis. Student's t-test and the Mann-Whitney U-test were used to compute the differences between the groups. Pearson correlation analysis was performed for all bivariate analyses. A p-value of less than 0.05 was considered statistically significant.
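As a sketch of the statistical comparison described above, the snippet below runs both tests on hypothetical per-patient union times. Only the group means (6.7 vs. 5.1 months) mirror the figures in the Results section; the individual values are invented for illustration and are not the study's data:

```python
from scipy import stats

# Hypothetical union times in months; group means approximate the reported
# 6.7 (abnormal BMD) and 5.1 (normal BMD), individual values are invented.
abnormal_bmd = [6, 7, 8, 5, 7, 9, 6, 7, 6, 8, 7, 6]
normal_bmd = [5, 4, 6, 5, 5, 6, 4, 6]

t_stat, p_t = stats.ttest_ind(abnormal_bmd, normal_bmd)
u_stat, p_u = stats.mannwhitneyu(abnormal_bmd, normal_bmd,
                                 alternative="two-sided")
print(f"t-test p = {p_t:.3f}, Mann-Whitney p = {p_u:.3f}")  # < 0.05 => significant
```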
Results
A total of 41 patients were enrolled in this study from May 2012 till May 2018; of these, 13 (31.7%) were males and 28 (68.3%) were females.
Age ranged between 18 and 94 years, with an average of 62.2 years. One patient died before achievement of union and 4 cases were lost during the follow-up period. Also, 2 cases (both peri-prosthetic fractures) had complications: one lady aged 94 years had a deep infection with metal failure, and after metal removal and debridement, the knee joint was surgically fused. The other case, a lady aged 68 years, had pull-out of the locked plate system and went on to non-union but refused revision surgery.
The remaining 34 patients were followed up till complete union within a time range of 3-9 months with an average of 6.7 months.
Detailed analysis of the results showed that the mean healing time of fractures in patients who had abnormal bone mineral density was 6.7 months, while it was 5.1 months for fractures in patients with normal bone. The difference was statistically significant (p = 0.045) (Table 1). The average union time for comminuted fractures (21 cases) was 7.2 months, while that for non-comminuted fractures was 5.5 months; this difference was also statistically significant (p = 0.03) (Table 2). So, both low BMD and comminution of the fracture had a positive relation with union time; both factors increased the time to union. Ten patients had artificial bone graft substitute added into the fracture gap at the time of fixation due to comminution, and 11 were fixed without grafting. The average healing times were 7.7 and 7 months respectively, and the difference was statistically insignificant.
The Pritchett rating system [13] was used to assess the functional outcome of patients (Table 3). The knee movement at the latest follow-up ranged between 60 and 130 degrees, with an average of 106 degrees. Excellent results were found in 14 patients, good in 10, fair in 4 (11.76%), and poor in 6 cases (17.66%). So, good and excellent results were reported in 24 patients (70.58%).
Discussion
This study looked at the results of using an anatomical locked plate to fix distal femoral fractures. Fractures of the femur in this region need special care to avoid the various complications that can occur, mainly varus mal-union and non-union [14].
Introduction of the locked plating systems reduces in general the complications encountered with the use of conventional plates.
Due to the fixed relation between the screws and the plate, the whole design acts as an "internal external fixator". However, understanding the biomechanical principles of these plates is essential to prevent the generation of non-union [15].
We had only one case of aseptic non-union with implant failure (Figure 1). This happened in a 68-year-old lady with a peri-prosthetic fracture of the distal femur. Review of her immediate postoperative X-ray images showed that the plate was not exactly fitting to the bone, with a gap of about 2 millimeters. Also, the plate was off the bone at the upper end in both the anteroposterior and lateral views. All the screw holes close to the fracture region were filled, which increased the stiffness of the construct. It has been shown that increasing the plate-bone distance decreases the axial and torsional stiffness [15]; this, together with the short working length of the plate, could be the reason for loss of fixation and non-union. This is similar to the rate of aseptic loosening reported by Loosen., et al. [16] and Haidukewych., et al. [17], but much less than the number recorded by Tank., et al. [18], who had 11 implant failures out of 67 patients (16%).
This characteristic of locked plates, a single stable angular construct, is very advantageous in comminuted fractures with osteoporotic bone. In our study, we fixed 24 distal femur fractures where the BMD was abnormal; all cases united after the first intervention (Figure 2). Gardner., et al. [20] reported that non-union of the distal femur occurs most often after open and comminuted fractures. It would be expected that if we added autogenous bone graft to fill the gaps in comminuted fractures, the time to union would be shorter than in those cases where nothing was added to supplement healing. However, the use of locked plating saved time and helped us to avoid the donor-site morbidity associated with harvesting iliac bone graft.
Our results regarding union of comminuted fractures are consistent with the conclusions of Hierholzer., et al. [21] who found that locked plates had a lower incidence of non-union when used to stabilize distal femur fractures.
Peri-prosthetic distal femoral fractures are more frequently reported recently with the increasing number of knee joint replacement surgeries and improved patient activity post-arthroplasty.
The current review included 12 peri-prosthetic fractures, all fixed by the lateral locked distal femoral plate. All fractures united after the first operation except 2 cases that went on to non-union, one septic and the other aseptic with construct failure, giving a union rate of 83%. This mimics the reported union rate in the study of Ricci., et al. [22], which reached 86%, and is also similar to the excellent results of Rab and Davis [23]. When conventional plates were used, Figgie., et al. [24] reported a 50% non-union rate in 10 supracondylar femoral peri-prosthetic fractures.
Intramedullary nails (IMN) showed a high rate of malalignment [25]. Biomechanical studies proved that IMN can resist varus stress better than locked plates [26], although this difference was clinically insignificant [27]. The use of IMN for distal femoral peri-prosthetic fractures is restricted by the available distal bone stock as well as the size and position of the femoral component notch [28]. On the contrary, Streubel., et al. [29] showed that even extremely distal peri-prosthetic supracondylar femoral fractures were successfully fixed by laterally applied locked plates.
Considering the postoperative protocol following the use of locked plates to fix distal femur fractures, we intended to be more careful. Unrestricted active and passive motion was allowed from the second postoperative day, but weight bearing was increased gradually starting after three weeks, keeping in mind each patient's general condition. Full weight bearing was allowed only after radiological evidence of bone bridging. A similar protocol was followed by Vink., et al. [30], although they allowed partial weight bearing earlier than three weeks. Also, Loosen., et al. [16], in their review of distal femur fractures in geriatric patients, permitted immediate weight bearing for only 3 out of 50 patients (6%). However, Poole., et al. [31] allowed immediate full weight bearing as tolerated for 84% of their patients; in their series, four fractures failed to unite, but the rate of clinical and radiological union was 95%.
The overall function of the patients in the current study was assessed using the Pritchett score. It depended mainly on evaluation of the knee range of motion and the presence of residual deformity or pain. The poor results in 6 patients, with a reduced and painful range of motion (less than 75 degrees), were attributed to their preoperative state. Five of them had total knee replacement (TKR) followed by a peri-prosthetic fracture, and these patients reported pain and partial limitation of knee motion before their femur fractures. The remaining patient had advanced osteoarthritis of the knee preceding the distal femur fracture.
We achieved overall good results in 70% of our cases. Similar good results were reported by Vink., et al. [28] and Rademakers., et al. [32], who showed that knee function could improve for up to one year after surgery.
Limitations of the Study
Our study is retrospective and included groups of patients with different ages and bone quality. The study sample of 34 patients, after exclusion of 7 cases, is relatively small, and the results could be more informative if the use of locked plates were compared with recent designs of locked nails used for distal femur fracture fixation.
Conclusion
The overall results in this study strongly support the use of locked distal femoral plates for fixation of various patterns of distal femoral fractures, particularly in the presence of osteoporosis and comminution.
Figure 3: Union in comminuted fracture with the fracture gap filled with bone substitute.
Table 3: The Pritchett rating system for distal femoral fractures. | 2020-09-03T09:12:23.243Z | 2020-08-21T00:00:00.000 | {
"year": 2020,
"sha1": "7d020bff1109689ecd216dfde02f5eb25b85f707",
"oa_license": null,
"oa_url": "https://doi.org/10.31080/asor.2020.03.0204",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a98a67e701c4a55c82b1dafacbfc0370c972618c",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269184461 | pes2o/s2orc | v3-fos-license | Synthesis of different types of nano-hydroxyapatites for efficient photocatalytic degradation of textile dye (Congo red): a crystallographic characterization
The textile industry, a vital economic force in developing nations, faces significant challenges, including the release of undesired dye effluents, which pose potential health and environmental risks that need to be minimized with the aid of sustainable materials. This study focuses on the photocatalytic potential of hydroxyapatite together with different dopants such as titanium dioxide (TiO2) and zinc oxide (ZnO). Here, we synthesized hydroxyapatite (HAp) using different calcium-source (calcium hydroxide, calcium carbonate) and phosphorus-source (phosphoric acid, diammonium hydrogen phosphate) precursors through a wet chemical precipitation technique. Pure and doped HAp were characterized via different techniques, including X-ray diffraction (XRD), Fourier Transform Infrared (FTIR) spectroscopy, scanning electron microscopy (SEM), and UV-vis spectroscopy. The effectiveness of the synthesized photocatalysts was evaluated by their activity toward a synthetic azo dye (Congo red). The photodegradation efficiencies of Ca(OH)2_HAp, CaCO3_HAp, ZnO-doped HAp, and TiO2-doped HAp were 89%, 91%, 86%, and 91%, respectively. Furthermore, at neutral pH, TiO2-doped HAp shows the highest degradation (86%), whereas ZnO-doped HAp possesses the lowest degradation (73%). Additionally, various XRD models (the Monshi-Scherrer, Williamson-Hall, and Halder-Wagner methods) were employed to study the crystallite dimensions.
Introduction
Water pollution and the energy crisis are global concerns, made more acute by population growth and rapid industrialization. The worldwide production of organic dyes is approximately 450 000 tons, and 20-60% of the wastewater generated by industries such as pigment, fertilizer, cosmetics, paper, and textile contains hazardous organic azo dyes [1,2]. Synthetic dyes like Congo red (CR) are difficult to biodegrade because of their stable molecules and complicated aromatic structures, which can induce carcinogenic and mutagenic effects [3,4]. Several purification processes, such as reverse osmosis filtration, biological degradation, adsorption, flocculation, and centrifugation, have been developed for wastewater purification [5,6]. However, the majority of these purification techniques fall short of complete degradation of the organic dye, owing to their high operating cost, complex steps, and difficulty of application [7]. So, there is an urgent necessity to improve the existing techniques to make the purification process more efficient, easier, more economical, and eco-friendly for synthetic dye degradation. Considering the above-mentioned criteria, photocatalysis has become a prominent approach for the degradation of synthetic dyes along with various organic compounds such as phenol and its derivatives [8,9]. However, the degradation efficiency of dyes depends on the nature of the photocatalyst used. Because of this, several photocatalysts have been developed and their performance in synthetic dye degradation evaluated [10,11]. Ideal HAp [Ca10(PO4)6(OH)2] crystal is monoclinic in nature [13,14]. However, conventional HAp has a lattice deficiency, giving a hexagonal ionic compound with the P63/m space group and 44 atoms in every repeating unit. The "Ca" cations occupy two distinct sites, Ca(I) and Ca(II): Ca(I) is associated with the columnar site along the c axis, while Ca(II) is located around the OH ions [15]. Since HAp has a complex molecular arrangement, various types of cations can be incorporated into its crystal structure [16][17][18][19]. Titanium dioxide (TiO2) and zinc oxide (ZnO), along with other semiconductors, have lately been used as affordable and environmentally safe photocatalysts for the decomposition of different contaminants [20]. Conversely, ZnO has a significant advantage over TiO2 in terms of its ability to absorb a larger portion of the UV spectrum [21][22][23][24]. Though HAps are well known as bioceramics, they can also be utilized as photocatalytic materials. HAp in pure form does not impart much catalytic activity, but in doped form, or in combination with other materials, the photocatalytic activity is increased. Normally, metals are used to replace the calcium ions of the hexagonal HAp crystal.
In the present work, HAp was synthesized from CaCO3 and Ca(OH)2 by the wet chemical precipitation method. TiO2 and ZnO were used to replace the calcium ions and thereby modify the crystallographic parameters, in the hope of augmenting the photocatalytic activity. An in-depth crystallographic analysis was also performed to find the variations arising from doping the materials.
Materials
To proceed with this experiment, calcium carbonate (CaCO3) and calcium hydroxide (Ca(OH)2) were utilized as the calcium sources, while phosphoric acid (H3PO4) and diammonium hydrogen phosphate [(NH4)2HPO4] served as the phosphate sources. Apart from that, zinc oxide (ZnO) and anhydrous titanium dioxide (TiO2) were employed as dopants for synthesizing doped hydroxyapatite. All these chemicals were purchased from E-Merck, Germany. No buffer solution was utilized in this experiment, but ammonium hydroxide (NH4OH) and nitric acid (HNO3) were used to maintain the pH (10-11) of the reaction media. Deionized (DI) water was produced via a two-stage distillation procedure.
2.2.1 Synthesis of doped and pure HAp.
To synthesize pure and metal-oxide-doped HAp, the molar ratio of calcium to phosphate was maintained at 5 : 3. At the beginning, predetermined amounts of Ca2+, as well as TiO2 (0.05%) and ZnO (0.05%), were separately dissolved in 50 mL of DI water. Diammonium hydrogen phosphate [(NH4)2HPO4] and H3PO4 were then added to the different Ca2+ solutions [from CaCO3 and Ca(OH)2] at a rate of 4 mL min−1. The solution was stirred at 300 rpm while introducing the H3PO4 and [(NH4)2HPO4] into the calcium-containing solution. Furthermore, the pH of the system was controlled at 10-11 by employing a 30% NH4OH solution. Finally, the solution was filtered and the product dried at 105 °C. A similar approach was followed for the metal-oxide-doped HAp (Fig. 1).
2.2.2 Photocatalytic activity. The photocatalytic activity of the pure and doped HAp was observed through the degradation of a Congo red (CR) dye solution under a halogen lamp (SEN TAI JM-500) placed on top of an in-house built wooden box, shown in Fig. 2. The distance from the lamp to the CR solution was kept constant (0.14 m). The box was connected to a cooling water circulation system, and the temperature and humidity of the system were maintained at approximately ∼25 °C and 60%, respectively. A UV-vis spectrophotometer (Hitachi U-2910) was utilized to measure the absorbance of the dye solution, from which the dye concentration was obtained. The degradation percentage (D_p) and degradation capacity (q_e) were measured by employing the mathematical eqn (a) and (b).
Degradation percentage: D_p (%) = [(C_0 − C_t)/C_0] × 100 (a)
Degradation capacity: q_e = (C_0 − C_t) × V/W (b)
Here, C_t and C_0 denote the final and initial concentrations of the samples at time t, respectively,25 while V and W represent the volume of the dye solution and the weight of the catalyst, respectively.
2.2.3 Scavenger's experiment. Trapping tests for radicals and electrons were performed to identify the species most responsible for the photodegradation of the dye under simulated sunlight (i.e., a halogen lamp). Several scavengers were used to probe the roles of hydroxyl radicals, holes, and electrons.26 Under the above-mentioned optimal settings, isopropyl alcohol (IPA) and ethylenediaminetetraacetic acid (EDTA) were utilized to examine the roles of hydroxyl radicals and electrons, respectively, for all synthesized HAp. Unless mentioned otherwise, 10 mL of the scavenger was added to 40 mL of 20 ppm Congo red dye solution for 90 min using 0.1 g of catalyst.
3 Results and discussion
3.1 XRD data interpretation
The patterns obtained from XRD for pure and doped HAp are exhibited in Fig. 3. Characteristic reflections such as the (002) plane were visible for these samples. The crystallographic study assesses crystalline features such as cell volume, degree of crystallinity, dislocation density, lattice parameters, crystallinity index, microstrain, and crystallite size, employing eqn (1)-(7).27,28 In these formulas, (h,k,l) indicates the lattice plane and a, b, c the lattice parameters; X_c = degree of crystallinity; θ = diffraction angle (in degrees); β = FWHM (full width at half maximum) in radians; D_c = crystallite size; δ = dislocation density; K = shape factor (Scherrer's constant) = 0.94; H_(hkl) = peak height of the respective plane; K_a = 0.24 for HAp; and CI_XRD = crystallinity index. The specific surface area of the synthesized HAp was estimated through eqn (8), S = 6/(ρ × D_c), where the density and crystallite size of HAp are denoted by ρ (3.16 g cm−3) and D_c, respectively.29 Crystallite dimensions in orderly distributed substances are essential in different applications, with minuscule crystallites distinguished by enormous surface areas and vice versa.30 Microstrain leads to crystallite deformation, which results in changes to a material's features, especially its applicability. Imperfections in crystalline substances are brought about by defects such as point, line, and area dislocations, which have a strong connection to the structure of the crystal.31 The value of the dislocation density was determined via eqn (6), and the data are shown in Table 1.
The level of crystallinity significantly impacts the properties of materials, yet modulating it efficiently can be challenging. The investigation indicates that the synthesized HAp samples vary in their degree of crystallinity, while the microstrain reflects the intrinsic stress of the crystalline planes, which can manifest as tensile or compressive forces.
The crystallinity index (CI) provides a numerical quantification of the crystal structure. In this segment, only the X-ray diffraction (XRD) data were used to calculate the crystallinity index, utilizing eqn (7); the resulting values are shown in Table 1.
3.1.1 Estimation of crystallite size using various models. Exact estimation of crystallite size is a vital requirement for any application. Many approaches and algorithms have been developed for determining the crystallite size of the HAp specimens, including the Williamson-Hall method (WHM), the Monshi-Scherrer method (MSM), and the Halder-Wagner method (HWM). The Williamson-Hall method was further expanded into the uniform deformation model (UDM), the uniform stress deformation model (USDM), and the uniform deformation energy density model (UDEDM).
3.1.2 Monshi-Scherrer method (MSM).
To measure the precise size of the crystallites, the Monshi-Scherrer method, a modified form of Scherrer's equation, was utilized. This revised equation is obtained by taking the natural logarithm of both sides of Scherrer's equation (eqn (3)), giving eqn (9):32 ln β = ln(Kλ/D_M-S) + ln(1/cos θ) (9). To construct the corresponding plot, ln(1/cos θ) was placed on the X-axis and ln β on the Y-axis (Fig. 4). Comparison of the straight-line equation (y = mx + c) with eqn (9) yields the slope, while the intercept corresponds to ln(Kλ/D_M-S). This method supplies an indicator for assessing the reliability of the results. The resulting crystallite sizes were 5.96 nm for Ca(OH)2_HAp, 9.29 nm for TiO2_HAp, 103.56 nm for CaCO3_HAp, and 7.12 nm for ZnO_HAp.
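To make eqn (9) concrete, the following is a minimal sketch of a Monshi-Scherrer fit in Python; the peak positions, FWHM values, and the Cu K-alpha wavelength are illustrative assumptions rather than the measured HAp data.

```python
# A minimal sketch of a Monshi-Scherrer fit; all peak data are hypothetical.
import numpy as np

K, lam = 0.94, 0.15406  # Scherrer constant; Cu K-alpha wavelength in nm

# Hypothetical (2-theta in degrees, FWHM in degrees) for a few reflections
peaks = [(25.9, 0.923), (31.8, 0.936), (32.9, 0.938), (39.8, 0.957)]

theta = np.radians([p[0] / 2 for p in peaks])   # Bragg angle in radians
beta = np.radians([p[1] for p in peaks])        # FWHM in radians

x = np.log(1.0 / np.cos(theta))                 # ln(1/cos(theta))
y = np.log(beta)                                # ln(beta)

slope, intercept = np.polyfit(x, y, 1)          # eqn (9): y = x + ln(K*lam/D)
D = K * lam / np.exp(intercept)                 # crystallite size in nm
print(f"Monshi-Scherrer crystallite size: {D:.2f} nm (slope {slope:.2f})")
```

With these synthetic values the slope comes out near unity, which is the reliability indicator mentioned above: data that follow eqn (9) exactly give a slope of 1.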
3.1.3 Williamson-Hall method (WHM).
Scherrer's equation, while resolving the impact of crystallite size on XRD reflection broadening, neglects the inherent strain in nanocrystals resulting from factors like dislocations, point defects, stacking faults, and grain boundaries.33 The Williamson-Hall analysis can identify this inherent strain by examining the effect of strain on the crystallite size data. Eventually, the overall broadening may be described as eqn (10):34 β_total = β_size + β_strain (10), where β_strain is connected to the strain broadening effect and β_size is the broadening due to size. The modified forms of Williamson-Hall, namely the UDM, USDM, and UDEDM, are discussed in this context.35
3.1.4 Uniform deformation model (UDM). The strain arising from crystalline defects and deformation in the synthesized HAp can be mathematically represented as eqn (11):36 ε = β_hkl/(4 tan θ) (11). The UDM relies on the concept of homogeneous strain in all directions and regards lattice strain as isotropic regardless of its spatial amplitude.37 The peak broadening caused by lattice strain is often denoted in eqn (12).
The overall broadening β_hkl, expressed as the FWHM of a reflected peak, combines the influence of the crystal lattice strain (β_strain) and the crystallite size (β_size) for a specific peak, as stated in eqn (13)-(15).
Eqn (14) can be written as eqn (15): β_hkl cos θ = Kλ/D_w + 4ε sin θ (15). Placing 4 sin θ along the X-axis and β_hkl cos θ on the Y-axis permits a straight-line fit, from which both the strain ε (slope) and the crystallite size D_w (y-intercept) can be computed. The graphs are depicted in Fig. 5, and the computed D_w and ε values are reported in Table 2.
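A corresponding sketch of the UDM fit of eqn (15) is shown below; the peak data are again hypothetical and only illustrate how the slope and intercept map onto the strain and the crystallite size.

```python
# A minimal sketch of a Williamson-Hall (UDM) fit; peak data are hypothetical.
import numpy as np

K, lam = 0.94, 0.15406  # Scherrer constant; Cu K-alpha wavelength in nm
peaks = [(25.9, 0.9), (31.8, 1.1), (32.9, 1.2), (39.8, 1.3), (46.7, 1.4)]

theta = np.radians([p[0] / 2 for p in peaks])
beta = np.radians([p[1] for p in peaks])

x = 4.0 * np.sin(theta)                  # eqn (15) abscissa
y = beta * np.cos(theta)                 # eqn (15) ordinate

strain, intercept = np.polyfit(x, y, 1)  # slope = microstrain, intercept = K*lam/D_w
D_w = K * lam / intercept
print(f"UDM: D_w = {D_w:.2f} nm, strain = {strain:.5f}")
```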
3.1.5 Uniform stress deformation model (USDM). The UDM, which depends on material homogeneity and an isotropic nature, frequently remains unvalidated because real crystals can be anisotropic, requiring the altered uniform stress deformation model (USDM).33 From Hooke's law, there is a linear link between strain (ε) and stress (σ), expressed via eqn (16): σ = ε Y_hkl (16).
Here Y_hkl denotes Young's modulus (modulus of elasticity), for which this linear relation is a reliable approximation at minimal strain. Increasing the amount of strain shifts the magnitude of Young's modulus, demonstrating that the relation is not strictly linear.38 By rearranging eqn (16) and substituting it into eqn (10), we obtain eqn (17): β_total cos θ = Kλ/D_(hkl) + 4σ sin θ/Y_(hkl) (17). Thus, plotting β_total cos θ on the Y-axis against 4 sin θ/Y_(hkl) along the X-axis produces a straight-line graph. The gradient of this line delivers a measure of the stress (σ), whilst the intercept yields the crystallite size D_(hkl) of the HAp nanocrystals. The plots are illustrated in Fig. 6, and the computed σ and D_(hkl) values are shown in Table 2.
3.1.6 Uniform deformation energy density model (UDEDM). The UDM does not accommodate anisotropic entities, which necessitates an alteration of the W-H relationship for effective treatment of anisotropic nanocrystals.39 The UDEDM, based on Hooke's law, assumes a linear relationship between σ and ε in real crystals. Yet this direct proportionality is unsuitable when there are defects in the long-range order, including agglomerations and dislocations. The UDEDM treats crystal imperfections, asymmetric deformation, and distortion in terms of the energy density (u), so that the stress and strain constants remain independent.40 Eqn (18) gives the energy per unit volume, determined from Hooke's expression: u = ε² Y_hkl/2 (18).
Table 2 Crystallite size (in nm) and strain of the synthesized HAp estimated from each model.
By plotting β_total cos θ on the Y-axis against 4 sin θ (2/Y_hkl)^1/2 on the X-axis, the anisotropic energy density (u) and the crystallite size (D_w) were measured from the slope and the y-intercept (Fig. 7). The estimated crystallite dimensions are shown in Table 2.
3.1.7 Halder-Wagner method (HWM). The SSP technique utilizes the Gaussian function to express the strain broadening and the Lorentzian function to represent the size broadening in XRD patterns. However, the XRD peak area correlates with the Gaussian function, whereas its tails drop off excessively; the lower portion of the profile fits the Lorentz function yet does not match the XRD peak area.41,42 The Halder-Wagner approach therefore employs the symmetrical Voigt function, a convolution of Gaussian and Lorentzian functions, to describe the FWHM of the physical profile, as illustrated in eqn (20):42,43 β_hkl² = β_L β_hkl + β_G² (20), where β_L is the FWHM of the Lorentzian function and β_G the FWHM of the Gaussian function. The approach lends greater weight to Bragg peaks at small and intermediate angles, decreases reflection overlap, and connects the crystallite size and lattice strain through the H-W technique, as indicated by eqn (21)-(23).34
A plot of (β*_hkl/d*_hkl)² on the Y-axis against β*_hkl/(d*_hkl)² on the X-axis generates a straight line, whose slope equals 1/D_w; from the y-intercept, the microstrain was estimated (Fig. 8). The estimated crystallite sizes of the synthesized HAp are shown in Table 2.
3.2 Functional group analysis
The functional groups in the synthesized products were analyzed by Fourier transform infrared (FTIR) spectroscopy (model: IR-Prestige 21, Shimadzu, Japan); the spectra are shown in Fig. 9. In HAp, PO4^3− and OH− are the optically active groups responsible for the resulting spectra.44 In the present study, the pure and doped HAp synthesized through the wet chemical precipitation method to modify the crystalline structure also showed similar spectra. Tetrahedral PO4^3− ions possess four primary modes of vibration: symmetric stretching (ν1), asymmetric stretching (ν3), symmetric bending (ν2), and asymmetric bending (ν4).45 Stretching oscillations were found around 962, 1026, and 1087 cm−1, whereas bending vibrations yielded peaks near 465, 563, and 599 cm−1, consistent with hydroxyapatite as previously published.46,47 The peak at 962 cm−1 corresponds to the (ν1) oscillation, while the peaks at 1087 and 1026 cm−1 arise from the (ν3) vibration. Asymmetric bending (ν4) vibrations appear at 563 and 599 cm−1, while symmetric bending (ν2) is observed at 473 cm−1. For the synthesized HAp, OH− shows FTIR peaks at 3000-3800 cm−1; identical positions have been reported in numerous studies.48,49
3.3 Optical properties
The spectral band gap of pure and doped HAp was evaluated using a double-beam UV-vis spectrophotometer (model: U-2910), wherein the powder sample was dispersed in water at room temperature. The absorption spectra of the synthesized HAp were used to determine the optical band gap. For the direct band gap analysis, the Tauc plot method was used, expressed mathematically as eqn (24):50,51 (αhν)^(1/n) = A(hν − E_g) (24), where h is Planck's constant, ν is the photon frequency, E_g is the optical band gap, A is a constant, α is the absorption coefficient, and n = 1/2 for a direct band gap. The optical band gaps of the synthesized samples were estimated (Fig. 10) and are grouped in Table 3.
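The band-gap extraction from a Tauc plot can be sketched as follows; the synthetic absorbance curve and the choice of linear region are assumptions made purely for demonstration, not the measured spectra of the samples.

```python
# A minimal sketch of a direct-band-gap Tauc analysis, eqn (24); data are fake.
import numpy as np

h, c = 4.135667e-15, 2.998e8                 # Planck constant (eV s), c (m/s)
wavelength_nm = np.linspace(250, 500, 200)
absorbance = np.exp(-(wavelength_nm - 260) / 60)   # synthetic absorption edge

hv = h * c / (wavelength_nm * 1e-9)          # photon energy in eV
alpha = absorbance                           # proportional stand-in for alpha
tauc = (alpha * hv) ** 2                     # (alpha*h*nu)^(1/n) with n = 1/2

# Fit the steep linear region and extrapolate to the energy axis: Eg = -b/m
mask = (tauc > 0.2 * tauc.max()) & (tauc < 0.8 * tauc.max())
m, b = np.polyfit(hv[mask], tauc[mask], 1)
print(f"Estimated optical band gap: {-b / m:.2f} eV")
```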
3.4 Scanning electron microscopy (SEM)
The SEM images (machine model: JEOL JSM-7610F) of the synthesized HAp are shown in Fig. 11, where different dopants and different precursors were used for the nanocrystalline HAp synthesis. The images show that nanoparticles of many distinct morphologies are present in the synthesized HAp and tend to agglomerate. Apart from that, most of these particles are larger than one hundred nanometers. The presence of the doped metal oxides was confirmed by EDS analysis, visualized in Fig. 11; Ti and Zn were detected in the respective doped hydroxyapatites.
3.5 Photocatalytic activity of pure and synthesized HAp
3.5.1 Effect of contact time on CR degradation. In this experiment, different time frames (30, 60, 90, 120, and 150 min) and different amounts of adsorbent (0.05 g, 0.075 g, 0.1 g, 0.15 g, and 0.2 g) were examined to estimate the degradation percentage and degradation capacity for the CR solution (Fig. 12 and 13). The degradation percentage rises with increasing adsorbent amount, as the number of active sites increases, boosting the effectiveness of the photocatalyst. Similarly, the degradation capacity of the synthesized HAp was investigated for 0.05 g, 0.075 g, 0.1 g, 0.15 g, and 0.2 g of adsorbent. The optimum degradation capacity was obtained for 0.1 g of adsorbent and a contact time of 90 minutes: 7.19 mg g−1, 7.34 mg g−1, 7.32 mg g−1, and 6.92 mg g−1 for Ca(OH)2_HAp, TiO2_HAp, CaCO3_HAp, and ZnO_HAp, respectively.
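For reference, eqn (a) and (b) reduce to the short calculation below; the concentrations are hypothetical but chosen to be of the same order as the reported values.

```python
# A minimal sketch of eqn (a) and (b); the concentrations are hypothetical.
def degradation_percentage(c0, ct):
    """D_p (%) = (C0 - Ct) / C0 * 100."""
    return (c0 - ct) / c0 * 100.0

def degradation_capacity(c0, ct, volume_l, catalyst_g):
    """q_e (mg/g) = (C0 - Ct) * V / W."""
    return (c0 - ct) * volume_l / catalyst_g

c0, ct = 20.0, 1.8          # mg/L, e.g. before and after 90 min irradiation
print(degradation_percentage(c0, ct))            # -> 91.0 %
print(degradation_capacity(c0, ct, 0.040, 0.1))  # -> 7.28 mg/g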
3.5.2 Effect of catalyst dose on the photodegradation. To investigate the impact of catalyst dose on the photodegradation of CR, doses of the Ca(OH)2_HAp, TiO2_HAp, CaCO3_HAp, and ZnO_HAp catalysts from 1.25 to 5 g L−1 were employed at an initial CR concentration of 2.87 × 10−5 M and an irradiation time of 90 minutes. Fig. 14 shows the change in the photodegradation of CR at the various dosages of the above-mentioned samples. For Ca(OH)2_HAp, TiO2_HAp, and CaCO3_HAp, the observed percentage of CR removal rises with catalyst dosage up to 2.5 g L−1, reaching maximum values of 89.89%, 91.76%, and 91.58%, respectively. This phenomenon can be ascribed to an increase in the accessible surface area of the photocatalyst, facilitating the generation of more active radicals.28,52,53 Conversely, upon further increasing the catalyst dosage from 2.5 to 5 g L−1, the degradation efficiency was attenuated. This could be attributed to the high catalyst dose making the solution turbid, which reduces light penetration and thus the photodegradation.54,55 For ZnO_HAp, by contrast, the degradation percentage rises with increasing catalyst dosage, reaching around 90% at 5 g L−1.
3.5.3 Effect of solution pH on the photodegradation. A group of experiments was performed to investigate the effect of pH on the degradation percentage and degradation capacity for CR dye. Different pH values (pH 5, pH 7, and pH 9) were studied (Fig. 15). A 40 mL solution of 20 mg L−1 CR dye was irradiated at ambient temperature under a 500 W halogen lamp for 90 minutes. The pH of the reaction solution substantially influences photocatalytic degradation. However, estimating the effect of solution pH on the photodegradation is not trivial; several variables have been identified as affecting it, such as electrostatic interactions, the catalyst's nature, and the nature of the pollutant molecule.56,57 Lower pH levels have been observed to increase the photodegradation of weakly acidic contaminants, contradicting previous study results.58,59 The degradation percentage increased as the pH rose from 5 to 7 but decreased at pH 9. The reaction was greatly affected by the abundance of hydroxyl and hydrogen ions; since maintaining an acidic solution entails greater effort and expenditure, pH 7 was selected as the optimal value.
3.5.4 Effect of initial CR concentration on the photodegradation. The pollutant concentration plays a crucial role in photocatalytic degradation, and experiments were performed to establish the dye concentration giving optimal effects. The impact of various dye concentrations was analyzed at pH 7 and a catalyst dose of 2.5 g L−1. Fig. 16 shows the degradation percentage and the degradation capacity. The degradation percentage increases as the dye concentration is raised from 10 mg L−1 to 40 mg L−1, which can be attributed to a suitable number of active sites and radicals being present on the synthesized HAp surface. A further increase in the initial dye concentration above 40 mg L−1, however, decreases the degradation percentage. The removal of CR dye diminishes with increasing dye concentration owing to the capture of more photons by the dye rather than by the photocatalyst, which produces a low amount of hydroxyl radicals.60 The active sites produce more of the reactive radicals O2•− and •OH, which maximize the efficiency of the photocatalytic process; conversely, a reduction in the degradation percentage is observed when the number of free radicals becomes low. In contrast to the degradation percentage, the degradation capacity increased with increasing dye solution concentration.
3.5.5 Photocatalytic mechanism of HAp. The photodegradation technique employs a photon source to drive a redox reaction, with the efficacy of the process enhanced by limiting charge recombination.51 The proposed simplified reaction mechanism for Congo red degradation using pure, TiO2-doped, and ZnO-doped HAp is illustrated in Fig. 17 and mathematically expressed in eqn (25)-(32).
Doping hydroxyapatite (HAp) with materials like TiO2 and ZnO alters its electronic and optical properties, particularly its band gap, enhancing the photocatalytic activity. This process creates defects and vacancies in the HAp lattice, facilitating electron-hole pair separation and improving the reaction efficiency. The effect of doping on HAp's band gap and photocatalytic activity is complex and depends on factors like dopant type, concentration, synthesis method, and application. Direct band gap materials are preferred owing to the prolonged lifetime of their charge carriers; while indirect band gap materials may have advantages, the direct band gap is inherent to HAp.61-64 Photocatalytic agents, such as free radicals and electrons, play a crucial part in the photocatalytic degradation of Congo red dye. These agents, coupled with holes, act as effective oxidizing agents, helping to form more reactive species owing to the delayed recombination of e− and h+. This phenomenon can be expressed mathematically with the help of Mulliken's theory, eqn (33) and (34):65 E_CB = X − E_c − 0.5E_bg (33) and E_VB = E_CB + E_bg (34). In these equations, E_CB is the conduction band energy, X the electronegativity of the photocatalyst, E_c the free electron energy (with a magnitude of 4.5 eV), E_bg the band gap energy, and E_VB the valence band energy, respectively. The synthesized HAp possesses an electronegativity of 5.89 eV, attributable to the geometric mean of its structure, as depicted in the literature.65 The estimated magnitudes of the conduction and valence bands are shown in Table 4; a small numerical sketch of these band-edge estimates is given at the end of this subsection. The synthesized HAp demonstrated conduction band potentials more negative than that of O2/•O2− (−0.33 eV) and valence band potentials more positive than that of OH/•OH (1.99 eV), showing that both radicals can be generated for the photocatalysis of Congo red dye.28
3.5.6 Photocatalytic reusability experiments. The recyclability or reusability test is a vital criterion that measures a catalyst's suitability for repeated use, assuring its long-term sustainability.28 After each cycle, the catalyst was filtered from the dye solution and dried at 60 °C in an oven for 2 hours, and then a fresh solution of CR dye was introduced to the dried samples. From Fig. 18, it is evident that in the first cycle, Ca(OH)2_HAp and ZnO_HAp show lower degradation percentages than TiO2_HAp and CaCO3_HAp. In the second cycle, ZnO_HAp exhibited a marked decrease in degradation, while CaCO3_HAp showed the lowest degradation percentage. Finally, in the third cycle, CaCO3_HAp and ZnO_HAp had lower degradation percentages than Ca(OH)2_HAp and TiO2_HAp, with Ca(OH)2_HAp showing the lower of those two. This behavior reflects the adsorption of dye molecules on the photocatalyst surface, described by the extent of unoccupied active sites present in the photocatalyst. Greater photodegradation efficiency is observed where the dye molecules are easily adsorbed by the available active sites in the samples. Since the active sites become progressively occupied over the cycles, the photocatalytic degradation decreases in the subsequent cycles.
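As referenced above, a minimal numerical sketch of the Mulliken band-edge estimates of eqn (33) and (34) is given here; χ = 5.89 eV and E_c = 4.5 eV are taken from the text, while the band gap passed to the function is an assumed placeholder rather than a value from Table 3.

```python
# A minimal sketch of eqn (33) and (34); the band gap value is a placeholder.
def band_edges(chi_ev, e_gap_ev, e_free=4.5):
    """Mulliken estimate: E_CB = chi - E_c - 0.5*E_bg; E_VB = E_CB + E_bg."""
    e_cb = chi_ev - e_free - 0.5 * e_gap_ev
    return e_cb, e_cb + e_gap_ev

chi = 5.89                                   # electronegativity of HAp (eV)
e_cb, e_vb = band_edges(chi, e_gap_ev=3.4)   # 3.4 eV is an assumed band gap
print(f"E_CB = {e_cb:+.2f} eV, E_VB = {e_vb:+.2f} eV (vs. NHE)")
```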
3.5.7 Scavenging studies. 2-Propanol (IPA) and EDTA were used as the scavenging agents for •OH radicals and electrons, respectively. Fig. 19 shows that both 2-propanol and EDTA considerably decreased the degradation rate, indicating that •OH radicals and electrons are the primary photoactive species involved in the photodegradation of CR dye. Conversely, TiO2_HAp showed a different behavior in both scavenging tests: its degradation percentage increased upon adding IPA or EDTA, which may be attributed to holes being the predominant active species for this catalyst.
3.6 Kinetics study
The photodegradation kinetics (Fig. 20) of CR by Ca(OH)2_HAp, TiO2_HAp, CaCO3_HAp, and ZnO_HAp were studied under simulated sunlight.67-69 The degradation rate of the dye may be stated as −ln(C/C0) = k1t, where C0 = initial concentration of the reactant (mol L−1) and C = final concentration of the reactant (mol L−1). The first-order rate constant (k1) was estimated by plotting time (t) along the x-axis and −ln(C/C0) along the y-axis; the generated graphs are shown in Fig. 20. Table 5 provides a detailed record of the rate constants (k1) and regression coefficients (R2). The data reveal a correlation between the reaction rate and the catalyst composition. The first-order rate constants range from 0.82636 min−1 (ZnO_HAp) to 1.46661 min−1 (Ca(OH)2_HAp), with an average of 1.172755 min−1. A rate constant lower than that of pure HAp indicates a slower reaction, while a higher rate constant indicates a faster one.[72]
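The pseudo-first-order fit can be illustrated with the sketch below; the concentration-time series is hypothetical and only demonstrates how k1 and R² are obtained from the linear regression.

```python
# A minimal sketch of the pseudo-first-order fit; the time series is fake.
import numpy as np

t = np.array([0, 30, 60, 90, 120, 150], dtype=float)   # min
c = np.array([20.0, 12.1, 7.4, 4.4, 2.7, 1.6])         # mg/L

y = -np.log(c / c[0])                   # -ln(C/C0)
k1, intercept = np.polyfit(t, y, 1)     # slope = first-order rate constant

ss_res = np.sum((y - (k1 * t + intercept)) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print(f"k1 = {k1:.4f} min^-1, R^2 = {1 - ss_res / ss_tot:.4f}")
```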
Practical applications of this research
This laboratory-scale study analyzed the removal of a hazardous compound representative of industrial wastewater, highlighting the need for further data on treating real-world wastewater containing blends of organic and inorganic compounds. The study can be replicated with various pollutant types, including caprolactam, phenol, benzoic acid, toluene, adipic acid, anionic and cationic dyes, benzene, amoxicillin, and ciprofloxacin, but extensive literature study and a pilot-plant study are required before industrial application.
Conclusion
Hydroxyapatite (HAp) was successfully synthesized in pure and metal-oxide-doped forms, and the crystallite sizes calculated from the various models provided good evidence for the formation of nano-sized products. The effectiveness of the synthesized materials as potential photocatalysts was evaluated, with factors such as contact time, catalyst dose, initial dye concentration, pH, radical scavengers, and catalyst reusability influencing the activity; these materials were found to be applicable as photocatalysts for the degradation of organic pollutants. The synthesized products maintained excellent catalytic activity even after three reuse cycles. TiO2-doped HAp showed about 90% degradation of Congo red dye, suggesting it could be a potential candidate for synthetic dye degradation compared with pure and ZnO-doped HAp. It may also be applied to the degradation of emerging contaminants in pharmaceutical wastewater, which largely comprises antibiotics. Overall, TiO2-doped HAp has significant potential in photocatalysis applications, and further research can be performed toward industrial-scale application.
Fig. 1 Systematic approach for pure and metal-oxide doped HAp synthesis.
Fig. 2 Laboratory setup for investigating photodegradation with a halogen lamp.
Fig. 3 XRD patterns of pure and doped HAp: (A) full scan, (B) focused region.
Fig. 14 Effect of various doses on the photodegradation: (A) degradation percentage and (B) degradation capacity.
Fig. 15 Impact of pH on the photodegradation: (A) degradation percentage and (B) degradation capacity.
Fig. 18 Recyclability test of the synthesized pure and doped HAp in terms of degradation percentage, at optimum parameters (0.1 g of catalyst, 2.87 × 10−5 M, 40 mL CR solution, 90 minutes under a 500 W halogen lamp) over three cycles.
Fig. 20 Plot of −ln(C/C0) against time (min) for the various samples, used to estimate the reaction rate constant for 0.1 g of photocatalyst.
Table 3 Estimated band gap of pure and synthesized HAp.
Table 4 Valence band (VB) and conduction band (CB) potentials of the synthesized HAp.
Table 5 Estimated values of the linear fit for the synthesized HAp at a 0.1 g photocatalyst dose | 2024-04-18T05:09:21.152Z | 2024-04-03T00:00:00.000 | {
"year": 2024,
"sha1": "83af2d9c2932e25fcdf9d093cdd6d098aabfb1ae",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "83af2d9c2932e25fcdf9d093cdd6d098aabfb1ae",
"s2fieldsofstudy": [
"Materials Science",
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257739707 | pes2o/s2orc | v3-fos-license | Balloon dilatation is superior to CO2 laser excision in the treatment of subglottic stenosis
Introduction Endoscopic treatment of subglottic stenosis (SGS) is regarded as a safe procedure with rare complications and less morbidity than open surgery, yet it is associated with a high risk of recurrence. The abundance of techniques and adjuvant therapies complicates a comparison of the different surgical approaches. The primary aim of this study was to investigate disease recurrence after CO2 laser excision and balloon dilatation in patients with SGS and to identify potential confounding factors. Materials and methods In a tertiary referral center, two cohorts of previously undiagnosed patients treated for SGS were retrospectively reviewed and followed for 3 years. The CO2 laser cohort (CLC) was recruited between 2006 and 2011, and the balloon dilatation cohort (BDC) between 2014 and 2019. Kaplan‒Meier and multivariable Cox regression analyses were used to analyze time to repeat surgery and to estimate hazard ratios (HRs) for different variables. Results Nineteen patients were included in the CLC, and 31 in the BDC. The 1-year cumulative recurrence risk was 63.2% for the CLC compared with 12.9% for the BDC (HR 33.0, 95% CI 6.57–166, p < 0.001), and the 3-year recurrence risk was 73.7% for the CLC compared with 51.6% for the BDC (HR 8.02, 95% CI 2.39–26.9, p < 0.001). Recurrence was independently associated with overweight (HR 3.45, 95% CI 1.16–10.19, p = 0.025), obesity (HR 7.11, 95% CI 2.19–23.04, p = 0.001), and younger age at diagnosis (HR 8.18, 95% CI 1.43–46.82, p = 0.018). Conclusion CO2 laser treatment is associated with an elevated risk of SGS recurrence compared with balloon dilatation. Other risk factors include overweight, obesity, and a younger age at diagnosis.
Introduction
Subglottic stenosis (SGS) is a rare condition of mucosal scarring compromising the extrathoracic part of the tracheal airway below the vocal folds. An inflammatory response leading to fibrosis can be triggered by prolonged intubation or tracheostomy, gastroesophageal reflux disease (GERD), or autoimmune conditions, such as vasculitis, sarcoidosis, and relapsing polychondritis [1]. The idiopathic type of SGS is considered very rare, with an incidence of up to 1:200,000, affecting otherwise healthy perimenopausal females of Caucasian origin [1,2]. Since SGS presents with common and relatively unspecific symptoms, such as exertional dyspnea, wheezing, chronic cough, or dysphonia, it is frequently misinterpreted as a difficult-to-treat lower airway obstruction, resulting in a diagnostic delay of up to 2 years; it thus occasionally manifests with stridor at rest [3].
Given that recurrence of SGS is regarded as the natural course of the condition, the main treatment goal is to restore durable airway patency without the need for tracheostomy. Open surgical procedures are considered to have the lowest incidence of recurrence and thus a chance of permanent treatment. However, these procedures are quite demanding with respect to institutional resources and are associated with increased perioperative and postoperative morbidity in terms of voice and swallowing deterioration [4][5][6]. Endoscopic techniques are low-risk, voice-sparing procedures that are safe to perform in an outpatient surgery setting and thus have high patient acceptance [7][8][9]. However, they are considered to have a significantly higher recurrence rate than open surgery, reported to be approximately 30% within 1 year postoperatively, 50% within 2 years, and 80% within 3 years [10,11]. Resection of quadrants of the fibrotic tissue with a carbon dioxide (CO2) laser, and balloon dilatation alone or following cold knife incisions in the stenotic part of the airway, have frequently been used, among others, as endoscopic treatments for SGS [12]. The rarity of SGS, combined with the different types and concepts of endoscopic procedures, the divergence of volumes and resources between institutions, and other unmeasured confounding factors leading to selection bias, makes the comparison of these two techniques complicated [11,13].
The aim of this study was to describe the disease characteristics of the patient cohort treated for SGS in our institution, a tertiary referral center in Sweden, to retrospectively assess whether balloon dilatation is a superior treatment compared to CO 2 laser excision of the scar tissue, and to identify potential confounding factors in terms of time to disease recurrence.
Study subjects
Previously undiagnosed adult patients treated primarily for isolated SGS at the Örebro University Hospital, a tertiary academic referral center in Sweden, between 1 January 2006 and 31 December 2019 were identified based on a retrospective chart review of relevant ICD-10 codes, in particular J38.6, J95.5, and J95.8. Patients with SGS caused by malignant tumors, external compression of the airway, or a damaged laryngotracheal cartilaginous framework, and those previously treated for stenosis in the laryngotracheal part of the airway, or with multilevel and distal tracheal strictures, were excluded from the study.
Surgical techniques
From the early 1990s until 2011, patients with SGS had traditionally been treated with endoscopic CO 2 laser excision of the scar tissue by every laryngologist in our institution.
The procedure was performed under general anesthesia with high-frequency positive pressure ventilation (HFPPV, Monsoon™ ventilation, Acutronic Medical Systems AG, Fabrik im Schiffli, CH-8816, Hirzel, Switzerland) through a steel, laser-resistant catheter. The stenosis was then either vaporized or divided with radial incisions through suspension microlaryngoscopy, depending on the nature of the cicatrix and its craniocaudal length.
During 2012, Superimposed High-Frequency Jet Ventilation (SHFJV®, Twinstream™, Mariannengasse 17, 1090 Wien, Austria) was introduced at our institution as a promising method for airway surgery. Concurrently, the absence of a ventilation catheter in the trachea favored the switch of our surgical approach from CO2 laser excision to balloon dilatation of the stenotic part of the airway, which became the surgical method of choice by the end of that year and has been used exclusively since. Through suspension laryngoscopy under general anesthesia with SHFJV®, a balloon catheter was advanced into the airway to gently dilate the stenotic segment, following radial incisions with cold steel if appropriate. An INSPIRA AIR® Balloon Dilatation System (Acclarent, Inc., 33 Technology Drive, Irvine, CA 92618, USA) sized 14 mm at 10 atm pressure was used until 2017. It was then substituted by Continuous Radial Expansion™ balloons (Boston Scientific Corporation, 300 Boston Scientific Way, Marlborough, MA 01752, USA) for dilations of up to 15 mm at 8 atm pressure in females and 18 mm at 7 atm pressure in males. The pressure was applied during a short period of apnea, aiming for a total of three to four dilatation attempts, each lasting between 1 and 2 min or until the patient started desaturating, and up to the maximum possible balloon expansion.
Data collection
This sharp switch in the surgical approach of treating SGS in our department generated the two patient groups we utilized in this study: the cohort of patients treated with CO 2 laser excisions (CLC) between 1 January 2006 and 31 December 2011, and the cohort of patients treated with balloon dilatation (BDC) between 1 January 2014 and 31 December 2019. The period from 1 January 2012 to 31 December 2013 was considered an adaptation period for both the surgeons and the anesthesiology staff to acquaint themselves with the novel techniques.
The follow-up time for both cohorts was set to 3 years postoperatively. The natural history of the disease after an endoscopic procedure commonly involves recurrence. In our study, recurrence was defined as significant dyspnea requiring new surgical treatment, as assessed clinically with laryngotracheoscopy by an airway surgeon. Thus, the primary outcome of the study was the time interval from the first surgery until repeat surgery at recurrence (if it occurred), and the endpoints were a recurrence-free status at 3 years postoperatively or a surgical procedure for recurrence within the follow-up period. Demographic data extracted from the patients' records included sex, age, time to SGS diagnosis, body mass index (BMI), SGS etiology, smoking history, the presence of diagnosed or self-reported GERD, and tracheal trauma from a previous history of tracheostomy at any age or intubation within 2 years prior to the date of diagnosis. Other conditions registered from the patients' records were diabetes, conditions of the lower airway or the lungs, and cardiovascular comorbidities, including ischemic heart disease, heart failure, arrhythmia, and cerebrovascular conditions.
Statistical analysis
A power calculation was made prior to performing the statistical analysis. A total of 72 patients were required to have an 80% chance of detecting a reduction in the recurrence rate from 80% in the CLC group to 50% in the BDC at 3 years postoperatively, which was significant at the 5% level [10,11]. Continuous variables were analyzed by the Mann-Whitney U test and are presented as medians and the 25th-to-75th percentiles, whereas categorical variables were analyzed by the Chi-square test or Fisher's exact test when appropriate and are presented as numbers and percentages.
We visualized time to recurrence with the Kaplan-Meier (KM) method and presented it as cumulative recurrence risk (1-KM). All patients were followed up after the initial operation to the first reoperation or censored at 3 years. Cox proportional hazard models were applied, estimating hazard ratios (HRs) with 95% confidence intervals (CIs) to compare disease recurrence for the two treatment groups. Models were both crude and adjusted for sex, age (categorized as 18-39, 40-49, 50-59, and ≥ 60 years), cause of SGS, smoking, positive intubation history within 2 years prior to the initial SGS diagnosis, BMI according to the World Health Organization (WHO) classification (< 25 kg/m 2 defined as normal weight, 25-29.9 kg/m 2 defined as overweight, and ≥ 30 kg/m 2 defined as obese), presence of self-reported or diagnosed GERD, and diabetes. Confounders were chosen prior to data analysis and in accordance with the previous studies [5,11,12]. The proportional hazard assumption was tested by the phtest command in STATA. A p value less than 0.05 was considered statistically significant. IBM ® SPSS ® Statistics, version 27 (IBM Corp. Armonk, NY, USA) and STATA release 17 (StataCorp. 2021. College Station, TX: StataCorp LLC.) were used for the statistical analysis.
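For readers without SPSS or Stata, the same survival analysis can be sketched with Python's lifelines package; the miniature data frame below is entirely hypothetical and only illustrates the Kaplan-Meier and Cox steps described above.

```python
# A minimal, illustrative sketch using lifelines; the toy data are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "months_to_event": [4, 9, 36, 14, 22, 36, 7, 36],   # censored at 36 months
    "recurred":        [1, 1, 0, 1, 1, 0, 1, 0],
    "balloon":         [0, 0, 0, 1, 1, 1, 1, 1],        # 0 = CLC, 1 = BDC
})

kmf = KaplanMeierFitter()
kmf.fit(df["months_to_event"], event_observed=df["recurred"])
print(kmf.survival_function_)            # 1 - KM gives cumulative recurrence risk

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_event", event_col="recurred")
cph.print_summary()                      # hazard ratio for the 'balloon' covariate
```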
Ethics
This human study was performed in accordance with the Declaration of Helsinki Guidelines and was approved by the Ethics Review Board in Uppsala (diary number 2016/193) and the Swedish Ethical Review Authority (diary numbers 2020-05509 and 2022-02708-02). All adult participants provided written informed consent to participate.
Results
The study population consisted of 19 patients in the CLC and 31 patients in the BDC. We excluded 16 patients in total: Eight of them were previously treated for SGS outside our inclusion period, 3 subjects were found to have multilevel stenosis engaging other parts of the airway (2 with glottic, 1 with bronchial stenosis), 3 cases had a damaged cricotracheal cartilaginous framework and were not appropriate for endoscopic treatment, and in 2 cases treated with CO 2 laser, we could not establish contact and receive an informed consent.
Both groups had a similar mean time to diagnosis, yet the mean age at diagnosis was significantly lower in the BDC. The most predominant SGS cause was the idiopathic type, followed by trauma, in both cohorts, and none of the patients had been tracheostomized at any age. Table 1 lists the demographic data and comorbidities of the study population at baseline. Only one patient presented with ischemic heart disease. None of the patients were diagnosed with conditions of the lower airway or the lungs, yet 7 patients had been prescribed steroid inhalers by general practitioners suspecting asthma prior to the diagnosis of SGS. No readmissions or other complications were observed postoperatively for either surgical technique. Because of the relatively small sample size, the SGS cause variable was converted into a binary variable to consolidate the regression analysis.
A total of 30 events were observed in the study population, 14 in the CLC and 16 in the BDC. The 3-year recurrence risk was 73.7% for the CLC (14 of 19 study subjects at risk) compared with 51.6% for the BDC (16 of 31 patients at risk, Fig. 1). As seen in the KM plot, the association between the study groups differed during the 3-year follow-up, with a tendency for disease recurrence within the first year in the CLC and after the first year in the BDC. Since the proportional hazard assumption was violated, we modeled the interaction of the group variable with the follow-up time (0-1 vs. 1-3 years) as an indicator variable to obtain time-dependent estimates [14,15]. In the first year, the cumulative risk of recurrence was 63.2% for the CLC compared to 12.9% for the BDC, with a crude HR of 7.55 (95% CI 2.42-25.6, p < 0.001) and an adjusted HR of 33.0 (95% CI 6.57-166, p < 0.001). Among patients without a recurrence during the first year, the follow-up period from 1 to 3 years showed a crude HR of 0.55 (95% CI 0.12-2.46) and an adjusted HR of 1.85 (95% CI 0.32-10.8).
The group of patients aged below 40 years was also found to have a higher risk of recurrence (adjusted HR 8.18, 95% CI 1.43-46.8, p = 0.018) than the group of patients aged 50-59 years (Table 2).
Discussion
The primary findings of our study indicate a superiority of balloon dilatation compared to CO 2 laser excisions in short-term disease recurrence, particularly within the first year postoperatively. Furthermore, patients who were overweight or obese or had a disease presentation at a younger age were independently found to have a statistically significant increased risk of SGS recurrence. The diversity of surgical approaches in the endoscopic treatment of SGS, such as different dilation instruments (e.g., rigid endoscopes or inflatable balloons), scar excision instruments (e.g., cold steel or CO 2 laser), and adjuvant therapies (e.g., mitomycin C or steroids), complicates the comparison of these procedures. The homogeneity of the two surgical techniques used in our study population facilitates, in essence, the comparison of the thermal effect of a CO 2 laser excision with the cold tissue expansion of balloon dilatation, minimizing the confounding impact of different endoscopic treatments. This is reflected by the 51.6% risk of recurrence at 3 years for our BDC group, which is consistent with other studies investigating the outcomes of balloon dilatation without CO 2 laser-assisted excisions [5,6,13]. There is indisputable evidence that open surgical techniques prevail regarding the durability of maintaining a patent airway without the need for tracheostomy or repeated surgery, eliminating dyspnea. However, they are associated with substantial perioperative risks (e.g., anastomotic complications or temporary tracheostomy). Postoperative morbidity, including poor voice outcomes or even an eventual delayed disease recurrence of up to 30% between 5 and 10 years postoperatively, cannot be overlooked [4,6,11,[16][17][18]. Thus, endoscopic treatment still has an important role in the treatment of SGS with its excellent convalescence and despite the higher recurrence rate when compared to open surgical procedures [5,19].
Our results encourage the use of balloon dilatation instead of CO2 laser excision considering the longer time to recurrence, since recurrence is ultimately considered the natural course of the condition. We showed that there is a particular propensity for recurrence in the CLC during the first year postoperatively, whereas stenoses treated with balloon dilatation tend to recur during the second year of follow-up. Interestingly, there seems to be a trend toward stabilization of the relapsing behavior of the condition in both groups by the third year (73.7% for both the 2-year and 3-year recurrence risk for the CLC compared with 42.9% and 51.6%, respectively, for the BDC; Fig. 1). These findings could be considered in the context of preoperative patient counseling and the individual selection of an endoscopic treatment. Vigilance prompted by an exceptional increase in the incidence of laryngotracheal stenosis during the COVID-19 outbreak [20] led to prioritized handling of patients with airway problems. Therefore, the treatment of patients with airway obstruction, in particular SGS recurrence, was never delayed.
Although stenoses related to iatrogenic trauma are regarded to be more prevalent [1,21,22], the profile of our study population matches the idiopathic type of the condition. Previously published studies have discussed potential environmental or hereditary factors related to the high prevalence of idiopathic SGS [12,[23][24][25][26]. However, this finding might also reflect the anticipative policy in our institution of striving for either tracheostomy in patients with expected prolonged intubation or prompt decannulation combined with noninvasive ventilation to minimize mucosal trauma and scarring predisposing for traumatic SGS. Furthermore, the idiopathic type consists predominantly of otherwise healthy, middle-aged, nonsmoking females experiencing symptoms of dyspnea for approximately 2 years before given the correct diagnosis of SGS [11,13,27]. An elevated BMI is also identified as a factor associated with disease recurrence [17,28,29]. This view is supported by our findings with a relatively low incidence of comorbidities, and HRs of 3.5 and 7.1 for overweight and obese patients, respectively, compared to normal or underweight patients. The large CIs observed apparently depend on our study's small sample size. The theory of a hormonal imbalance in perimenopausal females has been previously studied to explain the onset of idiopathic SGS in that age group. Estrogen receptors are thought to be expressed either unproportionally compared to progesterone receptors and more extensively in females with idiopathic SGS compared to patients with a nonidiopathic type of SGS [30,31]. Moreover, there is evidence of an age-related elevation in peripheral estrogen formation occurring in adipose tissue [32]. Thus, being overweight or obese could potentially affect and complicate the hormonal equilibrium in premenopausal females contributing to the development of idiopathic SGS before menopause. Pregnancy-associated idiopathic SGS, although a rare entity, further supports the hypothesis of a hormonal origin or blossoming of symptoms in an established and occult stenosis due to the physiological vascular and respiratory changes of pregnancy [33]. These are concepts requiring further studies that could potentially explain the idiopathic prevalence in our cohort and the higher risk of recurrence in the fertile age group (18-39 years old) than in the peri-or postmenopausal age groups.
The major strength of our study is the segmentation of the inclusion period into nonoverlapping timeframes where the physicians in our department performed only one of two interventions, including a distinct learning period in between. In this manner, we sought to minimize performance bias, since nonrandom intervention assignment is a well-described disadvantage in all retrospective studies. Furthermore, only previously untreated patients with isolated stenosis of the subglottic region were included to eliminate the potential confounding effect of scar transformation by previous surgery and potential selection bias. Due to the relapsing nature of SGS and in conformity with results from previous reports [6,11,13], the follow-up time was set to 3 years for both cohorts, ensuring an equal and homogenous assessment of the survival analysis.
The absence of an objective and subjective severity grading of stenosis both before the initial intervention and at the clinical assessment upon recurrence is the main limitation of our study. An anatomical classification made by the surgeon was absent from the entire CLC, as neither the Cotton-Myer nor McCaffrey system had been used by the physicians in our department at that time. Although these scales have been widely proposed to assess SGS disease severity and prognosis, the former does not address the length and complexity of the lesion, and the latter does not justify the cross-sectional degree of stenosis [1]. Song et al. [34] showed the poor interrater reliability of a visual estimation in Cotton-Myer grading among physicians and further discussed the difficulty in identifying cricoid cartilage when assessing stenosis length endoscopically. Moreover, neither of the two systems correlates with functional airway assessment with spirometry, as shown by several studies [35][36][37]. Since there is evidence that several measurements of pulmonary function could be used in the diagnosis and postoperative monitoring of patients with SGS [34,35], the lack of a preoperative functional evaluation with spirometry in our study population is considered another shortcoming of our study. Moreover, it would be interesting to quantify patient-experienced dyspnea using questionnaires specifically developed for upper airway obstruction [38,39]. However, these data were missing from the entire CLC, since a routine assessment with spirometry and the validated Swedish version of the Dyspnea Index was not introduced as part of the preoperative workup in our department until 2016.
Our study indicates that balloon dilatation is superior to CO 2 laser treatment in SGS patients, which is in conformity with several other retrospective studies [5,6,13]. Future prospective multicenter randomized control trials are recommended to achieve a sufficient sample size to further evaluate this evidence and examine the effect of adjuvant therapies and the associations of different patient-specific confounders predisposing patients to SGS recurrence.
Conclusion
Endoscopic treatment for SGS with balloon dilatation is more favorable regarding short-term SGS recurrence compared to CO 2 laser treatment, and patients with a younger age of SGS onset, overweight, or obesity showed a higher risk for SGS recurrence.
Author contributions EN: conceptualization, design, conduct, analysis, and writing of the original manuscript draft. JS: design, writing-review and editing, and supervision. AM: analysis, and writingreview and editing. MvonB: design, writing-review and editing, and supervision.
Funding Open access funding provided by Örebro University. This study was funded by Örebro County Council (ALF). | 2023-03-26T06:17:08.317Z | 2023-03-24T00:00:00.000 | {
"year": 2023,
"sha1": "8be126ba8c820941761c4c6dd2fbb380f0830872",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00405-023-07926-w.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c5605ca5a9d3e41a62b1327b2fc3639b20873600",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216913924 | pes2o/s2orc | v3-fos-license | Can Your Context-Aware MT System Pass the DiP Benchmark Tests? : Evaluation Benchmarks for Discourse Phenomena in Machine Translation
Despite increasing instances of machine translation (MT) systems including contextual information, the evidence for translation quality improvement is sparse, especially for discourse phenomena. Popular metrics like BLEU are not expressive or sensitive enough to capture quality improvements or drops that are minor in size but significant in perception. We introduce the first of their kind MT benchmark datasets that aim to track and hail improvements across four main discourse phenomena: anaphora, lexical consistency, coherence and readability, and discourse connective translation. We also introduce evaluation methods for these tasks, and evaluate several baseline MT systems on the curated datasets. Surprisingly, we find that existing context-aware models do not improve discourse-related translations consistently across languages and phenomena.
Introduction and Related Work
The advances in neural machine translation (NMT) systems have led to great achievements in terms of state-of-the-art performance in automatic translation tasks. There have even been claims that their translations are no worse than what an average bilingual human may produce (Wu et al., 2016) or even that the translations are on par with professional translators (Hassan et al., 2018).
However, these claims only hold under a narrow set of controlled circumstances. When translations are evaluated monolingually or at the document level, human translations are preferred over MT outputs. Läubli et al. (2018) conduct extensive experiments for Chinese-English translations with professional translators, and find that although there is no statistical difference in adequacy between human and MT output at a sentence level, there is a statistically strong preference for human translations both in terms of adequacy and fluency when evaluated at the document level. Crucially, the document (or discourse) level phenomena (e.g., coreference, coherence) may not seem lexically significant but contribute significantly to readability and understandability of the translated texts (Guillou, 2012).
Meanwhile, there have been numerous attempts to model extra-sentential context for MT - previously within statistical MT (Carpuat et al., 2013; Hardmeier et al., 2013), and recently within the NMT framework. The NMT framework such as the Transformer (Vaswani et al., 2017) provides more flexibility to incorporate larger context. This has spurred a great deal of interest in developing context-aware NMT systems that take advantage of source or target contexts, e.g., (Maruf and Haffari, 2018), (Miculicich et al., 2018) and (Voita et al., 2018, 2019), to name a few. Despite the increasing interest in contextual MT, there is no framework for a principled comparison of results: there are no standard corpora and no agreed-upon evaluation measures. The selection of training datasets has mostly been arbitrary, and they are much smaller in size than the standard ones (e.g., WMT datasets).
More critically, the lack of appropriate evaluation measures has been the key impediment in advancing contextual MT as it is important to measure if the context improves translations in terms of discourse phenomena, rather than mere improvements in lexical matching as is done with BLEU (Papineni et al., 2002). Indeed, recent studies also propose targeted datasets for evaluating phenomena like coreference (Guillou et al., 2014; Guillou and Hardmeier, 2016; Lapshinova-Koltunski et al., 2018; Bawden et al., 2018; Voita et al., 2018), and in the case of (Voita et al., 2019), testsets for ellipsis and lexical cohesion. The WMT-2019 tasks have also included document level translation and several adjoining user-submitted testsets targeted towards specific phenomena including subject-verb agreement, coreference, and others (Bojar et al., 2018, 2019). In this work, we cover four diverse discourse phenomena using automatic data extraction methods, and also propose automatic evaluation methods for these tasks. Our targeted evaluation datasets are called the DiP benchmark tests (for Discourse Phenomena), that will allow us to compare models across discourse task strengths.
Our analysis of state-of-the-art (SOTA) NMT models proves that testing a system on a single language pair is not sufficient as we observe significant differences in system behavior and quality across languages. Our methods for automatically extracting testsets can be applied to multiple languages, and find cases that are difficult to translate without having to resort to synthetic data. Moreover, they can be easily updated to reflect current challenges, since datasets can become outdated as systems improve over the years.
Our aim is to push the improvement of translation systems towards human-like output. Our main contributions in this paper are as follows: • Benchmark datasets for four discourse phenomena: anaphora, coherence & readability, lexical consistency, and discourse connectives.
• Automatic evaluation methods and agreements with human judgments.
• Benchmark evaluation and analysis of three SOTA context-aware systems contrasted with baselines, for French/German/Russian-English language pairs. We open-source our framework at https://ntunlpsg.github.io/project/discomt/DIP/.
Machine Translation Models
We first introduce the baseline MT systems that we will be benchmarking in this work and report their BLEU scores in our proposed setup.
Model Architectures
We test the performance of three context-aware NMT models introduced by Voita et al. (2018), Miculicich et al. (2018) and Zhang et al. (2018) on our DiP benchmark testsets. Alongside, we also evaluate a sentence-level model and a simple concatenation-based model (Tiedemann and Scherrer, 2017) to contrast with the three elaborate context-aware models.

SEN2SEN: Our SEN2SEN baseline is a standard 6-layer base Transformer model (Vaswani et al., 2017) which translates sentences independently.
CONCAT: Our CONCAT model is a 6-layer base Transformer whose input is two sentences (previous and current sentence) merged, with a special character serving as a separator.
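For concreteness, the CONCAT input can be produced as in the following minimal Python sketch. The separator token name is a placeholder of ours; the text above only specifies a special separator character.

```python
def make_concat_input(prev_src: str, cur_src: str, sep: str = "<SEP>") -> str:
    """Merge the previous and current source sentences into a single CONCAT
    input; the model translates the pair jointly and the target side can be
    split back on the separator. The token "<SEP>" is a placeholder."""
    return f"{prev_src} {sep} {cur_src}"

# Example with a German sentence pair.
print(make_concat_input("Sie kam gestern an.", "Sie war sehr müde."))
```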
ANAPH: Voita et al. (2018) incorporate the source context by encoding it with a separate encoder, then fusing it in the last layer of a standard Transformer encoder using a gate. They claim that their model explicitly captures anaphora resolution.

HAN: Miculicich et al. (2018) introduce a hierarchical attention network (HAN) into the Transformer framework to dynamically attend to the context at two levels: word and sentence. They achieve the highest BLEU when hierarchical attention is applied separately to both the encoder and the decoder.

SAN: Zhang et al. (2018) use a separate Transformer encoder to encode the context on the source side, which is then incorporated into the source encoder and target decoder using gates. We refer to this model as source attention network (SAN).
For the context-aware models, we use the implementations from the official author repositories. As the official code for ANAPH (Voita et al., 2018) has not been released, we implement the model in the Fairseq framework (Ott et al., 2019). For training the SEN2SEN and CONCAT models we used the Transformer implementation from Fairseq. We confirmed with the authors of HAN and SAN that our configurations were correct, and we took the best configuration directly from the ANAPH paper. Further details about the training settings and hyperparameters can be found in Appendix A.4.
Training Data
It is essential to provide the models with training data that contains adequate amounts of discourse phenomena, if we expect them to learn such phenomena. To construct such datasets, we first manually investigated the standard WMT corpora consisting of UN (Ziemski et al., 2016), Europarl (Tiedemann, 2012) as well as the standard IWSLT dataset (Cettolo et al., 2012). We analyzed 100 randomly selected pairs of consecutive English sentences from each dataset, where the first sentence was treated as the context. Table 1 shows the percentage of cases containing the respective discourse phenomena.
In accordance with intuition, data sources based on narrative texts such as IWSLT exhibit increased amounts of discourse phenomena compared to strictly formal texts such as the UN corpus. On the other hand, the UN corpus consists of largely unrelated sentences, where only lexical consistency is well-represented due to the usage of very specific and strict naming of political concepts. We decided to exclude the UN corpus and combine the other datasets that have more discourse phenomena. We evaluate the models on the WMT-14 testset which consists of news articles. Table 2 shows the statistics of the resulting datasets.
BLEU Scores
The BLEU scores on the WMT-14 testset for each of the five trained models for De-En, Fr-En and Ru-En translation tasks are given in Table 3.
We observe variability in BLEU scores across the models. In contrast to the increases in BLEU for selected language pairs and datasets reported in the published work, incorporating context within the elaborate context-dependent models decreases BLEU scores for Fr-En and De-En. CONCAT, the simple concatenation-based model, achieves the best BLEU out of all the tested models, which shows that context knowledge is indeed helpful for improving BLEU. For the Ru-En task, dedicated context-aware models improve the performance. In particular, ANAPH achieves the highest score of all; interestingly, it was trained and tested on En-Ru in the original paper (Voita et al., 2018). This suggests that complex architectures might only be useful for certain types of languages (such as highly inflected languages, like Russian).
Benchmark Testset Generation
We extract the testsets for the evaluated discourse phenomena automatically, based on existing errors in system outputs. This ensures that the data can (i) provide hard contexts for translation without being artificial, (ii) be generated for multiple source languages, and (iii) be updated as frequently as possible; making them adaptable to errors in newer (and possibly more accurate) systems, and making the tasks harder over time.
We use the system outputs released by WMT for the most recent years (Bojar et al., 2017, 2018, 2019) to build our testsets. For De-En, Fr-En and Ru-En, these consist of translation outputs from 51, 31 and 41 unique systems respectively. Since the data comes from a wide variety of systems, our testsets representatively aggregate different types of errors from several (arguably SOTA) models. Also note that the MT models we are benchmarking are not a part of these system submissions to WMT, so there is no potential bias in the testsets.
In this paper, we focus on translations from French, German, and Russian to English. We include French since Fr-En is a popular translation pair that results in some of the highest BLEU scores. WMT discontinued French from 2016 onwards, so the benchmark testsets for French are smaller and based on relatively older 2013-2015 data (Bojar et al., 2013, 2014, 2015). Other source languages that are part of WMT can be extracted as needed; the testsets can also be expanded if older data were to be considered. The following sections describe the dataset, evaluation and verification procedures, and analysis of each of the discourse phenomena we benchmark.

Anaphora

Anaphora are references to entities that occur elsewhere in a text; mishandling them can result in ungrammatical sentences or the reader inferring the wrong antecedent, leading to misunderstanding of the text (Guillou, 2012). We focus specifically on the aspect of incorrect pronoun translations.
Pronoun Testset
To obtain hard contexts for pronoun translation, we look for source texts that lead to erroneous pronoun translations in recent WMT submissions. We align the WMT system translations with their references, and collect the cases in which the translated pronouns do not match the reference. This process requires the pronouns in the target language to be separate morphemes as in English.
Our anaphora testset is an updated version of the one proposed by Jwalapuram et al. (2019), who also provide a list of cases where the translations can be considered wrong (rather than acceptable variants). We filter the system translations based on their list. The corresponding source texts are extracted as a test suite for pronoun translation. This gives us a pronoun benchmark testset with 1478 samples for Fr-En, 2245 samples for De-En and 2368 samples for Ru-En.
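A minimal sketch of this extraction step is given below. The word-alignment function is assumed to exist upstream (any word aligner would do), and the pronoun list is illustrative; the actual pipeline additionally applies the acceptable-variant filter of Jwalapuram et al. (2019) described above.

```python
ENGLISH_PRONOUNS = {"he", "she", "it", "they", "him", "her",
                    "them", "his", "hers", "its", "their"}

def pronoun_mismatches(ref: str, sys_out: str, align):
    """Collect (ref_pronoun, sys_token) pairs where the system translation
    realizes a reference pronoun differently. `align(ref_toks, sys_toks)`
    is an assumed helper returning word-level (i, j) index pairs."""
    ref_toks = ref.lower().split()
    sys_toks = sys_out.lower().split()
    return [(ref_toks[i], sys_toks[j])
            for i, j in align(ref_toks, sys_toks)
            if ref_toks[i] in ENGLISH_PRONOUNS and sys_toks[j] != ref_toks[i]]
```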
Pronoun Evaluation
Targeted evaluation of pronouns in MT has been challenging, as it is not fair to expect an exact match with the reference. Evaluation methods like APT (Miculicich Werlen and Popescu-Belis, 2017) or AutoPRF (Hardmeier and Federico, 2010) are specific to language pairs or lists of pronouns, requiring extensive manual intervention. They have also been criticised for failing to produce evaluations that are consistent with human judgments (Guillou and Hardmeier, 2018). Jwalapuram et al. (2019) propose a model-based evaluation measure for pronouns that generalizes well across language pairs and pronouns. They train a pairwise ranking model that scores "good" pronoun translations (as in the reference) higher than "poor" pronoun translations (as in the MT output) in context, and show that their model is good at making this distinction, along with having high agreement with human judgements. However, they do not rank multiple system translations against each other, which is our main goal; the absolute scores produced by their model are not useful since it is trained in a pairwise fashion.
We devise a way to use their model to score and rank system translations in terms of pronouns. First, we re-train their model with more up-to-date WMT data. We obtain a score for each benchmarked MT system (SEN2SEN, CONCAT, etc.) translation using the model, plus the corresponding reference sentence. We then normalize the score for each translated sentence by calculating the difference with the reference. To get an overall score for an MT system, the assigned scores are summed across all sentences in the testset:

score(sys) = Σ_i [ ρ_i(ref | θ) − ρ_i(sys | θ) ],   (1)

where ρ_i(·|θ) denotes the score given to sentence i by the pronoun model with parameters θ. The systems are ranked based on this overall score, where a lower score indicates better performance.
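In code, the ranking procedure of Eq. (1) reduces to summing per-sentence score differences; a minimal sketch, assuming the re-trained pronoun model's scores have already been computed:

```python
def rank_by_pronoun_score(ref_scores, sys_scores_by_model):
    """ref_scores[i] = rho_i(ref | theta); sys_scores_by_model[m][i] =
    rho_i(m | theta) for MT system m. A lower summed difference with the
    reference means pronoun translations closer to the reference."""
    totals = {m: sum(r - s for r, s in zip(ref_scores, scores))
              for m, scores in sys_scores_by_model.items()}
    return sorted(totals.items(), key=lambda kv: kv[1])  # best system first
```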
User study. To confirm that our normalization-based ranking of systems agrees with human judgments, we conducted a user study. Participants are asked to rank given translation candidates in terms of their pronoun usage. We include the reference among the candidates as a control. We ask participants to rank system translations directly rather than a synthetically constructed contrastive pair (as was done by Jwalapuram et al. (2019)) to ensure that our evaluations, which will be conducted on actual translated texts, are reliable. We first conducted the study in a bilingual setup, in the presence of the source, for German-English. Participants were shown a source context of two sentences and the source sentence in bold, followed by three candidate translations of the source sentence, one of which is the reference.
The other two were translations with different pronoun errors produced by MT systems. Participants annotate 100 such samples. See Appendix A.1 for the user study interface.
We then conducted the study in a monolingual setup without the source, i.e., native speakers are shown the reference context in English, with the two candidate English translations and the reference translation as possible options for the sentence that follows (Figure 1). To facilitate comparison, the data used for the German-English and English-only studies is the same.
The results are analysed to check (i) how often the reference is preferred over the system translations (our control), and (ii) how often the users agree in preference over the system translations (i.e., human judgment of translation quality). There were two participants in the bilingual setup, with the control experiment yielding an agreement of 0.72 according to Gwet's AC1 (Gwet, 2008). There were four participants in the monolingual setup, with the control yielding an AC1 agreement of 0.82, which is higher than in the bilingual setup. We therefore use the monolingual setup to evaluate the rankings obtained from our modified evaluation method. We obtain an agreement of 0.91, justifying the use of our modified pronoun model for evaluation.
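For reference, Gwet's AC1 for the two-rater case can be computed as below. This is a minimal sketch for binary labels (e.g., "prefers the reference" vs. not); the studies above rank three candidates, so the published numbers come from the general multi-category form of the statistic.

```python
def gwet_ac1_binary(rater_a, rater_b):
    """Gwet's AC1 (Gwet, 2008) for two raters and binary labels 0/1.
    Chance agreement takes the AC1 form p_e = 2*q*(1-q), with q the mean
    prevalence of label 1 across both raters."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    q = (sum(rater_a) + sum(rater_b)) / (2 * n)
    p_e = 2 * q * (1 - q)
    return (p_obs - p_e) / (1 - p_e)
```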
Results and Analysis
The ranking results obtained from evaluating the MT systems on our pronoun benchmark testset using our evaluation measure are given in Table 4. We also report common pronoun errors for each model based on our manual analysis.
Overall, we observe that, surprisingly, SEN2SEN translates pronouns comparatively well, outperforming all other models in De-En and Fr-En, and only giving way to ANAPH in Ru-En. The success of the SEN2SEN model can be explained by its tendency to use it as the default pronoun, which statistically appears most often due to the lack of grammatical gender in English. More variability in pronouns occurs in the outputs of the context-aware models, but this does not contribute to greater success.

Table 4: Pronoun evaluation: Rankings of the different models for each language pair, obtained by summing the evaluation score for each sample in the pronoun benchmark. Each set of rankings is followed by the results of the manual analysis on a subset of the translation data. The percentages for the following types of errors are reported: Anaphora - instances of Gender Copy, Named Entity and Language-specific errors.
Specifically, we observed the following types of errors in our manual analysis on a subset of the translation data: (i) Gender copy. Translating from Fr/De/Ru to En often requires the 'flattening' of gendered pronouns to it, since Fr/De/Ru assign gender to all nouns. In many cases the machine-translated pronouns tend to (mistakenly) agree with the source language. For example, "diese Wohnung in Earls Court..., und sie hatte..." is translated to "...apartment in Earls Court, and she had...", a version which upholds the female gender expressed in sie, instead of translating it to it. This was the most common error, except for Ru-En, where Named Entity errors were slightly more prevalent.
(ii) Named entity. A particularly hard problem is to infer gender from a named entity, e.g., in "Lady Liberty... She is meant to...", she is wrongly translated to it. Such examples demand higher inference abilities such as world knowledge (e.g., distinguishing male/female names).
(iii) Language-specific phenomena. Pronouns can be ambiguous in the source language. For example in German, the pronoun sie can mean both she and you, depending on capitalization, sentence structure, and context. This type of error often appears in the context-aware models, while being relatively rare for the SEN2SEN model.

Coherence & Readability

Pitler and Nenkova (2008) define coherence as the ease with which a text can be understood, and view readability as an equivalent property that indicates whether it is well-written. It has been shown that NMT systems generate more fluent sentences than their phrase-based counterparts (Castilho et al., 2017). However, when the output is evaluated at the document level, it has also been shown that it lacks coherence (Läubli et al., 2018).
Coherence Testset
Our coherence and readability benchmarking is conducted at the document level; we try to find documents that can be considered hard to translate. To do this, we use the coherence model recently proposed by Moon et al. (2019), that achieves state-of-the-art results in most coherence assessment tasks. The model has a Siamese framework, trained in a pairwise ranking fashion with positive and negative documents. The network models both syntax and inter-sentence coherence relations, along with global topic structures.
The coherence model is originally trained on WSJ articles, where a negative document is formed by shuffling the sentences of an original (positive) document. It needed to be re-trained with MT data to better capture the coherence issues that are present in MT outputs, which several studies have shown to be incoherent (Smith et al., 2015, 2016; Läubli et al., 2018). We thus re-train the coherence model with reference translations as positive and MT outputs as negative documents. We use the older WMT submissions from 2011-2015 for this re-training, to ensure that the training data does not overlap with the data used for extracting our benchmark testset.
The model takes a system translation (multisentential) and its reference as input and produces a score for each. Similar to Eq. 1, we consider the difference between the scores produced by the model for the reference and the translated text as the coherence score for the translated text.
For a given source text (document) in the WMT testsets, we obtain the coherence scores for each of the translations (i.e., WMT submissions) and average them. The source texts are then sorted based on the mean coherence scores of their translations. The texts that have lower mean coherence scores can be considered to have been hard to translate coherently. We threshold the scores to extract approximately the bottom 30% of the texts as a tradeoff between getting hard enough contexts and a reasonably-sized testset. These source texts form our benchmark testset for coherence and readability. This yields 38 documents for Fr-En, 128 documents for De-En and 180 documents for Ru-En.
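The document selection step amounts to ranking source documents by the mean coherence score of their WMT submission translations and keeping the hardest ones. A minimal sketch, where the 30% threshold matches the trade-off described above:

```python
def hard_documents(doc_scores, fraction=0.30):
    """doc_scores: dict mapping a source document id to the list of
    coherence scores of all WMT submission translations of that document.
    Returns the ids of roughly the `fraction` of documents whose
    translations have the lowest mean coherence score."""
    means = {d: sum(s) / len(s) for d, s in doc_scores.items()}
    ranked = sorted(means, key=means.get)          # ascending: hardest first
    return ranked[: max(1, int(fraction * len(ranked)))]
```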
Coherence Evaluation
Coherence and readability is also a hard task to evaluate, as it can be quite subjective. We resort to model-based evaluation here as well, to capture the different aspects of coherence in translations.
We use our re-trained coherence model to score the benchmarked MT system translations and modify the scores for use in the same way as the anaphora evaluation (Eq. 1) to obtain a relative ranking. As mentioned before (§3), the benchmarked MT systems do not overlap with the WMT system submissions, so there is no potential bias in evaluation, since the testset extraction and the evaluation processes are independent. To confirm that the model does in fact produce rankings that humans would agree with, and to validate our model re-training, we conduct a user study.

Table 5: Coherence evaluation: Rankings of the different models for each language pair, obtained by summing evaluation scores for each document in the coherence benchmark testsets.

User study. The participants are shown three candidate English translations of the same source text, and asked to rank the texts on how coherent and readable they are (Figure 2). To optimize annotation time, participants are only shown the first four sentences of the document; they annotate 100 such samples. We also include the reference as one of the candidates for control, and to confirm that we are justified in re-training the evaluation model to assign a higher score to the reference. Three participants took part in the study. Our control experiment results in an AC1 agreement of 0.84. The agreement between the human judgements and the coherence evaluation model's rankings is 0.82. The high agreement validates our proposal to use the modified coherence model to evaluate the benchmarked MT systems.
Results and Analysis
From the rankings in Table 5, we see that SEN2SEN is the most coherent model for De-En and Ru-En. For Fr-En, however, we observe an advantage for a context-aware model, SAN, which ranks high for De-En as well. We identified the following types of coherence and readability errors (more examples in Appendix A.6).
(i) Inconsistency. As in Somasundaran et al. (2014), we observe that inconsistent translation of words across sentences (in particular named entities) breaks the continuity of meaning.
(ii) Translation error. Errors at various levels spanning from ungrammatical fragments to model hallucinations introduce fragments which bear little relation to the whole text (Smith et al., 2016). An example of this: Reference: There is huge applause for the Festival Orchestra, who appear on stage for the first time in casual leisurewear in view of the high heat.
Translation: There is great applause for the solicitude orchestra, which is on the stage for the first time, with the heat once again in the wake of an empty leisure clothing.
Lexical Consistency
Lexical consistency in translation was first defined as 'one translation per discourse' by Carpuat (2009), i.e., the translation of a particular source word consistently to the same target word in that context. Guillou (2013) analyzes different human-generated texts and concludes that human translators tend to maintain lexical consistency, which supports the important elements in a text. The consistent usage of lexical items in a discourse can be formalized by computing lexical chains (Morris and Hirst, 1991; Lotfipour-Saedi, 1997).
Lexical Consistency Testset
To extract a testset for lexical consistency evaluation, we first align the translations from WMT submissions with their references. In order to get a reasonable lexical chain formed by a consistent translation, we consider translations of blocks of 3-5 sentences in which the (lemmatized) word we are considering occurs at least twice in the reference. For each such word, we check if the corresponding system translation produces the same (lemmatized) word at least once, but fewer than the number of times the word occurs in the reference. In such cases, the system translation has failed to be lexically consistent in translation (see Figure 3 for an example). We limit the errors considered to nouns and adjectives. The source texts of these cases form the benchmark testset. This gives us a testset with 172 sets of sentences for Fr-En, 312 sets for De-En and 358 sets for Ru-En.
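The per-block consistency test described above can be written compactly; a sketch, assuming lemmatization and sentence-block segmentation happen upstream:

```python
def lexical_inconsistency(ref_lemmas, sys_lemmas, word):
    """One block of 3-5 aligned sentences, given as flat lists of lemmas.
    The block enters the testset for `word` (a noun or adjective lemma)
    if the reference uses it at least twice while the system translation
    produces it at least once but fewer times than the reference."""
    n_ref = ref_lemmas.count(word)
    n_sys = sys_lemmas.count(word)
    return n_ref >= 2 and 1 <= n_sys < n_ref
```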
One possible issue with this method could be that reference translations may contain forced consistency, i.e., human translators introduce consistency to make the text more readable, despite inconsistent word usage in the source. It may not be reasonable to expect consistency in a system translation if there is none in the source. To confirm, we conducted a manual analysis where we compared the lexical chains of nouns and adjectives in Russian and French source texts against the lexical chains in the English reference. We find that in a majority (77%) of the cases, the lexical chains in the source are reflected accurately in the reference, and there are relatively few cases where humans force consistency. Considering the fact that the same data is used for BLEU calculations, we presume that this should not be a significant issue.
Lexical Consistency Evaluation
For lexical consistency, we adopt a simple evaluation method. For each block of 3-5 sentences, the translation of the word in focus is either consistent or inconsistent. We simply count the instances of consistency and rank the systems based on their accuracy, i.e., the percentage of consistent blocks.
It is possible that the word used in the system translation is not the same as the word in the reference, but the MT output is still consistent (e.g., a synonym used consistently). We tried to use alignments coupled with similarity obtained from ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) embeddings to evaluate such cases to avoid unfairly penalizing the system translations, but we found this to be noisy and unreliable. Thus, we match with the reference, as it can be argued that such words are salient and therefore must be translated exactly to convey the correct meaning.
Results and Analysis
The rankings of the MT systems based on accuracy on the lexical consistency benchmark testsets are given in Table 6, along with our findings from a manual analysis on a subset of the translations.
The overall low quality of Russian translations contributes to the prevalence of Random translations, and the necessity to transliterate named entities increases NE errors compared to other languages. CONCAT and SEN2SEN are again successful on average, taking first or second place in all tested languages, while ANAPH leads the board again for Ru-En. Our manual inspection of the lexical chains shows the following tendencies: (i) Synonym & related word. Words are exchanged for their synonyms (poll / survey), hypernyms/hyponyms (ambulance / car) or related concepts (wine / vineyard).
(ii) Named entity. Models tend to distort proper names and translate them inconsistently. For example, the original name Füchtorf (name of a town) gets translated to feeding-community. (iii) Omission. Occurs when words are omitted altogether from the lexical chain.

Table 6: Lexical consistency evaluation: Rankings of the different models for each language pair, ranked by their Accuracy. Accuracy here is defined as the percentage of samples in the benchmark dataset translations in which the models maintain lexical consistency. Each set of rankings is followed by the results of the manual analysis on a subset of the translation data for Synonyms, Related words, Omissions, Named Entity, and Random translation.
Discourse Connectives
Discourse connectives are used to link the contents of texts together by signaling coherence relations that are essential to the understanding of the texts (Prasad et al., 2014). Failing to translate a discourse connective correctly can result in texts that are hard to understand or ungrammatical.
Discourse Connective Testset
Finding errors in discourse connective translations can be quite tricky, since there are often many acceptable variants. To mitigate confusion, we limit the errors we consider in discourse connectives to the setting where the reference contains a connective but the translations fail to produce any. Although there is an accepted list of explicit discourse connectives, it would not be appropriate to simply extract such cases, since those words may not always act in the capacity of a discourse connective. In order to identify the discourse connectives, we build a simple explicit connective classifier (a neural model) using annotated data from the Penn Discourse Treebank or PDTB (Prasad et al., 2018). The classifier achieves an average cross-validation F1 score of 93.92 across the 25 sections of PDTBv3, proving that it generalizes well. See Appendix A.3 for more details about the model.

Figure 4: Connective study interface. Participants are shown the reference with the connective and another option without the connective, and asked to choose the best option that follows the given context.
After identifying the explicit connectives in the reference and the system translations, we align them and extract the source texts of cases with missing connective translations. We only use the classifier on the reference text, but consider all possible markers in the system translations to give them the benefit of the doubt. This gives us a discourse connective benchmark testset with 109 samples for Fr-En, 109 samples for De-En and 117 samples for Ru-En.
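The extraction logic can be sketched as follows; `classify_connectives` stands in for the PDTB-trained classifier described above (run on the reference only), and `candidate_markers` is the full list of possible explicit connectives used to give the system the benefit of the doubt:

```python
def reference_connective_missing(ref_sent, sys_sent,
                                 classify_connectives, candidate_markers):
    """True if the reference contains an explicit discourse connective
    (per the classifier) but the system translation contains no candidate
    connective marker at all."""
    if not classify_connectives(ref_sent):
        return False
    sys_toks = set(sys_sent.lower().split())
    return not (sys_toks & {m.lower() for m in candidate_markers})
```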
Discourse Connective Evaluation
There has been some work on semi-automatic evaluation of translated discourse connectives in Meyer et al. (2012) and Hajlaoui and Popescu-Belis (2013); however, it is limited to only En-Fr, based on a dictionary list of equivalent connectives, and requires using potentially noisy alignments and other heuristics. In the interest of evaluation simplicity, we expect the model to produce the same connective as the reference. Since the nature of the challenge is that connectives tend to be omitted altogether, we report both the accuracy of connective translations with respect to the reference, and the percentage of cases where any candidate connective is produced. User study. To confirm that the presence of the connective conveys some information and contributes to better readability and understanding of the text, we conduct a user study. As presented in Figure 4, participants are shown two previous sentences from the reference for context, and asked to choose between two candidate options for the sentence that may follow. These options consist of the reference translation with the connective highlighted, and the same text with the connective deleted. We also conducted a study using system translations with missing connectives directly; see Appendix A.3 for discussion.
Participants are asked to choose the sentence which more accurately conveys the intended meaning. There were two participants who annotated 200 such samples. The reference with the connective was chosen over the version without the connective with an AC1 agreement of 0.98. See Appendix A.3 for connective-wise results. Note that participants may prefer the version with the connective due to loss of grammaticality or loss of sense information when the connective is missing. Although indistinguishable in this setting, we argue that since both affect translation quality, it is reasonable to expect a translation for the connectives.
Results and Analysis
The rankings of MT systems based on their accuracy of connective translations are given in Table 7, along with our findings from a manual analysis on a subset of the translations. The ranking shows that the SEN2SEN models are on average the most accurate and omit connectives least often. ANAPH continues its high performance in Ru-En, and while SAN leads the board for De-En in terms of accuracy, it has a low percentage of cases overall in which any connective is produced.
In benchmark outputs, we observed mostly omissions of connectives (the connective disappears in the translation), synonymous translations (e.g., Naldo is also a great athlete on the bench / Naldo's "great sport" on the bank, too.), and mistranslations.
Discussion
Our benchmark evaluation on various discourse phenomena across different MT systems and language pairs reveals gaps in evaluation results that are typically reported. A lack of comprehensive evaluation makes it difficult to determine which models perform conclusively better than others.
Our results re-emphasize the gap between BLEU scores and translation quality at the discourse level. The overall BLEU scores for Fr-En are higher than the BLEU scores for De-En; however, we see that both the lexical consistency and the discourse connective accuracies are higher for De-En. Similarly, for Ru-En, both SAN and HAN have higher BLEU scores than the SEN2SEN and CONCAT models, but are unable to outperform these simpler models consistently in the discourse tasks, often ranking last.
We also reveal a gap in performance consistency across language pairs. Models may be tuned for a particular language pair, such as ANAPH which was trained for En-Ru. For the same language pair (Ru-En), we show results consistent with what is reported; the model leads the board for anaphora and lexical consistency, while ranking second for coherence and readability, and discourse connectives. However, it is not so successful in other languages, ranking at the bottom for anaphora in De-En and discourse connectives in Fr-En, and close to bottom for coherence in Fr-En and De-En. SAN performs highly in coherence for Fr-En and De-En, in contrast to its performance on other tasks and languages; the authors originally report improved results for Fr-En.
In general, our findings match the conclusions of Kim et al. (2019) regarding the lack of satisfactory performance gains in context-aware models. Given no comprehensive evaluation across language pairs, the best bet for training an MT model is to use the baseline SEN2SEN and CONCAT models, which perform more or less reliably across different tasks. Our results emphasize the need for standard benchmarking datasets and evaluation measures across language pairs, which will provide a better picture of MT system performance.
Although some of the testsets we provide are limited in size, it is a consequence of favouring precision to maintain data quality and limiting data to recent years. However, since the extraction is automatic, the datasets can be extended as submissions are added to the upcoming evaluation campaigns, while also increasing the difficulty of the tasks as MT systems improve. We hope that the discourse benchmark testsets and evaluation procedures we provide can contribute towards a more comprehensive MT evaluation framework, and prove useful in obtaining a more complete idea of a system's translation quality.
Conclusions
We presented the DiP benchmark tests, the first discourse-phenomena-based benchmarking testsets of their kind, designed to be challenging for NMT systems. We show that complex context-aware models are not consistent in their performance. Our main goal is to motivate the benchmarking of MT systems with more indicative performance yardsticks. We will release the document-level training corpora and discourse benchmark testsets for public use, and also propose to accept translations from MT systems to maintain a leaderboard for the described phenomena.

A.1 Anaphora

Re-trained model. The pronoun evaluation model of Jwalapuram et al. (2019) is based on a model that is trained on WMT11-15 data and tested on WMT-2017 data. We retrain the model with more up-to-date data from WMT13-18, and test the model on WMT-19 data. Note that this training data is taken from WMT submissions, which do not overlap with the benchmarked MT models; there is therefore no conflict in using this trained model to evaluate the benchmarked model translations. Results are shown in Table 8. Their model scores the translations in context; we provide the previous two sentences from the reference translation as context, according to their settings.
User Study. The bilingual (German-English) user study interface for pronoun translation ranking is shown in Figure 6.

Results. The total assigned scores (difference between reference score and translation score) obtained for each system, after summing over the samples in the respective testsets, are given in Table 10. The models are ranked based on these scores, from lowest score (best performing) to highest score (worst performing).
A.2 Coherence
Re-trained model. We re-train the pairwise coherence model in Moon et al. (2019) to suit the MT setting, with reference translations as the positive documents and the MT outputs as the negative documents. The results are shown in Table 9.
Training data | Test data | Accuracy
WMT11-15      | WMT17-18  | 77.35

Table 9: Results of the re-trained coherence model.
Results. The total assigned scores (difference between reference score and translation score) obtained for each system, after summing over the samples in the respective testsets, are given in Table 11. The models are ranked based on these scores, from lowest score (best performing) to highest score (worst performing).
A.3 Discourse Connectives
Connective Classification model. We build an explicit connective classifier to identify candidates that are acting in the capacity of a discourse connective. The model consists of an LSTM layer (Hochreiter and Schmidhuber, 1997) followed by a linear layer for binary classification, initialized by ELMo embeddings (Peters et al., 2018). We use annotated data from the Penn Discourse Treebank (PDTBv3) (Prasad et al., 2018) and conduct cross-validation experiments across all 25 sections. Our classifier achieves an average cross-validation precision of 95.58, recall of 92.35 and F1 of 93.92, which shows that it generalizes very well. The high precision also provides certainty that the model is classifying discourse connectives reliably.
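A minimal PyTorch sketch of this architecture is shown below. The hidden size and bidirectionality are our assumptions; the text above specifies only an LSTM layer followed by a linear layer over ELMo embeddings, which are taken here as pre-computed inputs.

```python
import torch
import torch.nn as nn

class ConnectiveClassifier(nn.Module):
    """Token-level explicit-connective tagger: a BiLSTM over pre-computed
    ELMo embeddings, followed by a linear layer producing binary logits
    per token (connective vs. not). Hyperparameters are illustrative."""

    def __init__(self, elmo_dim: int = 1024, hidden: int = 256):
        super().__init__()
        self.lstm = nn.LSTM(elmo_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2)

    def forward(self, elmo_embeddings: torch.Tensor) -> torch.Tensor:
        # elmo_embeddings: (batch, seq_len, elmo_dim)
        h, _ = self.lstm(elmo_embeddings)
        return self.out(h)  # (batch, seq_len, 2) logits per token
```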
User Study. For discourse connectives, we conducted two user studies. The first study in which the participants chose between the reference and its noisy version with the connective deleted was reported in the main paper. We present the connective-wise breakdown in Table 12.
In the second study, the participants were shown the reference along with the system translation that was missing the connective (Figure 7). In this study, the setup has no artificially constructed data; the idea is to check whether the system translation might be structured in such a way as to require no connective. However, the AC1 agreement for preferring the reference was 0.82 (2 annotators, different from those in the first study) for this study as well, which is still quite high. Table 13 has the connective-wise breakdown; here we see that the results are slightly different for certain connectives, but overall the strong preference for the reference with the connective is retained. Our assumption that connectives must be translated is validated through both studies.
Note that for both studies, participants were also given options to choose 'Neither' in case they didn't prefer either choice, or 'Invalid' in case there was an issue with the data itself (e.g., transliteration issues, etc.); data that was marked as such was excluded from further consideration.
A.4 Model Parameters
Parameters used to train SEN2SEN, CONCAT, ANAPH, and SAN models are displayed in Table 15, and parameters for HAN in Table 14.
A.5 Datasets
Our trainset is a combination of the Europarl (Tiedemann, 2012), IWSLT (Cettolo et al., 2012) and News Commentary datasets; the development set is a combination of WMT-2016 and older WMT data (excluding 2014). We test on WMT-2014 data. We tokenize the data using the Moses software, lowercase the text, and apply BPE encodings from Sennrich et al. (2016). We learn the BPE encodings with the command learn-joint-bpe-and-vocab -s 40000.
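A sketch of the BPE application step using the subword-nmt package of Sennrich et al. (2016); file names are placeholders, and Moses tokenization and lowercasing are assumed to have been applied already:

```python
from subword_nmt.apply_bpe import BPE

# Codes file learned with: learn-joint-bpe-and-vocab -s 40000 (as above).
with open("bpe.codes", encoding="utf-8") as codes:
    bpe = BPE(codes)

# Segment each tokenized, lowercased training line into BPE subword units.
with open("train.tok.lc.en", encoding="utf-8") as fin, \
     open("train.bpe.en", "w", encoding="utf-8") as fout:
    for line in fin:
        fout.write(bpe.process_line(line))
```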
A.6 Examples

Example errors per phenomenon (S: source, T: system translation, R: reference).

Anaphora - Gender Copy
S: Mir wurde diese Wohnung in Earls Court gezeigt, und sie hatte ...
T: I was shown this apartment in Earls Court, and she had ...

Anaphora - Named Entity
T: ... Lady Liberty is stepping forward. It is meant to be carrying the torch of liberty
R: She is meant to be carrying the torch of Liberty.

Lexical Consistency - Synonym
T: Watch the Tory party conference. The convention is supposed to be about foreign policy, (...).
R: Under tight security - the Tory party conference. The party conference was to address foreign policy (...).

Lexical Consistency - Related Word
T: In the collision of the car with a taxi, a 27-year-old passer was fatally injured.
R: A 27-year-old passenger was fatally injured when the ambulance collided with a taxi.

Lexical Consistency - Named Entity
T: The Feeding-Community farmer, however, also had the ready-filled specialities. The demand for the good "made in Feed orf" was correspondingly high.
R: But the Füchtorf farmer also had bottled specialties with him. There was a lot of demand for the good "made in Füchtorf" beverage.

Lexical Consistency - Omission
T: (...) during the single-family home attempt, it stayed by the royal highlands thanks to the burglar alarm. They got off when the culprits turned hand on Friday just before 20 a.m.
R: It is thanks to the alarm system that the attempt in the Königswieser Straße at the single-family home (...). On Friday just before 20.00 the alarm rang when the offenders took action.

Coherence - Ungrammatical
T: "They didn't play badly for long periods - like Stone Hages, like Hip Horst-Senser. Only the initial phase, we've been totally wasted", annoyed the ASV coach.
R: "Over long periods, they had - as in Steinhagen, as against Hüllhorst - not played badly. We only overslept the initial phase", said the ASV coach annoyed.

Coherence - Hallucination
T: Before appointing Greece, Jeffrey Pyett was the US ambassador to Kiev. When it came to the Maidan and the coup in 2014, it was a newspaper.
R: Before his appointment, Geoffrey Ross Pyatt was an ambassador in Kiyv. During his mission, the Maydan events and state coup happened, reminds Gazeta.Ru.

Coherence - Inconsistency
T: The one-in-house airline crashed on Sunday afternoon at a parking lot near Essen-Mosquitos. Essen Mill is a small airport that's used a lot by airline pilots.
R: On Sunday afternoon, the single-seated aircraft crashed (..) a parking lot near the airport Essen-Mülheim. Essen-Mülheim is a small airport, which is frequently used by pilots with light private planes.

Discourse Connectives - Omission
T: Two people died driving their car against a tree.
R: Two people died after driving their car into a tree.

Discourse Connectives - Synonym
T: Naldo's "great sport" on the bank, too.
R: Naldo is also a great athlete on the bench | 2020-05-01T01:00:47.455Z | 2020-04-30T00:00:00.000 | {
"year": 2020,
"sha1": "d426157cc4f287b1279626cb618ca9df8359b36f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d426157cc4f287b1279626cb618ca9df8359b36f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
234767662 | pes2o/s2orc | v3-fos-license | Nuclear spin readout in a cavity-coupled hybrid quantum dot-donor system
Nuclear spins show long coherence times and are well isolated from the environment, which are properties making them promising for quantum information applications. Here, we present a method for nuclear spin readout by probing the transmission of a microwave resonator. We consider a single electron in a silicon quantum dot-donor device interacting with a microwave resonator via the electric dipole coupling and subjected to a homogeneous magnetic field and a transverse magnetic field gradient. In our scenario, the electron spin interacts with a $^{31}\mathrm{P}$ defect nuclear spin via the hyperfine interaction. We theoretically investigate the influence of the P nuclear spin state on the microwave transmission through the cavity and show that nuclear spin readout is feasible with current state-of-the-art devices. Moreover, we identify optimal readout points with strong signal contrast to facilitate the experimental implementation of nuclear spin readout. Furthermore, we investigate the potential for achieving coherent excitation exchange between a nuclear spin qubit and cavity photons.
I. INTRODUCTION
Nuclear spins are promising candidates for quantum information applications due to their long coherence times [1,2] that can even be observed up to room temperature [3]. However, the small gyromagnetic ratio of nuclear spins that renders them well isolated from the environment and thus underlies their robust coherence also leads to long gate operation times compared to those reported for electron spins. Therefore, nuclear spins will most likely find their application as quantum memories [4,5], either for pure storage or as buffers for quantum computation purposes. In many cases, it is beneficial to perform readout directly on the nuclear spin instead of coherently transferring the information to another system, e.g. an electron spin [6], before this system is read out. Particularly with regard to scalable quantum computing devices, readout relying on electrical means is favourable over methods depending on ac-magnetic fields.
To a large extent driven by the microelectronics industry, the manufacturing of nanoscale semiconductor devices has matured during the last decades, and silicon-based devices have a particularly high potential for scaling. Moreover, isotopically purified 28Si material containing predominantly nuclear-spin-0 atoms can be produced and thus provides an excellent host material for spin qubits based on the electron spin or single impurity nuclear spins [6][7][8][9][10][11][12][13].
Cavity quantum electrodynamics (cQED) has been successfully used for charge-photon [14][15][16] and spin-photon coupling [17][18][19][20][21][22][23][24], as well as the detection of photons [25]. Moreover, cQED and gate reflectometry lend themselves to qubit readout [14,[26][27][28][29][30][31][32][33]. Nevertheless, it is an open question whether or not single nuclear spins could be detected via a cavity or coupled to cavity photons. Among the before-mentioned achievements in cQED it is particularly noteworthy that the strong coupling regime is accessible for the spin of a single electron in a Si double quantum dot (DQD) subject to a magnetic field gradient, where spin-photon coupling emerges due to an effective spin-orbit coupling caused by the combination of the electric dipole interaction and spin-charge hybridization [20,21].

Figure 1. QD-donor system coupled to a single-mode microwave cavity. The cavity transmission A_c = a_out / a_in reveals the nuclear spin state. The QD-donor energy levels are detuned by an amount ε and hybridized by tunnel coupling t_c. The spin of the confined electron is subject to a homogeneous magnetic field B_z and a gradient field b_x perpendicular to B_z, while the electron charge is coupled to a single mode (ω_c) of a microwave cavity with electric-dipole coupling strength g_c. The electron interacts with the nuclear spin of the implanted donor via the hyperfine interaction A.

The same mechanism can also be used to realize a flopping-mode spin qubit with full electrical spin control via electric dipole spin resonance (EDSR) [34,35]. In this system, a longitudinal magnetic field gradient leads to a shift of the phase and amplitude response of the cavity transmission depending on the strength of the field gradient [35]. Motivated by this observation we consider a lateral architecture consisting of a quantum dot (QD) in a planar Si/SiGe structure and a single 31P donor implanted in the Si host material.
While this system has been successfully operated in the multi-electron regime [36], we consider the single-electron regime to form a flopping-mode electron spin qubit (Fig. 1).
As a consequence, if the electron is confined to the donor, it couples to the donor nuclear spin via the hyperfine interaction. In this configuration, we expect the donor to generate a nuclear-spin-state-dependent Overhauser field. This field constitutes a longitudinal magnetic field gradient that leads to a nuclear-spin-state- and detuning-dependent shift of the electron spin transition frequency. Therefore, the cavity response, essentially probing the EDSR frequency, is expected to shift accordingly. Our detailed discussion of the expected characteristics in the cavity transmission indicates that the observable signature of the strong electron spin-photon coupling [20,21] is indeed significantly altered by the state of the nuclear spin and could therefore be used for nuclear spin state readout. This prediction is verified by calculating the cavity transmission using input-output theory.
Moreover, we investigate the effective excitation-conserving nuclear spin-photon coupling and find that our suggested method for nuclear spin readout does not require strong nuclear spin-photon coupling.
This article is organized as follows: Sec. II contains a discussion of the model of the QD-donor system coupled to a cavity mode. In Sec. III we predict the impact of the donor nuclear spin state on the cavity transmission, and verify our expectation by calculating the cavity transmission using input-output theory. Section IV contains the derivation of an effective Hamiltonian describing the nuclear spin dynamics, followed by a discussion of the emerging effective nuclear spin-photon coupling. Finally, we summarize our results in Sec. V.
II. THEORETICAL MODEL
We consider a lateral QD-donor system fabricated in isotopically enriched 28Si. QD and donor are aligned along the z-axis, such that a single electron can either be localized in the QD on the left or the donor on the right by adjusting the QD-donor energy level detuning ε. QD-donor tunnel coupling t_c results in charge hybridization near ε = 0. The proposed experimental setup, including the various interactions, is sketched in Fig. 1. The detuning ε, determined by the energy difference between QD and donor, can be controlled by applying an electric field in the z-direction and by tuning the gates defining the QD confinement potential. In the presence of a homogeneous magnetic field B_z and a magnetic field gradient b_x perpendicular to B_z, the QD-donor system in the single-electron configuration can be modelled by the Hamiltonian

H_0 = (ε/2) τ_z + t_c τ_x + (B_z/2) σ_z + (b_x/2) σ_x τ_z,   (1)

with τ_i and σ_i the Pauli operators in position and electron spin space, respectively. The interaction between the nuclear spin and the magnetic field is neglected because it is roughly three orders of magnitude smaller than all other relevant energy scales [37]. The magnetic fields B_z and b_x are given in units of energy, and energy units are chosen such that ħ = 1. The donor ground and first excited state are energetically separated by ≳ 2.5 meV, taking into account strain effects due to the Si/SiGe interface [38]. On the other hand, a low-lying excited state, the excited valley state, is present in Si QDs. However, valley splittings ≳ 50 µeV observed in recent devices [39,40], together with the possibility to operate the QD-donor system at temperatures ≲ 50 mK, allow for a negligible population of the excited valley state. Hence, the valley degree of freedom can be neglected in our model.
Electric dipole interactions allow coupling of the electron in the QD-donor system to microwave resonator photons, described by the coupling Hamiltonian

H_I = g_c τ_z (a + a†),   (2)

where a and a† are the bosonic cavity photon annihilation and creation operators of the relevant cavity mode, respectively. The charge-photon coupling strength for a DQD has been found to be on the order of g_c/2π ≈ 30 to 40 MHz [20,41]. In the QD-donor scenario we expect it to be ≈ 1/3 of the DQD value, as discussed in Appendix A. The Hamiltonian for the cavity mode with frequency ω_c is given by H_cav = ω_c a†a. If the electron is confined to the donor, the electron spin and the 31P donor nuclear spin couple via the hyperfine interaction. The hyperfine interaction strength A = 117 MHz [42,43] present in bulk Si is significantly reduced to A ≈ 25 MHz in the Si quantum well of a Si/Si_0.7Ge_0.3 heterostructure due to strain effects caused by the Si- and SiGe-lattice mismatch [38,44]. On the other hand, the donor is ionized and the electron does not interact with the donor nuclear spin if it occupies the left QD. Therefore, we can represent the electron spin-nuclear spin interaction as

H_e−n = (A/4) (σ · ν) (1 − τ_z)/2,   (3)

with ν = (ν_x, ν_y, ν_z)^T and ν_i the nuclear spin Pauli operators. The factor (1 − τ_z)/2 is a projection on the subspace with the electron bound to the donor.
Signatures of the electron spin-photon coupling can be observed in the cavity transmission [20,21]. We now investigate whether these signatures will be altered in the presence of a nuclear spin interacting with the electron spin via the hyperfine interaction and whether the combined spin-photon and hyperfine interactions have a potential application for nuclear spin readout.
To calculate the cavity response, we first transform the total Hamiltonian to the eigenbasis |±⟩ of ετ_z/2 + t_c τ_x, with the electron position expressed in terms of antibonding (+) and bonding (−) molecular orbital states, because these basis states are a good approximation for the eigenstates |n⟩ with corresponding energies E_n of H_sys = H_0 + H_e−n, as illustrated in Fig. 2. Then, the Hamiltonian H = H_sys + H_I + H_cav can be written as the sum of a diagonal part H̃_0 and an off-diagonal perturbation V as H = H̃_0 + V [Eqs. (4) and (5)], where τ̃_i are Pauli operators acting on the space of antibonding (+) and bonding (−) orbitals, i.e., τ̃_z|±⟩ = ±|±⟩. Moreover, we introduce the orbital energy Ω = √(ε² + 4t_c²) and the orbital mixing angle θ = arctan(ε/2t_c). H̃_0 is diagonal with respect to the basis {|±, ↓(↑), ⇓(⇑), n⟩}, indicating the orbital state of the electron (±), the electron spin state (↓, ↑), the nuclear spin state (⇓, ⇑) and the number of photons in the cavity mode (n), respectively, while V is purely off-diagonal in this basis. In order to predict the impact of the nuclear spin on the cavity transmission, we derive an effective Hamiltonian for the lower orbital subspace defined by the projection operator P_0 = (1 − τ̃_z)/2, which projects on the subspace spanned by the states |−, ↓, ⇑, n⟩, |−, ↓, ⇓, n⟩, |−, ↑, ⇑, n⟩, |−, ↑, ⇓, n⟩ with n = 0, 1, 2, . . .. As a next step, we apply a Schrieffer-Wolff transformation to decouple the subspaces defined by the projection operators P_0 and Q_0 = 1 − P_0 [45], to find the effective Hamiltonian H_eff = e^S H e^{−S}, and follow the perturbative method presented in [45] to determine the block-off-diagonal and antihermitian generator S defining the unitary transformation e^S. If one chooses the ansatz S = Σ_{n=1}^∞ S_n with S_n ∼ V^n, the first contribution S_1 must obey the relation [45] [H̃_0, S_1] = P_0 V Q_0 + Q_0 V P_0. This relation, together with the commutation relations of the Pauli operators and the bosonic photon operators, allows us to determine S_1. The knowledge of S_1 is in turn sufficient to compute the effective Hamiltonian for the subspace defined by P_0 up to second order in the perturbation V [45],

H_e = P_0 (H̃_0 + V) P_0 + (1/2) P_0 [S_1, V] P_0.

The explicit form of H_e is presented in Appendix B. However, for the following discussion it is essential to determine transition frequencies as precisely as possible. To this end, we transform H_e to a basis accounting for the electron spin mixing due to the magnetic field gradient, with the basis states |↓̃⟩, |↑̃⟩ defined by the electron spin mixing angle φ via

|↑̃⟩ = cos(φ/2)|↑⟩ + sin(φ/2)|↓⟩,  |↓̃⟩ = cos(φ/2)|↓⟩ − sin(φ/2)|↑⟩.

Since, here, b_x ≪ B_z, the electron spin mixing angle is small and therefore the states |↓̃(↑̃)⟩ are predominantly the electron spin states |↓(↑)⟩, up to small contributions of the opposite electron spin state. Hence, in the following we refer to |↓̃(↑̃)⟩ as the electron spin states. The diagonal part of the transformed Hamiltonian [Eq. (10)] contains the Pauli operators σ̃_i operating on the |↓̃(↑̃)⟩ states, with its coefficients derived in Appendix B. Since the signatures of the electron spin-photon coupling that we expect to change due to the nuclear spin are observed close to resonance between the electron spin transition and the resonator [20,21], it is justified to assume Ẽ_σ ≈ ω_c.
Under this assumption we can apply the rotating wave approximation (RWA), retaining terms rotating with frequencies ≈ Ẽ_σ ≈ ω_c, and find that the non-diagonal part of the transformed Hamiltonian comprises interactions between the electron spin and the nuclear spin of the QD-donor system and the cavity mode [Eq. (15)], with the explicit forms of the spin-photon couplings g̃_σν, g̃_σ, δg̃_σ given in Appendix B. The interaction terms in (15) are of particular interest, since one can expect to see signatures of these interactions in the transmission. However, the terms in the first line are negligible for φ ≪ 1.
The terms in the second line of (15) incorporate a flip of both the nuclear spin and the electron spin if the two are anti-aligned, with the concomitant creation or annihilation of a cavity photon. This coupling emerges due to the combined effect of the dipole operator, coupling the states |−, ↑, ⇓(⇑)⟩ and |+, ↑, ⇓(⇑)⟩, and the hyperfine interaction between the states |+, ↑, ⇓⟩ and |−, ↓, ⇑⟩. Thus, the interaction persists in the absence of the magnetic field gradient and has already been observed and analyzed in setups without such a gradient. The interaction can be used to control the flip-flop qubit and to construct gates between two such qubits [37], while the combination with an oscillating magnetic field allows for controlling the nuclear spin qubit and implementing a nuclear spin two-qubit gate [46].
On the other hand, the combined effect of the magnetic field gradient, giving rise to the coupling between the states |+, ↑, ⇓(⇑)⟩ and |−, ↓, ⇓(⇑)⟩, and the dipole operator leads to the terms in the third line, which describe a flip of the electron spin accompanied by the annihilation or creation of a cavity photon, while the state of the nuclear spin remains unchanged. These two different types of interaction cause a hybridization of the QD-donor system and the cavity mode when the transition in the QD-donor system is close to resonance with the cavity mode. Since the resulting hybrid states have a significant impact on the cavity transmission, we inspect the energy expectation values of the QD-donor system states involved in the respective transitions. The energy expectation values of the four basis states defining the lower orbital subspace can be easily read off from (10) [Eq. (18)]; from these we immediately find the transition frequencies Ẽ_σ^⇑ and Ẽ_σ^⇓ for electron spin flips with the nuclear spin fixed in the state ⇑ or ⇓, as well as the transition frequency Ẽ_ff for the electron spin-nuclear spin flip-flop [Eq. (19)]. The energy expectation values (18) as a function of the QD-donor detuning ε and the various transition frequencies are presented in Fig. 3. Both Fig. 3 and Eq. (18) show that the electron spin flip transition frequency depends on the state of the nuclear spin. More precisely, for a small electron spin mixing angle φ ≪ 1, the transition frequencies with the nuclear spin in the states ⇑ and ⇓ differ by Δ = |Ẽ_σ^⇑ − Ẽ_σ^⇓| [Eq. (20)]. In the limits of large negative, zero, and large positive QD-donor detuning ε, the shift in the resonance frequency Δ takes the values (note that t_c > 0) Δ → 0 for ε ≪ −2t_c, Δ = A/2 at ε = 0, and Δ → A for ε ≫ 2t_c [Eq. (21)]. The increasing impact of the nuclear spin on Δ with increasing QD-donor detuning is intuitively easy to understand: for ε ≪ −2t_c the electron is localized in the left QD and therefore decoupled from the nuclear spin, at ε = 0 it is completely delocalized between the left QD and the donor, while it is trapped in the donor with high probability for ε ≫ 2t_c, such that the coupling to the nuclear spin is maximized.
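The detuning dependence of Δ can be checked numerically by diagonalizing the reconstructed Hamiltonian H_0 + H_e−n of Sec. II. The sketch below is illustrative (parameter values in µeV are our choices, not the device values of the paper); it labels the four lowest eigenstates by ⟨ν_z⟩ and takes the difference of the two electron spin-flip frequencies.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
i2 = np.eye(2, dtype=complex)

def kron3(a, b, c):
    # operator ordering: orbital (tau) x electron spin (sigma) x nuclear spin (nu)
    return np.kron(a, np.kron(b, c))

def hamiltonian(eps, tc, Bz, bx, A):
    """H_0 + H_e-n as reconstructed in Sec. II (hbar = 1, energies in ueV).
    The donor projector (1 - tau_z)/2 gates the hyperfine term."""
    H = 0.5 * eps * kron3(sz, i2, i2) + tc * kron3(sx, i2, i2)
    H += 0.5 * Bz * kron3(i2, sz, i2) + 0.5 * bx * kron3(sz, sx, i2)
    p_donor = 0.5 * (np.eye(2) - sz)
    for s, n in ((sx, sx), (sy, sy), (sz, sz)):
        H += 0.25 * A * kron3(p_donor, s, n)
    return H

def delta(eps, tc=5.0, Bz=24.0, bx=1.0, A=0.1):
    """Nuclear-spin-dependent shift of the electron spin-flip frequency."""
    evals, evecs = np.linalg.eigh(hamiltonian(eps, tc, Bz, bx, A))
    nu_z = kron3(np.eye(2), np.eye(2), sz)
    # <n|nu_z|n> labels each eigenstate by its nuclear spin orientation
    labels = np.einsum("in,ij,jn->n", evecs.conj(), nu_z, evecs).real
    e, lab = evals[:4], labels[:4]          # eigh sorts ascending: lower orbital
    f_up = np.diff(np.sort(e[lab > 0]))[0]  # spin-flip frequency, nucleus up
    f_dn = np.diff(np.sort(e[lab < 0]))[0]  # spin-flip frequency, nucleus down
    return abs(f_up - f_dn)

for eps in (-50.0, 0.0, 50.0):
    print(eps, delta(eps))  # approaches 0, ~A/2, ~A, matching the limits above
```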
We note that in a DQD architecture with the second QD overlapping with an isoelectric 29Si nuclear spin, readout of the nuclear spin state has been realized for the maximal coupling scenario ε ≫ 2t_c by probing the electron spin resonance frequency with frequency-selective ac magnetic field pulses [47]. Even though the hyperfine interaction is as low as a few hundred kHz in such a device, we expect that, alternatively, our suggested readout method can be used, as discussed in detail in Appendix E.
III. NUCLEAR SPIN READOUT VIA THE ELECTRON SPIN
We now describe how the nuclear-spin-dependent shift ∆ of the electron spin resonance frequency, Eq. (20), allows for a readout of the nuclear spin. The last term in (10) identifies the effective cavity resonance frequency, including shifts of the empty cavity frequency ω_c due to the interaction with the QD-donor system. Thus, the cavity mode is resonant with the electron spin flip transition for a fixed nuclear spin state if condition (22) is met, and resonant with the electron spin-nuclear spin flip-flop transition if condition (23) is met. We expect a signature of the respective coupling in the cavity transmission in the vicinity of system parameters ε, t_c, B_z, b_x, and ω_c for which one of these relations is fulfilled.
In order to verify our prediction we calculate the cavity transmission A_c using input-output theory (Appendix C) and compare the system parameters for which characteristic features emerge with those satisfying the resonance conditions derived above. The calculation of A_c takes into account charge relaxation processes due to the phonon environment and quasi-static charge noise affecting the detuning parameter ε (see Appendix D for details). Figure 4 shows the absolute value of the cavity transmission |A_c| for three different populations of the hyperfine levels, where (a) the two lowest energy levels are equally populated, approximating the thermal equilibrium state for T ≳ 30 mK, i.e., the QD-donor system is with equal probability in the states |0⟩ and |1⟩ which, up to small corrections, correspond to the nuclear spin up and down states |↓, ⇑⟩ and |↓, ⇓⟩, respectively; (b) only the ground state ≈ |↓, ⇑⟩ is populated; (c) only the excited state ≈ |↓, ⇓⟩ is populated. We point out that a single measurement will always be represented by Figs. 4(b) or (c), while Fig. 4(a) corresponds to the average over many measurements if the system is initialized with equal probability in the states |0⟩ ≈ |↓, ⇑⟩ and |1⟩ ≈ |↓, ⇓⟩ before the measurement. We find that the emerging characteristic features, given by a significantly reduced transmission due to the interaction of the cavity mode with the QD-donor system, appear in the immediate vicinity of the parameters fulfilling the resonance conditions Eqs. (22) and (23), as indicated by the dashed lines in Fig. 4. One also observes that the signatures are less pronounced for |ε| ≫ 2t_c. The last line of Eq. (5) shows that the electric dipole moment of the |+⟩ ↔ |−⟩ transition is proportional to cos θ and therefore decreases with increasing |ε/2t_c|, which, in turn, leads to the weakening of the effective couplings responsible for the observed signatures.
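To reproduce the qualitative shape of these transmission features, the minimal sketch below evaluates the standard single-transition input-output formula, A_c = √(κ₁κ₂)/[i(ω_c − ω) + κ/2 + g²χ(ω)] with χ(ω) = 1/[i(ω_s − ω) + γ/2]. This is a simplification of the multi-level calculation of Appendix C, and all parameter values are illustrative assumptions rather than the paper's:

```python
import numpy as np

# Illustrative parameters (in MHz); not the values used in the paper.
kappa = 1.0   # total cavity linewidth, with kappa1 = kappa2 = kappa/2
gamma = 0.3   # effective spin linewidth
g = 5.0       # electron spin-photon coupling
shift = 12.5  # nuclear-spin-dependent ESR shift, ~A/2 for A = 25 MHz

def A_c(omega, omega_spin, omega_c=0.0):
    """|A_c| for a single spin transition (standard input-output form)."""
    chi = 1.0 / (1j * (omega_spin - omega) + gamma / 2)
    return abs(np.sqrt((kappa / 2) * (kappa / 2))
               / (1j * (omega_c - omega) + kappa / 2 + g**2 * chi))

omega = np.linspace(-30.0, 30.0, 2001)       # probe frequency rel. to omega_c
Ac_up = A_c(omega, omega_spin=-shift / 2)    # ESR detuning for nuclear spin up
Ac_dn = A_c(omega, omega_spin=+shift / 2)    # ESR detuning for nuclear spin down
print(f"max readout contrast: {np.abs(Ac_dn - Ac_up).max():.2f}")
```

Sweeping omega_spin (effectively B_z) instead of the probe frequency reproduces the split dips of Fig. 4 for the two nuclear spin states.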
For the experimental realization of nuclear spin state readout it is essential to obtain a strong contrast between the signal for nuclear spin ⇑ and ⇓. In order to identify suitable readout points, we calculate the difference between the cavity transmission obtained with only the excited state populated, |A_c|⇓, and the one with only the ground state populated, |A_c|⇑, i.e., Fig. 4(b) is subtracted from Fig. 4(c). The result presented in Fig. 5(a) unveils extended regions providing a high signal contrast for nuclear spin readout in the vicinity of the three resonance conditions, the two resonances (22) and the resonance (23), for weak QD-donor detuning in the range between ε = −10 µeV and ε = 15 µeV. The linecuts in Fig. 5(b) show that, within this range of QD-donor detuning, maximal contrast is achieved for points in the immediate vicinity of the resonance for nuclear spin ⇑.
The amplitude difference of the readout contrast between the resonances for ⇑ and ⇓ can be attributed to a shift of the cavity resonance frequency caused by the interaction with the QD-donor system (see Appendix E for more details). Moreover, we can check the sensitivity of the readout contrast with respect to the cavity detuning from the probe field for good readout points. To do so, we calculate |A_c|⇑, |A_c|⇓ and the readout contrast, |A_c|⇓ − |A_c|⇑, for the point in Fig. 5 where the red and the second dashed orange line from the left intersect, as a function of the detuning δω. The result is presented in Fig. 6 and shows a readout contrast larger than 0.2 for |δω| < 1.5 MHz.
In addition, the figure allows one to identify the origin of the reduced transmission in Fig. 4(b) and the resulting good readout contrast: at the chosen readout point, the electron spin flip transition for nuclear spin ⇑ is close to resonance with the cavity, while the electron spin flip transition for nuclear spin ⇓ is off-resonant. Due to the strong electron spin-photon coupling, one observes Rabi splitting for nuclear spin ⇑ (|A_c|⇑ in Fig. 6), whereas |A_c|⇓ shows a single resonance located between the Rabi-split modes of |A_c|⇑. To further characterize the nuclear spin measurement, we go beyond the input-output theory and inspect Eqs. (10) and (B38) describing the effective electronic Hamiltonian H_e,0 + H_e,int in order to assess the expected measurement back-action. We note that since [H_e,0, ν_z] = 0, the main part of the hyperfine coupling leads to a nuclear spin readout in the form of a quantum non-demolition (QND) measurement [48]. In general, [H_e,int, ν_z] ≠ 0, leading to small corrections to the QND behavior. However, for an adiabatic transfer of the electron from the left QD to the delocalized configuration between the QD and the donor and back, under continuous transmission of a microwave field at constant frequency, we expect a recovery of the QND readout because the pure nuclear spin states are adiabatically transferred to eigenstates of H_e,0 + H_e,int. Away from the resonance, the analogous argument holds with the nonresonant Hamiltonian (B1).
For the experimental verification of the suggested method for nuclear spin readout, we envision the following protocol: The cavity transmission is measured at one of the suitable readout points. Then, a nuclear spin resonance π-pulse is performed before the cavity transmission is probed again. Following the above discussion, successful nuclear spin readout is achieved if there is a significant difference in the absolute value of the transmission, and, depending on this value for the respective measurement, the state of the nuclear spin at the time of each measurement can be assigned.
IV. NUCLEAR SPIN PHOTON COUPLING
It has been shown that the nuclear spin of a QD-donor system can be controlled with a classical electric field [49]. However, this does not allow coherent information transfer between the nuclear spin and photons. In order to assess the potential of the system for coherent coupling of the nuclear spin to cavity photons, we derive a Hamiltonian describing the effective dynamics of the nuclear spin interacting with the resonator mode while the remaining parts of the system are near the ground state. More precisely, we investigate the dynamics of the subspace determined by the projection operator P_0 that defines the subspace spanned by the states |−, ↓, ⇓, n⟩, |−, ↓, ⇑, n⟩, with n = 0, 1, 2, .... To do this, we apply a Schrieffer-Wolff transformation to decouple the subspaces defined by the projection operators P_0 and Q_0 = 1 − P_0 [45]. Following the procedure sketched in Sec. III, we determine the generators S_1 and S_2 to obtain the effective Hamiltonian for the subspace defined by P_0 up to third order in the perturbation V [45]. In particular, the diagonal part of the effective Hamiltonian is characterized by the expressions for E_ν, ω̃_c and δE_ν presented in Appendix F. We find that |δE_ν| ≪ |E_ν| ≪ ω̃_c if the electron is not entirely confined to the left QD. Thus, the microwave resonator and the donor nuclear spin flip transition cannot be tuned to resonance. The coherent excitation exchange between these two subsystems is described by an excitation-conserving term within H_n given in (F1); a hedged reconstruction of this term is sketched below. We note that E_{|⇓,n⟩} > E_{|⇑,n⟩} because E_ν ≈ −(A/4)(1 + sin θ) + O(V²), such that Eq. (28) is an excitation-conserving interaction term. The explicit form of the coupling constant g_ν in terms of the system parameters is given in Appendix F, and we find that nuclear-spin-to-photon coupling strengths of g_ν ≈ 0.5 MHz can be achieved. Given realistic values for the nuclear-spin and cavity loss rates, γ, κ ≈ 1 MHz, we note that the strong coupling regime for nuclear spin cavity QED (g_ν ≫ κ, γ) should be within reach. However, the coherent excitation exchange between these two subsystems is suppressed by the large detuning from resonance.
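The excitation-conserving exchange term referred to above presumably has the standard Jaynes-Cummings structure for the nuclear spin; the operator form below is our reconstruction (only excitation conservation and the coupling g_ν are stated in the text), with ν± the raising and lowering operators in the nuclear energy eigenbasis:

```latex
% Hedged reconstruction of the excitation-conserving term in H_n, Eq. (28);
% the Jaynes-Cummings structure is inferred, g_nu is defined in Appendix F.
H_{\mathrm{exch}} \approx g_\nu \left( a^\dagger \nu^- + a\, \nu^+ \right).
```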
The raising or lowering of the nuclear spin state along with the creation or annihilation of a cavity photon results from the combined effect of the hyperfine interaction, the magnetic field gradient, and the electric dipole interaction. The fundamental problem preventing resonant coupling is that, while the energy splitting between the orbital states (+ and −) and the energy splitting between the electron spin states (↑ and ↓) can, in principle, simultaneously be tuned close to resonance with the microwave resonator, the energy splitting between the nuclear spin states (⇑ and ⇓) is at the same time far off-resonant, because the nuclear gyromagnetic ratio is ≈ 1000 times smaller than that of the electron spin.
V. CONCLUSION
In conclusion, we have investigated a system composed of a donor nuclear spin coupled to the spin of a single electron in a QD-donor architecture via the hyperfine interaction. The electron is subject to a homogeneous magnetic field and a magnetic field gradient perpendicular to the homogeneous component, while it is also dipole coupled to a microwave resonator. We demonstrate that the effective excitation-conserving nuclear spin-photon interaction resulting from the combined effect of the hyperfine interaction, the electric dipole interaction, and the magnetic field gradient cannot directly be tuned to resonance.
Nevertheless, we show that the signature of the strong electron spin-photon coupling [21] in the cavity transmission is altered due to the hyperfine interaction. We find well separated signatures for the electron spin-photon coupling with the nuclear spin in the states ⇑ and ⇓, whereby the splitting of the two signatures is determined by the hyperfine interaction strength A. For a ³¹P donor in the strained Si quantum well with A ≈ 25 MHz, we expect that recent experimental setups are able to resolve the split signatures individually. Moreover, we have identified good readout points at which one finds a high contrast between the measurement signals for the two opposing nuclear spin polarizations. Therefore, the cavity transmission allows for a readout of the nuclear spin state and for a measurement of the hyperfine interaction strength.
ACKNOWLEDGMENTS
We thank Mónica Benito and N. Tobias Jacobson for helpful discussions. This work has been supported by ARO grant number W911NF-15-1-0149.
Appendix A: QD-donor system
In this Appendix, we present a simulation of the QD-donor architecture that allows us to obtain a rough estimate of the achievable tunnel coupling strength and the electric dipole moment.
In Si/SiGe heterostructures, electrons in the Si quantum well are strongly confined in the growth direction, defining the vertical position of the QD in the Si quantum well [10]. Additional lateral confinement, required to form a QD, can be realized with a layer of gate electrodes a few tens of nanometers above the quantum well. In order to obtain a lateral QD-donor architecture, a ³¹P donor has to be implanted in the quantum well. In the following we assume a separation of 56 nm between the gate layer and the plane containing the donor in the quantum well, in line with recent Si/SiGe QD systems [14,20,50].
As a first step, we determine the electrostatic potential Φ in the donor plane generated by the gate architecture illustrated in Fig. 7 and by the ionized donor, by numerically solving the Poisson equation with ε_r = 11.7 the relative permittivity of Si. The applied gate voltages are taken into account by setting the boundary conditions accordingly, while the ionized donor is modelled by a homogeneous spherical charge density of radius r_c centered at r_d = (−56, 0, 30) nm, i.e., the donor is implanted 56 nm below the gate layer and displaced by 30 nm in the z-direction relative to the center of the rectangular gate in Fig. 7. We choose r_c = 0.95 nm, ensuring that the correct ³¹P donor binding energy (45.5 meV) is obtained if no gate voltage is applied. The resulting electron confinement potential −eΦ in the donor plane, for the gate voltages indicated in Fig. 7, is also shown in Fig. 7. We note that our calculations do not consider the layers of different materials and the material interfaces between these layers present in real Si/SiGe devices. However, due to the similar dielectric constants of Si and Si₀.₇Ge₀.₃, the resulting effects on the electrostatic potential in the donor plane are small and can be compensated by slightly modifying the gate architecture and the applied gate voltages. In the following, the level detuning ε between the lowest-lying QD and donor states is adjusted by an external electric field in the z-direction. Alternatively, the level detuning could also be controlled with more complex gate architectures.
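As a minimal illustration of this electrostatics step, the sketch below relaxes the 2D Poisson equation by finite differences; the grid, the single boundary "gate", and the point-like donor charge are placeholder assumptions, not the gate layout of Fig. 7:

```python
import numpy as np

# Minimal 2D finite-difference Poisson sketch: laplacian(Phi) = -rho/(eps0*epsr).
# Grid size, gate voltage, and donor position are illustrative placeholders.
eps0, epsr = 8.854e-12, 11.7
N, h = 200, 1e-9                     # 200 x 200 grid with 1 nm spacing
phi = np.zeros((N, N))               # Dirichlet boundaries held fixed below
rho = np.zeros((N, N))
rho[100, 130] = 1.602e-19 / h**3     # crude point-like ionized donor charge
phi[0, 80:120] = 0.2                 # one "gate" electrode at 0.2 V on the edge

for _ in range(5000):                # Jacobi relaxation of the interior points
    phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                              + phi[1:-1, 2:] + phi[1:-1, :-2]
                              + h**2 * rho[1:-1, 1:-1] / (eps0 * epsr))

print("confinement potential -e*Phi at the donor site:",
      -1.602e-19 * phi[100, 130], "J")
```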
Given the strong confinement in the growth direction, it suffices to solve the two-dimensional Schrödinger equation for an estimation of the QD-donor tunnel coupling strength t_c. Explicitly, the Schrödinger equation is given by (A3).
Figure 8 (caption): Energies of the ground state and the first excited state of the QD-donor system as a function of the external electric field E_ext determining the level detuning ε. The dots are obtained from numerical solutions of (A3), while the solid lines describe a simplified two-level system.
The energies of the ground and first excited states of the QD-donor system as a function of the external electric field, obtained by numerically solving (A3), are shown as the points in Fig. 8. The spectrum shows an avoided crossing at E⁰_ext ≈ −2.0235 MV/m with minimal energy difference ∆E_min ≈ 18 µeV. We find good agreement between the simulation (points) and a simplified two-level model (solid lines) with tunnel coupling 2t_c = ∆E_min = 18 µeV and level detuning ε = −e(E⁰_ext − E_ext)d, where d = 37 nm is the QD-donor distance discussed later. This observation justifies the orbital two-level model in (1) and shows that a sizeable tunnel coupling strength is reachable in lateral QD-donor devices despite the sharp confinement potential of the donor.
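The simplified two-level model behind the solid lines in Fig. 8 is presumably the standard charge-qubit Hamiltonian; the following sketch of its eigenenergies is our reconstruction from the quoted fit parameters:

```latex
% Two-level (charge-qubit) model presumably behind the solid lines in Fig. 8:
H_{2\mathrm{lvl}} =
\begin{pmatrix} \epsilon/2 & t_c \\ t_c & -\epsilon/2 \end{pmatrix},
\qquad
E_\pm = \pm\tfrac{1}{2}\sqrt{\epsilon^2 + 4t_c^2},
\qquad
\Delta E_{\min} = E_+ - E_- \big|_{\epsilon=0} = 2t_c = 18~\mu\mathrm{eV}.
```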
For the suggested nuclear spin readout method a notable tunnel coupling strength alone is not sufficient, since the charge-photon coupling g_c also has to be sufficiently strong. The charge-photon coupling strength depends linearly on the electric dipole moment ed, where d is the QD-donor distance [51]. For the setup discussed in this section, one can extract d ≈ 37 nm from Fig. 9, which shows the ground state wave function at E_ext = E⁰_ext, where the electron equally populates QD and donor. At other values of E_ext in the range given in Fig. 8, the wave functions of the ground state and the first excited state have to be compared, but similar results for d are obtained. In DQD devices, typical values for the interdot distance are 100-120 nm. Therefore, the charge-photon coupling strength in the QD-donor device is expected to be ≈ 1/3 of the coupling strength reported for DQD devices.

Appendix B: Effective Hamiltonian

The Schrieffer-Wolff transformation (6) yields the Hamiltonian H_e = (α₁ + α₂ν_z)σ_z + α₃ν_z + (α₄ + α₅σ_zν_z)a†a + (α₆ + α₇ν_z)(σ₊ + σ₋) + ..., with the coefficients α₁ to α₁₈ discussed below. The transformation requires the coupling between states of the subspaces defined by P₀ and Q₀ to be much smaller than the energy separation of those states [45]. In the present case this requirement is ensured provided that the corresponding relations between the system parameters hold, and we can then use the resulting approximations for the parameters of the Hamiltonian (B1). We note that the term proportional to α₆, causing a mixing between the electron spin states, is not negligible. Thus, we need to account for this term when calculating transition energies between states that we expect to resemble the actual eigenstates of the Hamiltonian. To this end, we transform H_e into the eigenbasis of its electron-spin part. The transformed basis states are characterized by the electron spin mixing angle φ. Since, here, the magnetic field gradient b_x is small compared to the homogeneous magnetic field B_z, one finds |α₆| ≪ |α₁|, such that the electron spin mixing angle is small and therefore the transformed states are predominantly the electron spin states |↓(↑)⟩, up to small contributions of the opposite electron spin state. The electron spin Pauli operators transform accordingly, with the Pauli operators σ_i operating on the transformed basis states. We divide the transformed Hamiltonian H_e into a diagonal part H_e,0 and a part containing the interactions between the basis states, H_e,int. For H_e,0 we find the expression given in (10). Since we consider a parameter regime with B_z, ω_c ≫ A, b_x, g_c, we find Eσ, ω_c ≫ E_ν, δEσ.
If we additionally assume the effective cavity frequency ω̃_c to be close to resonance with the electron spin transition frequency Eσ, we can apply the RWA to H_e,int, keeping terms rotating with frequencies ≈ Eσ, with gσν = 2(α₆/|α₆|)α₁₃, i.e., ±2α₁₃ depending on the sign of α₆ (B39), and gσ > δgσ.
Appendix C: Input-Output Theory

To investigate the transmission through the cavity interacting with the QD-donor system we use input-output theory. We divide the Hamiltonian into three parts, with the system Hamiltonian H_sys = H_0 + H_e−n comprising the single electron in the QD-donor confinement potential (1) and its hyperfine interaction with the nuclear spin (3). The eigenstates and the corresponding eigenenergies of H_sys are denoted |n⟩ and E_n with E_n ≤ E_{n+1}, respectively. In the eigenbasis of H_sys, Eq. (C1) takes the form (C2), where the eigenstates of H_sys define the operators σ_nm = |n⟩⟨m|. For ȧ(t) and σ̇_nm(t) one obtains quantum Langevin equations, with a_in,1(t) and a_in,2(t) the incoming parts of the external fields at the cavity ports 1 and 2. Moreover, we have introduced the decoherence superoperator with matrix elements γ_{mn,m′n′}, which is discussed in detail in Appendix D, and the quantum noise F of the QD-donor system. In the following discussion we will neglect the quantum noise F. Using Eq. (C2), we find (C5). We now decompose σ_mn(t) into a contribution independent of the cavity coupling g_c and a part that is linear in g_c, while higher-order contributions in g_c are neglected; here p_m are the average populations of the energy levels obtained for g_c = 0. Following the above discussion one obtains the expectation values of the operators to first order in g_c. A Fourier transformation to frequency space then yields a set of linear equations. If the cavity has a large quality factor Q = ω_c/κ ≫ 1 and is probed close to resonance such that |ω − ω_c| ≪ ω_c, an RWA for the cavity mode can be applied, showing that the impact of a*₋ω is negligible [53]. In this operating regime we can solve the set of linear equations (C10) to obtain the susceptibilities χ_mn(ω) of (C11). Calculating the expectation value of (C5), considering (C7) as well as (C8), then employing a Fourier transform to frequency space and using (C11), yields the intracavity field. According to input-output theory, the incoming and outgoing fields are related by [54] a_out,ν − a_in,ν = √κ_ν a.
Charge relaxation due to the phonon environment
The electron-phonon interaction for an electron in a QD-donor system is described by a Hamiltonian with momentum-q- and mode-ν-dependent coupling constants λ_qν and the corresponding phonon creation and annihilation operators. Let us recall that τ_z transforms to Σ_{m,n} d_mn σ_mn under the transformation to the eigenbasis of H_sys. Hence, using Fermi's golden rule we find the transition rate from eigenstate |n⟩ to |m⟩ at zero temperature, where |0⟩ denotes the phonon vacuum and |q, ν⟩ is a single-phonon state with energy ε_qν. Here, J(ν) = Σ_{q,ν′} |λ_qν′|² δ(ν − ε_qν′) is the phonon spectral density. We can also calculate the transition rate for the orbital transition |+⟩ → |−⟩ for ε = 0. This relation allows one to specify the scale factor J₀, introduced below in (D4) to describe the phonon spectral density, because values for this rate were reported in a recent experiment [20] considering a similar setup. Due to the inversion symmetry of the unit cell of the crystal structure of silicon, electron-phonon coupling is caused by bulk deformation potential coupling [55], and the phonon spectral density at low energy can be modeled by (D4), where J₀ is a scale factor, ω₀ a cutoff frequency, d the spacing between the QD and the donor, and c_b the speed of sound.
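Collecting the quantities just defined, the zero-temperature golden-rule rate presumably takes the compact form below; this is our reconstruction from the stated ingredients (dipole matrix elements d_mn and spectral density J), consistent with the rates j_mn of Appendix D once the matrix elements are absorbed into the jump operators:

```latex
% Reconstructed zero-temperature Fermi-golden-rule relaxation rate:
\Gamma_{n \to m} = 2\pi\, |d_{mn}|^{2}\, J(E_n - E_m), \qquad E_m < E_n .
```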
To capture the decoherence effects due to the phonon environment we use a Markovian quantum master equation in Lindblad form with jump operators [58]. We assume the phonon bath to be at zero temperature, such that only transitions to lower energy states are possible, i.e., with rates j_mn = 2πJ(E_n − E_m).
One can calculate the mean value of the decoherence dynamics of the operators σ_mn to identify the elements γ_{mn,m′n′} of the decoherence superoperator, where D[L] represents the dissipator superoperator D[L]ρ(t) = Lρ(t)L† − (1/2)(ρ(t)L†L + L†Lρ(t)) [59].
Charge noise
In semiconductor QD architectures charge noise is omnipresent. Charge noise leads to fluctuations of the electrostatic potentials in the proximity of the QD and the donor. Hence, charge noise mainly affects the QD-donor system in the form of fluctuations of the detuning parameter, ε → ε + δε. Here, quasistatic and Gaussian-distributed fluctuations of ε with standard deviation σ_ε are considered. In this context, quasistatic means that δε does not change during a single run of the experiment but differs between runs, wherefore we include the noise in our calculation of a quantity by convolving the respective quantity with the Gaussian distribution.

Appendix E: Characteristics of the readout contrast

In order to derive an expression estimating the readout contrast, we use the derived effective Hamiltonian ((10) and (15)) for input-output theory. Following the steps outlined in Appendix C, one finds the corresponding equations of motion, where we have neglected the contribution from the first term in (15) because sin²(φ/2) ≪ 1. Moreover, straightforward calculations result in the analogous relations, where, in comparison to the discussion in Appendix C, the ideal decoherence-free scenario is considered for simplicity. In analogy to Appendix C, the susceptibilities for the three different processes can be determined. With the susceptibilities, one obtains the intracavity field and therefore the cavity transmission (E7). Using the explicit expressions for the susceptibilities, the terms in the denominator can be expressed in compact form; the term ∝ χ⇑⇓ in the denominator leads to a sharp feature in the transmission that does not significantly influence the readout contrast away from this feature, and is therefore neglected in the following. Equation (E4) shows that χ⇓ = 0 (χ⇑ = 0) if the system is initially prepared in the state characterized by p_{|↓,⇑⟩} = 1 (p_{|↓,⇓⟩} = 1). Probing the cavity at its resonance frequency (ω = ω_c) and approximating ω̃_c ≈ ω allows one to omit the first term in the denominator of (E7). Taking into account all these considerations and assuming κ₁ = κ₂ = κ/2, one finds an expression with g⇑(⇓) = gσ cos φ + (−) δgσ sin φ. In the parameter domains suggested for nuclear spin readout this again describes a line shape with maximum value 1, symmetric around the respective resonance. In the suggested nuclear spin readout method, the discrimination between ⇑ and ⇓ is based on the transmission difference for the two nuclear spin states. The signal shapes for ⇑ and ⇓ are almost identical in parameter domains with b_x ≪ B_z, such that g⇑ ≈ g⇓ ≈ gσ, while the maxima of 1 − |A_c|⇑ and 1 − |A_c|⇓ are separated by ∆ = 2δEσ. Therefore, given the line shape (E9), the absolute value of the readout contrast |A_c|⇓ − |A_c|⇑ is maximal for values of B_z near the resonances, where B_z ≈ ω_c can be chosen to determine gσ and δEσ. However, this result does not account for the small but finite detuning ω̃_c − ω if ω = ω_c, nor for the noise processes discussed in Appendix D. The detuning can be taken into account by a shift of ω − (E_{|↑,⇑(⇓)⟩} − E_{|↓,⇑(⇓)⟩}) by (ω̃_c − ω). For (ω̃_c − ω) < 0 and δB_z < 0, this implies that the side of the peak of 1 − |A_c| with δB_z > 0 decreases more slowly while the side with δB_z < 0 decreases faster as a function of B_z, compared to the non-detuned scenario.
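The quasistatic-noise average described above amounts to a one-dimensional Gaussian convolution over the detuning; the sketch below illustrates the procedure with a placeholder Lorentzian transmission dip and an assumed σ_ε, not the actual transmission function of this paper:

```python
import numpy as np

def Ac_ideal(eps):
    """Placeholder noiseless transmission vs. detuning (arbitrary Lorentzian)."""
    return 1.0 - 0.8 / (1.0 + (eps / 3.0) ** 2)

def Ac_noisy(eps, sigma_eps=2.0, n=4001):
    """Quasistatic charge noise: average Ac over Gaussian-distributed delta-eps."""
    d = np.linspace(-5 * sigma_eps, 5 * sigma_eps, n)
    w = np.exp(-d**2 / (2 * sigma_eps**2))
    w /= w.sum()                       # normalized Gaussian weights
    return np.sum(w * Ac_ideal(eps + d))

print(Ac_ideal(0.0), Ac_noisy(0.0))    # the noise average washes out the dip
```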
The resonance for ⇑ is achieved for lower values of B_z than the one for ⇓, and, therefore, the readout contrast at the resonance for ⇑ is determined by the fast-decreasing flank of 1 − |A_c|⇓ at δB_z = −∆, while the readout contrast at the resonance condition for ⇓ is determined by the slowly decreasing flank of 1 − |A_c|⇑ at δB_z = ∆, wherefore the absolute value of the readout contrast is larger at the resonance for ⇑. This is exactly the behaviour of the line cuts shown in Fig. 5. We subsequently calculate the absolute value of the readout contrast at the resonance for ⇑ (⇓). This result is in good agreement with the cut for ε = 0 (purple line) in Fig. 5(b). However, it significantly overestimates the extremal values of the cuts for ε > 0, because there the B_z values at which the extremal readout contrast occurs are sensitive to small changes in the detuning (see also Figs. 4 and 5(a)). Thus, the quasistatic charge noise considered in the figures (for details see Appendix D) reduces the absolute value of the extremal readout contrast. The readout contrast observed in Fig. 5 is certainly sufficiently large for nuclear spin readout in recent experimental devices. Nevertheless, we can comment on the minimal hyperfine interaction strength leading to a sufficient contrast for readout. Using (E7), one can numerically calculate the absolute value of the readout contrast and account for quasistatic charge noise in the way discussed in Appendix D. A map of the dependence of the readout contrast on the hyperfine interaction strength is presented in Fig. 10. The plots clearly show that there are readout points with (|A_c|⇓ − |A_c|⇑) > 0.01 in domains with A < 1 MHz. This is sufficient for readout because recent cQED experiments are able to measure |A_c|/|A₀| with a precision of fractions of a percent [60]. Therefore, we expect that the suggested nuclear spin readout technique is also applicable in a DQD system with an isoelectric nuclear spin, e.g., ²⁹Si, at the position of one of the QDs, because A in the range of several hundred kHz is reported for such devices [47]. For the QD-donor system studied in this paper, we have A = 25 MHz (cf. Fig. 4). | 2020-12-03T02:47:47.183Z | 2020-12-02T00:00:00.000 | {
"year": 2020,
"sha1": "296097b7146c54872a445439156fcc00967343e8",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PRXQuantum.2.020347",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "7d5b42c6a5add8666706e1a3ac733a85a67ac65a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
55341507 | pes2o/s2orc | v3-fos-license | CONTRACEPTIVE VAGINAL SUPPOSITORY CONTAINING NONOXYNOL-9 AND ZINC ACETATE SALT IN A CLINICAL TRIAL
Spermicidal agents are defined as drugs that have the ability to immobilize or kill sperm upon contact. An ideal spermicide should immediately and irreversibly immobilize the sperm, be nonirritating to the vaginal and penile mucosa, have no adverse effects on the developing fetus, be free from long-term topical and systemic toxicity, and not be systemically absorbed. As contraceptive methods, spermicides have the advantages that they do not depend on highly skilled personnel for their prescription and use, they do not interact systemically or interfere with the absorption of other drugs, and they are used on demand only, not at an exact time that may be forgotten; moreover, they are not hormones that may disturb the woman's body by affecting ovulation, lactation, or other functions. The disadvantages of spermicides are a higher failure rate than hormonal methods, and vaginal irritation and vaginal secretions, which appear mainly with frequent use. N-9 spermicides have a failure rate of 18% per year with perfect use and 29% under typical use, and an irritation rate of 12% was recorded among participants in a clinical trial in 2000. On comparing advantages to disadvantages, we find that spermicides can be a better contraceptive method than hormonal methods once their problems are addressed. N-9, the most popular spermicide, had low acceptability (16.9%) and is offered to women looking for a short-term, user-controlled contraceptive.
INTRODUCTION
Spermicidal agents are defined as drugs that have the ability to immobilize or kill sperm upon contact. An ideal spermicide should immediately and irreversibly immobilize the sperm, be nonirritating to the vaginal and penile mucosa, have no adverse effects on the developing fetus, be free from long-term topical and systemic toxicity, and not be systemically absorbed 1,2 . As contraceptive methods, spermicides have the advantages that they do not depend on highly skilled personnel for their prescription and use, they do not interact systemically or interfere with the absorption of other drugs, and they are used on demand only, not at an exact time that may be forgotten; moreover, they are not hormones that may disturb the woman's body by affecting ovulation, lactation, or other functions 3 . The disadvantages of spermicides are a higher failure rate than hormonal methods, and vaginal irritation and vaginal secretions, which appear mainly with frequent use 4 . N-9 spermicides have a failure rate of 18% per year with perfect use and 29% under typical use 5 , and an irritation rate of 12% was recorded among participants in a clinical trial in 2000 6 . On comparing advantages to disadvantages, we find that spermicides can be a better contraceptive method than hormonal methods once their problems are addressed. N-9, the most popular spermicide, had low acceptability (16.9%) and is offered to women looking for a short-term, user-controlled contraceptive 7 .
Zinc acetate is another approved spermicide which has spermicidal effectiveness at a 1% concentration; this appears to be due to the acetate in zinc acetate, which can decrease oxygen utilization by sperm 8 . Zinc acetate has an advantage over N-9 in that it does not cause irritation; rather, it can reduce irritation of the mucosal tissue, if present, owing to the zinc ion, which is effective in preventing or reducing irritation at a concentration of 0.5%, a level that avoids zinc toxicity 9 . Effective concentrations of nonoxynol-9, benzalkonium chloride, zinc acetate, cupric chloride, cysteamine, tannic acid, and propranolol ranged at least from 0.15 to 1% 10 . N-9 spermicidal activity can persist for up to 6 hours 11 , while zinc acetate spermicidal activity persists for only one hour 12 . The addition of Zn(OAc)₂ to N-9 was an attempt to produce a new product with the advantages of both spermicides together.
In this study we suggest that the prepared spermicidal vaginal suppository containing N-9 and Zn(OAc)₂ in a 10:1 concentration ratio, respectively, possesses advantages over presently marketed formulations containing N-9 alone, for the reasons mentioned above. The new preparation containing N-9 plus Zn(OAc)₂ salt was tested in vivo and in vitro in comparison with market suppositories containing N-9 alone.
Suppository formulation
Suppository formulations were prepared from water-soluble bases (PEG 400 and PEG 6000) by the melting method 13 . The molten base was poured into a torpedo-shaped mold, then refrigerated and packaged. Each 2 g vaginal suppository contains 100 mg N-9 and 10 mg Zn(OAc)₂.
Drug release measurement of the prepared suppository
Dissolution was conducted in the USP dissolution apparatus 2 operating at 50 rpm using 500 ml of distilled water at 37 ºC. Samples of 5 ml were taken at different time intervals and replaced with 5 ml of fresh dissolution medium maintained at the same temperature. Samples were taken with a filter-tipped pipette and analyzed spectrophotometrically at 276 nm for N-9 13 , while zinc acetate was analyzed at 550 nm 14 . Results were plotted against time in the representative curve. Release of N-9 from the combination in the prepared suppository was compared with the release of N-9 alone as reported by Parrott 12 , determined at the same time intervals, so that the changes caused by the addition of zinc acetate could be studied.
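Because each 5 ml aliquot removes drug that is then diluted by fresh medium, cumulative release is usually computed with a replacement correction; the sketch below illustrates this standard arithmetic (the numerical readings are hypothetical, and a linear absorbance-to-concentration calibration is assumed):

```python
# Cumulative-release correction for sampled-and-replaced dissolution media.
# C_meas: measured concentrations (mg/ml) at each time point; v = 5 ml aliquot,
# V = 500 ml vessel, dose = 100 mg N-9 (values from the method above).
def cumulative_release(C_meas, v=5.0, V=500.0, dose_mg=100.0):
    released = []
    for n, C in enumerate(C_meas):
        # add back the drug removed in all previous aliquots
        C_corr = C + (v / V) * sum(C_meas[:n])
        released.append(100.0 * C_corr * V / dose_mg)   # percent of dose
    return released

# Example with hypothetical N-9 readings over six sampling times:
print(cumulative_release([0.05, 0.10, 0.14, 0.17, 0.19, 0.19]))
```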
In-vitro study
Human males participating in this study were fertile semen donors selected after appropriate screening. Three specimens from each of three males were used in this study. Specimens were collected by masturbation following three days' abstinence. Following collection, specimens were incubated at 37 ºC for 15-30 minutes to allow liquefaction. Semen volume, sperm aggregation, and motility percentage were assessed using a light microscope. Sperm motility and aggregation were recorded as percentages; the best spermicide produces 100% sperm immobility and 100% aggregation. Motility was calculated according to the formula: Motility = motile sperm / (motile + non-motile sperm) × 100. Sperm motility and sperm aggregation were measured before and after addition of the spermicide to the semen samples, both immediately after liquefaction and one hour after liquefaction, to test different spermicide concentrations. Three dilutions (A, B, and C, as seen in Table 1) were prepared from each type of suppository (with and without zinc acetate) and tested on semen samples to compare the market suppository (No Gravida®) and the prepared suppository at different concentrations.
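For completeness, the motility formula above translates directly into a calculation such as the following; the sperm counts are hypothetical and only illustrate the arithmetic:

```python
def motility_percent(motile, non_motile):
    """Motility = motile / (motile + non-motile) x 100, as defined above."""
    return 100.0 * motile / (motile + non_motile)

# Hypothetical counts before and after spermicide exposure:
print(motility_percent(60, 40))   # baseline sample: 60.0 %
print(motility_percent(0, 100))   # ideal spermicide: 0.0 % motility
```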
In-vivo study
The study recruited 78 participants referred for use of a spermicide suppository formulated of N-9 in a PEG base (market or compounded) for contraception purposes at the family planning units at General Abo-Korkase Hospital and General Minia Hospital during the period from July 2010 to August 2011. The study was designed to achieve two goals with the new spermicide: decreasing the failure rate and decreasing the irritation rate. This study is a randomized prospective clinical trial. Randomization was computerized and blind.
We chose participants who met specific criteria and accepted the experiment and study 17,18 . Participants should know as much as possible about the clinical trial.
Participants in this study were informed and randomized to receive either the N-9 suppository preparation or the N-9 plus zinc acetate preparation, as assigned by computer. All participants were instructed on the use of the test products 18 . The method of application is the most important point in the clinical work-up, as it can change the results entirely; the method was explained to each volunteer. Verbal and written informed consent was obtained from all volunteers after giving information about the aim of the study and the procedures involved 18 .
Participants were followed through at least 12 menstrual cycles (approximately 13 months), with 8 study visits and two study phone calls.
All volunteers had a comprehensive evaluation, with a full history of other contraceptive methods used before and after using the spermicides (market or prepared). Special notes were made of age, cycle length, literacy, parity, previous method of contraception, and lactation. Notes were taken about pregnancy, if it occurred, vaginal irritation, coital problems, cycle irregularities, secretions, and other complaints after use 19 .
RESULTS AND DISCUSSION
The prepared suppository with zinc acetate appeared to have physical quality properties similar to the market suppository, such as weight variation, melting point, hardness, and others.
The dissolution study of the prepared suppository with zinc acetate showed a high dissolution rate and a high drug release profile, with release starting at a high rate and reaching >95% within the first 18 min.
The concentration of N-9 was measured spectrophotometrically at 276 nm and increased until it reached a plateau after 20 min; this was compared with the results obtained for N-9 by Parrott 13 . Zinc acetate was measured spectrophotometrically at 550 nm and increased over the first 20 min, reaching ≥95%. Table 2 and Figure 1 illustrate the high dissolution rates of both N-9 and zinc acetate in the prepared suppository. Release started within the first 3 min in an increasing manner, and the maximum amount released was reached at 18 min. Table 3 and Figure 2 show no significant difference between the release of N-9 from the market suppository, as reported by Parrott 13 , and that from the prepared suppository, so Zn(OAc)₂ does not change the N-9 release profile. The high dissolution rate is attributable to the formula, which contains N-9, a non-ionic surfactant 20 , in a non-ionic polymer PEG base. Zn(OAc)₂ does not interact with it or impair N-9 activity, as the low concentration of the Zn(OAc)₂ salt (compared to the N-9 concentration) prevents interference with N-9; in addition, both active ingredients, N-9 and Zn(OAc)₂, are highly stable components that require high temperatures to melt or interact.
The in-vitro test was made on human semen samples (outside the human body) to assess the changes in sperm motility and sperm aggregation upon addition of different concentrations of the two groups of spermicides (with and without Zn(OAc)₂).
The prepared suppository (N-9 plus Zn(OAc)₂) showed a significant increase in efficacy in reducing sperm motility and increasing sperm aggregation compared with the market suppository at all dilutions, especially before the first hour had passed.
As illustrated in Table 4, the in-vitro test proves the role of zinc acetate: its addition to N-9 can increase the spermicidal efficacy of N-9 rather than diminish it. These results agree with reference 16 and disagree with reference 9 .
The different mechanisms of the two spermicides (N-9 and Zn(OAc)₂) give the new combination formula high strength in reducing sperm motility. Zinc acetate contains the acetate ion, which decreases oxygen utilization by sperm, causing the decrease in motility and the increase in aggregation 8 . Nonoxynol-9 vaginal spermicides interact with the lipoproteins of the cell membrane to permanently disrupt the cell membranes of spermatozoa, resulting in severe damage to the acrosome (head), neck, midpiece, and tail of the sperm and a rapid, irreversible loss of function, motility, and viability within the vagina 11 . These different mechanisms give the synergistic effect.
In the clinical study, regarding the results related to the reduction of irritation, we found that after application of the drug in group I (market supp.) and group II (prepared supp.), as seen in Table 5, the results were positive: vaginal irritation decreased with a significant difference (P = 0.02) when zinc acetate salt was added to N-9. Irritation in this context may be evidenced by redness or other changes in coloration, inflammation or swelling, hypersensitivity, or the occurrence of burning, itching, or other painful stimuli. Thus, the zinc ion is effective in preventing or reducing irritation at a concentration of 0.5%, which avoids zinc toxicity 9 .
The most widely accepted mechanism for the use of zinc salts as anti-irritant products is that zinc ions may prevent irritation by binding to negatively-charged regions exposed on the surface of proteins, altering the charge configuration of the protein and preventing subsequent protein-protein interactions between irritants and exposed mucous membranes, thereby preventing binding to the underlying tissue and so preventing irritation 16 .
Failure, meaning the occurrence of pregnancy, showed the results presented in Table 6. The results show an increase in the spermicidal efficacy of the N-9/zinc acetate combination over N-9 alone, so the zinc acetate spermicidal effect increases the efficacy of N-9 rather than reducing it. The failure rate of N-9 also decreased significantly (P = 0.03). Zinc acetate (and zinc gluconate, on adjusting the pH to 7.0) was proven to be a strong spermicide among zinc salts and useful as a vaginal contraceptive 8 . We also note that combining two birth control methods can increase their effectiveness to 95% or more for less effective methods 21 .
CONCLUSION
Nonoxynol-9 is the active ingredient in all of the over-the-counter (OTC) spermicidal products available on the market and has been used for pregnancy prevention since the 1950s, but it started to be withdrawn from markets after complaints about its high failure rate and low safety due to high irritation, which leads to wounds and lesions that increase the rate of sexually transmitted diseases. After screening in family planning clinics we found that local methods of contraception have high acceptability but, at the same time, a high failure rate; therefore, we tried to introduce a new formula that is safe and effective. The new formula contains zinc acetate at a 0.5% concentration and N-9 at a 5% concentration, which could produce a new spermicidal product of the best quality for the market, as the two components are approved spermicides at the mentioned concentrations. Different zinc salts, such as zinc lactate, zinc gluconate, zinc acetate, and other water-soluble organic zinc salts, can reduce, to different degrees, the irritation caused by surfactants (nonoxynol or octoxynol) and other microbicides in topical genital formulations, but zinc acetate is the only one that is spermicidal without any changes in its structure. Zinc-containing additives can additionally stabilize and protect cellular membranes, thereby helping protect genital surfaces against damage caused by repeated exposure to agents that attack the lipid membranes surrounding mammalian cells. Suitable zinc salts which have been tested and shown to be non-irritating during sexual intercourse include zinc acetate, zinc propionate, and zinc gluconate. Other zinc salts have also been identified which are soluble in water and have low pK values, indicating a high rate of zinc ion release 22 .
Our results show that the addition of Zn(OAc)₂ to N-9 produced an effective spermicidal product, with the additional demulcent effect of the zinc ion, which appears clearly at a 0.5% concentration, in agreement with 16,12 .
We therefore recommend the addition of zinc acetate salt to the N-9 spermicidal formula to reach the best properties of a spermicidal contraceptive.
ACKNOWLEDGMENT
This work was developed in dialogue with the members of the statistics, pathology, and medical staff of the family planning unit at General Abo-Korkase Hospital for Clinical and Behavioral Studies; we gratefully acknowledge their contribution. We would like to thank Dr. Magdy Hassan for providing the data and assisting in their interpretation. We also thank several reviewers as well as the editor for their helpful suggestions. | 2018-12-11T08:56:44.879Z | 2013-01-14T00:00:00.000 | {
"year": 2013,
"sha1": "a36572c1d14f2e1cc4488ff78ea52454700359d1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.22270/jddt.v3i1.354",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a36572c1d14f2e1cc4488ff78ea52454700359d1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244591312 | pes2o/s2orc | v3-fos-license | Hungarian Government on the Whitening of the Economy Measures to Stabilize the Budget
The Hungarian government developed a package of tools for improving legal control and compliance for Hungarian companies, which entered into force between 2012 and 2021. Part of this complex package of measures sought to broaden the tax base without raising taxes, while other parts sought to reduce the size of the black economy by reducing the amount of illegally obtainable benefits. The level of compliance may increase, as inspections are continuous, the risk of detection is high, and compliance with legal requirements does not require a large intellectual or financial expense from the taxpayer. Parts of this package were the obligation to use online cash registers, the introduction of reverse-charge VAT, the targeted reduction of VAT rates, the introduction of an electronic goods tracking and verification system, and the introduction of online invoicing.
Introduction
The common feature of each step of change was that it built on increased cooperation between taxpayers and/or reduced the tax administration burden, or expanded the audit database by technical means, without imposing a significant additional burden on customers. Although voluntary compliance is an important element in the operation of any tax system, it is at least as important that taxpayers are constantly aware of the state's controlling presence. One of the least annoying ways to achieve this is to enforce mandatory electronic steps built into the processes of electronic communication, control, and taxation. Using them can reduce the time spent on inspections and increase their efficiency. As VAT provides the largest share of budget revenues, twenty to twenty-five percent, in most countries, it was appropriate to incorporate new means of control into the VAT management systems of companies.
Value Added Tax (VAT) is a consumption tax charged on most goods and services consumed in the EU.
The tax is levied on the "value added" to the product at each stage of production and distribution. This means that VAT is charged when VAT-registered businesses sell to other businesses (B2B) or to the final consumer (B2C).
Method
The study derives the VAT Total Tax Liability (VTTL) for each country from national accounts by mapping information on different VAT rates (standard, reduced and exemptions) onto data available on final and intermediate consumption, along with other information provided by Member States. This means that the quality of the VAT Gap estimates depends on the accuracy and completeness of national accounts data. When national accounts figures are reliable, the methodology is precise enough to estimate the VAT Gap. The main limitation of the methodology is the quality of the national accounts: better data-in, better estimations-out. Moreover, Member States use different methodology to estimate the informal economy and to reflect it in their national accounts, thus indirectly affecting the VAT gap figures (CASE, 2020).
Variations in the VAT gap reflect the differences in Member States in terms of tax compliance, fraud, avoidance, bankruptcies, insolvencies and tax administration. The estimates also reflect structural differences in national economies and other variables. Indirect circumstances such as the organization of national statistics could also have an impact on the size of the VAT Gap.
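In code, the headline arithmetic of the VAT gap reduces to comparing the theoretical liability with collected revenue; the sketch below is illustrative, and the figures are hypothetical rather than CASE estimates:

```python
def vat_gap(vttl, revenue):
    """VAT gap as a share of the VAT Total Tax Liability (VTTL)."""
    return (vttl - revenue) / vttl

# Illustrative (hypothetical) figures in billion EUR:
print(f"{vat_gap(vttl=1200.0, revenue=1068.0):.1%}")  # -> 11.0%
```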
European Union Requirements Related to VAT
The European Union (EU) lacks a uniform tax policy; however, in relation to VAT, since VAT serves as a basis for EU budget contributions by the Member States, uniform frameworks have been determined for use by the Member States. The European Union has issued directives in relation to VAT, which in Hungary corresponds to the General Excise Tax (ÁFA). Based on the provisions of the relevant EU directive, the general rate of the tax cannot be lower than 15%, while no upper limit is specified. According to the directive, the Member States may apply one or two reduced tax rates with regard to specific product sale or service provision categories. The reduced tax rates cannot be lower than 5%, determined as a percentage of the tax base. At the same time, the regulation allows the Member States which applied tax-free status or lower rates to products and services beyond the reduced categories on 1 January 1991 to continue to apply those. This is how it is possible that in certain Member States three or more reduced rates are in effect, including 0% rates (European Commission, 2020c).

Comprehensive Reform of the EU's VAT System

Tackling fraud: VAT will now be charged on cross-border trade between businesses. Currently, this type of trade is exempt from VAT, providing an easy loophole for unscrupulous companies to collect VAT and then vanish without remitting the money to the government.

One Stop Shop: It will be simpler for companies that sell cross-border to deal with their VAT obligations thanks to a "One Stop Shop". Traders will be able to make declarations and payments using a single online portal in their own language and according to the same rules and administrative templates as in their home country. Member States will then pay the VAT to each other directly, as is already the case for all sales of e-services.
Greater consistency: A move to the principle of "destination" whereby the final amount of VAT is always paid to the Member State of the final consumer and charged at the rate of that Member State. This has been a long-standing commitment of the European Commission, supported by Member States. It is already in place for sales of e-services.
Less red tape: Simplification of invoicing rules, allowing sellers to prepare invoices according to the rules of their own country even when trading across borders. Companies will no longer have to prepare a list of cross-border transactions for their tax authority (the so-called "recapitulative statement").
The changes affecting VAT will be introduced in two phases. The first phase started on 1 January 2020, and included measures that simplified and standardized the previous procedures. On 18 February 2020,
VAT Revenue Gap EU Member States
Owing to the increase in revenues, in relative terms the EU-wide VAT gap dropped to 11.0% in 2018, down from 11.5% in 2017.
Hungarian Government Measures to Whiten the Economy
In Hungary, from the results improving year after year, it can be clearly concluded that the Hungarian government's measures to whiten the economy are successful. In 2013 the tax evasion rate in Hungary was 21%; in 2019 it was only 6.6%, according to the relevant VAT Gap study (CASE, 2020).
The Hungarian government's measures to whiten the economy, among other things, were targeted at reducing the size of the black or grey economies, and at reducing tax evasion, and within that
Introduction of Online Cash Registers
The introduction of online cash registers was mainly justified by the fact that it results in higher revenue without increasing the VAT rate. Online cash registers should therefore be used for invoicing transactions where the sale is to an end user who pays cash on site. Specifically, in these cases the customer is typically not interested in asking for an invoice, while the salesperson may be strongly interested in hiding the income involved. Online cash registers make retail transactions traceable, in that the commercial transaction data are reported directly to the National Tax and Customs Administration. However, for this it is essential to operate a secure data communication system between the online cash registers and the tax authority that cannot be externally manipulated. With these basic conditions in place, online cash registers are able to record sales data onsite and to forward them in real time through a communication channel as data reporting to the National Tax and Customs Administration. An advantage of the Hungarian regulation is that the data reporting fully covers the sales of the taxpayer and changes made to the online cash register (power outages, daily opening and closing times); its complete data content can be downloaded from the online cash register's memory and is automatically integrated into the registry of the tax authority. The tax authority uses the data arriving through the online cash registers' data reporting for a risk analysis procedure, during which it performs risk analyses to identify risks related to the fulfillment of tax obligations and to exclude or confirm the existence of the identified risks (Adó Online, 2017).
Introduction of Reverse VAT Payments
According to the rules of reverse taxation, it is not the taxpayer who is obliged to determine and account for the VAT applicable to the transaction, but rather its client (i.e., the customer or the taxpayer using the service) (Hungarian Act CXXVII of 2007). The Member State in which the tax is due may mandate that the VAT shall be payable by any taxable person to whom the services are supplied, if the services are supplied by a taxable person not established in that Member State, regardless of where the supplier is established or resides. Such reverse taxation, the purpose of which is to deal with possible tax evasion and tax fraud, is regulated by national laws.
The reverse taxation mechanism may be applied by the Member States in specific cases based on Article 395 of the VAT Directive (or based on the provisions of Article 394), in harmony with the provisions of the separate permission, and in accordance with the conditions specified in Articles 199, 199a and 199b of the Directive (Annex to the Report from the Commission). The reverse taxation mechanism shifts the responsibility for accounting for VAT from the supplier to the customer. This prevents the supplier from charging the VAT to the customer but not paying it to the Treasury. With this, the Member States have the option of applying reverse taxation to predetermined sales of products or services, particularly in the case of products and services that are suitable for fraud committed within the EU by fraudulent traders.
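As a worked illustration of why reverse charge removes the missing-trader opportunity, the sketch below contrasts the two payment flows; the amounts are hypothetical and the functions are simplified illustrations, not a legal description (27% is the Hungarian standard VAT rate):

```python
RATE = 0.27  # Hungarian standard VAT rate

def normal_charge(net_price):
    """Supplier collects VAT from the buyer and owes it to the Treasury.
    If the supplier vanishes before remitting, the collected VAT is lost."""
    vat = net_price * RATE
    return {"buyer_pays": net_price + vat, "supplier_owes_treasury": vat}

def reverse_charge(net_price):
    """Buyer self-accounts the VAT; no tax passes through the supplier,
    so a vanishing supplier cannot abscond with it."""
    vat = net_price * RATE
    return {"buyer_pays": net_price, "buyer_owes_treasury": vat}

print(normal_charge(1000.0))   # supplier holds 270.0 in VAT before remitting
print(reverse_charge(1000.0))  # supplier never holds the 270.0
```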
Targeted Reduction of VAT Rates
In recent decades the black economy and the presence of tax fraud, as well as tax evasion for obtaining illegal income, have been typical in food production and distribution in Hungary. Therefore, the products affected by VAT rate reductions in recent years have characteristically been agricultural products. VAT
Introduction of the Electronic Trade and Transport Control System (EKÁER)
EKÁER is a technical system established and operated by the National Tax and Customs Administration that monitors, inspects, and registers the movement of goods; its primary purpose is to reduce the number of frauds related to the transport of goods and VAT fraud (Szilovics, 2019).
The most important element of the new system's operation is that goods planned to be transported by trucks which became subject to road tolls starting on 1 January 2015, namely those with a total weight of 3.5 tons or more, must be reported to the authority before the beginning of the trip by the person responsible for the transport or by the transport company, and a so-called EKÁER number must be requested. Those in the above category may only perform transport activity with a valid EKÁER number. The law states that stricter rules must be applied in the case of so-called risky products. Mostly food products were classified in the risky products category (meat, milk, butter, cheese, flour, sugar, cooking oil). For this product range, even transports under a total weight of 3.5 tons must be reported and permitted. The HU-GO nationwide camera system is connected to the operation of the system; the rules for its functioning are specified in Hungarian Act LXVII of 2013. Accordingly, as of 1 January 2015 the National Tax and Customs Administration is authorized to receive and use for its work the digital transport data collected by the HU-GO system. With the cameras monitoring road traffic, the movement of goods transports can be accurately traced on the entire Hungarian road network, and the collected data can be compared with the data obtained by the National Tax and Customs Administration, including information based on EKÁER reports. The system has been supplemented by a financial guarantee system of risk deposits provided by transport companies. Several laws and legal regulations are relevant from the aspect of EKÁER; those that serve as its basis changed as of 1 January 2021.
The most important change is that as of 1 January 2021 only those products must be reported that are listed in the Annexes of Decree 51/2014 of the NGM on the determination of risky goods in connection with the Electronic Trade and Transport Control System. The range of those exempt from the risk deposit has been expanded. According to the main rule, a risk deposit must be provided for products subject to EKÁER, but the regulation grants an exemption from this obligation in certain cases.
As of 1 January 2021, on top of the previous exemptions, those who qualify as reliable taxpayers no longer need to provide a risk deposit. Goods transport by road vehicle can still only be performed with a valid EKÁER number, as can sales involving road transport and the movement of goods for other reasons. The EKÁER system assists the inspection work of the National Tax and Customs Administration (NAV), makes financial transactions more transparent, and broadens the range of compliant taxpayers.
Summarizing Conclusions
The Hungarian government started the online era of whitening the economy in 2014, when the use of online cash registers became mandatory. After this came the live operation of the Electronic Trade and Transport Control System, followed by the introduction of the online invoicing system. As a result, in 2018 the VAT gap was reduced the most in Hungary among European countries, and in 2019 it improved further; in terms of the VAT gap, Hungary thereby managed to surpass countries such as Germany, Austria and Denmark. On top of the fact that the online innovations of the Hungarian economy-whitening effort have by now become exemplary models at the international level, they also created the possibility for tax reductions. During the coronavirus crisis it is especially important that the state protect law-abiding businesses and that nobody be able to avoid paying their taxes, so the fight against the black economy will continue in order to keep the tax evasion rate as low as possible. | 2021-10-18T15:09:41.646Z | 2021-10-16T00:00:00.000 | {
"year": 2021,
"sha1": "85391ca14f00c815b989dbcf89b35cd207639d60",
"oa_license": "CCBY",
"oa_url": "http://www.scholink.org/ojs/index.php/uspa/article/download/4222/4703",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0048a2cd942fdf9025d0e6168f3bcdbac4036b6d",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
53213467 | pes2o/s2orc | v3-fos-license | Controlled Growth of Large-Area Aligned Single-Crystalline Organic Nanoribbon Arrays for Transistors and Light-Emitting Diodes Driving
Abstract Organic field-effect transistors (OFETs) based on organic micro-/nanocrystals have been widely reported with charge carrier mobility exceeding 1.0 cm² V⁻¹ s⁻¹, demonstrating great potential for high-performance, low-cost organic electronic applications. However, fabrication of large-area organic micro-/nanocrystal arrays with consistent crystal growth direction has posed a significant technical challenge. Here, we describe a solution-processed dip-coating technique to grow large-area, aligned 9,10-bis(phenylethynyl) anthracene (BPEA) and 6,13-bis(triisopropylsilylethynyl) pentacene (TIPS-PEN) single-crystalline nanoribbon arrays. The method is scalable to a 5 × 10 cm² wafer substrate, with around 60% of the wafer surface covered by aligned crystals. The quality of crystals can be easily controlled by tuning the dip-coating speed. Furthermore, OFETs based on well-aligned BPEA and TIPS-PEN single-crystalline nanoribbons were constructed. By optimizing channel lengths and using appropriate metallic electrodes, the BPEA and TIPS-PEN-based OFETs showed hole mobility exceeding 2.0 cm² V⁻¹ s⁻¹ (average mobility 1.2 cm² V⁻¹ s⁻¹) and 3.0 cm² V⁻¹ s⁻¹ (average mobility 2.0 cm² V⁻¹ s⁻¹), respectively. They both have a high on/off ratio (I_on/I_off) > 10⁹. The performance can well satisfy the requirements for driving light-emitting diodes. Graphical Abstract Electronic supplementary material The online version of this article (doi:10.1007/s40820-017-0153-5) contains supplementary material, which is available to authorized users.
Up to now, most OFETs based on single-crystalline organic micro-/nanocrystals have demonstrated mobility over 1.0 cm² V⁻¹ s⁻¹ [6,13,16,17]. This value has surpassed that of amorphous silicon (a-Si)-based field-effect transistors (FETs) (mobility of 0.1-1.0 cm² V⁻¹ s⁻¹ and on/off ratios of 10⁶-10⁸) and is approaching that of polycrystalline silicon (c-Si)-based FETs (mobility larger than 10 cm² V⁻¹ s⁻¹) [16][17][18][19][20][21], revealing the great potential of organic micro-/nanocrystals for high-performance, low-cost organic electronics. Despite this progress, rational control of the crystal orientations and growth directions of organic micro-/nanocrystals over large areas has posed significant technical challenges [22,23]. Alignment and patterning of the organic micro-/nanocrystals can reduce or eliminate parasitic leakage paths, improve device uniformity and reproducibility, and thus facilitate device fabrication and integration. Also, as a result of the anisotropic nature of charge transport in organic micro-/nanocrystals, control of the azimuthal orientation of the crystals in a desirable direction (the π-π stacking direction in general) is critical for optimal charge transport in the devices [24]. To date, a variety of deposition techniques have been developed for the aligned growth of single-crystalline organic micro-/nanocrystals [22][23][24][25][26][27][28][29], such as droplet-pinned crystallization (DPC) [22], geometry-restricted evaporation [25], and direct printing [28,29]. However, these fabrication methods usually require growth templates, complex instruments, or multi-step processes to obtain aligned organic micro-/nanocrystals [25,29] and are not well suited for convenient, large-area commercial production; a new deposition technique is therefore needed for practical applications. 9,10-Bis(phenylethynyl) anthracene (BPEA) and 6,13-bis(triisopropylsilylethynyl) pentacene (TIPS-PEN) are well known for excellent electrical characteristics due to their strong intermolecular π-π interactions and have been broadly used in OFETs [30,31]. However, spin-coated BPEA and TIPS-PEN films generally display low crystallinity and poor OFET performance (with mobility below 0.1 cm² V⁻¹ s⁻¹). Recent studies demonstrated that single-crystalline BPEA and TIPS-PEN without defects and grain boundaries can improve OFET performance [30]. However, the typical device area was very small, which is not capable of producing a high density of devices with reasonable throughput.
Herein, we report a facile dip-coating method for largearea deposition of well-aligned single-crystalline BPEA and TIPS-PEN nanoribbon arrays. The quality of organic nanocrystals was controlled by tuning the dip-coating speed. Moreover, OFETs based on the organic crystal arrays were systematically investigated. Our work is expected to have a great potential of the aligned singlecrystalline organic nanoribbons for high-performance, lowcost organic devices.
Substrate Treatment
The silicon wafers were initially cleaned in a piranha solution (4:1 mixture of H₂SO₄:H₂O₂) for 10-15 min. The substrates were rinsed several times in deionized water (resistivity = 18 MΩ cm), dried with a stream of nitrogen, and subsequently cleaned with an oxygen plasma cleaner (PVA TePla Ion 40; 200 mmHg O₂, 300 W) for 600 s.
Materials and Sample Preparation
Highly doped n-type silicon wafers (resistivity < 0.01 Ω cm) with a 300-nm thermally grown silicon oxide gate dielectric layer were used as the substrates for OFET fabrication. BPEA (received from Sigma-Aldrich) and TIPS-PEN (received from Luminescence Technology Corp.) were used without further purification. The BPEA and TIPS-PEN solutions were both prepared at a concentration of 4 mg mL⁻¹ in dichloromethane. The substrate was dipped into a BPEA or TIPS-PEN solution and then lifted out at a constant rate of 10, 30, 60, 80, or 120 μm s⁻¹. Dip coating was performed in a clean bench to reduce the effects of air currents and mechanical vibration. Electrodes were formed on the active layer by thermal evaporation through a shadow mask.
Characterizations
The samples were characterized with a fluorescence microscope (Leica DM4000M), an atomic force microscope (AFM, Veeco MultiMode V), and a scanning electron microscope (SEM, FEI Quanta 200 FEG) operated at 20 kV. The crystallinity of the nanoribbon arrays was determined by selected-area electron diffraction (SAED) in a transmission electron microscope (TEM, FEI Tecnai G2 F20) operating at 200 kV and confirmed by X-ray diffraction (XRD, PANalytical BV Empyrean) using a Cu source running at 40 kV and 40 mA. Source and drain electrodes were deposited by thermal evaporation onto the single-crystalline organic nanoribbon arrays through shadow masks that consist of tungsten wires with different diameters, creating transistors with different channel lengths (L). Electrical characteristics of the OFETs were measured with a semiconductor parameter analyzer (Keithley 4200-SCS) in ambient air (relative humidity ≈ 30%) at room temperature. The field-effect mobility μ and threshold voltage V_T were calculated in the saturation regime (V_DS = -50 V) by plotting the square root of the drain current versus the gate voltage using the standard saturation-regime relation I_DS = (W C_i μ / 2L)(V_GS - V_T)², where C_i is the capacitance per unit area of the gate dielectric layer, and W and L are the actual crystal width and channel length, respectively, which were measured using an optical microscope (BX51, Olympus).
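As a concrete illustration of this extraction, the following minimal Python sketch (not from the paper) fits the square root of the drain current against the gate voltage and converts the slope of the fit into a mobility. The device dimensions, dielectric capacitance, and synthetic transfer curve are hypothetical placeholders of roughly the right order of magnitude for a 300-nm SiO₂ gate dielectric.

import numpy as np

def saturation_mobility(v_gs, i_ds, w, l, c_i):
    # In saturation, sqrt(|I_DS|) = sqrt(W*C_i*mu/(2*L)) * |V_GS - V_T|,
    # so a linear fit of sqrt(|I_DS|) vs |V_GS| yields mu and V_T.
    slope, intercept = np.polyfit(v_gs, np.sqrt(np.abs(i_ds)), 1)
    mu = 2.0 * l * slope**2 / (w * c_i)   # m^2 V^-1 s^-1
    v_t = -intercept / slope              # x-intercept of the fit (V)
    return mu * 1e4, v_t                  # report mu in cm^2 V^-1 s^-1

# Hypothetical example: 5-um-wide ribbon, 100-um channel, and
# C_i ~ 1.15e-4 F m^-2 for 300 nm of SiO2; magnitudes of V and I are used.
v_gs = np.linspace(20.0, 50.0, 16)
i_ds = 0.5 * 5e-6 * 1.15e-4 / 100e-6 * 2e-4 * (v_gs - 12.0) ** 2
mu, v_t = saturation_mobility(v_gs, i_ds, w=5e-6, l=100e-6, c_i=1.15e-4)
print(f"mu = {mu:.2f} cm^2 V^-1 s^-1, V_T = {v_t:.1f} V")

On real transfer data, only the portion of the sweep where sqrt(|I_DS|) is linear in V_GS should be included in the fit.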
Fabrication of Large-Area, Aligned Single-Crystalline Organic Nanoribbon Arrays
To demonstrate the efficiency of the dip-coating method for large-area growth of aligned micro-/nanocrystal arrays, BPEA and TIPS-PEN were chosen as model organic semiconductors, because of their high charge carrier mobilities, and have been broadly used in film-based OFETs [32][33][34][35][36][37].
The dip-coating method used to prepare large-area single-crystalline organic nanoribbon arrays is illustrated in Fig. 1a, b. First, a piece of SiO₂/Si substrate was immersed vertically into BPEA or TIPS-PEN solution at room temperature; the substrate was then lifted at a certain coating speed (V). With the gradual evaporation of dichloromethane, parallel organic nanoribbons were deposited along the lifting direction. Figure 1c shows the optical image of the large-sized SiO₂/Si substrate (5 × 10 cm²) coated with BPEA nanoribbon arrays. Notably, the parallel and highly aligned nanoribbon arrays extend over almost the entire substrate, forming continuous and aligned nanoribbons from top to bottom, as shown in Fig. 1d. The surface coverage of the organic nanoribbons on the substrate is estimated from the optical image to be around 60%.
Crystal Structures of Aligned Organic Nanoribbons
The structures of the samples were examined by TEM and XRD. TEM images and corresponding SAED patterns of the BPEA and TIPS-PEN nanoribbons are shown in Fig. 2.
The presence of discrete diffraction spots (Fig. 2b, e) clearly indicates the single-crystalline nature of the BPEA and TIPS-PEN nanoribbon arrays. The nanoribbons have a growth orientation along the [010] direction, which coincides with the π-π stacking directions of BPEA and TIPS-PEN molecules [32,34]. XRD patterns of the BPEA and TIPS-PEN nanoribbon arrays disclose a well-defined set of (001) reflections (Fig. 2c, f). For the BPEA nanoribbon arrays, the primary peak displays strong diffraction with a d-spacing of 21.8 Å, which is close to the BPEA c-axis length of 22.7 Å calculated by HyperChem 7.0 [34]. As for the TIPS-PEN nanoribbon arrays, the strong and sharp XRD diffraction peak observed at 5.53° suggests a well-organized molecular structure with an interplanar d-spacing of 16.1 Å. This value of the c-axis length is also close to the value derived from single-crystal data (16.8 Å) [38]. These results collectively demonstrate that large-area growth of single-crystalline organic nanoribbons has been successfully achieved by the dip-coating method. Furthermore, 2,7-didecylbenzothienobenzothiophene (C10-BTBT) was also grown into aligned arrays in the same way (Fig. S1), which indicates the good universality of the dip-coating method.
Morphology Control of Single-Crystalline Organic Nanoribbon Arrays
During the dip-coating process, crystallization of organic molecules is an evaporation-induced procedure that occurs at the three-phase contact line. Molecules at the meniscus deposit first as the solvent evaporates. Afterward, convection flow and capillary force draw molecules from the bulk solution outward to refill the suspension at the edges, maintaining a continuous supply of material for crystal deposition. More significantly, the pinning of the suspension at the moving contact line is induced by the gradual pulling of the substrate during dip coating, resulting in the continuous deposition of material and the formation of uniform aligned nanoribbons over a large area. It was found that the dip-coating speed played a critical role in controlling the morphology of the nanoribbon arrays. As shown in the optical microscope and SEM images (Figs. 3 and 4), a relatively steady contact line can be achieved when the lifting rate is faster than 30 μm s⁻¹ (for BPEA) and 10 μm s⁻¹ (for TIPS-PEN). Nearly continuous aligned nanoribbon arrays with lengths up to several hundred micrometers (for BPEA) or even several millimeters (for TIPS-PEN) were fabricated at an optimum dip-coating speed of 80 μm s⁻¹ (for both BPEA and TIPS-PEN). Their morphologies were further investigated by tapping-mode AFM, as shown in Fig. 5. The thicknesses of the BPEA and TIPS-PEN nanoribbons are ~150 and ~50 nm, with widths of 3-5 and 7-10 μm, respectively. All the nanoribbon arrays have faceted edges and smooth surfaces (roughness ~1.2 nm), indicating the high quality of the nanoribbon arrays.
In control experiments, when the dip-coating speed was increased above 80 μm s⁻¹ (e.g., 120 μm s⁻¹), non-continuous and defective nanoribbon arrays with inferior crystallinity were observed (Fig. S2). In addition, when the dip-coating speed was decreased below 30 μm s⁻¹ for BPEA or 60 μm s⁻¹ for TIPS-PEN, periodically aligned short nanoribbons were formed (Figs. 3 and 4). At lower lifting rates, gradual accumulation of organic semiconductor at the contact line made the meniscus too heavy, which increased the depinning force. As a result, the contact line would slip to a new position, leading to the formation of discontinuous nanoribbon arrays [39,40]. All these results strongly suggest that the morphology of the nanoribbon arrays mainly depends on the dip-coating speed. In addition, to maintain a proper dip-coating speed, solvents with a relatively low boiling point and good solubility are preferred, such as dichloromethane.
Aligned Single-Crystalline Organic Nanoribbons for Transistors and LEDs Driving
Good alignment of single-crystalline organic nanoribbons can greatly facilitate OFET fabrication, since electrodes can be easily deposited perpendicular to the aligned crystals [22,23], ensuring high performance and high reproducibility of the devices. OFETs were constructed based on the well-aligned BPEA and TIPS-PEN nanoribbon arrays in a bottom-gate configuration by depositing Au top-contact source (S) and drain (D) electrodes through a shadow mask. The dependence of μ on the dip-coating speed was systematically investigated. Figures S3a-c and S4a-c show typical transfer characteristics of nanoribbon array-based OFETs at different dip-coating speeds (i.e., 10-80 μm s⁻¹). Twenty devices were tested for each dip-coating speed under ambient conditions, and the mobilities were calculated. As shown in Figs. S3d-f and S4d-f, the μ of devices obtained at the lowest dip-coating speed (10 μm s⁻¹) was almost two orders of magnitude lower than that of devices obtained at a dip-coating speed of 80 μm s⁻¹. These results suggest that nanoribbon arrays with improved crystal quality can be achieved at higher dip-coating speeds.
To further optimize device performance, we also fabricated OFETs based on the single-crystalline nanoribbon arrays with different channel lengths (Fig. S5). As plotted in Fig. S6a, b, the average mobility gradually increased from 0.3 to 1.8 cm² V⁻¹ s⁻¹ as the channel length increased from 10 to 150 μm for BPEA, while the average mobility increased from 0.1 to 1.35 cm² V⁻¹ s⁻¹ as the channel length increased from 10 to 200 μm for TIPS-PEN. However, as the channel length increased further, the mobility began to drop dramatically. In the short-channel regime, the effect of the parasitic contact resistance decreases as the channel length increases, thereby enhancing the calculated mobility [41]. With a further increase in channel length, the number of crystal defects in the channel increases, which in turn impairs device performance [42,43]. Therefore, an appropriate channel length is needed to achieve high-performance devices.
The metallic electrode is regarded as another important factor that may impact device performance by varying the contact resistance. Figure S6c, d illustrates the electrical characteristics of nanoribbon array-based OFETs with different metallic electrodes, including Au, Ag, Cu, and Al. It is noted that the OFETs with Cu source/drain electrodes exhibited the best performance, which can be attributed to the Cu₂O layer formed on the Cu electrode surface [44][45][46]. It is noteworthy that the valence band position of Cu₂O is higher than Au (5.1 eV) and Ag (4.2 eV) and is also aligned with the highest occupied molecular orbital (HOMO) levels of BPEA (5.49 eV) [30] and TIPS-PEN (5.34 eV) [47], leading to a striking reduction of the hole-injection barrier and consequently a lower contact resistance. At optimized device configurations, 50 devices each based on BPEA and TIPS-PEN nanoribbon arrays were examined. Figure 6 displays the typical device characteristics of the BPEA and TIPS-PEN nanoribbon array-based OFETs on SiO₂/Si substrates. For BPEA nanoribbon arrays, a mobility as high as 2.0 cm² V⁻¹ s⁻¹ (average μ of 1.2 cm² V⁻¹ s⁻¹), I_on/I_off > 10⁹, and V_T of about 25 V were obtained (Fig. 6a-c). Notably, this mobility value is higher than those of previously reported BPEA-based OFETs (usually under 1.0 cm² V⁻¹ s⁻¹ [33,34]). For TIPS-PEN nanoribbon arrays, a mobility as high as 3.2 cm² V⁻¹ s⁻¹ (average μ of 2.0 cm² V⁻¹ s⁻¹), I_on/I_off > 10⁹, and V_T of about 10 V were obtained (Fig. 6d-f). This mobility value is also higher than those of most previously reported TIPS-PEN-based OFETs, which were normally under 3.0 cm² V⁻¹ s⁻¹ [32,39]. Moreover, these devices display excellent operating-cycle stability, with continuous on/off cycles over periods of 1300 and 1500 s for BPEA and TIPS-PEN nanoribbon arrays, respectively (Fig. S7). In addition to the high device performance, it is noteworthy that the deposition area of the single-crystalline organic nanoribbon arrays in this work (50 cm²) is much larger than those in previous reports, in which areas smaller than 10 cm² were usually demonstrated [39]. Large-area fabrication of single-crystalline organic nanoribbon arrays offers opportunities for high-performance, low-cost OFET applications.
The potential applications of the OFETs in drive circuits were also investigated. OFETs based on TIPS-PEN nanoribbon arrays were used as drivers to control the switching of LEDs. Figure 7a presents the circuit schematic of the nanoribbon array-based OFET-LED system. Figure 7b shows that a single OFET provides good current modulation for an LED, and the LED can be switched on when the OFET is in the ON state. Also, the OFETs are capable of modulating the light emission of multiple LEDs, forming different emission patterns (Fig. 7c, d). In addition, the visual brightness of the LEDs can be readily controlled by tuning the gate voltages of the OFETs. This result demonstrates that the large-area aligned single-crystalline organic nanoribbon array-based OFETs are promising for electronic applications, such as driving LED pixels, sensors, and basic logic circuits [48][49][50][51].
Conclusions
We successfully fabricated large-area single-crystalline organic nanoribbon arrays via a simple solution-processed dip-coating method. During the growth process, organic molecules tended to crystallize at the three-phase contact line. By carefully controlling the coating speed, continuous and well-aligned organic nanoribbon arrays could be obtained. Moreover, we demonstrated that the performance of OFETs based on the organic nanoribbon arrays could be remarkably improved by optimizing the channel length and using appropriate metallic electrodes. Under the optimal device configurations, OFETs based on aligned BPEA single-crystalline nanoribbon arrays gave a mobility of up to 2.0 cm² V⁻¹ s⁻¹, while OFETs based on aligned TIPS-PEN nanoribbon arrays achieved a maximum mobility of 3.2 cm² V⁻¹ s⁻¹. These values are superior or comparable to the best results reported for these two small molecules, while the growth area of the aligned nanoribbon arrays (50 cm²) represents the largest size reported to date. Moreover, the organic nanoribbon array-based OFETs exhibit long-term cycle stability, enabling the control of light emission of different LED pixel patterns. This method offers a means for the fabrication of large-area, aligned single-crystalline organic nanoribbons. Their applications in OFETs and LED driving open up opportunities for future high-performance, low-cost organic electronic and optoelectronic devices. | 2018-11-15T16:51:26.277Z | 2017-08-16T00:00:00.000 | {
"year": 2017,
"sha1": "0f3ce9a3d13e281198aa0d6cad6d0f203626d51b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1007/s40820-017-0153-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5b7c2457e9e31d5a3a560925ab7610a7b378285e",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
231969024 | pes2o/s2orc | v3-fos-license | Survival outcomes in esophageal cancer patients with a prior cancer
Abstract This study aimed to achieve a deeper understanding of patients who developed esophageal cancer (EC) as a second primary malignancy, which may help guide clinical practice for these patients in the future. In the primary cohort, EC patients with a prior malignancy were identified from the Surveillance, Epidemiology, and End Results 18 database. The 5 most common types of prior cancers were picked out based on their frequency of occurrence. In addition, Kaplan-Meier and log-rank tests were performed to investigate the survival impact of prior cancers on EC patients. Besides, a competing-risk model was constructed to explore the relationship between EC treatment and EC-specific mortality. In the secondary cohort, patients with stage I-III (N0M0) EC from 2004 to 2014 were enrolled. After propensity score matching, univariate and multivariate Cox analyses were developed to determine the prognostic factors for EC patients. A total of 1199 EC patients with a prior cancer were identified in the primary cohort. The 5 most common sites of prior cancers were prostate, female breast, bladder, lung and bronchus, and larynx. Kaplan-Meier analyses revealed that EC patients with prior prostate cancer and bladder cancer had the best overall survival (OS), while those with prior cancers of the larynx and of the lung and bronchus had the worst OS. Fine and Gray competing-risks analysis indicated that the administration of surgery was closely associated with better EC-specific survival (P < .001). In the secondary cohort, multivariate Cox analyses found that age at diagnosis, race, tumor grade, tumor extent, nodal status and metastasis stage, histology, and the administration of surgery were prognostic factors for OS and cancer-specific survival in EC patients. Besides, the existence of a prior cancer was an independent prognostic factor for cancer-specific survival. EC remains the most important cause of death in EC patients with a prior cancer. EC-related treatment should be actively adopted in patients with a prior cancer, as they were more likely to die from EC than from the prior cancer. EC patients with a prior cancer had comparable OS to those without.
Introduction
Esophageal cancer (EC) is one of the most common malignancies; its incidence rate (IR) ranked ninth among all malignant tumors worldwide in 2018. [1] In 2020, the estimated numbers of new cases and deaths in the United States (US) were 18,440 and 16,170, respectively. [2] Surgery and radiotherapy have been the standard treatment types for EC for many years. Nowadays, the rapid development of immunotherapy and targeted therapy (such as trastuzumab) for EC has brought tremendous promise to its treatment. [3,4] Moreover, the 5-year survival rate of EC patients has increased from 10% to 25% due to advancements in cancer detection and treatment. [5,6] Hence, more and more cancer survivors develop a second primary malignancy (SPM) because of the increasing IRs and improving survival outcomes. [7,8] An SPM is defined as a cancer which develops in a new tissue or organ after the initial diagnosis of the prior malignancy, with a 6-month latency. Previous studies have mainly focused on the risk of developing an SPM after a known malignancy. Liao et al [9] discussed the main prognostic factors for oral cavity cancer patients with simultaneous SPM and then developed a risk stratification. Vassilev et al [10] provided a historical risk estimation of developing an SPM in patients with metastatic castration-resistant prostate cancer. However, as far as we know, the survival outcomes of patients with 1 known tumor as an SPM have not been well studied, and only a few published studies have discussed the risk of developing an SPM in primary cancer survivors. [11,12] Saad et al [13] investigated the impact of a prior cancer on the survival outcomes of stage IV EC patients and found that prior cancers did not adversely impact survival in EC patients with stage IV disease. Besides, Chen et al [14] explored the clinicopathological characteristics and prognosis of patients with EC as an SPM and demonstrated that lower M stage, the administration of surgery, and chemotherapy were tightly related to better overall survival (OS) for patients with EC as an SPM.
In this study, patients diagnosed with EC as an SPM were extracted retrospectively from the Surveillance, Epidemiology, and End Results (SEER) database. We aimed to achieve a deeper understanding of the outcomes of patients who developed EC as an SPM, which may help guide clinical practice for these patients in the future.
Database
Data were extracted retrospectively from the SEER database, a population-based registry sponsored by the US National Cancer Institute. The SEER database collects information on cancer IR, baseline characteristics, treatment types, and long-term follow-up, and currently covers approximately 34.6% of the US population (https://seer.cancer.gov/about/overview.html). We signed the Research Data Agreement before this study and got access to the database with the username 11015-Nov2019. In addition, use of the SEER registry was exempt from Institutional Review Board approval.
Primary cohort
In this section, we extracted EC patients with a prior malignancy from the SEER 18 program using the "multiple primary-standard incidence ratio" function of the SEER*Stat software (version 8.3.6; US National Cancer Institute, Bethesda, Maryland, USA). EC was diagnosed as the SPM with positive pathology. Furthermore, the exclusion criteria were as follows: (1) patients with more than 2 malignancies in total; (2) data from autopsy or death certificate only; (3) year of diagnosis not between 2004 and 2014; (4) patients with missing or unknown data; (5) interval between diagnosis of EC and the prior cancer less than 6 months.
Then, demographic characteristics and clinical data were collected for each patient, including age at diagnosis (of both the prior cancer and EC), sex, race, histological type, primary site of EC, American Joint Committee on Cancer 6th edition tumor extent, nodal status and metastasis (TNM) stage, diagnosis interval, the administration of surgery, radiotherapy, and chemotherapy, vital status, cause of death (COD), and follow-up. Age at diagnosis was categorized into <65 and ≥65 years old. Furthermore, CODs were classified into 3 groups: died from EC, died from the prior cancer, and died from other causes.
First of all, we picked out the 5 most common types of prior cancers based on their frequency of occurrence. Then, Kaplan-Meier and log-rank tests were performed to investigate the survival impact of prior cancers on EC patients. Afterward, the percentages of EC-related and prior-cancer-related deaths in patients with different prior malignancies were calculated, and the ratios of EC deaths to prior-cancer deaths were obtained, further stratified by EC TNM stage and histological type. Finally, to explore the relationship between the administration of surgery and EC-specific mortality (ECSM), we constructed a competing-risks model, treating death from other causes or from the prior cancer as a competing event.
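For illustration only, the cumulative incidence of EC-specific death under competing risks can be estimated in Python with the Aalen-Johansen estimator from the lifelines package; this is a sketch, not the Fine and Gray model the paper used (which is typically fit with R's cmprsk package), and the toy data and event coding below are hypothetical.

import pandas as pd
from lifelines import AalenJohansenFitter

# Hypothetical records: follow-up in months and a cause-of-death code
# (0 = alive/censored, 1 = died from EC, 2 = died from prior cancer/other).
df = pd.DataFrame({
    "months": [12, 30, 4, 18, 25, 7, 40, 15],
    "event":  [1,  0,  2, 1,  0,  1, 2,  1],
})

# Cumulative incidence of EC-specific death; code 2 is treated as a
# competing event rather than censoring, avoiding the Kaplan-Meier bias.
ajf = AalenJohansenFitter()
ajf.fit(df["months"], df["event"], event_of_interest=1)
print(ajf.cumulative_density_.tail())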
Secondary cohort
In the secondary cohort, we identified patients with stage I-III (N0M0) EC from 2004 to 2014 in the SEER 18 database using the "case listing session" function. Based on the existence of a prior malignancy, all patients were divided into "primary esophageal cancer (PEC)" and "subsequent esophageal cancer (SEC)" groups. The propensity score matching (PSM) method was used to balance the baseline characteristics of PEC and SEC patients at a ratio of 1:1. Survival discrepancies between PEC and SEC patients were compared before and after PSM. Lastly, univariate and multivariate Cox analyses were developed to identify the prognostic factors significantly related to OS and cancer-specific survival (CSS) in patients with EC.
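The sketch below illustrates the general idea of 1:1 propensity score matching with a greedy nearest-neighbor rule in Python; it is not the authors' implementation, and the covariates and group labels are hypothetical.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_1to1(df, treat_col, covariates, caliper=0.05):
    # Propensity score: probability of group membership given covariates.
    ps = LogisticRegression(max_iter=1000).fit(
        df[covariates], df[treat_col]).predict_proba(df[covariates])[:, 1]
    df = df.assign(ps=ps)
    treated = df[df[treat_col] == 1]
    controls = df[df[treat_col] == 0].copy()
    pairs = []
    for i, row in treated.iterrows():
        if controls.empty:
            break
        dist = (controls["ps"] - row["ps"]).abs()
        j = dist.idxmin()
        if dist[j] <= caliper:           # accept only sufficiently close matches
            pairs.append((i, j))
            controls = controls.drop(j)  # match without replacement
    return pairs

# Hypothetical cohort: SEC (prior cancer) vs PEC, matched on age/stage/surgery.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sec": rng.integers(0, 2, 200),
    "age": rng.normal(65, 10, 200),
    "stage": rng.integers(1, 4, 200),
    "surgery": rng.integers(0, 2, 200),
})
print(len(match_1to1(df, "sec", ["age", "stage", "surgery"])), "matched pairs")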
Statistical analysis
The Student t test and Mann-Whitney U test were used for comparisons of continuous variables, and chi-square analysis was used for comparisons of categorical variables. All analyses were based on SPSS 23.0 (SPSS Inc., Chicago, IL) and R software (version 3.4.1). A 2-sided P < .05 was considered significant.
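Although the analyses were run in SPSS and R, the same comparisons are straightforward in Python's scipy, as in the brief sketch below; the arrays and counts are hypothetical placeholders, not study data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
age_pec = rng.normal(64, 9, 80)   # hypothetical ages, PEC group
age_sec = rng.normal(72, 8, 80)   # hypothetical ages, SEC group

t_stat, p_t = stats.ttest_ind(age_pec, age_sec)      # Student t test
u_stat, p_u = stats.mannwhitneyu(age_pec, age_sec)   # Mann-Whitney U test

# Chi-square for a categorical variable, e.g. surgery (yes/no) by group.
table = np.array([[45, 35],   # hypothetical PEC counts: surgery yes/no
                  [30, 50]])  # hypothetical SEC counts: surgery yes/no
chi2, p_c, dof, expected = stats.chi2_contingency(table)
print(p_t, p_u, p_c)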
Baseline characteristics of the primary cohort
A total of 1199 EC patients with a prior cancer were eventually enrolled in the primary cohort. As shown in Table 1, the median (interquartile range [IQR]) ages at diagnosis of EC and of the prior cancer were 73.00 (66.00-80.00) and 64.00 (57.00-71.00) years, respectively. Most patients were White (85.99%) and male (78.73%). The most common site of EC was the lower esophagus (61.38%), and 54.38% of the EC patients had adenocarcinoma (AC). The median (IQR) diagnosis interval between the prior cancer and EC was 91.00 (43.99-151.00) months. Moreover, the median (IQR) follow-up since EC diagnosis was 12.00 (4.00-30.00) months.
Survival outcomes in the primary cohort
The 5 most common sites of prior cancers were prostate (35.36%), female breast (8.42%), bladder (7.84%), lung and bronchus (5.75%), and larynx (4.50%) (Table 2). OS differed significantly among EC patients with different prior malignancies (P < .0001, Fig. 1). EC patients with prior prostate cancer and bladder cancer had the best survival outcomes (3-year OS rates of 27.7% and 29.2%, respectively), while those with prior cancers of the larynx and of the lung and bronchus had the worst OS (3-year OS rates of 12.5% and 11.0%, respectively).
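A minimal sketch of the Kaplan-Meier and log-rank comparison reported here, using the lifelines package; the survival times and groupings below are hypothetical.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up (months), death indicator, and prior-cancer site.
df = pd.DataFrame({
    "months": [10, 36, 5, 24, 40, 8, 14, 30],
    "died":   [1,  0,  1, 1,  0,  1, 1,  0],
    "prior":  ["prostate"] * 4 + ["lung"] * 4,
})

kmf = KaplanMeierFitter()
for name, grp in df.groupby("prior"):
    kmf.fit(grp["months"], grp["died"], label=name)
    print(name, "median OS:", kmf.median_survival_time_)

a = df[df["prior"] == "prostate"]
b = df[df["prior"] == "lung"]
res = logrank_test(a["months"], b["months"], a["died"], b["died"])
print(f"log-rank p = {res.p_value:.3f}")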
In the analysis of COD, 65.51% of EC patients died from EC and 16.75% died from the prior cancer (Fig. 2). EC patients with prior cancers of the lung and bronchus had the highest prior-cancer-related death rate (26.15%) and the lowest EC-related death rate (58.46%). Furthermore, the ratios of prior-cancer-related deaths to EC-related deaths were calculated. As shown in Figure 3, the ratios were less than 1 regardless of the histological type (Fig. 3A) or TNM stage (Fig. 3B) of EC. Hence, it can be concluded that EC patients were more likely to die of EC regardless of the cancer types of the prior cancer and the EC.
Compared with patients who died from the prior cancer, those who died from EC were older at cancer diagnosis (both EC and the prior cancer) (all P < .05, Table 3). In addition, the proportions of AC and N1 disease (both for EC) were significantly higher in patients who died from EC. The median interval between the diagnoses of the 2 cancers was significantly longer in patients who died from EC than in patients who died from the prior cancer (92.00 vs 66.00 months, P < .001). Notably, the percentage of radiotherapy in patients who died from EC was significantly higher than in those who died from the prior cancer (62.93% vs 53.85%, P = .031). To explore the prognostic role of cancer treatments, Fine and Gray competing-risks analyses were developed. As shown in Figure 4, the administration of surgery was tightly related to better EC-specific survival (P < .001).
3.3. Survival of patients with EC as the prior cancer or subsequent primary cancer in the secondary cohort
Baseline characteristics of the two groups are summarized in Table 4. SEC patients were significantly older than PEC patients (≥65 years old: 77.68% vs 58.97%, P < .001). Furthermore, the proportions of male patients, lower esophageal tumors, AC histology, higher-stage (II-III) disease, and the administration of surgery/radiotherapy/chemotherapy were significantly higher in PEC patients than in SEC patients (all P < .05). Therefore, a 1:1 PSM was applied to minimize the differences in baseline characteristics and treatment types between SEC and PEC patients. Eventually, a total of 1949 pairs of EC patients were included.
Supplemental Digital Content (Figure S1, http://links.lww.com/MD/F706) shows the comparisons of survival outcomes between SEC and PEC patients. After matching, there was no significant difference in OS between patients in the 2 groups (Fig. 5A, P > .05). However, SEC patients had better CSS than PEC patients (Fig. 5B, P < .05). Furthermore, subgroup analyses based on the different histological types (AC and SCC) revealed the same results (Fig. 5C-F).
Multivariate Cox analysis indicated that age at diagnosis, race, tumor grade, TNM stage, histology, and the administration of surgery were prognostic factors for OS and CSS in EC patients (Tables 5 and 6). Besides, the existence of a prior cancer (PEC vs SEC) was an independent risk factor for CSS (P < .001).
Discussion
In recent years, the number of cancer survivors has been rapidly increasing due to improvements in cancer screening and treatment, and the risk of developing an SPM in cancer survivors has therefore also been increasing. [7] It was reported that the cancer survivor population in the US grows by 2% annually, and that about 18% of cancer survivors develop an SPM during the rest of their lifetime according to the SEER registry. [15] Furthermore, the history of a prior cancer plays a critical role in clinical decision making, especially for those who participate in clinical trials. In many clinical trials, a history of a prior cancer is a strict exclusion criterion for potential candidates, which may be due to the survival impacts of the prior cancers. [16] Although there is no strong evidence supporting the hypothesis that excluding these patients balances the outcomes and validity of clinical trials, [13] many published trials excluded patients with a prior cancer routinely. [17][18][19] A previous study revealed that approximately 20% of lung cancer patients were excluded because of this restrictive exclusion rule. [18] This study investigated the survival outcomes of EC patients with a prior cancer and identified prognostic factors for EC patients. In this study, the most common prior malignancy in EC patients was prostate cancer, followed by female breast cancer, bladder cancer, and lung cancer. Interestingly, these cancers are also among the most common single malignancies in general. Hence, we surmise that there was no enrichment for a cancer type that may increase the risk of developing EC as an SPM. Similarly, Zhu et al [20] reported that the most common types of prior cancers in larynx cancer patients were prostate, lung and bronchus, urinary bladder, and breast. Laccetti et al [21] found that prostate, gastrointestinal, breast, and other genitourinary cancers were the most common types of prior cancer in locally advanced lung cancer.
Comparisons of the survival outcomes of EC patients with different prior cancers showed statistically significant differences. EC patients with prior prostate cancer and bladder cancer had significantly better OS than those with prior cancers of the lung and bronchus. This survival discrepancy may reflect how life-threatening the prior cancers are. Moreover, EC patients were more likely to die of EC regardless of the cancer types of the prior cancer and the EC. Lastly, multivariate Cox analyses found that age, race, tumor grade, TNM stage, histology, and the administration of surgery were independent prognostic factors for OS and CSS in EC patients, and the existence of a prior cancer was an independent risk factor for CSS.
Most patients died from EC rather than from the prior cancer (65.51% vs 16.75%) over a median follow-up of 12.00 months, and subgroup analyses based on TNM stage and histology (AC and SCC) revealed the same results. Moreover, Kaplan-Meier analysis showed that PEC patients had similar OS to SEC patients. Saad et al [13] found that stage IV EC patients with a prior cancer had comparable OS to those who had EC as their only malignancy; however, that study focused only on the survival impact of prior cancers in advanced EC patients, rather than all EC patients. Similarly, Chen et al [14] investigated the clinicopathological characteristics and survival outcomes of EC patients with a prior cancer and found that the most common prior malignancy was from the genital system (about 43.5%), and that EC patients with a prior cancer had comparable OS to patients with EC as their only malignancy. However, previous studies did not investigate EC-specific survival. In our study, SEC patients had significantly better CSS than PEC patients after matching. The better CSS could be attributed to cancer survivors receiving stricter screening and care or being more cautious about health problems. Furthermore, Wang et al [22] reported that nasopharyngeal carcinoma patients with a prior cancer had better CSS than those without a prior cancer. However, studies conducted by Ji et al [23] and Al-Husseini et al [24] reached the opposite conclusion: breast cancer or glioblastoma patients with a prior malignancy had worse CSS than those who had breast cancer or glioblastoma as their only malignancy. In our study, the proportion of surgery in patients who died from EC was comparable to that in patients who died from the prior cancer. Interestingly, Fine and Gray competing-risks analysis showed that the administration of surgery was closely related to a reduction in ECSM. Our findings strongly indicate that surgery is still a viable option for EC patients with a prior cancer. First, most EC patients with a prior cancer died from EC rather than the prior cancer, regardless of the clinical characteristics of the prior cancer and the EC. Second, prolonged CSS was observed in SEC patients compared with PEC patients. Dinh et al [12] found that treatment of patients with high-stage and high-grade prostate cancer was related to a decreased risk of prostate cancer-specific mortality.
Cox regression analyses revealed that age at diagnosis, race, tumor grade, TNM stage, histology, and the administration of surgery were prognostic factors for OS and CSS in EC patients. However, some limitations should not be ignored. First, numerous data were lacking or missing in the SEER registry. Second, the retrospective nature of the research led to inevitable selection bias. Moreover, the treatment strategies for prior cancers may influence the occurrence and survival of an SPM. [25,26] Therefore, further prospective and well-designed studies are needed to validate our findings.
Conclusions
In EC patients with a prior cancer, EC is the most important COD regardless of the clinical characteristics of the prior cancer and the EC. Surgery decreased the risk of ECSM in these patients. These findings suggest that EC-related treatment should be actively adopted in patients with prior cancers, as they were more likely to die from EC than from the prior cancer. Lastly, age at diagnosis, race, tumor grade, TNM stage, histology, and the administration of surgery were found to be prognostic factors for OS and CSS in EC patients. | 2021-02-21T06:16:04.331Z | 2021-02-19T00:00:00.000 | {
"year": 2021,
"sha1": "c61c05db289fcd3c6106213f7cbd3434895a60c2",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1097/md.0000000000024798",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0bd69d80c92b4ecc11a9735d0d608820795d97a5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233697866 | pes2o/s2orc | v3-fos-license | Prevalence of Diastolic Dysfunction in Non Diabetic Patients of Metabolic Syndrome
Background: The purpose of the present study was to determine the prevalence of diastolic dysfunction in non-diabetic patients with metabolic syndrome. Materials and Methods: 100 patients with non-diabetic metabolic syndrome were screened using 2-D echocardiography. Results: 34% of non-diabetic patients with metabolic syndrome had diastolic dysfunction, with no association found between the components of metabolic syndrome and diastolic dysfunction. There was a strong correlation between a past history of hypertension and dyslipidemia and diastolic dysfunction. Conclusion: Our findings suggest that long-standing metabolic syndrome is a risk factor for diastolic dysfunction, rather than short-term elevation of the metabolic syndrome parameters. It is also likely that diabetes and prediabetes themselves are responsible for most of the diastolic dysfunction seen in metabolic syndrome
Introduction
The term metabolic syndrome was coined by Haller in 1977. Also known as syndrome X, its main features are central obesity, dyslipidemia, hypertension and hyperglycemia. The cardiovascular risk factors comprising the metabolic syndrome are now considered the driving force behind the new cardiovascular disease (CVD) epidemic (1).
Presence of diastolic dysfunction (DD) in metabolic syndrome (MS) has been observed in several studies. Its incidence appears to range between 20-40% in Western populations (1,2). However, several South East Asian studies show a much higher percentage of DD in MS, up to 73% (1,3,4).
These alarming numbers may be due to the higher risk that Asian Indians have for coronary artery disease (CAD) because of their unique lipid profile. The dyslipidemia in South Asians is most importantly characterized by elevated levels of triglycerides, low levels of HDL-C, elevated Lp(a) levels, and a higher atherogenic particle burden despite relatively normal LDL-C levels. HDL particles appear to be smaller, dysfunctional, and proatherogenic in South Asians, thus leading to earlier CAD independent of other metabolic risk factors (5). Diastolic dysfunction occurs early in the ischemic cascade (6). Thus it appears reasonable to postulate that the higher incidence of DD in MS in this region might be due to the higher incidence of early CAD in South Asians. In this context it is interesting to note that low HDL is the only variable not significantly associated with DD in a study from Serbia (7), while Khan et al, reporting on a similar study from Pakistan, showed a very strong association between low HDL and DD (4). DD is characterized by left ventricular (LV) stiffness and impaired relaxation due to LV myocardial fibrosis. Several mechanisms are responsible for this, from the cardiac remodeling seen in advanced hypertensive and diabetic patients to the insulin resistance and inflammation seen in asymptomatic patients with metabolic syndrome (8). DD accounts for 50% of all admissions for acute heart failure (9) and carries a cardiovascular mortality similar to that of systolic heart failure (10). For the asymptomatic patient, impaired exercise capacity limits activities of daily life (11). Grade 1 DD causes a 2-fold increase in all-cause and cardiac mortality (12,13). It is now imperative, from both a medical and an economic perspective, to identify patients with DD. While studies have been performed which show associations between various components of MS and DD, with clear evidence that worsening grades of diastolic dysfunction are associated with an increasing burden of metabolic syndrome (4,8), the contribution of each variable towards DD is not clear. Patients with metabolic syndrome were selected as per the IDF criteria: central obesity (defined as waist circumference of greater than or equal to 90 cm in men and 80 cm in women) plus any two out of three of the following:
i. TG levels > 150 mg/dl or specific treatment for this lipid abnormality
ii. HDL < 40 mg/dl in males or < 50 mg/dl in females
iii. Systolic BP greater than or equal to 130 mm Hg and diastolic BP greater than or equal to 85 mm Hg, or previously diagnosed hypertension
Patients with established diabetes mellitus or on treatment for the same, as well as those with impaired fasting glucose, were excluded from the study, as were patients with any myocardial disease other than diastolic dysfunction.
A total of 100 patients with non-diabetic MS were selected. Eligible patients were enrolled into the study after informed consent was taken at visit 1. A detailed history and physical examination were done, which included anthropometry, blood pressure measurement, and signs and symptoms of diastolic heart failure.
An echocardiogram was performed using Tissue Doppler Imaging and the parameters were noted. The chi-square test was used for determination of statistical significance.
Results
Out of one hundred patients, 30 were females and 70 were males. The minimum age was 21 years and the maximum 60 years, with a mean of 43.23 +/- 8.75. Weight ranged between 58 kg and 117 kg, with a mean of 78.65 +/- 12. Seventeen patients had a normal BMI (18-24.99 kg/m2), 49 patients were overweight (BMI 25-29.99 kg/m2), and 33 patients were obese (BMI > 30 kg/m2). Forty-two of the hundred patients had a past history of hypertension; of the rest, 15 patients were normotensive at the time of examination, and the remaining 43 had never been diagnosed with hypertension but had systolic and/or diastolic blood pressures elevated as per the IDF criterion. No correlation was found between the other components of metabolic syndrome and diastolic dysfunction.
Discussion
Our study showed a prevalence of 34% DD in non-diabetic patients with MS. This is significantly lower than the prevalence seen in other studies from the Indian subcontinent (1,3,4), all of which were done in diabetic MS patients. A study by Dinh et al in 2011 was done on 166 patients, divided into 3 groups: impaired glucose tolerance (IGT), diabetic, and normal glucose tolerance (NGT). The prevalence of DD was 81% in the IGT group, 96% in the diabetic group, and 61% in the NGT group (P < 0.001). Twelve percent of subjects with NGT, 28% of patients with IGT, and 35% of the diabetic group were classified as having a more severe form of LVDD (14).
Given the high prevalence of DD in diabetic patients, with some studies showing a prevalence up to 100% (15), it would be tempting to conclude that Diabetes is the predominant contributor to DD in MS.
The review of literature places the prevalence of DD in hypertensive patients in a variable range. While in the Caucasian population the incidence appears to be in the range of 40-45% (16), the prevalence in South East Asian and African populations is higher. Independent small-sample-size studies from India report the prevalence of DD in hypertensives to be between 55-70% (17,18), while a Nigerian population study placed the incidence of DD in hypertensive patients at 82% (19). The E-ECHOES study, done in the United Kingdom on hypertensive patients of South East Asian ethnicity, put the prevalence at 73%, which was comparable to that seen in the African-Caribbean population (72%). However, the parameters of DD were worse in the South East Asian group, translating to worse clinical outcomes (20). In our study 42 patients had a past history of hypertension, 57% of whom had DD; 43 were newly diagnosed, 20% of whom had DD. There are conflicting reports regarding the gender distribution of DD in MS (21). Our study did not reveal any gender inequality, which is similar to several other studies (17,18). MS is a complex constellation of various diseases which is greater than the sum of its parts. How the components interact with each other, and whether one component is more important than others, is a question which has not been answered yet. It would require studies both long-term and well-defined before we come close to the answer. While the significantly lower prevalence of DD in our study after excluding diabetics certainly appears to suggest that diabetes is a major contributor to DD in MS, more such studies are required before we can reach a conclusion. Limitations: One of the major limitations of our study is the comparatively younger group of patients studied. The average age of our patients was 43 years. Since age has a well-known association with DD in MS (17,18), it is a possibility that the lower prevalence of DD in our study could be explained by this.
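To make the hypertension comparison concrete, the sketch below reconstructs an approximate 2x2 table from the percentages reported above (57% of the 42 past-hypertension patients with DD is about 24; 20% of the 43 newly diagnosed is about 9) and runs a chi-square test in Python; the exact cell counts are assumptions, not the study's raw data.

from scipy.stats import chi2_contingency

#                 DD   no DD
table = [[24, 18],   # past history of hypertension (~57% of 42 with DD)
         [9,  34]]   # newly diagnosed hypertension (~20% of 43 with DD)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")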
Conclusion
1) The prevalence of diastolic dysfunction was found to be 34% in patients with non-diabetic/prediabetic metabolic syndrome, which is less than that found in the diabetic metabolic syndrome group.
2) No correlation was found between parameters of diastolic dysfunction and components of metabolic syndrome. There was significant correlation between a past history of dyslipidemia and hypertension and diastolic dysfunction, indicating that prolonged exposure to metabolic syndrome parameters is responsible for the development of diastolic dysfunction. K.K. collected the material. P.A. analysed the data. K.K. and P.A. wrote the manuscript in collaboration. | 2021-05-05T00:08:18.266Z | 2021-03-24T00:00:00.000 | {
"year": 2021,
"sha1": "afff1c509c0756529718614ca943ccdfdebdc471",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-331297/v1.pdf?c=1631894172000",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "4a2166adf5fefdcf58c3a257a92b0ae217e2a74a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
73489674 | pes2o/s2orc | v3-fos-license | Use of Mobile and Computer Devices to Support Recovery in People With Serious Mental Illness: Survey Study
Background Mental health recovery refers to an individual’s experience of gaining a sense of personal control, striving towards one’s life goals, and meeting one’s needs. Although people with serious mental illness own and use electronic devices for general purposes, knowledge of their current use and interest in future use for supporting mental health recovery remains limited. Objective This study aimed to identify smartphone, tablet, and computer apps that mental health service recipients use and want to use to support their recovery. Methods In this pilot study, we surveyed a convenience sample of 63 mental health service recipients with serious mental illness. The survey assessed current use and interest in mobile and computer devices to support recovery. Results Listening to music (60%), accessing the internet (59%), calling (59%), and texting (54%) people were the top functions currently used by participants on their device to support their recovery. Participants expressed interest in learning how to use apps for anxiety/stress management (45%), mood management (45%), monitoring mental health symptoms (43%), cognitive behavioral therapy (40%), sleep (38%), and dialectical behavior therapy (38%) to support their recovery. Conclusions Mental health service recipients currently use general functions such as listening to music and calling friends to support recovery. Nevertheless, they reported interest in trying more specific illness-management apps.
Introduction
The proportion of people with serious mental illness who own smartphones (81%) has recently been approaching the rate of ownership in the general population (91%) [1,2], and the use of these devices by people with serious mental illness does not seem to differ substantially from that of the general population [3]. A survey of 457 people with serious mental illness found that a large majority have access to the internet via computers (89%), smartphones (54%), and tablets (32%) [4], which affords communication, socialization, and the opportunity to obtain information. Qualitative research has suggested that people with serious mental illness are interested in, and some may already be using, electronic devices without clinical team involvement to support their recovery [5]. For example, some people with serious mental illness reported using the internet to understand the precautions and side effects of medications; Instagram, to follow people who post daily positive messages; and YouTube, to watch videos that provide guided progressive muscle relaxation [5].
The concept of recovery began among mental health service users, but mental health personnel, including researchers, have adopted the term [6]. Although recovery is an idiosyncratic concept, qualitative research has identified common themes: personal control over illness management, striving towards one's goals, meeting one's needs, and having a sense of responsibility [7][8][9]. People use routines and activities, such as employment and education, and engage with social support systems to promote their recovery [10]. Currently available technology may further empower people to manage their own recovery.
People with serious mental illness have recognized that technology could become a larger part of their recovery in the next few years [4,11]. Researchers are exploring the integration of smartphone and computer-based apps into psychiatric care for medication management, symptom monitoring, and shared decision making [12][13][14]. However, not all integrations have been adopted successfully by clients and mental health practitioners. Our study examined current use and interest in future use of specific features and apps of electronic devices by people with serious mental illness to support their recovery.
Participants
This study included a convenience sample of people with serious mental illness who were receiving mental health services from one small and one large mental health agency in New Hampshire. Of the 68 people who completed the survey, we excluded five participants from data analysis: two participants had inconsistent responses and three participants did not own an electronic device. The final sample consisted of 63 participants (31 men and 32 women). Participants ranged in age from 19 to 75 years (mean 41.6; median 42; SD 13.3), and the majority were white (84%, n=53) and never married (63%, n=40). The highest level of education attained by participants was some college or technical school education (38%, n=24), followed by a high school diploma or equivalent (30%, n=19). Nearly two-thirds of the participants were unemployed and not attending school (60%, n=38).
Measures
We developed a survey (Multimedia Appendix 1) based on findings from interviews of mental health service recipients [5]. This survey first assessed electronic device ownership and frequency of use. For those who owned a computer, tablet, or smartphone, we asked questions differentiating between general everyday use of these electronic devices and specific use for supporting recovery. The questions addressed several topic areas: frequency of use for general purposes, frequency of use for supporting recovery, ease of use, use of technology within mental health care, interest in trying new technologies, and interest in agency-based technical support services. We know from the qualitative study, which informed the development of this survey, that mental health service recipients use general/nonhealth apps (eg, Instagram, Facebook, and online games) to help them with recovery.
The participants rated whether their clinician discussed technology for supporting recovery, on a scale from 1 (no, never) to 10 (yes, at every visit). Participants also rated how comfortable they felt seeking/searching for help in using their electronic devices, on a scale from 1 (very uncomfortable) to 10 (very comfortable).
Because recovery is a highly individual experience [6], we allowed people to use their own understanding of the recovery process rather than an explicit definition. We pretested the survey for clarity and understanding with two consumers and revised the survey based on their feedback.
Procedure
The Dartmouth College Committee for Protection of Human Subjects in Hanover, New Hampshire, approved this study, which followed the principles outlined in the Helsinki Declaration. Over 4 weeks, we recruited participants from a community mental health center and a dual diagnosis treatment program in New Hampshire. The community mental health center, located in a rural area, serves approximately 1500 adults with serious mental illness each year. The city-based dual diagnosis treatment program serves between 30 and 40 men with co-occurring serious mental illness and substance use. A researcher and a research assistant approached clients in the waiting room and common areas of these centers, explained the study, and asked whether they would be interested in participating. In a few instances, the case managers approached the clients. Only clients who could provide informed consent and were receiving services at one of the two sites were eligible. We provided eligible, interested clients a tablet or paper-based survey, which included a description of the study and a consent statement to complete the survey. Consenting participants completed the survey within 5 minutes. The researcher and research assistant were available to answer any questions and help those who requested assistance in completing the survey.
Data Analysis
We identified the five most frequently and least frequently used functions/apps for general everyday purposes. We then identified the five most frequently used functions/apps for supporting recovery. We also identified the top five functions/apps that participants were most interested in using to support their recovery in the future. We then used the Fisher exact test to identify whether these frequencies differed between the two sites. We used the Spearman rank correlation to assess whether age was associated with (1) the extent to which clinicians discuss technology with their clients and (2) the client's level of comfort with seeking help for using electronic devices.
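For illustration, the two tests named above can be reproduced with standard statistical libraries. The sketch below is a minimal example, not the authors' actual analysis; the 2×2 table and the age/rating vectors are hypothetical stand-ins for the survey data.

```python
# Minimal sketch of the two tests described above (hypothetical data).
from scipy.stats import fisher_exact, spearmanr

# Hypothetical 2x2 table: rows = site (community center, dual diagnosis
# program), columns = used a given app for recovery (yes, no).
table = [[25, 20], [8, 10]]
odds_ratio, p_fisher = fisher_exact(table)
print(f"Fisher exact test: OR={odds_ratio:.2f}, P={p_fisher:.3f}")

# Hypothetical paired observations: participant age vs. 1-10 rating of
# how often clinicians discussed technology for recovery.
ages = [22, 35, 41, 48, 53, 60, 67]
ratings = [6, 4, 5, 3, 2, 3, 1]
rho, p_spearman = spearmanr(ages, ratings)
print(f"Spearman rank correlation: r_s={rho:.2f}, P={p_spearman:.3f}")
```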
Use to Support Recovery
To specifically support their recovery, participants most commonly listened to music (60%, n=38), accessed the internet (59%, n=37), called (59%, n=37), texted (54%, n=34), and used the clock feature to track time (41%, n=26). Participants' frequency of use of the apps for either general everyday or recovery purposes did not significantly differ between the two sites.
Interest in Incorporating Technology Into Mental Health Recovery
Participants averaged a score of 3.2 (SD 2.6) on a scale from 1 (no, never) to 10 (yes, at every visit) when describing how frequently their case manager or clinician discussed the ways technology can support their recovery. The frequency of discussing technology with case managers or clinicians did not vary with age (rs=-.19, P=.22). Among the two-thirds of participants (67%, n=42) who indicated that they would "probably" or "definitely" try new apps or technology to support their recovery, the most popular areas of interest included anxiety (45%, n=19), mood management (45%, n=19), mental health symptom monitoring (43%, n=18), cognitive behavioral therapy (40%, n=17), sleep (38%, n=16), and dialectical behavior therapy (38%, n=16).
A total of 48% (n=30) of participants found it easy to use new technologies; 29% (n=18) reported that it was sometimes easy and sometimes difficult to use new technologies. Participants reported that they either searched online or solicited help from family or friends when they needed help using their device (75%, n=47). They rated their level of comfort with these approaches as 7.6 (SD 3.0) on a scale from 1 (very uncomfortable) to 10 (very comfortable). The level of comfort in seeking support did not vary with age (rs=-.17, P=.27). Further, 60% of the participants indicated that they would "definitely" or "probably" work with an agency staff member who could help them use their devices, if such a person were available.
General Findings
Nearly all participants had access to devices that could connect to the internet. Between 40% and 60% identified specific features/apps they were currently using to support their recovery, namely, listening to music, accessing the internet, calling, texting, and keeping track of time. Two-thirds of the participants indicated that they were interested in trying new technologies to support their recovery. Participants were most interested in learning how to use apps that addressed anxiety, mood management, mental health symptom monitoring, cognitive behavioral therapy, sleep, and dialectical behavior therapy. Participants were moderately comfortable searching the internet or asking family or friends when they needed assistance using their device but were open to using technical support services if they were made available at the mental health center.
Supporting Recovery
The majority of participants routinely used nonmental health features/apps, specifically those built automatically into electronic devices (eg, internet browser, texting apps, calling apps, and time tracking apps) to support their recovery. For example, one participant used the alarm on his phone for medication reminders, while another used the internet browser to learn more about mental health diagnoses [5]. In the present study, a substantial number of participants were interested in using mental health apps. These apps are publicly available; therefore, the following question arises: Why were the participants not using these apps? First, participants with low income have budget constraints that limit the brands of devices they can own and data plans they can afford, which directly impacts access to electronic resources [5]. Second, clients may view these apps as clinical tools that require support from a clinician. Our study participants reported minimal discussion with their clinicians about using technology to support recovery. Third, clients may not know where to find specific apps or how to decide on which ones to use. Mental health centers have a clear opportunity to involve a staff member with expertise in the field of mental health apps, such as a technology specialist, who can inform both clients and clinicians of vetted tools that may help support recovery efforts [5,15]. Evidence suggests that low-level support from professionals and the involvement of peers in a technology-supporting role would be helpful [16]. The majority of participants in this study were open to using such types of resources.
Between 38% and 45% of participants endorsed interest in apps in six target areas related to mental health, indicating that more than half of the participants are not interested in these apps. Consistent with the principle of shared decision making, researchers and clinicians could begin by taking advantage of the apps people are already using. Based on the study findings, participants found the apps that connect them to others or provide information most helpful for recovery. Clinicians may consider supporting their clients in using these features. Technology specialists may narrow their search to apps that have a social component or provide the latest news in mental health. Researchers developing mental health apps may consider including social networking and components that provide new and changing content about mental health. Researchers and clinicians may also consider social factors that influence the use of electronic devices, such as education and employment.
Limitations
Our study used a convenience sample in New Hampshire that lacked ethnic/racial diversity. We did not collect information on participants' diagnoses. Behaviors described here were based on self-report, and people's self-reported attitudes may not predict their behaviors.
Conclusions
People with serious mental illness use common features of smartphones, personal computers, and tablets to support their recovery, independent of the care they receive from mental health clinics. Clinicians and researchers may consider including a discussion of the apps clients are already using to monitor how effectively these tools support recovery efforts over time. A large minority of participants expressed interest in mental health-specific apps. Because the combination of interest, support, and acceptance is a key driver of adoption, clinicians and researchers may find successful adoption of these apps by starting with these clients and their choices rather than with all clients and specific apps. | 2019-02-28T18:24:57.427Z | 2019-02-01T00:00:00.000 | {
"year": 2019,
"sha1": "39f463dec9f86e1a4ce1f0afa9695b469dd34573",
"oa_license": "CCBY",
"oa_url": "https://jmir.org/api/download?alt_name=mental_v6i2e12255_app1.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "8416c7973e6b56e8efe452cbef25386195cff14d",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
17044411 | pes2o/s2orc | v3-fos-license | Multiple stellar generations in massive star forming complexes
The formation of massive stars is an outstanding problem in stellar evolution. However, it is expected that they are (predominantly) born in hierarchical environments within massive young clusters, which in turn are located within larger star forming complexes that reflect the underlying structure of the natal molecular cloud. Initial observations of such regions suggest that multiple generations of stars and proto-stars are present, necessitating a multiwavelength approach to yield a full (proto-)stellar census; in this contribution we provide an overview of just such an observational approach for Galactic examples, focusing on the G305 complex.
Introduction
Imaging of external galaxies reveals that stellar formation yields large star cluster complexes of 10s-100s of parsec in size, and ≫10^4 M⊙ in integrated mass. These are luminous across the electromagnetic spectrum; with emission at radio wavelengths from ionised gas, far-IR & submm from cold molecular material, IR from heated dust, optical-UV from the stellar population and X-rays from both pre-MS and massive stars. Therefore a multiwavelength approach is required to understand the ecology of such regions - and hence infer masses for unresolved regions from their integrated spectral energy distributions (SEDs) - as well as the evolution of massive (>40 M⊙) stars from cold molecular cores through to the Main Sequence. The latter goal is particularly important, since our knowledge of this process suffers from few current observational constraints, and yet very massive stars play an inordinate role in the excitation of their environment via their UV radiation field and wind energy. Consequently, in order to address these interrelated issues we are undertaking such a study of several Galactic star forming regions, of which the G305 complex is of particular interest given current estimates for its stellar content (Clark & Porter 2004). In this contribution we briefly review the observational dataset acquired for it as a result of this program and highlight some initial results arising from it. These estimates imply the presence of >30 canonical O7 V stars (Clark & Porter 2004). Morphologically, it appears as a tri-lobed wind blown bubble with a maximal extent of ∼30 pc centred on the Young Massive Clusters Danks 1 & 2 (Fig. 1). Vigorous ongoing star formation is present on the periphery of the region as evidenced by significant IR-radio emission and the presence of numerous masers (Sect. 2.2).
The recent star formation history of G305
The location of ongoing star formation within the complex is indicative of triggered, sequential activity initiated by Danks 1 & 2. The presence of at least one Wolf-Rayet -the WC star WR48a -suggests that star formation must have been underway for at least ∼2.5 Myr while, following the arguments presented in Clark et al. (2009) for W51, the lack of a population of RSGs suggests an upper limit to the duration of the 'starburst' of ≤10 Myr. In order to more fully constrain the properties of Danks 1 & 2 and hence to determine whether they could have triggered the subsequent generations of star formation, we have undertaken near-IR imaging & spectroscopic observations of them with the HST & VLT/ISAAC and present a subset of the data focusing on Danks 1 in Figs. 2 & 3.
A full analysis of these data will be provided in Davies et al. (in prep.) but we highlight that both clusters appear to have integrated masses ≫10^3 M⊙. Surprisingly, given their apparent proximity (a projected separation of ∼3.5 pc) there appears to be a notable age difference (∼2-3 Myr) between them, evident in both the spectral types of cluster members and the location of the Main Sequence turn-on. Focusing on Danks 1, we identify a number of emission line objects with spectra consistent with O Iafpe/WN7-9h stars; the cluster being reminiscent of the Arches in the Galactic Centre. The presence of such stars is of interest since they are expected to be massive core-H burning objects in which very high mass loss rates cause them to present a more evolved spectral type. Combined with their prodigious UV-fluxes, they are likely to be significant sources of feedback, and detailed non-LTE model atmosphere analyses of these objects are currently underway in order to quantify this. In contrast such stars are absent in Danks 2, with the presence of a WC star and O supergiants of a later spectral type indicating an older stellar population.
However, massive (post-)MS objects are not restricted to these clusters. As well as the dusty WC star WR48a, recent IR observations have located a further 3 WC and 1 WN stars within the wind blown bubble (Shara et al. 2009; Mauerhan, van Dyk & Morris 2009), suggesting that an additional dispersed population is present within the complex, although their origin - e.g. ejected from a cluster or formed in situ - is uncertain. In this regard it closely resembles 30 Dor, which Walborn & Blades (1997) showed hosts a young central cluster and a diffuse, older population distributed across the wind blown cavity with an additional (pre-MS) component located on the periphery. Massive stars also appear present on the perimeter of G305, with Leistra et al. (2005) showing that the young cluster found within the cavity G305.254+0.204 contains at least one early O star. Moreover, early OB pre-MS stars are also found in the bubble PMN1308-6215 to the NW of the complex; the spectrum presented in Fig. 4 being dominated by H I line and CO bandhead emission, indicative of a hot ionising source surrounded by a cool accretion disc/torus, respectively. A full presentation and analysis of these and other data on the pre-MS population of G305 will be provided in Clark et al. (in prep.).
Earlier phases of (triggered) star formation
We next turn to the more deeply embedded massive protostars and the reservoir of cold molecular material. The former may be identified with ultracompact H II regions, very bright mid-far IR sources and H2O & methanol maser emission, while the latter may be mapped via molecular tracers such as NH3 or sub-mm continuum emission from cold (≤50 K) dust. Hill et al. (2006) presented a survey of cold dust for selected regions within G305, finding a total of ∼23,000 M⊙ of material located in clumps with masses up to ∼4,500 M⊙, although it is expected that these will comprise lower mass subclumps at higher spatial resolution. Recently, Hindson et al. (2010) undertook a molecular survey of the whole complex which revealed a total reservoir of cold gas of ∼6×10^5 M⊙ (Fig. 5); even allowing for a relatively low star formation efficiency (<10%) this is sufficient to yield a substantial stellar population. In order to provide a higher resolution map of this material and to determine its properties such as clump mass function and temperature, we have obtained both APEX/LABOCA and Herschel far-IR - sub-mm observations. A preliminary reduction of the 870µm LABOCA data is provided in Fig. 6, which shows the 'skeleton' of cold molecular material upon which current star formation is occurring (see Clark et al. in prep. & Thompson et al. in prep. for a full analysis).
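As a back-of-the-envelope check of that last statement, the stellar mass yielded by the reservoir scales linearly with the assumed star formation efficiency (SFE); the SFE values below are illustrative assumptions, not results from the paper.

```python
# Illustrative stellar yield of the G305 cold-gas reservoir for assumed SFEs.
M_gas = 6e5  # total molecular gas reservoir in solar masses (Hindson et al. 2010)

for sfe in (0.01, 0.05, 0.10):  # assumed star formation efficiencies
    M_stars = sfe * M_gas
    print(f"SFE = {sfe:4.0%} -> stellar yield ~ {M_stars:,.0f} Msun")
# Even at SFE < 10% the yield (~6e4 Msun) exceeds the >>1e3 Msun masses
# inferred for Danks 1 & 2, i.e. a substantial further stellar population.
```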
Finally, SEDs constructed from the full near-IR to sub-mm datasets allow the identification of Massive Young Stellar Objects (MYSOs) via their characteristic colours (e.g. Hoare et al. 2005), as well as a determination of their integrated bolometric luminosities. We show the location of such MYSOs in a subfield of G305 in Fig. 7 (as well as H2O and methanol masers; Hindson et al. 2010). Clearly significant star formation that will result in a new population of massive stars is currently underway, and appears to be located on the surface of the molecular cloud adjacent to the nearby stellar cluster, suggesting that it has been triggered by the action of the OB stars contained within.
Concluding remarks
A multiwavelength approach to the study of star forming complexes allows us to locate the different stellar populations within these regions and hence determine the propagation (or otherwise) of star formation through the host GMC. Model atmosphere analysis of the massive stellar population constrains the feedback from such stars as well as helping date the onset of star formation via comparison to theoretical evolutionary tracks. Full analysis of the near-far IR SEDs of embedded sources yields their bolometric luminosity and hence an estimate of mass, while the far-IR -sub-mm SED will play a similar role for cold molecular cores -the first stage of massive star formation. Finally a synthesis of these data will provide a complete census of star formation within the cluster complex, an estimate of the efficiency of this process and -via comparison of the mass functions of the differing populations -constraints on the physics governing GMC fragmentation and subsequent cluster/star formation. | 2010-12-15T11:54:17.000Z | 2010-12-15T00:00:00.000 | {
"year": 2010,
"sha1": "4b91c334d1d6b34bd7af324786bedcf684938d9b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "36cf2c1e19ad4c74d0978de292d8d1a4209991df",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
234089108 | pes2o/s2orc | v3-fos-license | The Potential of Heart Risk Score to Detect the Existence and Severity of Coronary Artery Disease According to Syntax Score at the Emergency Department
Background: Patients presenting with chest pain (CP) at the emergency departments are challenging cases for the physicians to make valid decisions with regard to acute coronary syndrome, which needs urgent medical intervention, while the majority of the admitted patients are free from serious cardiac problems. The present study was done to investigate the potential of Heart Risk Score in detecting the existence and severity of coronary artery disease in CP patients based on Syntax score. Methods: Among CP patients who were admitted at the emergency department, 100 participants were selected. Heart Risk Score was calculated for each participant on admission. Two independent cardiologists also calculated the Syntax score after angiography was done for each patient. Statistical analysis was performed to assess the correlation between Heart Risk Score and Syntax score. Results: The median age of participants was 58.42±12.42 with the majority (65%) being male. The mean Heart Risk Score of the patients was 5.76±1.56 (min=3, max=9) and the mean Syntax score was 14.82±11.42 (min=0, max=44.5). The Pearson correlation coefficient between Heart Risk Score and Syntax score was 0.493, which was statistically significant (P<0.001). According to our findings, a Heart Risk Score of more than 6 has a 52% sensitivity and a 74.7% specificity to detect extensive coronary artery involvement (Syntax score>22). Conclusion: We found that there is a positive and significant correlation between Heart Risk Score and Syntax score, which underlines the importance of using Heart Risk Score in emergency departments to reduce unnecessary invasive interventions in patients presenting with chest pain.
Introduction
Chest pain (CP) is one of the most prevalent reasons for admission to emergency departments (ED) (1).
As there is a possibility that CP relates to life-threatening events such as acute coronary syndrome (ACS), precise diagnosis and efficient treatment improve prognosis significantly (2). In real practice, physicians at the ED assume CP cases to be coronary artery disease until proven otherwise, possibly because of legal medical concerns and uncertainties, starting a series of invasive and non-invasive treatments, while fewer than 25% of such patients truly have ACS. Patients with CP are hospitalized and undergo different testing, imaging, and even invasive procedures like coronary angiography (2).
This approach leads to unnecessary hospitalization, and unnecessary cost. Reduction in the burden of hospitals relies on the ability to differentiate patients with ACS from those without ACS.
Normal levels of troponin or normal ECGs do not completely rule out ACS (2). Risk stratification tools are advised by international guidelines to be implemented in patients with CP (3). Heart Risk Score is one of the valuable risk scores in risk stratification of patients presenting with chest pain. Heart Risk Score is based on five convenient elements in the ED, including history, ECG, age, coronary risk factors, and troponin (3). Patients with a risk score of ≤3 are considered low risk, those with a score of more than 7 high risk, and those between these two values are considered at moderate risk for major adverse cardiac events (MACE) (3). While there are also other scoring systems like TIMI and GRACE, and they are beneficial in patients with proven ACS, they are less practical in low-risk patients presenting with CP to the ED (1, 4-6).
As stated, studies on Heart Risk Score mainly sought to find the incidence of MACE during early and late follow-up periods, and the correlation of Heart Risk Score with angiography findings was largely ignored in previous studies (7-12). In this study we tried to assess the ability of Heart Risk Score to identify the extension of coronary artery disease according to Syntax score - which is a standard tool used to determine the extent and severity of coronary artery disease - in patients with CP admitted to the ED (13).
Materials And Methods
This study was performed on all CP patients above the age of 18 who were referred to the ED of Al-Zahra Heart Hospital, Shiraz, Iran. All the participants were asked to sign an informed consent. All methods were carried out in accordance with relevant guidelines and regulations. Patients who were discharged from the ED immediately, had a diagnosis of ST-elevation myocardial infarction, or had a non-coronary etiology of CP like aortic dissection, pneumothorax, and pneumonia were excluded from the study. We made sure that the study and the participation of patients did not influence diagnostic and therapeutic approaches.
A 12-lead ECG was obtained from all the participants upon entry to the ED. ECGs were interpreted by a cardiologist according to the Manchester scoring criteria (14). A predefined questionnaire was filled in to obtain the patients' characteristics, including demographic data and the presence of risk factors like smoking, hypertension, diabetes, hypercholesterolemia, obesity (BMI > 30), and prior stroke, MI, or peripheral atherosclerotic diseases. Serum troponin level was measured in all the participants.
Heart Risk Score was calculated for each patient. They were followed up until after angiography. Two expert independent cardiologists, who were blinded to the patients' characteristics, calculated their Syntax score separately using an online software tool (www.syntaxscore.com), and the mean value was considered for the following analysis. A Syntax score of ≥23 was considered significant occlusive coronary artery disease. Patients who were not candidates for angiography, or who did not undergo angiography due to personal refusal or any other reason that resulted in inaccessibility of the angiogram, were omitted from the study.
Finally, statistical analyses were done on 100 patients with SPSS software, version 16. Categorical and continuous variables were presented as number (%) and mean ± SD, respectively. The Pearson correlation coefficient, ROC curve, and AUC were measured.
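The core of this analysis - a Pearson correlation followed by an ROC evaluation of a Heart Risk Score cutoff against a dichotomized Syntax score - can be sketched with open-source libraries as below. The arrays are hypothetical placeholders, not the study data; Syntax > 22 is used as the positive label and 6 as the cutoff, as in the paper.

```python
# Sketch of the statistical plan: correlation, AUC, and cutoff metrics
# (hypothetical data standing in for the 100 patients).
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

heart = np.array([3, 4, 4, 5, 5, 6, 6, 7, 8, 9])          # Heart Risk Scores
syntax = np.array([0, 5, 8, 12, 20, 18, 25, 30, 28, 44])  # Syntax scores

r, p = pearsonr(heart, syntax)
print(f"Pearson r = {r:.3f}, P = {p:.4f}")

severe = (syntax > 22).astype(int)   # extensive CAD, as defined in the paper
print(f"AUC = {roc_auc_score(severe, heart):.2f}")

cutoff = 6                           # scores >= 6 flag extensive CAD
pred = (heart >= cutoff).astype(int)
sens = (pred & severe).sum() / severe.sum()
spec = ((1 - pred) & (1 - severe)).sum() / (1 - severe).sum()
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```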
Results
The age range was 20-87 years old with a median of 58.42 ± 12.42 years. Male was the dominant gender (65%) among participants. Baseline characteristics of patients are demonstrated in Table 1. Patients were classified according to the severity of Heart Risk Score into low, moderate, and high, with 6%, 62%, and 32% of total patients in each category, respectively. The Syntax score in 75% of patients was below 22. The angiography results were also analyzed according to the number of main vessels involved (Table 2). Data were presented as mean ± SD or n (%). SVD: single vessel disease, SF: slow flow, MB: muscle bridge, CABG: coronary artery bypass grafting, PCI: percutaneous coronary intervention. Figure 1 demonstrates the changes in Heart Risk Score and Syntax score according to the severity of coronary artery disease, classified as normal, single vessel disease, two vessel disease, and three vessel disease based on angiography findings. With increasing complexity of the disease, both scores increased, although the Heart Risk Score in patients with 3VD was lower than that in patients with 2VD. Data were presented as mean ± SD and number (%) for continuous and categorical data, respectively.
The correlation between Heart Risk Score and Syntax score was found to be 0.493 (P < 0.001) (Fig. 2). This significant positive correlation revealed that both indices change in association with each other and in the same direction.
The ROC curve was used to assess the predictive value of Heart Risk Score based on Syntax score. Figure 3 shows that at a cut-off point of 6, the sensitivity of Heart Risk Score is 52% and the specificity is 74.7% for the prediction of extensive coronary artery disease as evidenced by a high Syntax score. The AUC was statistically significant (67%).
Discussion
This study aimed to evaluate the correlation of Heart Risk Score with Syntax score, thereby evaluating the ability of Heart Risk Score to predict the existence and severity of coronary artery disease in an Iranian population who were admitted to the ED with CP. As far as we know, this is the first time that a study has been done to directly correlate Heart Risk Score with Syntax score. Our study showed that the early diagnosis of patients with complex coronary artery disease is possible by using Heart Risk Score at the ED. A high Heart Risk Score indicates severe CAD, substantiated by a significant positive correlation with Syntax score (p < 0.001, R = 0.493). Syntax score is an approved scoring system that considers the number of lesions, their functional importance, and the complexity of lesions. This score classifies patients into low (≤22), medium (23-32), and high risk (≥33) (15). Syntax score is a suitable indicator for early and long-term clinical outcomes (13,15,16). Also, it helps the cardiologist to choose the appropriate revascularization modality (17). However, its use is restricted because it is an angiography-based scoring system.
In the present population, coronary angiography was performed in 6 patients who had a Heart Risk Score of ≤3. All these patients had normal or nearly normal coronary arteries with respect to atherosclerotic plaque formation (Syntax score < 15). We showed that a Heart Risk Score of ≥6 identifies coronary artery disease patients with Syntax score ≥22 with sensitivity, specificity, and negative predictive value of 52%, 74.7%, and 82.3%, respectively. Of all the patients with normal angiography results, only one had a Heart Risk Score of more than 7, which shows that Heart Risk Score can differentiate patients with extensive coronary artery disease from those without extensive coronary artery involvement. There is consistency between our study and prior findings on implementing urgent and detailed interventions in patients with a Heart Risk Score of ≥7 (7-12). These findings reinforce the need for a valid and reliable tool like Heart Risk Score to reduce unnecessary angiography and the consequently increased burden.
Heart Risk Score was initially developed to identify patients who benefited from early discharge. A low Heart Risk Score indicates low-risk patients and is useful for decreasing the duration of hospitalization and the relevant costs (1,18,19). Heart Risk Score was reported to be a good to excellent indicator for determining the risk of MACE in patients with CP at the ED (1). In a retrospective study on 29196 patients who were referred to the ED because of CP, a Heart Risk Score of 5 was considered for early discharge. They reported that the probability of repeated cardiovascular events in those with a Heart Risk Score of <5 was only 1.1% (20). Defining an accurate cut-off value is useful in postponing the administration of clopidogrel and ticagrelor, ADP-receptor inhibitors, for patients who may undergo CABG after primary examinations.
In a prospective study on 2440 patients with CP in the ED, the Heart Risk Score of nearly one third of patients was 0-3, with a risk of 1.7% for MACE, showing the feasibility of quick discharge without any serious concerns about upcoming adverse events; this strategy also avoids unnecessary costs. Those with a Heart Risk Score of 7-10 constituted 17.5% of the population, with a 50.1% risk of MACE, and were referred for quick coronary intervention (1). The risk of MACE in a population of low-risk (Heart Risk Score of ≤3) CP patients was reported to be 0.6% (21,22). This shows the substantial potential of Heart Risk Score as a reliable tool in reducing cardiac testing.
In some studies, the association of other risk score systems like GRACE and TIMI was evaluated with regard to Syntax score (23-25). TIMI and GRACE are among the scoring systems that were developed for risk stratification of ACS patients in the CCU (1). Clinicians sometimes use these scoring systems for CP patients in the ED, which includes an undifferentiated population, despite the fact that they are not tailored for this purpose (26-29). Heart Risk Score is superior to TIMI and GRACE in predicting the risk of cardiovascular events for all-cause CP patients in the ED. It helps care providers to choose appropriate treatment. Screening of 1748 patients presenting with CP at the ED revealed that the ability of Heart Risk Score to identify low-risk individuals, as well as to predict MACE, was higher than that of GRACE and TIMI (11). Also, GRACE score calculation needs a computer, which limits its use. In contrast, Heart Risk Score, which can be calculated from admission data typically within 1 h, is specifically designed for patients with CP in the ED. The strongest scoring system should identify the maximum number of true low-risk patients along with low-risk patients who are at risk of developing MACE. Readily available clinical data and computer-independent calculation of Heart Risk Score make it a valuable tool for early evaluation of patients with CP admitted to the ED with respect to prognosis, clinical outcome, and applying therapeutic choices (1). Our study further added evidence for the utility of this score by incorporating Syntax score and showing the correlation of Heart Risk Score with Syntax score.
Limitations Of Study
The main limitation is the low number of patients with a low Heart Risk Score who underwent coronary angiography. Another study with a longer duration to include more of such patients would be especially useful. Also, although the Heart Risk Score generally increased with Syntax score, the Heart Risk Score of patients with 3VD involvement was lower than that of patients with 2VD; because of the low number of cases we could not analyze the reason for this unexpected finding.
Conclusion
There is a direct and positive correlation between Heart Risk Score and Syntax score, showing that the higher the Heart Risk Score, the more extensive the involvement of coronary arteries in the process of atherosclerosis.
Declarations
Ethics approval and consent to participate: This study was conducted in accordance with the Helsinki Declaration. We also received approval from the Research Ethics Committee of Shiraz University of Medical Sciences. All the patients signed an informed consent.
Consent for publication: Not applicable
Availability of data and material: The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Competing interests: The authors declare that they have no competing interests.
Funding: This study was supported by Shiraz University of Medical Sciences. The funding body had no role in the design of the study; the collection, analysis, and interpretation of data; or the writing of the manuscript.
Authors' contributions: FD, AZ, PI, and HBD contributed substantially to the design and conduct of the study. FD, AZ, PI, MB, and HBD acquired data. FD, IRJ, and HBD had roles in data interpretation. All authors read and approved the final manuscript | 2021-05-10T00:04:27.017Z | 2021-01-28T00:00:00.000 | {
"year": 2021,
"sha1": "ad65e9916c367deab6d7463d581c69cb00f69433",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21203/rs.3.rs-143921/v1",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "230d51e257919c12c4d38eaeb48d25ba93ba799a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258862994 | pes2o/s2orc | v3-fos-license | Deferoxamine in the management of COVID-19 adult patients admitted to ICU: a prospective observational cohort study
Background: COVID-19 infection is associated with high mortality, and despite extensive study, the scientific community is still working to find a definitive treatment. Some experts have postulated a beneficial role of deferoxamine. Aim: The aim of this study was to compare the outcomes of COVID-19 adult patients admitted to the ICU who received deferoxamine to those who received standard of care. Methods: A prospective observational cohort study, in the ICU of a tertiary referral hospital in Saudi Arabia, to compare all-cause hospital mortality between COVID-19 patients who received deferoxamine and standard of care. Results: A total of 205 patients were enrolled, with an average age of 50.1±14.3; 150 patients received standard of care only, and 55 patients additionally received deferoxamine. Hospital mortality was lower in the deferoxamine group (25.5 vs. 40.7%, 95% CI=1.3–29.2%; P=0.045). Clinical status score upon discharge was lower in the deferoxamine group (3.6±4.3 vs. 6.2±4, 95% CI: 1.4–3.9; P<0.001), as was the difference between discharge score and admission score (indicating clinical improvement). More patients admitted with mechanical ventilation were successfully extubated in the deferoxamine group (61.5 vs. 14.3%, 95% CI: 15–73%; P=0.001), with a higher median of ventilator-free days. There were no differences between groups in adverse events. Deferoxamine was associated with lower odds of hospital mortality [odds ratio=0.46 (95% CI: 0.22–0.95); P=0.04]. Conclusions: Deferoxamine may have mortality and clinical improvement benefits in COVID-19 adults admitted to the ICU. Further powered and controlled studies are required.
Introduction
More than 2 years have passed since the first cases of viral pneumonia caused by a novel coronavirus emerged from China [1], followed by the worldwide spread of the COVID-19 viral infection and its declaration as a pandemic by the WHO [2]. Apart from the protective benefits of highly efficacious vaccines developed by different companies [3], no definite treatment of COVID-19 is approved or recommended, perhaps with the exception of remdesivir and dexamethasone in certain conditions [4].
HIGHLIGHTS
• To our best knowledge, this is the first clinical trial to evaluate the effects of deferoxamine on COVID-19 adult critically ill patients. • Deferoxamine was associated with a lower all-cause hospital mortality rate. • The deferoxamine group showed improvement of clinical status, more frequent extubation, and more ventilator-free days (VFD). • Larger randomized clinical trials are required to ascertain the benefits of deferoxamine in COVID-19 patients.
Recently, several publications have postulated a possible beneficial role of iron-chelating agents, particularly deferoxamine, in the treatment of COVID-19 patients [5-8]. The authors of those commentaries and reviews built their hypotheses on an understanding of pathophysiologic mechanisms, such as the formation of a complex with porphyrin by the COVID-19 virus dissociating iron [6], and the implication of increased serum iron in the induction of oxidative stress due to the formation of reactive oxygen species, which may lead to lung damage and deterioration of pulmonary functions [8]. Reactive oxygen species also cause an upregulation of proinflammatory mediators such as interleukin (IL) 1B, IL-6, and tumor necrosis factor-α [9]. Furthermore, iron may be required for viral replication of COVID-19, as is the case for other RNA viruses [10], and chelating iron may reduce viral replication. Consequently, deferoxamine, an iron-chelating agent approved for the treatment of iron overload, may have a beneficial impact on COVID-19 patients, in addition to its possible role in immune modulation, as seen in the upregulation of B-lymphocytes and neutralizing antibody titers in animal models [11,12]. No matter how compelling these hypotheses are, they remain opinions of their authors, based solely on in vitro observations or, at best, animal model results. Clinical studies of any design are currently lacking with regard to the role of deferoxamine in the management of COVID-19 infection, and the only available patient data are those that correlate serum iron or ferritin levels with the severity or outcomes of COVID-19 patients [13].
Intrigued by the promising role of deferoxamine, we conducted this study under the hypothesis that deferoxamine may improve outcomes of COVID-19 patients, with the main aim of comparing all-cause hospital mortality between patients who receive deferoxamine and those who do not.
Methods
This was a single-center prospective observational cohort study conducted in the ICU of King Saud Medical City (KSMC), Riyadh, Saudi Arabia. KSMC is the largest government hospital in the central region of Saudi Arabia. It has a capacity of 1200 inpatient beds; the ICU originally included 100 beds but was expanded during the COVID-19 pandemic to include 127 beds, half of which are single-room beds while the rest are open cohorting areas. All ICU beds are fully equipped with capabilities for invasive and noninvasive monitoring and ventilation. The ICU is run 24/7 by intensivists, with a 1:1 nurse-to-patient ratio. During the COVID-19 pandemic, KSMC became the tertiary referral center for positive cases, only transferring stabilized patients to other hospitals when new cases were boarding in the emergency department. The ICU generally follows the COVID-19 management guidelines issued by the Saudi Ministry of Health [14]. The study was conducted between 1 October and 31 December 2021. The work has been reported in line with the STROCSS criteria [15] (Supplemental Digital Content 1, http://links.lww.com/MS9/A65).
Inclusion and exclusion criteria
Any patient admitted to the ICU during the study period was eligible for enrollment, as long as they fulfilled the following criteria: at least 18 years of age; confirmed positive COVID-19 infection by reverse transcriptase PCR through a nasopharyngeal swab within less than seven days; in addition to at least one of the following: • Peripheral oxygen saturation less than 90% for 10 min on room air. • Respiratory rate more than 30/min. • Partial pressure of oxygen to fraction of inspired oxygen ratio (P/F ratio) less than 300. • Requirement of supplemental oxygen to maintain oxygen saturation of at least 95%, through nasal cannula, face mask, nonrebreathing mask, or high flow nasal oxygen. • Noninvasive mechanical ventilation, including biphasic or continuous positive airway pressure. • Invasive mechanical ventilation via endotracheal intubation or tracheostomy tube. We excluded pregnant or lactating women, known cases of HIV, known cases of pulmonary tuberculosis, patients with a history of receipt of deferoxamine within the last 6 months, those who refused to participate in the trial, and those admitted to the ICU with a Do Not Resuscitate (DNR) order or expected to die within 24 h of ICU admission according to the treating consultant intensivist. We divided enrolled patients into two groups: the deferoxamine group and the standard of care (SOC) group.
Outcomes
The primary outcome was the percentage of all-cause hospital mortality between the deferoxamine and SOC groups, whereas secondary outcomes included ICU length of stay (LOS), hospital LOS, newly grown bacterial cultures (from any source), any adverse events (defined in the Supplementary File, Supplemental Digital Content 2, http://links.lww.com/MS9/A66), and the difference in clinical status of the patients between ICU admission and hospital discharge according to a progression scale previously used [16] (details in Supplementary Table S1, Supplemental Digital Content 2, http://links.lww.com/MS9/A66), calculated as the clinical status at hospital discharge minus that at ICU admission (higher differences indicate worsening). Other subgroup outcomes were the need for endotracheal intubation (for patients admitted spontaneously breathing), and successful extubation and VFD (for patients admitted on invasive mechanical ventilation). Patients transferred to other healthcare facilities were censored at discharge and were not followed further.
Patients' management
In this prospective observational study, the decision to administer deferoxamine (or not) to any of the enrolled patients was entirely up to the treating consultant intensivist; the study team had absolutely no role in treatment assignment. We only kicked off the study period with a journal club, where we discussed and presented the various publications postulating a beneficial effect of deferoxamine in the management of COVID-19 patients, but afterwards the team never interfered with the decisions of the treating consultant. Apart from deferoxamine, all COVID-19 patients received the SOC, as per the ICU protocols.
Deferoxamine regimen
In our ICU, deferoxamine is administered as a loading dose of 1000 mg by intravenous infusion, diluted in sterile water for injection (500 mg/5 ml water), at an infusion rate of 15 mg/kg/h. This is followed, after four hours, by a total of four doses of 500 mg (administered similarly to the loading dose) every four hours.
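As a quick illustration of the arithmetic implied by this regimen, the sketch below computes the dilution volume and the infusion duration of the loading dose for a given body weight. It is a hypothetical helper for checking the numbers, not clinical software, and the 70 kg weight is an assumed example.

```python
# Illustrative arithmetic for the deferoxamine regimen described above.
def loading_dose_plan(weight_kg: float,
                      dose_mg: float = 1000.0,
                      rate_mg_per_kg_h: float = 15.0,
                      dilution_mg_per_ml: float = 100.0):  # 500 mg / 5 ml
    """Return (volume in ml, infusion duration in hours) for the loading dose."""
    volume_ml = dose_mg / dilution_mg_per_ml
    rate_mg_h = rate_mg_per_kg_h * weight_kg
    duration_h = dose_mg / rate_mg_h
    return volume_ml, duration_h

vol, dur = loading_dose_plan(weight_kg=70)  # assumed example weight
print(f"Loading dose: {vol:.0f} ml infused over {dur:.2f} h")
# Maintenance: four 500 mg doses, given every 4 h starting 4 h after loading.
maintenance_times_h = [4 + 4 * i for i in range(4)]
print("Maintenance doses at t =", maintenance_times_h, "hours")
```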
Data management
De-identified data were recorded for all enrolled patients, including demographics (age, sex, body weight, comorbidities, and smoking status), presenting complaints, clinical status score upon ICU admission and hospital discharge, supplemental oxygen requirement, the need for intubation for spontaneously breathing patients or extubation for mechanically ventilated patients and the duration from extubation to hospital discharge, ICU and hospital LOS, hospital outcome, and initial laboratory investigations upon ICU admission (including hemoglobin, total white blood cell count, platelet count, serum creatinine, liver function tests, serum lactate, and serum ferritin), in addition to the Sequential Organ Failure Assessment (SOFA) score upon ICU admission. Missing data were completed by the multiple imputation method.
Statistical plan
Continuous variables were summarized as mean ± SD as well as median and interquartile range. Discrete variables were summarized as frequency and percentage. We compared continuous variables between groups by Student t-test or Wilcoxon rank-sum test as appropriate. If the Student t-test was used for the comparison, we accounted for unequal variance due to differences in group sizes (Welch t-test). Discrete variables were compared between groups by chi-square test or Fisher's exact test as appropriate.
As a sensitivity test for the primary outcome, we performed logistic regression for in-hospital mortality, using the backward elimination method (if P > 0.15) to retain significant predictors in the model, and presented its results as odds ratio with corresponding 95% CI, we explored goodness of fit of the model by Hosmer-Lemeshow test, and examined fulfillment of logistic regression assumptions by Box-Tidwell test for linearity of the logit of the outcome and continuous predictors, as well as correlation coefficients of independent variables for the absence of multicollinearity. Furthermore, we visually presented the survival of patients in both groups (censored at hospital discharge) by Kaplan-Meier curve, along with log-rank test P-value.
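A minimal open-source sketch of these two sensitivity analyses - a logistic regression reported as odds ratios and a Kaplan-Meier comparison with a log-rank test - is shown below, using statsmodels and lifelines rather than the authors' Stata code. All variable names and the toy data are assumptions for illustration only.

```python
# Sketch of the mortality regression and survival comparison (toy data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical patient-level data: deferoxamine flag, age, MV on admission,
# in-hospital death, and follow-up time censored at hospital discharge.
df = pd.DataFrame({
    "deferoxamine": [1, 1, 1, 0, 0, 0, 0, 1, 0, 0],
    "age":          [55, 48, 62, 70, 51, 66, 58, 44, 73, 60],
    "mv_admission": [0, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    "died":         [0, 0, 1, 1, 0, 1, 1, 0, 1, 0],
    "days":         [20, 15, 9, 12, 25, 7, 10, 30, 5, 22],
})

# Logistic regression for in-hospital mortality; exponentiated
# coefficients give the odds ratios reported in the paper.
X = sm.add_constant(df[["deferoxamine", "age", "mv_admission"]])
fit = sm.Logit(df["died"], X).fit(disp=0)
print(np.exp(fit.params))  # odds ratios

# Kaplan-Meier fit per group plus log-rank test.
grp = df["deferoxamine"] == 1
kmf = KaplanMeierFitter()
kmf.fit(df.loc[grp, "days"], df.loc[grp, "died"], label="deferoxamine")
res = logrank_test(df.loc[grp, "days"], df.loc[~grp, "days"],
                   event_observed_A=df.loc[grp, "died"],
                   event_observed_B=df.loc[~grp, "died"])
print(f"log-rank P = {res.p_value:.3f}")
```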
Potential bias of the study outcomes may arise from the fact that we considered all patients transferred to other hospitals as 'Alive' at discharge; accordingly, we performed three hypothetical scenarios: a best-case scenario, a worst-case scenario, and an equivocal scenario (details in Supplementary File, Supplemental Digital Content 2, http://links.lww.com/MS9/A66).
We did not calculate a sample size as we intended to enroll all eligible patients within the study period, and there was no correction for multiple testing. All statistical tests were considered significant if P-values were less than 0.05. Commercially available statistical software (STATA) was used in all statistical tests (StataCorp. 2019, Stata Statistical Software: Release 16; StataCorp LLC).
Ethical Considerations
The study was approved by the local institutional review board (under registration number H1RI-16-Jul20-04). Written informed consent was signed by all enrolled patients or their legal guardians for enrollment in the study and data collection, but not for treatment assignment, as that was at the discretion of the treating consultant. The study was retrospectively registered at Research Registry (http://www.researchregistry.com) under UIN researchregistry8652 and follows the general principles outlined by the Declaration of Helsinki.
Results
During the study period, we screened 317 COVID-19 admissions to the ICU; we excluded 112 patients, while 205 patients were enrolled in the study. Figure 1 shows the enrollment flow and reasons for exclusion. A total of 150 patients received the SOC and 55 patients received deferoxamine plus SOC at the discretion of the treating consultant. The deferoxamine group included a higher percentage of males and presented more frequently with cough (Table 1); otherwise, both groups were similar. The deferoxamine group had a mean age of 52.2 ± 14.1 years compared with a mean age of 49.4 ± 14.4 years for the SOC group. We observed a statistically nonsignificant higher percentage of mechanically ventilated patients in the SOC group, and the distribution of received medications (other than deferoxamine) was similar between both groups. Missing data were mainly from lab investigations, with a maximum of 9.8% missing for neutrophils; data were completed by multiple imputation (Supplementary Table S2, Supplemental Digital Content 2, http://links.lww.com/MS9/A66).
Outcomes
The primary outcome of in-hospital mortality was significantly different between the groups: 61 patients (40.7%) from the SOC group died in the hospital, compared with 14 patients (25.5%) of the deferoxamine group (P = 0.045, 95% CI: 1.3-29.2%) (Table 2). All in-hospital mortalities took place in the ICU, and all patients underwent cardiopulmonary resuscitation. Notably, more patients were transferred to other hospitals in the SOC group and were considered alive at hospital discharge (Supplementary Table S3, Supplemental Digital Content 2, http://links.lww.com/MS9/A66). The hypothetical case scenarios indicate significantly lower mortality in the deferoxamine group in the best and equivocal scenarios, whereas the worst-case scenario showed numerically lower mortality for the deferoxamine group that was, however, not statistically significant (Supplementary Table S4). The secondary outcomes showed a significantly higher clinical status score upon discharge in the SOC group compared with the deferoxamine group (6.2 ± 4 vs. 3.6 ± 4.3, 95% CI: 1.4-3.9, P < 0.001). Likewise, the difference between the clinical status score upon hospital discharge and ICU admission was higher in the SOC group compared with the deferoxamine group (0.3 ± 3.7 vs. -2.2 ± 4.2, 95% CI: 1.3-3.7; P < 0.001). There were no significant differences between the groups with regard to ICU LOS, hospital LOS, grown bacterial cultures, and adverse events (Table 2 and Supplementary Table S5, Supplemental Digital Content 2, http://links.lww.com/MS9/A66).
The multivariable logistic regression model showed that being in the deferoxamine group was associated with decreased odds of hospital mortality [odds ratio = 0.46 (95% CI: 0.22-0.95); P = 0.04]; other significant variables retained in the model were age and mechanical ventilation upon ICU admission. The model was well fitted (Hosmer-Lemeshow P = 0.3), with fulfilled assumptions of logistic regression (Table 3 and Supplementary Tables S6-S8). The Kaplan-Meier curve of survival (Fig. 2) shows a significantly higher survival of patients in the deferoxamine group compared with the SOC group (log-rank test P = 0.009); the median survival of patients in the deferoxamine group was 40 days (95% CI: 24-40 days), whereas that of patients in the SOC group was 22 days (95% CI: 17-31 days).
Discussion
In this study we found a lower hospital mortality rate in the deferoxamine group compared with SOC group, the deferoxamine group had a reduction in the ordinal scale of clinical status from admission to discharge, which was significantly lower at discharge compared with SOC group, indicating clinical improvement. More patients in the deferoxamine group were successfully extubated with more VFD. There were no differences between groups in ICU and hospital LOS, the requirement of intubation, newly grown cultures, and adverse events. Deferoxamine was associated with a reduction of mortality odds by 54% in a well-fitted multivariable logistic regression model adjusted for age and mechanical ventilation status upon ICU admission.
Deferoxamine, an iron-chelating agent, possibly ameliorates the consequences of COVID-19 infection and mitigates the cascade of events that ultimately leads to clinical deterioration and death. Beginning with the reduction of viral replication, deferoxamine reduces the available iron needed for viral replication, as was observed with other RNA viruses such as HIV type 1 [17]. A dysregulated immune response and hyperinflammation are commonly implicated in the pathophysiology of severe forms of COVID-19 infection and multiple organ failure, and are almost always associated with high levels of proinflammatory cytokines such as IL-6 [18]; deferoxamine may have a role in reducing IL-6 as well as other cytokines, subsequently preventing patients' deterioration and the development of lung injury, as seen in animal models [19] and in in vitro studies on closely related viruses such as influenza A virus [9]. High levels of iron increase the production of reactive oxygen species [5], which impose an oxidative stress that promotes the development of acute respiratory distress syndrome [20], a characteristic picture of severe cases of COVID-19 infection [21]. This study, to our best knowledge, is the first to explore patient-centered outcomes in COVID-19 patients who received deferoxamine, and our results seem to be in agreement with the hypotheses of its beneficial role. There was a significantly lower mortality rate in the deferoxamine group in our study; given that this result was barely significant and the study was obviously underpowered, it may not be conclusive. However, it should be taken into account that the mortality rate in the SOC group may have been underestimated by the higher proportion of patients transferred to other hospitals and subsequently censored in that group, as evident in the best and equivocal hypothetical case scenarios. Accordingly, this lower mortality rate in the deferoxamine group can at least be considered hypothesis-generating for further investigation in an adequately powered controlled trial, since the mortality difference was not statistically significant in the worst-case scenario. Furthermore, we observed more successful liberations from mechanical ventilation, with more VFD, as well as clinical improvement evident in the reduction of the clinical status score in the intervention group. Both observations could be interpreted in view of the proposed ability of deferoxamine to ameliorate tissue inflammation. Receiving deferoxamine was associated with a substantial reduction of mortality odds, again possibly reflecting its role in downregulating IL-6 and other proinflammatory cytokines implicated in the development of acute respiratory distress syndrome, patients' deterioration, and death.
We believe that our study could lay the foundation for investigating a new frontier in the management of COVID-19, despite its numerous limitations. This was an observational single-center study, carrying all the inherent limitations of such designs, mainly the lack of randomization. The small sample size undoubtedly renders the study underpowered. We cannot definitely exclude confounding effects of either patient characteristics or other treatment modalities, due to the uncontrolled nature of the study - such as the vaccination status of enrolled patients, which we did not record, or the different modalities of supplemental oxygen when patients were spontaneously breathing. We cannot be sure whether the wide spectrum of clinical severity of enrolled patients has undermined or exaggerated the results of the study, as we did not perform subgroup analyses by admission severity in view of the small numbers in each subset, which would have made any statistical comparison meaningless.
Conclusions
Deferoxamine could decrease mortality and improve clinical evolution in adult COVID-19 patients admitted to the ICU. We recommend further exploration of the role of deferoxamine in the management of COVID-19 in adequately powered controlled trials.
Ethical approval
This study was approved by the institutional review board of King Saud Medical City, Riyadh, Saudi Arabia, under number H1RI-16-Jul20-04. All participants or their legal guardians signed an informed consent form.
Consent
Written informed consent was signed by all enrolled patients or their legal guardians.
Sources of funding
No personal or institutional funding was received by any of the authors during this study.
(Table 3 footnote) Assumption of linearity between logit (outcome) and predictor variables fulfilled (Box-Tidwell P = 0.2). MV, mechanical ventilation; OR, odds ratio.
"year": 2023,
"sha1": "a8f02ed66547bb7b382230b5df1c13b9a446ddc5",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "a8f02ed66547bb7b382230b5df1c13b9a446ddc5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118532086 | pes2o/s2orc | v3-fos-license | Simulation of planet detection with the SPHERE IFS
Aims. We present simulations of the performances of the future SPHERE IFS instrument designed for imaging extrasolar planets in the near infrared (Y, J, and H bands). Methods. We used the IDL package code for adaptive optics simulation (CAOS) to prepare a series of input point spread functions (PSF). These feed an IDL tool (CSP) that we designed to simulate the datacube resulting from the SPHERE IFS. We performed simulations under different conditions to evaluate the contrast that IFS will be able to reach and to verify the impact of physical propagation within the limits of the near field of the aperture approximation (i.e. Fresnel propagation). We then performed a series of simulations containing planet images to test the capability of our instrument to correctly classify the found objects. To this purpose we developed a separate IDL tool. Results. We found that using the SPHERE IFS instrument and appropriate analysis techniques, such as multiple spectral differential imaging (MDI), spectral deconvolution (SD), and angular differential imaging (ADI), we should be able to image companion objects down to a luminosity contrast of ∼10^-7 with respect to the central star in favorable cases. Spectral deconvolution proved the most effective method for reducing the speckle noise. We were then able to find most of the simulated planets (more than 90% with the Y-J-mode and more than 95% with the Y-H-mode) for contrasts down to 3×10^-7 and separations between 0.3 and 1.0 arcsec. The spectral classification is accurate but seems to be more precise for late T-type spectra than for earlier spectral types. A possible degeneracy between early L-type companion objects and field objects (flat spectra) is highlighted. The spectral classification seems to work better using the Y-H-mode than with the Y-J-mode.
Introduction
A large number of extrasolar planets have been discovered in the last fifteen years through indirect methods such as radial velocities and transits. Although in the past few years some objects with planetary mass have been imaged around stellar and substellar objects like HR 8799 (Marois et al. 2008), Fomalhaut (Kalas et al. 2008), 2M 1207 (Chauvin et al. 2009), and β Pictoris (Lagrange et al. 2010), imaging of extrasolar planets is still very challenging because of the high planet vs star luminosity contrast (10^-6 for young giant planets and down to 10^-8-10^-10 for old giant and rocky planets) and the small separation with respect to the central star (a few tenths of arcsec for a planet at ∼10 AU at some tens of pc). The next generation of instruments aimed at imaging extrasolar planets will exploit extreme adaptive optics (XAO) systems to correct aberrations up to a high order, providing a high Strehl ratio (SR), and high-efficiency coronagraphs to attenuate the on-axis PSF and reduce its diffraction pattern. The combination of these two devices should be able to reduce the stellar background down to a value of around 10^-5 at separations of a few tenths of arcsec. The residual background will be given mainly by the speckle noise generated by the atmosphere and the telescope pupil-phase distortion. To further improve the contrast achievable with these instruments, it will be mandatory to apply differential imaging techniques, such as angular differential imaging (ADI), simultaneous spectral differential imaging (S-SDI) (see e.g. Marois et al. 2005), and spectral deconvolution (SD) (see Thatte et al. 2007). In particular, in the next years three instruments will be able to exploit these techniques to image extrasolar planets. These are the Gemini Planet Imager (GPI) at the Gemini South Telescope, SPHERE at the ESO Very Large Telescope (VLT) (Beuzit et al. 2006), and Project 1640, which is already working at the 5 m Palomar telescope (see Crepp et al. 2010). In particular, SPHERE will include three scientific channels: (i) a differential imager and dual band polarimeter called IRDIS that will operate in the near infrared between the Y and Ks bands (Dohlen et al. 2008); (ii) a polarimeter called ZIMPOL that will perform differential imaging in the visual band exploiting the polarized light reflected from the planetary atmosphere; (iii) an integral field spectrograph (IFS) that will supply simultaneous images at different wavelengths in the near infrared between the Y and the H bands (Claudi et al. 2008). Integral field spectrographs also have the potential of providing the spectra of the detected faint companions at close separation, thus allowing much better characterization. IFSs similar to the one designed for SPHERE are also present in GPI and in Project 1640, and are foreseen for future planet imagers like EPICS, designed to work at the future E-ELT (Kasper et al. 2010). In this paper we present the results obtained from the simulations we developed to evaluate and to optimize the performances of the SPHERE IFS.
In Section 2 we give a very short summary of the SPHERE IFS instrument; in Section 3 we describe the simulation tools that we used in our work; in Section 4 we describe the methods used for the data analysis of the output of our simulations; in Section 5 we present the results of the various simulation runs; in Section 6 we describe the software that we wrote for the IFS data analysis and the results obtained by testing it on the output of our simulations; and in Section 7 we report our conclusions.
SPHERE IFS description
The SPHERE IFS is designed to work in two different wavelength ranges: (i) 0.95-1.35 µm (Y-J-mode) with a resolution of R=50 and (ii) 0.95-1.65 µm (Y-H-mode) with a resolution of R=30. These two ranges and resolutions are achieved through two different dispersers (two Amici prisms - see Oliva 2000). The IFS is composed of several subsystems:
- the integral field unit (IFU)
- the collimator optics system
- a filter wheel
- the disperser optics system
- a camera optics system that can be moved to focus spectra on the detector or to produce dithering to reduce noise related to the flat fielding
- a 2048×2048 Hawaii II detector with 18 µm pixels housed in a cryostat
The novel lenslet IFU concept upon which this spectrograph is based (BIGRE, Antichi et al. 2009) allows the entrance slit plane to be made of images of the telescope focal plane, and not of images of the telescope pupil as in the classical TIGER design (Bacon et al. 1995). In this design, each lenslet is an afocal system with two powered surfaces. The thickness of the array is then given by the sum of the focal lengths of the lenslets of the two arrays. The main advantage of the BIGRE configuration over the TIGER one is that it allows a strong reduction of the cross-talk between adjacent lenslets, as demonstrated by Antichi et al. (2009). The microlens array is composed of 145 × 145 hexagonal lenslets with a pitch of 161.5 µm (corresponding to ∼0.012 arcsec). Each lenslet is masked with a circular aperture with a factor of 0.98 to avoid straylight. The full field of view (FOV) of the instrument is a square with a side of 1.77 arcsec. The total length of the whole instrument from the first surface of the IFU to the detector plane is 1061.89 mm. A more detailed description of the whole instrument can be found, e.g., in Claudi et al. (2010).
Simulation description
We exploited two software tools for our simulations:
- the SPHERE package of the CAOS software
- the CSP code
The CAOS system (Carbillet et al. 2004) is an IDL-based software package that aims to simulate the behavior of a generic adaptive optics (AO) system, from the atmospheric propagation of light to the sensing of the wavefront aberrations and their correction through a deformable mirror. This is done with a Fraunhofer approach, so it cannot be used to properly evaluate the impact of Fresnel propagation (see Section 5.1). An end-to-end numerical tool has been developed for the simulation of the whole SPHERE instrument within the CAOS environment. It contains detailed instrumental modeling of the extreme adaptive optics system and of IRDIS and ZIMPOL (Carbillet et al. 2008). A module simulating the SPHERE IFS has also been developed to properly take both the real and the imaginary parts of the image forming on the lenslet plane into account. In principle, this could allow a complete treatment of the cross-talk among the lenslets when studying the impact of light propagation through the BIGRE. However, the execution of this module turned out to be very time consuming, so it was not possible to use it for a large number of detailed simulations. To overcome this difficulty, we used a shorter code that calculates the impact of the cross-talk between adjacent lenslets (coherent) and adjacent spectra (incoherent) by computing the beam propagation over a sub-sample of 7 hexagonal lenslets. This code is described in detail in Antichi et al. (2009). After running this code we concluded that a cross-talk value equal to or less than 10−2 was completely adequate for meeting the objectives of our instrument. We then decided to use our (IDL oriented) code called CSP to perform all the simulations of light propagation within the IFS, while we used the SPHERE CAOS package to provide real intensities over the IFU entrance focal plane as input for CSP. For this, we performed simulations using the CAOS IRDIS module with 100 atmospheric phase screens at 64 different wavelengths ranging between 0.95 and 1.35 µm (or between 0.95 and 1.65 µm in the Y-H-mode case). The number of atmospheric screens is large enough to ensure that static speckles dominate the noise, as expected in real cases, and that the PSF has an overall shape representing a realistic stellar halo. In Figure 1 we display a monochromatic PSF obtained from the CAOS simulations. Although the SPHERE package of the CAOS system allows different types of coronagraphs to be simulated, we preferred to use only a 4-quadrant coronagraph for our simulations. Our results are nevertheless representative, as we were not interested in investigating the performance of all the SPHERE coronagraphs. We limited the simulation to 64 wavelengths because, for more wavelengths, the program saturates our computer memory. However, CSP requires 269 PSFs at different wavelengths as input; to obtain them, we performed interpolations starting from the PSFs resulting from the CAOS simulation (see the sketch below). An early version of the CSP code, described in Berton et al. (2006), has been deeply modified to take variations in the instrument optical design into account. CSP only considers the real part of the image on the lenslet plane and then propagates it through the IFS spectrograph using a Fraunhofer approach, but it can include a treatment of the cross-talk through a parametric approach.
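The interpolation of the 64 CAOS PSFs onto the 269 wavelengths required by CSP can be illustrated with a minimal sketch. The snippet below, in Python for readability (the actual tools are written in IDL), performs a simple per-pixel linear interpolation along the wavelength axis; the interpolation scheme actually used in CSP is not specified here, and a more faithful treatment would also account for the spatial rescaling of the PSF with wavelength.

```python
import numpy as np

def interpolate_psf_cube(psf_cube, lam_in, lam_out):
    """Per-pixel linear interpolation of a PSF cube along the wavelength axis.

    psf_cube : array of shape (n_in, ny, nx), one PSF per input wavelength
    lam_in   : the n_in input wavelengths (e.g. the 64 CAOS wavelengths)
    lam_out  : the desired output wavelengths (e.g. the 269 CSP wavelengths)
    """
    n_in, ny, nx = psf_cube.shape
    flat_in = psf_cube.reshape(n_in, -1)
    flat_out = np.empty((len(lam_out), ny * nx))
    for j in range(ny * nx):
        flat_out[:, j] = np.interp(lam_out, lam_in, flat_in[:, j])
    return flat_out.reshape(len(lam_out), ny, nx)
```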
Fig. 1. Monochromatic PSF resulting from the CAOS simulation. The bright corona corresponds to the outer working angle of the XAO system. Its radius is roughly 0.5 arcsec at the working wavelength. This image has been obtained using a 4-quadrant coronagraph. The cross structure centered on the image center is the signature of this type of coronagraph.

The code can be divided into different parts:
- The image formation part simulates the propagation of the light through the instrument; its main goal is to produce a final image with all the spectra. For each spectral step, the exact number of photons passing through every microlens is calculated, as well as the correct position of the center of each microimage projected on the detector. The intermediate images generated by this process are then convolved with a microlens PSF prepared in advance. All the monochromatic images are properly shifted to account for the spectral dispersion due to the Amici prisms and are then summed up to create the spectra. Finally, the code adds noise to the image: Poisson noise and all the detector noises. An example of the output of this part of the code is given in Figure 2.
- The calibration part performs the same procedure as described in the previous point, using a monochromatic uniform illumination of the IFU (and not a PSF) as input. This part simulates the wavelength calibration lamps and is performed at three different wavelengths. The code then reads the spectra from the images resulting from these procedures (using a template that indicates the position of the spectra at the minimum wavelength). Every spectrum is fitted with a Gaussian curve (using the IDL routine GAUSSFIT) and the code finds the center of the Gaussian and its error. Finally, through an appropriate interpolation, the code calculates the shift for each lenslet, using the positions of the three previously calculated centers and the theoretical position of the center (well known because we know the wavelength of the calibration lamp). These results are saved in a wavelength map file in which every pixel of the image is associated with a well-defined wavelength.
- The last part of the simulation procedure is to create the datacube of monochromatic images that will be the final output of the instrument. To this aim we derive a rectangular grid from the original hexagonal pattern of the IFU. This is done by creating a square grid of equally spaced points. For every wavelength, the flux value associated with every point of the grid is calculated by considering the three nearest points at the given wavelength, as calculated in the calibration step of the procedure and saved in the wavelength map. The calculation is made as a mean of the fluxes of these three points, weighted according to their distance from the grid point considered (see the sketch below).
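A minimal sketch of this hexagonal-to-rectangular resampling step is given below, again in Python for readability (the actual pipeline is IDL). The inverse-distance weighting over the three nearest spaxels follows the description above; the exact weighting function used in CSP is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def resample_to_square(spaxel_xy, spaxel_flux, grid_xy):
    """Resample fluxes from hexagonally packed spaxels onto a square grid.

    spaxel_xy   : (n_spaxels, 2) positions of the hexagonal spaxels
    spaxel_flux : (n_spaxels,) monochromatic fluxes at those positions
    grid_xy     : (n_grid, 2) positions of the square-grid points
    """
    tree = cKDTree(spaxel_xy)
    dist, idx = tree.query(grid_xy, k=3)   # three nearest spaxels per grid point
    dist = np.maximum(dist, 1e-12)         # guard against zero distances
    w = 1.0 / dist                         # inverse-distance weights (assumed form)
    return (w * spaxel_flux[idx]).sum(axis=1) / w.sum(axis=1)
```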
Where not specified otherwise, the simulations were performed assuming a G0 spectral type central star with an absolute magnitude of J=3.75 at a distance of 10 pc from the Sun. A total exposure time of 1 hour was generally simulated, although in some cases longer exposure times were used. For these simulations we assumed a readout noise of 10 e−, a dark current of 0.1 e−, and a flat field error of 10−4 (hereinafter detector noise).
Data analysis methods
A high-performance coronagraph within an extreme AO system like the one adopted in SPHERE, with a sampling frequency of 20 cycles/pupil, allows imaging of companion objects down to a contrast of 10−5 within the whole FOV of its IFS (2.5 arcsec diagonal), except for separations smaller than ∼0.1 arcsec from the central star. However, to fulfill the goal of the SPHERE instrument of imaging giant planets around young nearby stars, contrasts of ∼10−6-10−7 are required. To this aim, speckle noise has to be reduced by a further factor of 10-100. This is done by applying differential imaging analysis techniques to the final datacube extracted from the IFS data, such as simultaneous spectral differential imaging (S-SDI) (Marois et al. 2000) and spectral deconvolution (SD) (Thatte et al. 2007). A natural evolution of the S-SDI technique, using an IFS, has been defined during our work and is described in detail in Section 4.1. Another possible technique, normally applied in combination with the other analysis techniques, is angular differential imaging (ADI). In this section we briefly present the algorithms we used to implement these methods.
Multiple differential imaging
As previously said, this technique is an extension to more spectral channels of the previous S-SDI techniques, such as the single differential imaging for two channels and the double differential imaging for three channels (Marois et al. 2000). The final result of the CSP simulation code consists of a datacube composed of 33 (for the Y-J-mode) or 38 (for the Y-H-mode) monochromatic images. On these images we apply the following steps:
1. The images are divided into two groups: planetary images (monochromatic images at wavelengths where the planet signal is potentially present) and reference images (monochromatic images at wavelengths where the planet signal is very weak or absent), according to giant-planet atmosphere models.
2. We then distinguish two different cases:
- Single differences: a reference image is assigned to each planetary image. For each pair, the reference image is spatially scaled (through an interpolation procedure) to the planetary image according to the ratio between the wavelengths of the planetary and reference images. The scaled reference image is then subtracted from the planetary one.
- Double differences: two reference images are assigned to each planetary image, with wavelengths respectively shorter and longer than the planetary one. The two reference images are chosen in such a way that their wavelength separations from the image containing the planet signal are the same. For each group of three images, the reference images are spatially scaled to the planetary image according to the ratio between the wavelengths of the planetary and reference images. The three images are then combined according to the double-difference formula defined by Marois et al. (2000).
3. The procedure at step 2 should eliminate most of the speckle pattern. If the pairs are selected so that the planet image is only present in one of the two images, the planet will not be canceled out.
4. A weighted average of the cleaned differential images provides the best final result suitable for the planet search. We adopted, for each single differential image, a weight equal to the reciprocal of the wavelength difference between the two (or three) images subtracted to obtain it. In this way we give a greater weight to differences between images with a smaller wavelength separation, where the speckle pattern is more strongly correlated. Since the planetary images are not scaled, the planet position will not shift with wavelength.
There are three critical issues in this procedure (a sketch of the single-difference step is given after this list):
1. To work properly, the method requires an assumption about the spectra of the companion objects that we are looking for. Moreover, it works much better for spectra with large absorption bands, such as those of methane-dominated planets.
2. Each interpolation introduces noise. In our approach, the number of interpolations is effectively reduced to only one per pair (two for each group of three images when using the double differential imaging).
3. The pairing of monochromatic images and the optimal weighting should be chosen according to the main noise source:
- If errors are dominated by photon noise, the best procedure is to assign the same weight to all pairs. In this case, pairs should be selected to have similar (or even constant) wavelength separations.
- If errors are dominated by calibration errors (speckle residuals), the best procedure in single differential imaging is to create pairs having the smallest possible wavelength separation compatible with the gradients present in the planetary spectra. In this case, weights should be assigned according to the inverse of the square of the wavelength separation.
- As for double differences, this last approach is limited by the intrinsic width of the emission peaks in the planetary spectrum. In practice, we expect a very small advantage from creating groups of three images with the smallest possible wavelength differences. It should then be more advantageous to have various groups of three images with the same wavelength difference and to give the same weight to all of them.
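As a concrete illustration of the single-difference step, the following Python sketch rescales a reference image about the image center by the wavelength ratio and subtracts it from the planetary image. The cubic interpolation order and the scaling convention (reference magnified by λ_planet/λ_reference, so that the speckle pattern matches at the planetary wavelength) are assumptions of this sketch; the actual analysis code is written in IDL.

```python
import numpy as np
from scipy import ndimage

def rescale_about_center(img, factor):
    """Spatially rescale a square image about its center, keeping its size.

    affine_transform maps output coordinates to input coordinates:
    input = output / factor + offset, with offset chosen to fix the center.
    """
    n = img.shape[0]
    c = (n - 1) / 2.0
    matrix = np.array([1.0 / factor, 1.0 / factor])  # per-axis scale
    offset = c - c / factor                          # keep the center fixed
    return ndimage.affine_transform(img, matrix, offset=offset, order=3)

def single_difference(planet_img, ref_img, lam_planet, lam_ref):
    """Scale the reference image to the planetary wavelength and subtract it."""
    return planet_img - rescale_about_center(ref_img, lam_planet / lam_ref)
```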
Spectral deconvolution
This method was proposed for the first time by Sparks & Ford (2002) and further developed by Thatte et al. (2007). It exploits the fact that speckles are expected to change regularly with wavelength. Outside a given separation, defined as the bifurcation radius (BR), the spatial excursion of a speckle over the spectral range is larger than the planet size, so that the speckle pattern associated with the star can be reconstructed and eliminated using regions unaffected by the planet image. Differently from the MDI described in the previous section, no assumption about the spectra of the companion objects is needed. Spectral deconvolution should offer some advantage over the differential imaging approach, at least outside the BR, because it uses the companion spectrum as a whole. The value of the SPHERE IFS BR is around 0.20 arcsec for the Y-J-mode and about 0.12 arcsec for the Y-H-mode. The procedure we followed is composed of four steps:
- We scaled the single images provided by the CSP data extraction algorithm to a reference wavelength (in this case we chose the central wavelength among those of the 33 or 38 monochromatic images). Because of this rescaling, the planet will be in a different position in every image.
- We plotted the spectrum for every spaxel of the rescaled datacube (see Figure 3) and calculated a polynomial fitting function using 1/λ as the independent variable (see the sketch after this list). The polynomial degree depends on the distance from the center of the image, in units of the BR. The value of this fitting function is then subtracted from every spectrum. The fit allows the modulation of a given stellar halo speckle brightness with wavelength to be taken into account, but its degree is small enough not to fit a potential planetary signal. This should eliminate, or at least reduce, the speckle and diffraction residuals.
- The subtracted images are then rescaled back to the original scale according to their wavelength, in order to keep the planet position fixed in all of them.
- To search for the planet signal, the three-dimensional datacube is collapsed into a two-dimensional image given by the cross-correlation of the spectrum in each spaxel with a template planet spectrum. This procedure enhances the signal-to-noise ratio of the final image. In general, in our simulations we use a methane-dominated spectrum. However, as we show in Section 6, this procedure also works well with either a flat or an L-type spectrum.
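A minimal Python sketch of the central step, the per-spaxel low-order fit in 1/λ and its subtraction, is shown below. The rule mapping separation (in BR units) to polynomial degree is an illustrative assumption; the text only states that the degree grows with distance from the center while remaining low enough not to absorb a planetary signal.

```python
import numpy as np

def sd_subtract(cube_scaled, lam, br_pix):
    """Subtract a smooth speckle model from each spaxel of a rescaled datacube.

    cube_scaled : (n_lam, ny, nx) datacube already rescaled to a common wavelength
    lam         : (n_lam,) wavelengths of the slices
    br_pix      : bifurcation radius in pixels
    """
    n_lam, ny, nx = cube_scaled.shape
    x = 1.0 / lam                                  # fit against 1/lambda
    yc, xc = (ny - 1) / 2.0, (nx - 1) / 2.0
    out = np.empty_like(cube_scaled)
    for j in range(ny):
        for i in range(nx):
            r = np.hypot(j - yc, i - xc)
            deg = int(np.clip(r / br_pix, 1, 4))   # assumed degree-vs-separation rule
            coeffs = np.polyfit(x, cube_scaled[:, j, i], deg)
            out[:, j, i] = cube_scaled[:, j, i] - np.polyval(coeffs, x)
    return out
```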
Angular differential imaging
In general, we assume that observations are done with the field fixed with respect to the IFU. In this case, the pupil rotates with time on an alt-az telescope, a typical value being 30° over a 1 hour exposure time. In this framework, angular differential imaging (ADI) can be applied to further reduce the speckle noise. Various codes have been written to perform ADI on real images. Here, we considered a variant of this method that we call azimuthal filtering ("azimuthal" meaning along arcs at a constant radius). This procedure is composed of the following steps (see the sketch after this list):
- For each given pixel, we searched for all spaxels at a similar separation (distance from the center). In our procedure, the annulus width was set at 1 pixel.
- We plotted the value of the intensity at the selected wavelength for each of these spaxels against the azimuth angle.
- We drew a fitting line through these points using a cubic spline curve through the averages of these points within arcs of length 4λ/D. After various tests, we chose this value to avoid canceling the planet signal.
- We subtracted the intensity value on the fitting line from the intensity at the selected wavelength in that spaxel.
- The procedure was then repeated for all wavelengths.
- The procedure was iterated over all spaxels.
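The sketch below illustrates the azimuthal filtering of a single 1-pixel-wide annulus in one monochromatic image, in Python for readability. The handling of the periodic azimuth boundary and the exact arc binning are simplified assumptions; the procedure is then repeated over all radii and wavelengths.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def azimuthal_filter_annulus(img, radius, arc_len_pix):
    """Subtract a smooth azimuthal profile from a 1-pixel-wide annulus.

    arc_len_pix is the arc length (in pixels) used for the running averages,
    corresponding to 4*lambda/D in the text.
    """
    ny, nx = img.shape
    yc, xc = (ny - 1) / 2.0, (nx - 1) / 2.0
    y, x = np.indices(img.shape)
    r = np.hypot(y - yc, x - xc)
    theta = np.arctan2(y - yc, x - xc)
    mask = np.abs(r - radius) < 0.5            # 1-pixel-wide annulus

    # average the intensities within arcs of the requested length
    n_arcs = max(4, int(round(2 * np.pi * radius / arc_len_pix)))
    edges = np.linspace(-np.pi, np.pi, n_arcs + 1)
    centers, means = [], []
    for k in range(n_arcs):
        sel = mask & (theta >= edges[k]) & (theta < edges[k + 1])
        if sel.any():
            centers.append(0.5 * (edges[k] + edges[k + 1]))
            means.append(img[sel].mean())

    spline = CubicSpline(centers, means)       # smooth curve vs azimuth
    out = img.copy()
    out[mask] = img[mask] - spline(theta[mask])
    return out
```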
While this procedure does not completely eliminate the impact of static speckles, it works well for quasi-static speckles, which are speckles having a lifetime longer than the field rotation but shorter than the total exposure time.

Fig. 4. Comparison between the 5σ contrast obtained with a single difference (yellow line), with multiple single differences (red line), and with multiple double differences (green line) from IFS simulations. These results were obtained for a simulation in which no detector noise, no cross-talk, and no rotation were considered.
Simulation results
In this section we review the most important results obtained from our simulations. As said in the previous sections, we expect a significant improvement in the contrast using the MDI method compared to a simple S-SDI, when exploiting all of the many monochromatic images provided by an IFS. In particular, we expect the contrast to scale with the square root of the number of independent single differences that we can form when using the whole spectrum. A further improvement can be obtained by correctly coupling the images at different wavelengths. Since the contrast scales with the wavelength separation, we can pair monochromatic images in ascending order of wavelength separation and weight them in descending order according to their wavelength separation. In Figure 4 we display these results for a simulation in which no detector noise, no cross-talk, and no rotation were considered. In particular, we can see that no further gain is obtained using the multiple double-difference method. This is mainly because realistic double differences should be made using a rather large wavelength separation, owing to the intrinsic width of the methane bands. In this simulation, as in all the following ones, the jump in the plots around 20λ/D is caused by the coronagraph outer working angle. We can further improve the contrast obtained with our instrument by exploiting the rotation of the field with respect to the pupil. As seen in Figure 5, the improvement is larger at large separations, as expected, because more noise realizations can be sampled. If quasi-static speckles dominate, the results improve with the square root of the angle (and of the separation), since the planet images sample different noise realizations while rotating around the stellar image. In this case the azimuthal filtering procedure (described above in Section 4.3) can be applied. SD should provide better results than MDI, at least for separations larger than the BR. This is confirmed by the plots displayed in Figure 6, where we show the run of the 5σ calibration limit for a very bright star. The case shown is for 30° field rotation with azimuthal filtering. In this case we also introduced the detector noises, using the values indicated at the end of Section 3, and a total cross-talk of 10−2. The results obtained with the spectral deconvolution are slightly better than those obtained using the multiple differential imaging, with differences on the order of 0.2 dex (∼0.5 mag). As expected, better results are obtained when the Y-H-mode is considered. In this second case the difference is on the order of 0.3 dex (∼0.7-0.8 mag), and the gain is appreciable even at small separations (0.15 arcsec). From all these plots we can see that, by using the SPHERE IFS and an appropriate combination of the analysis methods described above, we should be able to reach contrasts on the order of 10−7 or even better at large separations from the central star.

Fig. 6. Run of the 5σ calibration limit with separation for a very bright star. The case shown is for 30° field rotation with azimuthal filtering. Detector noises and a cross-talk with a value of 10−2 were introduced too. The red line is the result obtained with multiple differential imaging, while the green one is obtained with the spectral deconvolution method.
A synthesis of the results from our simulations is presented in Table 1, where we list the contrasts obtained at different separations from the central star using the two analysis methods for the Y-J and Y-H-modes.
Impact of Fresnel propagation
Due to Fresnel propagation, out-of-pupil optics could have a strong impact on the performance of any differential technique adopted in high-contrast imaging. An optic that is not conjugated to a pupil plane will modify the light distribution in a chromatic way, because at this location the beam intensity distribution depends on wavelength through diffraction effects. The closer the optic is to a focal plane, the larger this chromaticity. Even more severe is the fact that this chromaticity is no longer smooth, but cyclic along the spectrum, when the optic is conjugated to a height that is several times the Talbot length, defined as z_T = 2Λ²/λ, where λ is the light wavelength and Λ the period of a single sinusoidal component of the wavefront across the pupil. For an aberration with a given period, the pupil complex amplitude representing the electromagnetic field changes from a pure wavefront error to a pure amplitude error over a quarter of the Talbot length. Since the Talbot length is different for different periods, a decorrelation occurs that depends on angular separation. The farther an optic is from the pupil plane (in multiples of the Talbot length), the stronger the decorrelation along the spectral domain, and the more the speckle correlation will be broken.
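To give a sense of scale, the short snippet below evaluates the Talbot length for illustrative values (an 8 m pupil with an aberration of 20 cycles/pupil at a mid Y-J wavelength); these numbers are assumptions chosen for illustration, not values quoted in the text.

```python
# Worked example for the Talbot length z_T = 2 * Lambda**2 / lambda.
lam = 1.25e-6          # m, illustrative mid Y-J wavelength (assumed)
Lambda = 8.0 / 20.0    # m, period of a 20 cycles/pupil aberration on an 8 m pupil (assumed)
z_T = 2 * Lambda**2 / lam
print(f"Talbot length: {z_T / 1e3:.0f} km")   # -> 256 km
```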
In the case of SPHERE, the Talbot effect was expected to be strong for those optical components located before the lenslet array, such as the entrance window, the ADC, the derotator and the coronagraphic mask (Yaitskova et al. 2010).
To evaluate the impact of Fresnel propagation, we cannot use the CAOS package, which is based on Fraunhofer propagation. We therefore exploited the PROPER code (Krist 2007) to create new PSFs that were then used as input for the CSP code. In Table 2 we list the parameters used to calculate the Fresnel propagation in all the simulations. We report the values of the conjugated distance and the wavefront error (WFE) rms for all the considered optical surfaces. To save computing time, we performed all these simulations without considering the effects of the atmosphere (using only 1 atmospheric phase screen). Of course, this is not realistic, because it yields contrasts that are too optimistic. However, the comparison is still meaningful for evaluating the impact of Fresnel propagation itself. From a comparison of the plots in Figs. 8 and 9, resulting from simulations that do not include and that do include the Fresnel propagation effects, respectively, one can see that the differences between the two cases are very small. This is confirmed by the data reported in Table 3, where the contrasts at different separations are compared for simulations performed without (second column) and with (third column) Fresnel propagation. From the results of these simulations, we can then conclude that the effects of Fresnel propagation are minor for our instrument. This is because of the large conjugated distances of the optics (see Table 2) and the moderate pupil size. Fresnel propagation is much more of a concern for extremely large telescopes, like the E-ELT (see Antichi et al. 2010).
Data analysis software for companion detection and spectral classification
Through the simulations described in the previous sections, we have demonstrated the capability of the SPHERE IFS instrument to image extrasolar planets down to a contrast of ∼10−7 at a separation of a few tenths of arcsec. However, to fully characterize the newly discovered planets (temperature, chemical composition of the atmosphere, etc.), it is important to be able to reconstruct their spectra with high fidelity.
To test this capability, we prepared a pipeline for the analysis of the calibrated datacube resulting from our simulations. This procedure is composed of five different steps:
1. Speckle noise subtraction from the original datacube using the spectral deconvolution algorithm.
2. Sum of all the resulting images to create a single multiwavelength image.
3. Search for companion objects on the summed image.
4. Extraction of a spectrum for every object found.
5. Spectral classification of every object.
The simulation was performed using the same PSFs as in the previous simulations. The FOV is rotated by 30° during the observation, and the central star is a G0V star at a distance of 10 pc (this corresponds to a magnitude of J=3.75). Every simulation contains five planets in different positions but at the same separation from the central star (the planets in the same simulation are identical). We performed simulations with planets at separations of 0.3, 0.5 and 1.0 arcsec from the central star. To avoid overlapping of the planet PSFs at the smallest separation, we replaced those single simulations with three different ones containing two planets each (for this reason, only for the case at 0.3 arcsec, we have six planets for every single case instead of five). We then performed different simulations with different luminosity contrasts between the planets and the central star, adopting values of 10−5, 3 × 10−6, 10−6, and 3 × 10−7. Finally, we performed different simulations with different input spectra: we used a late-type T-dwarf spectrum (T7), an early-type T-dwarf spectrum (T2), a late-type L-dwarf spectrum (L8), and an early-type L-dwarf spectrum (L0), taken from the spectral libraries described in Section 6.1. To test the capability of our procedure to distinguish between a companion object and a background star, we performed simulations using a flat spectrum (M2) at the low resolution of our instrument as input. Indeed, at this resolution, all stellar spectra are expected to be flat. Moreover, we did not include any faint background galaxies, because they are expected to be spatially resolved as extended objects by our instrument. While all these spectra come from objects in the solar neighborhood (old objects), our results do not lose generality because, as shown in Section 6.2.3, the detectability of companion objects at a fixed effective temperature is not determined by gravity effects, which are in turn the main difference between young and older substellar objects.
Procedure description
In this section we describe our reduction procedure in more detail. The first two steps are performed using the spectral deconvolution method in the same way as described above in Section 4.2. The search for companion objects (third step) is itself composed of three different steps (see the sketch after this list):
- For each pixel of the image we compare the flux included in a circle centered on the analyzed pixel with the flux in an external annulus. The radii of the circle and of the annulus can be chosen by the user, but for our analysis we always adopted the values of 1.5, 2, and 4 pixels. The user can choose the type of statistic to be computed on these regions: a mean or a median. From our tests we found that the median was more effective in finding companion objects, so we adopted it for all subsequent analysis. The procedure finds an object if the value found for the inner circle is greater than the value for the outer annulus plus the standard deviation (on the outer annulus) multiplied by a factor that can be chosen by the user and that has to be considered carefully case by case.
- If more than one object is found within a radius of 3 pixels, the procedure retains only the most luminous one.
- Finally, a two-dimensional Gaussian fit is performed on a small region around the newly discovered object to find its precise position (to 1/1000 of a pixel; no evaluation of the error on the position is done in this procedure). We minimize the difference between the extracted PSF and the fitting function through an iterative procedure that searches for the minimum of the difference by changing the parameters of the Gaussian fitting function.
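A minimal Python sketch of the first detection step is given below; the threshold factor k and the brute-force per-pixel loop are illustrative choices (the actual tool is IDL, and the text stresses that the factor must be tuned case by case).

```python
import numpy as np

def detect_candidates(img, r_in=1.5, r_ann=(2.0, 4.0), k=5.0):
    """Flag pixels whose inner-circle median exceeds the annulus median
    by k annulus standard deviations (radii in pixels, as in the text)."""
    ny, nx = img.shape
    y, x = np.indices(img.shape)
    hits = []
    for j in range(ny):
        for i in range(nx):
            r = np.hypot(y - j, x - i)
            inner = img[r <= r_in]
            ann = img[(r > r_ann[0]) & (r <= r_ann[1])]
            if np.median(inner) > np.median(ann) + k * ann.std():
                hits.append((j, i, float(img[j, i])))
    # a second pass would keep only the brightest hit within any 3-pixel radius
    return hits
```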
We then extracted the spectrum of each newly found object by simply summing the flux of the pixels at a distance of less than 1 pixel from it on every subtracted monochromatic image and subtracting from this value the median of the external annulus. We made the same extraction at two positions at a distance of ±λ/D from the object position along the azimuth (and thus at the same separation as the found object) to evaluate the spectral noise. Subtracting the mean of these two spectra from the object spectrum can then improve the final spectral classification, which is the last step of our procedure.
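The aperture extraction with background subtraction can be sketched as follows; the annulus radii, reused from the detection step, are an assumption, since the text does not specify them for this stage.

```python
import numpy as np

def extract_spectrum(cube, pos, r_ap=1.0, r_ann=(2.0, 4.0)):
    """Aperture spectrum at `pos` (y, x): per-slice sum of the pixels within
    r_ap, minus the annulus median background scaled to the aperture area."""
    n_lam, ny, nx = cube.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(y - pos[0], x - pos[1])
    ap = r < r_ap
    ann = (r > r_ann[0]) & (r <= r_ann[1])
    return np.array([img[ap].sum() - ap.sum() * np.median(img[ann])
                     for img in cube])
```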
To this aim, we compared the output spectra of our simulations with a grid of template spectra. We considered T-dwarfs from T0 to T8 and L-dwarfs from L0 to L8, with the spectral type L7 replaced by L7.5 because we could not find such a spectrum in the literature. In addition, we also considered F1V, G0V, K5V, M2V and M8V type stellar spectra. The data for the T-dwarf template spectra were taken from Looper et al. (2007) for T0, from Burgasser et al. (2004) for the spectra from T1 to T5 and for T8, and from Burgasser et al. (2006) for T6 and T7. The data for the L-dwarf spectra were taken from Testi et al. (2001). The stellar spectra were taken from the IRTF online Spectral Library (http://irtfweb.ifa.hawaii.edu/˜spex/IRTF Spectral Library/index.html).
The spectral classification was obtained through a cross-correlation (using the IDL routine C_CORRELATE) between the output spectrum of each simulation and the template spectra. The spectral type with the highest cross-correlation coefficient is the one assigned to the simulated planet.
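The zero-lag cross-correlation coefficient is simply the Pearson correlation between the two normalized spectra, so the classification step can be sketched as below (assuming the templates have already been resampled to the instrument's wavelength grid):

```python
import numpy as np

def classify_spectrum(spec, templates):
    """Return the template name with the highest correlation with `spec`.

    templates : dict mapping spectral-type names (e.g. 'T7') to arrays
                sampled on the same wavelength grid as `spec`.
    """
    def corr(a, b):
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float(np.mean(a * b))   # Pearson correlation (zero lag)

    scores = {name: corr(spec, tpl) for name, tpl in templates.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]
```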
Companion detection
In Tables 4 and 6 we report the numbers and percentages of found objects, divided according to the spectral type of the input spectra of the simulations (second and third columns), for the Y-J and Y-H modes, respectively. In the fourth and fifth columns of the same tables we instead report the numbers and percentages of the spurious objects found with our procedure. It is apparent that we are able to find most of the simulated objects for both the Y-J and Y-H modes, but the method works better in the second case. Moreover, the number of spurious objects found is much lower in the Y-H-mode case than in the Y-J-mode case. We stress that almost all the simulated objects that we are not able to find in the final image belong to the cases at a separation of 0.3 arcsec, where the background noise from the central star is greater. Indeed, the detection is 100% complete for companions down to a contrast of 3 × 10−7 at separations of 0.5 arcsec or more, and more than 90% complete in all cases except the worst one (a contrast of 3 × 10−7 at a separation of 0.3 arcsec).
To confirm this, we report in Tables 5 and 7 the numbers and percentages of found and spurious objects as in Tables 4 and 6, but in this case divided according to the luminosity contrast of the simulated objects. It is apparent from these tables that we are able to find almost all the simulated objects down to a contrast of 10−6, while we lose more than 25% of the simulated objects with a 3 × 10−7 luminosity contrast using the Y-J-mode. On the other hand, we are able to find more than 90% of the objects with a contrast of 3 × 10−7 using the Y-H-mode.
Spectral classification
In Figure 10 we show the spectral classification of all the real objects found, together with the spectral classification of the spurious objects. For the real objects, we can see three high peaks corresponding to the M8, T1, and T8 spectral types. The M8 peak is given by the contribution of objects with both an M2 and an L0 input spectrum. The T1 peak is given by the objects with L8 and T2 input spectra. In this case, however, the peak is quite low and the object classification is more dispersed. Finally, the T8 peak is given by the T7 input spectra objects. In general, apart from the case of the L0 spectral type, it seems that our procedure tends to classify the objects with later spectral types than the actual ones. We do not find any particular peak in the final distribution of the spurious objects.

Fig. 10. Histogram with the number of objects (red) and of spurious objects (blue) found for every spectral type in the Y-J-mode case.

A similar histogram, but for the Y-H-mode, is displayed in Fig. 12. Even in this case, we have three peaks in the distribution of the real objects. The first one again corresponds to the M8 spectral type and comes from the contribution of the simulations with M2 and L0 input spectra objects. This means that, as in the Y-J-mode case, these two spectral types seem to be degenerate. The second peak is around the T4 spectral type and is given mainly by the T2 input spectra simulations, but also by the L8 simulations. The L8 simulations do not in general give a correct identification. Indeed, these objects are recognized alternatively as L2 type or early T type. The last peak is at the T7 spectral type and is given exclusively by the T7 simulation objects (the T8 detections are given by the simulations with a separation of 0.3 arcsec). In Figs. 11 and 13 we display the same histograms, divided in these cases according to the different luminosity contrasts of the simulated objects, indicated with different colors in the figures. From these figures we can see that the overall distribution of the spectral classification is very similar to the global one displayed in Figs. 10 and 12 down to a contrast of 10−6, while it is much more dispersed for the 3 × 10−7 contrast, where the spectral classification becomes much less effective.

Fig. 13. As Figure 11, but for the Y-H-mode case.

Table 8. Cross-correlation coefficients considering the effects of gravity, Y-J-mode.

                      template log(g) = 4.0   template log(g) = 5.5
input log(g) = 4.0    0.88                    0.40
input log(g) = 5.5    0.80                    0.51

Table 9. Cross-correlation coefficients considering the effects of gravity, Y-H-mode.

                      template log(g) = 4.0   template log(g) = 5.5
input log(g) = 4.0    0.89                    0.77
input log(g) = 5.5    0.72                    0.90
Effects of the gravity
To further test the capability of our procedure to distinguish different objects, we performed different simulations using as input the synthetic spectrum of one object with Teff = 800 K and log(g) = 4.0 and of another one with the same temperature and log(g) = 5.5. All the simulations were performed for five different objects (with the same characteristics) at a separation from the central star of 0.5 arcsec and a contrast of 3 × 10−6. Furthermore, we performed simulations for both the Y-J and the Y-H-modes. For the simulations with the Y-J-mode, all the objects with log(g) = 4.0 were recognized as T8 spectral type (with values of the cross-correlation coefficients around 0.75), while the objects with log(g) = 5.5 were recognized as T7 (4 cases) or T8 (1 case). In this second case, the values of the cross-correlation coefficients are on the order of 0.77. On the other hand, for the simulations with the Y-H-mode, all the objects with log(g) = 4.0 were recognized as T8 spectral type but with higher values of the cross-correlation coefficients (more than 0.93), while all the objects with log(g) = 5.5 were recognized as T6 spectral type (cross-correlation coefficients on the order of 0.92).
In Tables 8 and 9 we report the values of the mean coefficients from the cross-correlation between the output and the input spectra for the Y-J-mode and for the Y-H-mode, respectively. From these results it is apparent that, in the case of the Y-H-mode, we are able to correctly classify the objects according to their gravity, while for the Y-J-mode all the simulated objects are classified as log(g) = 4.0.
In conclusion, from our analysis it seems that the Y-H-mode is the best solution for correctly distinguishing between objects with different gravities.
Conclusions
We performed detailed simulations of the performance of the SPHERE IFS instrument and considered different data analysis methods that can be exploited to reduce data coming from the instrument. In particular, we exploited multiple spectral differential imaging (MDI), spectral deconvolution (SD), and angular differential imaging (ADI). The latter seems to be especially useful when associated with one of the other two methods. It turned out that SD is slightly more effective than MDI in reducing the speckle noise, and it is less sensitive to the characteristics of the planetary spectrum. From our analysis we can conclude that, in the best cases, the IFS channel of SPHERE should be able to image companion objects around nearby stars down to a contrast of almost 10−7 at a few tenths of an arcsec. We then performed detailed simulations to test the possible impact of Fresnel propagation on the final performance of the instrument. This issue had raised some concerns, especially because of the presence of optics before the lenslet array, but our simulations, made under the same IFS optical setup, show a negligible difference between the achievable contrasts with or without the effects of Fresnel propagation. Because the SD method, as said above, gives better results, we used it to perform a new analysis of the capability of the instrument to find and to characterize companion objects of the central star.
We then prepared a pipeline with the aim of reducing the datacube resulting from our simulations. To test the effectiveness of this procedure in finding and characterizing planets, we performed a series of simulations with different input spectra for the companion objects, different separations, and different contrasts between the simulated planets and the central star. From these simulations, exploiting the spectral deconvolution method combined with some ADI, we were able to image extrasolar planets down to a luminosity contrast of 3 × 10−7 with respect to the central star. In this way we confirmed the results obtained with the previous run of simulations. We were generally able to find almost all the simulated objects at the larger separations considered (0.5 and 1.0 arcsec), while the method is less effective at a separation of 0.3 arcsec. However, even in this case, we were able to find more than 90% of the simulated objects using the Y-J-mode and more than 95% of the objects using the Y-H-mode. On the spectral fidelity of our procedure, we can draw the following conclusions:
- The greater the separation from the central star, the better the planet spectra can be reconstructed (considering planets with the same luminosity contrast).
- Planets with a greater luminosity contrast (i.e., brighter with respect to the star) more easily allow a precise spectral reconstruction.
- The method allows us to reconstruct and classify T-type spectra very well, while the spectral reconstruction and classification seem to be less precise for earlier spectral types. However, even in these cases, the spectral classification generally has a precision of a few spectral types (4 or 5 in the worst cases).
- Stellar spectra (M and earlier spectral types) are clearly distinguished from T-type spectra, while some ambiguity is present for L-type companions. This implies that, in most cases, the nature of the detected objects (companion vs. field star) can be established from the discovery data alone, without waiting for common proper motion confirmation. Ambiguous cases of L-type companions can in several cases be disentangled from the properties of the object and the parent star (e.g., L-type companions are expected only above a given contrast threshold).
- The Y-H-mode allows a better spectral classification than the Y-J-mode.
- As for the effects of gravity, they are better disentangled using the Y-H-mode than the Y-J-mode.
"year": 2011,
"sha1": "f2d793009a99fd93da9e57ac54efd5678e5101cd",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2011/05/aa16413-10.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "f2d793009a99fd93da9e57ac54efd5678e5101cd",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
The Education Level and Socio-Demographic Determinants of Physical Activity in Czech Adults
Purpose. Previous research has shown that physical activity (PA) is determined by several variables, such as gender, socioeconomic status (SES), and place of residence. The main purpose of this study was to examine the association between education and PA in the Czech adult population, and to identify other socio-demographic factors that may influence PA. Methods. A population-based survey conducted in 2008 resulted in 6,989 International Physical Activity Questionnaires (short version) from Czech adults aged 26-69 years. This survey included all regions of the Czech Republic. The data were analysed using frequencies and binomial logistic regression, separately for gender and education level. The dependent variables were classified as either the "healthy minimum" or the "health promotion" category, according to the number of PA criteria the individuals met. Results. People with a university education had less PA than the groups with other education levels. The "health promotion" category was met by 9.9% of women and 6.5% of men with elementary education, 67.4% of women and 71.3% of men with a secondary education, and 22.7% of women and 22.2% of men with a university education. The "health promotion" category is also more likely to be met by males (OR 1.33, CI 1.20-1.48, p < 0.001), people with elementary (OR 1.67, CI 1.36-2.06, p < 0.001) or secondary education (OR 1.60, CI 1.42-1.80, p < 0.001), those living with a family with children (OR 1.49, CI 1.07-1.53, p < 0.001), those living in villages (OR 1.35, CI 1.14-1.60, p < 0.001) or small towns (OR 1.27, CI 1.10-1.61, p < 0.001), those who have a dog (OR 1.15, CI 1.04-1.27, p < 0.05), and those who participate in organized PA (OR 1.30, CI 1.17-1.44, p < 0.001). Conclusions. There was a surprisingly low amount of PA among those who had studied at a university. Programs that promote PA among university students and future graduates should be considered.
Introduction
The amount of physical activity (PA) that adults perform usually decreases with age [1]. Other factors that play a role in the decline of PA include socioeconomic status, financial conditions, health, psychological and behavioural variables [2], and educational attainment. The positive effect of education on health comes from the fact that more highly educated people usually have better job opportunities, higher annual income, improved housing, better access to nutritious foods, and more health insurance. In addition, "higher levels of education could also have direct effects on health through greater health knowledge acquired during schooling and greater personal empowerment and self-efficacy" [3, p. 1503].
The association between the education level and the level of PA in an adult population has been reported by Sallis and Owen [4] and Trost et al. [5], where the relationship between education and PA was found to be positive; the higher the education an adult obtains, the higher the level of PA he/she performs [6]. A general interest in PA, through the use of pedometers, was found in educated people as part of a multi-strategic community-based intervention [7]. This can be explained by a better knowledge and understanding of the effect PA has on a healthy lifestyle. As research has shown, higher educational attainment is related to an improvement in overall health, which may increase the probability of performing PA [8]. However, Bergman et al. [9] discovered that having a university or college degree was negatively associated with higher PA according to the IPAQ scoring protocol they used in Sweden. It was postulated that those with higher education levels may participate in more leisure-time exercise, but due to their less physically demanding professions the total amount of PA was in fact lower.
Therefore, the aim of this study was to define which factors influence an individual's PA level, with emphasis placed on the level of education, in a sample of the Czech adult population. Regarding this study, the societal, economic and political situation of the Czech Republic before 1989 and after the "Velvet Revolution" is an important factor that needs to be taken into consideration. These changes significantly influenced various spheres of life for Czech citizens, as they did for many post-communist countries. Most Central European countries tended to copy the societal development of Western countries in various economic, health and sociological indicators with a delay of 10 to 20 years. And, except for technological development, there is even a repetition of the undesirable trends found in Western society, such as the rise of obesity, time spent watching television or on the computer, a general decline in PA, and unhealthy eating habits. Contemporary trends from Western Europe lead to more time spent at work, higher income, and more possibilities in ways to spend leisure time. For example, in 1984, only 27% of adults (20 to 69 year-olds) practiced PA; however, by 2007 this number had increased to 45% [10]. Nonetheless, the problem of balancing time between work, family life, and leisure occurred in many post-communist countries.
In the Czech Republic, a study on PA was conducted with university students [11]. This population segment was found to be sufficiently active, with more than 85% meeting their PA recommendation, yet most university students do not have a family or work responsibilities. In a study on Icelandic youth [12], researchers found that lower BMI, overall PA, and good dietary habits were associated with higher academic achievement. However, the possibility of a mutual association between PA and education was not considered. Similar results were obtained in a Texas study [13], where students who were physically active were more likely to do well academically, have better attendance, and have fewer disciplinary actions.
In contrast to students, adults who work and have families lead busy lives. In addition, someone with a university education may have more responsibility and may spend more time at work. Such time is often spent sitting at a desk and participating in sedentary activities such as writing, planning, and consulting. The free time that such individuals have may conflict with family responsibilities, individual wishes and personal chores, and the PA necessary for a healthy life does not factor in as a priority. The potentially stressed lifestyle of those with a university education is justifiably a matter of concern and must be addressed. Roberson and Babic [14] described how adults in central Europe (Croatia) have problems finding time for PA. Their research also showed the effect urban areas can have on health. The level of physical activity of Czech adults was previously found to be significantly influenced by the size of the locality where one lives - the larger the city, the lower the total PA [15].
Therefore, the purpose of this study was to find which factors, such as the attained education level, have an effect on the level of PA of Czech adults. We assumed that with an increasing level of education, the amount of actual PA in leisure time would also increase [16]. In addition, we wanted to know whether adults with different education levels (elementary, secondary and university education) in the Czech Republic adhere to their PA recommendations (judged by how many of the PA criteria they met). We were also interested in other socio-demographic variables that may influence individuals of different education levels in meeting their PA recommendations.
Material and methods
A survey was conducted in the Czech Republic during the spring of 2008. The participants were randomly chosen based on their residence and represented all Czech regions. A computer program randomly selected 400 participants from an address database of the Ministry of the Interior of the Czech Republic; after the data were updated, a representative sample of 250 remained. Trained coordinators visited those living at these addresses and handed out envelopes with the International Physical Activity Questionnaire - Short Version (IPAQ-SV). If they failed to meet the selected individual, they were advised to visit the nearest neighbour. The coordinators explained the purpose of the survey and how to complete the questionnaire, as well as the deadline for handing back the completed questionnaires. Participation in the study was voluntary. The coordinators did not compel the questioned individuals to complete all the information and did not check for correctness and completeness.
The questionnaire used was the official Czech short version of the IPAQ [17], used to determine the frequency, type and duration of physical activity of Czech citizens, and is considered reliable and standardised [18]. It was translated by professional translators and followed the "Guide to Cultural Adaptation and Translation of the IPAQ Instruments". The collected physical activity data are self-reported and considered suitable for monitoring a population [19]. The sample characteristics are presented in Table 1.
The information collected included the length (in minutes) and frequency (days) of PA (walking, moderate PA and vigorous PA) in different domains (as part of one's occupation, transportation, leisure time, domestic chores and gardening). Respondents also stated the amount of time spent sitting per day; however, these data were not analysed in this study. They also listed personal information (see Appendix), such as gender, age, height and weight, years of education, whether they smoked, place of residence (location), living status, type of living arrangement, whether they owned a dog, car, bike or cottage, and their participation in organized PA (yes or no, and if yes, how many times per week).
From 10,571 completed questionnaires (IPAQ-SV), we only analysed adult participants who were 26 to 69 years old. In addition, all participants with missing information were excluded from the analyses. After an adjustment of the obtained data according to the Guidelines for Data Processing and Analysis of the IPAQ, a total of 6,989 completed data sets remained. For the data analysis we decided not to use the study's original classification of PA based on three levels of physical activity (the IPAQ scoring protocol), because it does not meet the requirements of countries with a higher level of PA among their citizens. For more details see Bauman et al. [20], where 62.9% of adults in the Czech Republic are classified as belonging to a highly active population. Therefore, we based our assessment of physical activity recommendations on the analysis done in Healthy People 2010 [21]. A similar study on PA recommendations was published by Bergman et al. [9].
Following this example, we classified three criteria for individuals meeting their PA recommendations according to the results from the questionnaire: 3 × 20 minutes of vigorous PA per week, 5 × 30 minutes of moderate PA per week, and 5 × 30 minutes of walking per week. We then established one category, the "healthy minimum", for those adults who met only one PA criterion (no matter which one), and one category, "health promotion", for those who met two or three of the PA criteria. These categories were the dependent variables.
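A minimal Python sketch of this classification rule is given below (the authors' processing was done in SPSS). The interpretation of the inputs as days per week and minutes per active day follows the IPAQ-SV item structure and is an assumption, as is the "inactive" label for respondents meeting no criterion.

```python
def pa_category(vig_days, vig_min, mod_days, mod_min, walk_days, walk_min):
    """Map IPAQ-SV answers to the study's PA categories.

    *_days : days per week with that activity
    *_min  : minutes per day spent on that activity on those days
    """
    met = 0
    met += int(vig_days >= 3 and vig_min >= 20)    # 3 x 20 min vigorous PA / week
    met += int(mod_days >= 5 and mod_min >= 30)    # 5 x 30 min moderate PA / week
    met += int(walk_days >= 5 and walk_min >= 30)  # 5 x 30 min walking / week
    if met >= 2:
        return "health promotion"
    if met == 1:
        return "healthy minimum"
    return "inactive"   # met no criterion (label assumed for this sketch)

# Example: 3 days of 25-minute vigorous PA, 4 days of 40-minute walks
print(pa_category(3, 25, 0, 0, 4, 40))   # -> "healthy minimum"
```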
We categorized our sample according to gender and the self-reported length of education according to the Czech education system - elementary (≤ 9 years of education), secondary (10-13 years of education) or university educated (≥ 14 years of education). We also categorized the sample according to four age groups (26-34, 35-44, 45-54, and 55-69 years old), BMI (less than 25 kg/m² and ≥ 25 kg/m²), and smokers and non-smokers. We classified the sample as those living in a metropolis (more than 100 thousand inhabitants), city (30,000 to 100,000 residents), town (1,000 to 29,999 residents), or village (less than 1,000 inhabitants). Other factors included whether one lives alone, with a partner, or with a family with children, whether one has a dog, and whether he/she participates in organised PA. Data from the questionnaires were analysed using SPSS Statistics statistical software, version 18.0 (IBM, USA). We analysed the frequencies and percentages separately for gender (Tab. 2). We also used binomial logistic regression for the data analysis; the dependent variables were the PA categories ("healthy minimum" and "health promotion") and the independent variables were the socio-demographic characteristics.
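The logistic regression step can be sketched as follows in Python using statsmodels (the authors used SPSS 18.0). The file name and the dummy-coded column names are hypothetical placeholders, not variables from the actual dataset, and this multivariable model is only an illustration of how odds ratios and confidence intervals like those in Table 3 are obtained.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data file: one row per respondent, 0/1 dummy-coded variables.
df = pd.read_csv("ipaq_czech_adults.csv")

predictors = ["male", "bmi_under_25", "nonsmoker", "edu_elementary",
              "edu_secondary", "lives_with_children", "village", "has_dog",
              "organized_pa"]
X = sm.add_constant(df[predictors])
y = df["health_promotion"]          # 1 = met two or three PA criteria

fit = sm.Logit(y, X).fit(disp=0)

# Exponentiate coefficients to get odds ratios with 95% confidence intervals.
ci = fit.conf_int()
table = pd.DataFrame({"OR": np.exp(fit.params),
                      "CI 2.5%": np.exp(ci[0]),
                      "CI 97.5%": np.exp(ci[1])})
print(table.round(2))
```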
Results
The results of our survey are presented in four tables. The mean characteristics of the men were: age 43.5 ± 10.6 years, height 179.7 ± 7.3 cm, weight 85.2 ± 11.6 kg and BMI 26.4 ± 3.2 kg/m²; and of the women: age 43.4 ± 10.6 years, height 166.5 ± 6.2 cm, weight 66.6 ± 10.8 kg and BMI 24.0 ± 3.9 kg/m². As shown in Table 1, there were more male participants who were overweight; the sample characteristics are similar to those in [22], which used the long version of the IPAQ.

Note to Table 2: healthy minimum - meeting one PA criterion; health promotion - meeting two or three PA criteria; %* - percentage within gender and the level of education. Table 3. Unadjusted odds ratios (OR) and 95% confidence intervals (95% CI) for the "healthy minimum" and "health promotion" categories associated with the socio-demographic determinants.

Table 2 presents information on the level of PA by gender and level of education. Women, regardless of their level of education, were more likely to meet the "healthy minimum". Meeting the "health promotion" category was found to be true for women with elementary education. However, more men with secondary education met the "health promotion" level than the "healthy minimum". Men and women with university education were the ones who indicated no PA. According to the IPAQ, 33.4% of women met three PA criteria compared to 37.8% of men. One third of respondents met all the PA criteria. In addition, 22.8% of women and 24.4% of men are considered to be sedentary (not meeting any of their PA recommendations). Table 3 shows the results of the binomial logistic regression for both PA categories. A significantly greater number of males than females met the "health promotion" category. Overweight participants are less likely to meet the "healthy minimum" or "health promotion" category; smokers are also less likely to meet the "healthy minimum" category. Surprisingly, those with a university degree are less likely to meet the "healthy minimum" as well as the "health promotion" category. Those who live in smaller municipalities, with 30,000 inhabitants or less, are more likely to meet the "health promotion" category. People living with a family with children are more likely to meet both categories; those who live with a partner are more likely to meet "health promotion", as are people who have a dog. Lastly, those who participated in organized physical activity were more likely to meet both categories.
The "healthy minimum" category is more likely to be met by those whose bMI is below 25 kg/m 2 , who do not smoke, have elementary or secondary education, live in a family with children and participate regularly in organized PA (Tab. 3). The "health promotion" category is more likely met by men, people with a bMI below 25 kg/m 2 , do not have an university education, do not live alone, have a dog and participate regularly in organised PA.
We investigated whether the "health promotion" category (adjusted for gender and education) was associated with the various independent variables obtained from the IPAQ. As presented in Table 4, we separated the men and women according to their education level. Binomial regression analysis showed that among those with an elementary education, the "health promotion" category was met only by women living with families with children and by women who had a dog. Women with secondary education most commonly met the "health promotion" category if they resided in a municipality of 100,000 inhabitants or fewer and if they lived with a partner or a family with children. Secondary educated men with a high BMI did not meet the "health promotion" category at all. Among women and men with a university education, only those who participated in some organized physical activity met the "health promotion" category.
In addition, elementary educated women were more likely to meet the "health promotion" category if they lived with a family with children and had a dog. Elementary educated men were not influenced by any of the examined variables in meeting the "health promotion" category. Meeting the "health promotion" category among secondary educated men was mainly associated with body mass index (the ideal being below 25 kg/m²): obese or overweight men were less likely to meet the "health promotion" category. Among secondary educated women, more variables influenced meeting the PA recommendations for the "health promotion" category, such as place of residence, not living alone and participation in organized PA. For university educated women and men, we found only one independent variable that influenced meeting the "health promotion" category: participation in organized PA. University educated women were also somewhat more likely to meet the "health promotion" category when they lived with other adults or with a family with children; although this result was not statistically significant, it may help persuade people to increase PA among their families or friends.
Discussion
Studying the various determinants of physical activity has been the goal of many studies [23] as well as books [4]. To the best of our knowledge, a study on the level of physical activity of adults considering education level and other socio-demographic determinants has not previously been conducted in the Czech Republic. Previous studies have found evidence of the positive influence of specific determinants, but some of these results are weak or mixed. The determinants found to have a positive association with overall physical activity in demographic and biological studies are: gender (male), genetic factors, socioeconomic status (income), and education [4, p. 115-116]. The same source also mentions psychological, cognitive and emotional factors, behavioural attributes and skills, social and cultural factors (e.g. social support from a partner or family), physical environment factors [24] and physical activity characteristics that may have a positive or negative influence on PA. For some determinants of PA there is a lack of evidence (e.g. size of community, parents' education) or the results have been inconsistent. Although a number of demographic determinants were obtained from this study's questionnaire, based on the Czech version of the IPAQ-SV, we mainly focused on the education level of the Czech adult population.
Comparable to our study, Špaček [25] studied exercising and non-exercising adults (N = 1,124) by noting their gender (male or female), age (young or old), size of location (city, town or village), education and father's education (elementary, apprentice, secondary with state exam or university education). Yet, in contrast to our findings, he found that people with a university education were 4.5 times more likely to exercise than those with elementary education. Špaček's study [25] included university students, whereas the sample from our study contained working adults with a university degree. In his study, exercising adults were more likely to be males, those living in cities, of a younger age, and those whose father had a university degree. These factors (in a regression model) explained only 40% of the variance, while the remaining influences (60%) were unknown or not studied. This positive relationship between more years of education and increased physical activity was reported in other studies as well [2,[26][27][28][29][30]. Bertrais et al. [31] found this positive relationship between education level and meeting PA recommendations, but only in women. In one Croatian study [22], the level of education showed an inverse association with total PA but a positive association with leisure-time PA. We did not study each domain of PA practised, but the lower total PA in people with a higher education level is probably connected with their sedentary jobs, resulting in more sitting time [29]. Thus, leisure-time PA cannot substitute for the time spent at work, even though university graduates might have more leisure-time PA. This could stem from the fact that they have less physically demanding jobs, and as a result their overall PA is lower than that of those with lower education levels.
On the other hand, Mitáš et al. [15] studied the influence of socio-economic status (SES) on PA and included the number of years of completed education as one criterion of SES (the others were way of living, material conditions and income). This is congruent with our findings, where Czech adults with a very high SES, both women and men, performed the least amount of PA (in MET-min/week). However, a study by Al-Hazzaa [32], using the short version of the IPAQ in Riyadh, Saudi Arabia, found that activity levels did not show significant relationships with education level or job hours per week.
According to Bernstein et al. [28], Swiss urban adults (in Geneva) with secondary education were the most sedentary group of men and women (57% of men and 60% of women). In our research, by contrast, Czech men with a university degree could be labelled the most sedentary group (31.6%), while the most sedentary Czech women were those with secondary and university degrees (23.2%). Regardless of education level, PA in our sample was evidently lower than in Switzerland. Similar to our study, the most active Swiss citizens were those with secondary education (56% of men and 54% of women). The difference between the studies may be explained by the different methods used to collect data. The Geneva study obtained data from persons aged 35-74 years, who generally have a more sedentary lifestyle. In addition, the country of birth may reflect behaviours, genetic factors, cultural habits and social factors.
The "higher physical active" category level of PA in the bergman et al. study [9] can be compared with our "health promotion" category. The bergman et al. study found similar results, where people in the more active category are more likely to be male and those with high school education, which is comparable to the Czech secondary education level. Also, people living in villages or small towns are more likely to be physically active. This may be due to the small distances easily reached by walking or cycling, while people living in cities rely on their own car for transportation. Similar results were found in other studies [22,29,33], where people living in large towns were less likely to be sufficiently active than those living in small towns. In a French study [31], only women not living in urban areas were more likely to meet their PA recommendations.
Living alone was shown to be negatively associated with the "health promotion" category. This is congruent with the study by Ståhl et al. [34], in which people who perceived low social support from their personal environment (family, friends, etc.) were more likely to be sedentary. Interpersonal relationships may influence physical activity, establish new social networks and help individuals learn about physical activity and its benefits [8]. Family or peer influences have been found to have a positive association with PA and exercise in other research [23,35], especially in spontaneous PA programs during leisure time. Interestingly, however, in some studies [5,9] the authors also found that having a family or living with a partner may negatively influence the level of PA. Our finding that smokers and obese people are less likely to meet their PA recommendations, regardless of gender, is in accordance with many other studies [9,23,26,28,31].
There are several limitations of this study that should be taken into consideration. One limitation stems from the fact that the IPAQ questionnaire is a self-reported instrument, although it appears to have acceptable measurement properties [36]. In addition, it is used in many countries for international comparison [29,37]. Although our survey incorporated all regions of the Czech Republic, the number of returned surveys was not consistent across regions. For example, Ostrava had a 16.3% participation rate, while the Karlovy Vary region had only 1.2%.
Conclusion
Surprisingly, our results showed that adults from the Czech Republic with a university education, regardless of gender, had a lower PA level than those with lower education levels. Those with a university education may have more time constraints, especially those with children. This could be alleviated with more in-depth physical education at schools and sports clubs that stress the lifelong importance of PA. Furthermore, university sports clubs and physical education classes should offer courses in time management, as this would help those with time constraints to budget time for PA. Community health and PA programs that can include children would be an added benefit.
Overall, the physical activity and leisure-time PA of adults is an important topic. We would like to offer several suggestions based on the results of this study.
First, since the research shows that more PA is practised by those who live in small towns, future urban planners ought to consider restructuring our cities to function more like small towns. Reliable roads, lighting, and sidewalks all contribute to the feeling of a safe atmosphere for outdoor activity. Furthermore, parks can contribute to the amount of green space as well as offering a convenient place for exercise. Parks and walking areas could also host education programs with information on walking. Tax incentives, car-sharing, and advocating public transportation could all promote walking. Placing parking facilities half a kilometre away from one's residence could promote a natural way to meet daily PA.
Second, a certain amount of restructuring of physical education needs to occur in school systems. Physical education needs to focus its curriculum on lifetime health and wellness. The sport preferences of students must coincide with the needs for PA [38].
Third, universities should offer some type of wellness or fitness class as a requirement for all students. These classes should demonstrate and encourage fitness and sports, such as walking and Nordic walking, and overall physical health for one's entire lifetime.
Lastly, businesses and corporations should take an active role in encouraging more PA among their employees. Employees who maintain PA could be rewarded or otherwise motivated, for example with vacations or days off. The work of physical education teachers should also extend to the workplace. Weekly classes on general health and PA geared towards adults can be offered at work, as well as sessions showing how parents can exercise with their children at home. Goal-oriented individuals may be motivated to use pedometers as a way to lose weight and to begin to be physically active. The role of physical education is not to entertain children; physical education should be a viable part of everyone's life and continue throughout one's adult life. | 2019-05-05T13:06:59.607Z | 2012-03-01T00:00:00.000 | {
"year": 2012,
"sha1": "45de080acfd171900ebc543953ab8b3e0268a8f5",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.2478/v10038-012-0005-6",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "a884f0d124fe8474ee5026576cf069be00c6332d",
"s2fieldsofstudy": [
"Education",
"Sociology",
"Medicine"
],
"extfieldsofstudy": [
"Psychology"
]
} |
255225925 | pes2o/s2orc | v3-fos-license | NB housing study protocol: investigating the relationship between subsidized housing, mental health, physical health and healthcare use in New Brunswick, Canada
Background Income and housing are pervasive social determinants of health. Subsidized housing is a prominent affordability mechanism in Canada; however, waitlists are lengthy. Subsidized rents should provide greater access to residual income, which may theoretically improve health outcomes. However, little is known about the health of tenants who wait for and receive subsidized housing. This is especially problematic for New Brunswick, a Canadian province with low population density, whose inhabitants experience income inequality, social exclusion, and challenges with healthcare access. Methods This study will use a longitudinal, prospective matched cohort design. All 4,750 households on New Brunswick’s subsidized housing wait list will be approached to participate. The survey measures various demographic, social and health indicators at six-month intervals for up to 18 months as they wait for subsidized housing. Those who receive housing will join an intervention group and receive surveys for an additional 18 months post-move date. With consent, participants will have their data linked to a provincial administrative database of medical records. Discussion Knowledge of housing and health is sparse in Canada. This study will provide stakeholders with a wealth of health information on a population that is historically under-researched and underserved.
Background
Socioeconomic factors are widely accepted as fundamentally linked to health [1][2][3]. Of these factors, income and housing are two of the most pervasive social determinants of health [4,5]. The World Health Organization argues for access to stable, affordable, and adequate housing to decrease health inequities [6]. Further, the Universal Declaration of Human Rights recognizes the right to housing as part of the right to an adequate standard of healthy living [7]. Canada's first National Housing Strategy (2018) aims to remove 530,000 households from housing need, defined as spending 30% or more of income on housing costs [8]. With renewed Federal commitment to affordable housing, it is imperative to investigate the impact of publicly subsidized rental housing, referred to as subsidized housing, on the health of a population that experiences multiple inequities. Although public housing increases affordability, there is limited understanding of the contribution of subsidized housing to health. The primary objectives of this study are to investigate the impact of subsidized housing on 1) mental health; 2) physical health; and 3) health care utilization. The secondary objective of this study is to understand factors related to the wellbeing of renters as they wait for subsidized housing.
Housing and health outcome studies often focus on the built environment [9][10][11] and rehousing programs for persons with severe mental illness [12][13][14][15]. Studies that do investigate relationships between subsidized housing and health focus on jurisdictions outside of Canada [16][17][18]. To date, no studies that systematically investigate the impact of public housing on healthcare use could be located.
In cross-sectional studies, housing unaffordability is associated with distress [19,20], lower self-perceived mental health [16], poor physical health and increased healthcare use (e.g., emergency care, hospitalization, and walk-in clinic use) [21][22][23]. Increasing housing affordability through subsidized housing, in principle, should improve residents' mental and physical health and decrease avoidable healthcare use; however, there is no longitudinal or quasi-experimental evidence to determine whether commonly used housing affordability programs, such as publicly subsidized housing, are directly associated with improvements in mental health, physical health and healthcare use outcomes.
Although the link between housing affordability and health is established, recent studies indicate that subsidized housing alone may not contribute to health improvements. For example, research from Australia indicates that multiple transitions into subsidized housing are associated with poorer mental health [24].
These findings suggest that, despite increased affordability, a lack of permanency in subsidized housing could produce negative impacts on mental health. Further, evidence from subsidized housing in Chicago indicates that low perceived neighbourhood and housing quality have negative impacts on physical health, despite increased affordability [25].
Renters in New Brunswick experience high rates of housing unaffordability [26]. In the last decade, the average rent across New Brunswick has increased approximately 40% [27]. Despite large increases in rents, the average provincial income has only increased by 10.2% [28]. Low income and housing unaffordability are the main contributors to housing instability and episodes of homelessness in Canada, which are associated with poor mental and physical health outcomes and higher use of emergency healthcare services [21-23, 29, 30].
Access to subsidized housing increases residual income, which could positively contribute to mental and physical health and to changes in rates of hospitalization, walk-in clinic use, and primary care appointments. However, it is unclear whether the subsidies are enough to significantly decrease stress in a population that experiences low income. Further, the act of moving into subsidized housing may produce stress that negatively impacts health and healthcare use [24]. The present study will fill a significant knowledge gap on the relationship between access to subsidized housing, mental health, physical health, and healthcare use.
Study objectives
The study objectives are as follows: 1) to investigate the impact of subsidized housing on mental health; 2) to investigate the impact of subsidized housing on physical health; 3) to investigate the impact of subsidized housing on healthcare utilization; and 4) to understand factors related to the wellbeing of renters as they wait for subsidized housing.
Methods
This study will use a longitudinal, prospective matched cohort design. Research advocates for the use of longitudinal studies to better assess the relationship between mental health and subsidized housing [31,32]. This approach is also useful for understanding physical health and healthcare use, as prospective cohort designs are particularly strong when used to relate an outcome (e.g. mental health, physical health and healthcare use) to an event (e.g. receipt of subsidized housing) [33]. In this case, the study design will allow the research team to associate changes in health to receipt of subsidized housing. Further, any potential cohort effects can be adjusted for by accounting for individual sociodemographic variations within the cohort of housing applicants [33,34].
Primary data collection
The sampling frame for this study is all public housing applicants in New Brunswick, which includes approximately 4750 households at the study start date. Each household will receive a letter mailed from the Department of Social Development (DSD), which will provide information about the study, a link to an online survey, an email address, and a phone number for the study team. Online participation will be encouraged; however, participants may choose to complete the survey over the phone with a Research Assistant or via mail. New Brunswick is a bilingual province, so all study materials will be available in French and English. Email addresses, mailing addresses, and phone numbers will be recorded during each survey to help prevent study attrition. Upon completion of each survey, participants will be mailed or emailed a $10 gift card to the Tim Hortons coffee shop. Their names will also be entered into a draw for one of three $500 VISA gift cards. The draw for the gift cards will take place immediately after data collection concludes.
Study participants will enter the study as control group members while they wait for access to subsidized housing. During this time, participants will be asked to complete a baseline survey which asks questions on demographics, self-reported mental and physical health, and a variety of potentially confounding measures, which are described in detail below. After the baseline survey is complete, control group participants will be provided with shorter follow-up surveys at 6, 12, and 18 months following their initial baseline survey that assess changes to the main outcomes (physical and mental health) and variable factors (e.g., experiences of stigma, residential satisfaction, etc.).
The research team will ask participants for their consent to share their names with the provincial DSD. Those who consent will have their name sent to DSD via WatchDox (www.watchdox.com), which is used by the Provincial government to transfer confidential information. Program staff with DSD will check the names provided against offers for subsidized housing each month and will provide the research team with updated information and move dates for those who become housed during the study period. Not all participants will consent to sharing their names; therefore, each survey administered to the control group after baseline will ask participants if they have received subsidized housing. Participants who indicate that they have received subsidized housing will be asked when they moved or started to receive a subsidy and will be moved to the intervention group.
The intervention group will receive additional follow-up surveys at 6, 12, and 18 months after they begin receiving subsidized housing. Participants who are not subsidized within 24 months of their baseline participation date will not cross over into the intervention group and their study participation will be complete. At the start of the study, many of the households will have already been on the waitlist for months. Therefore, households at the top of the waitlist or those who experience conditions that assign them priority status (e.g. homelessness or intimate partner violence) will move into housing faster than others. Recruiting from the entire waitlist will ensure that households from the top, middle, and bottom of the waitlist are contacted for study participation.
It is possible that control group participants may remove their names from the waitlist during the study period. If this happens, the previous data collected from these participants will be kept and their study participation will be complete. It is also possible that participants in the intervention group may receive and then lose or leave subsidized housing. If this happens, the research team will note this, and their study participation will be complete. Their data prior to exiting subsidized housing will be included in analyses. Should a large enough portion of participants leave the wait list or subsidized housing, their data will be compared with others who either stayed on the wait list or continued to receive subsidized housing to see if any significant differences exist between the groups.
In the absence of any data reporting CESD-10 findings and data from the DAD in intervention studies similar to ours, we will estimate the power to compare pre- vs post-intervention CESD-10 total scores and healthcare use at the end of the study, using Cohen's d effect sizes for paired samples [35]. Assuming that there will be 30% attrition by the end of the study, a sample size of 1,138 data pairs achieves 100% power to detect effect sizes ranging from 0.3 (moderate effect size) to 0.8 (large) with a significance level equal to 0.05 using a two-sided paired t-test. As analyses will compare intervention and control periods, the researchers expect that the high power calculated using the paired t-test at the end of the study will approximately hold when we fit mixed models to the data.
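As a quick check on the stated figures, here is a sketch of the paired t-test power calculation using Python's statsmodels; the authors do not name their software, so this only reproduces the reported parameters (d = 0.3-0.8, alpha = 0.05, n = 1,138 pairs).

```python
# Power of a two-sided paired t-test at the reported sample size.
from statsmodels.stats.power import TTestPower

analysis = TTestPower()  # one-sample / paired t-test power
for d in (0.3, 0.5, 0.8):
    power = analysis.power(effect_size=d, nobs=1138, alpha=0.05,
                           alternative="two-sided")
    print(f"d = {d}: power = {power:.6f}")
# With 1,138 pairs, power is ~1.0 even at d = 0.3, consistent with the
# "100% power" stated above.
```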
Administrative data linking
This study also uses administrative dataset linking to measure differences in physical and mental healthcare use between the intervention and control groups. With each participant's consent at baseline, their name and date of birth will be used to link their survey results with their matched records in the New Brunswick Institute for Research Data and Training (NB-IRDT) database. The NB-IRDT is an organization that houses and links data with large, provincial administrative databases. It provides individual level data on education, health, social services use, and employment. The primary data collected through this study will be linked with participants' healthcare use data from the Discharge Abstract Database (DAD), which provides information on patient billing for hospitalizations, walk-in clinic use, and primary care appointments. The research team will use the date that housing subsidies were received to create a time variable that indicates their receipt of the intervention. The DAD and the time variable will then be used to compare individuals' hospitalizations, walk-in clinic use, and primary care appointments in the 18 months prior to and following their moves into housing. The same analyses will be performed for individuals in the control group to assess differences between the two groups.
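A sketch of how the pre/post comparison windows could be built from linked records. The field names (pid, encounter_date, move_date) are hypothetical placeholders, not the NB-IRDT's actual schema.

```python
# Count DAD encounters per participant in the ~18 months (548 days)
# before and after the move-in date.
import pandas as pd

def pre_post_counts(encounters: pd.DataFrame, moves: pd.DataFrame) -> pd.DataFrame:
    df = encounters.merge(moves, on="pid", how="inner")
    days = (df["encounter_date"] - df["move_date"]).dt.days
    # (-548, -1] days -> "pre"; (-1, 548] days -> "post"; outside -> dropped
    df["period"] = pd.cut(days, bins=[-548, -1, 548], labels=["pre", "post"])
    return (df.dropna(subset=["period"])
              .groupby(["pid", "period"], observed=True).size()
              .unstack(fill_value=0))

encounters = pd.DataFrame({
    "pid": [1, 1, 1],
    "encounter_date": pd.to_datetime(["2021-01-10", "2021-09-01", "2022-03-15"]),
})
moves = pd.DataFrame({"pid": [1], "move_date": pd.to_datetime(["2021-06-01"])})
print(pre_post_counts(encounters, moves))  # pid 1: pre = 1, post = 2
```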
Scales and measures
The measures proposed for this survey are discussed below. Additional questions may be added into follow-up surveys if deemed necessary by the research team.
Primary outcome measures
The primary outcomes for this study are mental health, physical health and healthcare use. In this study mental health is conceptualized as the presence or absence of depressive, anxious, and distress symptoms. Depressive symptomology will be measured using the Centre for Epidemiological Studies Depression Scale Short Form (CESD-10) [36][37][38]. The CESD-10 is an abbreviated, validated version of the CESD-R. A scoring algorithm is applied to each of the 10 questions and the values from all the questions are summed to provide a score ranging from 0-30, with 10 points on the scale being the clinical cutoff that is used to indicate the presence of depression. However, the scores are also suitable for use as a continuous variable [39,40]. The Kessler 6 (K-6) will be used to measure distress and anxious symptomatology. The K-6 was designed for the U.S. National Health Interview Survey and measures the presence of distress and anxious symptoms using a simple six item scale [41]. The K-6 is an abbreviated version of the K-10. It is quickly administered and is deemed highly reliable and valid [42][43][44].
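A minimal scoring sketch for the CESD-10 as described above. The convention of reverse-scoring the two positively worded items (commonly items 5 and 8) is an assumption here and should be confirmed against the exact form used.

```python
# CESD-10: 10 items rated 0-3, summed to a 0-30 total; >= 10 suggests depression.
from typing import Sequence

REVERSED = {4, 7}  # zero-based indices of items 5 and 8 (assumed ordering)

def score_cesd10(responses: Sequence[int]) -> int:
    if len(responses) != 10 or any(r not in (0, 1, 2, 3) for r in responses):
        raise ValueError("expected 10 responses coded 0-3")
    return sum(3 - r if i in REVERSED else r for i, r in enumerate(responses))

total = score_cesd10([1, 2, 0, 3, 1, 2, 1, 0, 2, 1])
flag_depression = total >= 10  # clinical cutoff noted above
```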
Participants will be asked if they have ever received a mental health diagnosis and will be provided with a list of common psychiatric conditions from which to choose. An option to specify a condition that is not listed will be provided.
To assess physical health, the EQ-5D-5L and EQ-VAS will be administered. The EQ-5D-5L is a validated measure comprised of five dimensions of health that relate to quality of life. The EQ-VAS is a visual analog scale used to measure self-reported overall health [45,46]. Participants will also be asked to self-report any intellectual, developmental, or physical disabilities.
The DAD, which captures physician billing data on hospitalizations, walk-in clinic use, and primary care appointments, will be used to measure healthcare use. The NB-IRDT has yet to receive data on Emergency Department use, so this measure will not be included in the present study; however, once these data are available, a secondary analysis of Emergency Department use may be conducted.
Demographic and potential confounding variables
Standard demographic information will be collected from each participant (e.g. gender/sexual identity, income, sources of income, work status, marital status, ethnicity, citizenship status, rural or urban residency, and household composition). The NB-IRDT will provide linked data from the Citizen Registry and Vital Stats, which will allow the researchers to account for chronic and comorbid conditions, and movement out of province or death.
New Brunswick's DSD has indicated that their subsidized housing tenants often feel stigmatized, and this negatively impacts their experiences of mental health and wellbeing. Although there is no current data to confirm this, recent studies from other jurisdictions suggest that public housing tenants experience perceived or actual stigma which negatively impacts wellbeing [47][48][49]. To measure stigma, the Self-Stigma Short (SSS) will be administered. This is a 9-item validated scale, typically used to measure stigma of mental illness; however, it allows researchers to replace the condition of interest to meet their own research needs [50]. For the purpose of this study, mental illness will be replaced with public housing applicant (control) and public housing resident (intervention). This will allow the research team to assess whether stigma contributes to mental health in the intervention and control groups.
Data on substance consumption will be collected using six adapted measures selected from the Canadian Tobacco and Drugs Survey [51]. These questions will measure the frequency of alcohol, tobacco, and cannabis consumption over the six-month period preceding each survey. The research team will only track use of legal substances, as illicit drug use is often associated with secrecy and stigma and the use of illicit substances was not critical to the study [52]. This will allow the research team to control for the impacts of any potential changes in substance use on mental and physical wellbeing.
Social support will be measured using the Oslo Social Support Scale (OSS-3). This scale was selected as it is widely used with a variety of populations; further, it is a brief measure of social support which is important to reduce participant fatigue [53]. The scale consists of three questions which are designed to measure the level of social support that people perceive they have. We will include this measure as social support is highly correlated with physical and mental health [54][55][56][57].
Housing and neighbourhood measures
Previous studies indicate that housing and neighbourhood satisfaction and quality contribute to mental health [58][59][60][61][62][63][64]. The survey will use an abbreviated version of the Residential Environmental Satisfaction Scale (RESS), which is highly correlated with the total RESS scale (0.96) [65]. This scale measures both housing and neighbourhood satisfaction. Participants will also be asked to indicate their housing type (e.g. detached, high rise apartment, etc.), housing tenure, and the number of individuals who live at their primary residence, as these are found to impact mental health [66]. This will allow the research team to determine if potential changes to health and healthcare use can be attributed to perceptions of living environment rather than just the affordability aspect of subsidized housing.
Preliminary data analysis
Random effects regression has the advantage of allowing researchers to explicitly account for within-person changes or unmeasured heterogeneity within individuals across time [67]. Unmeasured heterogeneity can be described as the unmeasured consistencies in individuals that might influence mental health and healthcare use within each wave of data collection. The research team will first explore the longitudinal changes in primary and secondary outcomes using descriptive statistics pre- and post-intervention, as well as spaghetti plots. To take advantage of the longitudinal nature of our data, we will estimate generalized linear mixed effects models that we predict will take the following form:

$$G(Y_{i,t}) = \beta X_{i,t} + u Z_{i} + \epsilon_{i,t}$$

where $Y_{i,t}$ is our outcome variable (see main and secondary outcomes above) and $G$ is an appropriate link function (i.e. logistic for dichotomous variables and identity for continuous variables). $X_{i,t}$ is a vector of variables that we will treat as having fixed effects ($\beta$), $Z_{i}$ is a vector of variables and their estimated random effects ($u$), and $\epsilon_{i,t}$ is the remaining error. $X_{i,t}$ will include variables that can influence mental health or healthcare use and might not be orthogonal to housing status, like time on waitlist, age, etc. We will also explore whether seasonality (month) or interview wave (baseline, six month, 12 month, 18 month) are appropriate to include in our model. We will start by including random intercepts in $Z_{i}$ and their estimated coefficients ($u$), designed to consider whether individual-specific factors can influence outcomes over time, and potentially include random-slope estimates for variables (like sex) if our summary statistics indicate important differences by covariates.
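To make the model concrete, here is a minimal random-intercept sketch in Python's statsmodels for a continuous outcome such as the CESD-10 total (identity link). The variable names and data are fabricated; a dichotomous outcome would instead require a logistic mixed model (e.g., statsmodels' BinomialBayesMixedGLM or lme4's glmer in R).

```python
# Random-intercept mixed model: cesd10_it = beta * X_it + u_i + e_it.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, waves = 200, 4  # fabricated long-format panel
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n), waves),
    "wave": np.tile(np.arange(waves), n),
    "housed": rng.integers(0, 2, n * waves),        # time-varying housing status
    "age": np.repeat(rng.integers(19, 70, n), waves),
    "cesd10": rng.integers(0, 31, n * waves).astype(float),
})

model = smf.mixedlm("cesd10 ~ housed + wave + age", data=df,
                    groups=df["pid"])  # random intercept per participant
# re_formula="~housed" would add a random slope, as the text contemplates
result = model.fit()
print(result.summary())
```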
We will explore the effects of gender, age, housing status and chronic disease morbidity at study entry, and interactions of selected key variables. Without observing the data, the research team cannot commit to more sophisticated modeling approaches, but we have a flexible estimation strategy that allows us to take advantage of the longitudinal nature of the data. Interim analyses will be performed as data are collected.
Study retention
New Brunswick's DSD will partner with the research team to provide access to the study population, recruitment assistance, and monthly updates on receipt of subsidized housing for participants who consent. Prior to obtaining consent at six months, and for individuals who do not consent to share their name with DSD for monthly updates, a screening tool will be used at regular survey intervals to assess whether a participant has received subsidized housing and should be transferred into the intervention group. DSD is committed to using the results of this study to improve the wellbeing of residents who are waiting for and receiving subsidized housing. This study will provide descriptive information on the wellbeing of those waiting for subsidized housing, which may point to the need for additional health supports.
Using a longitudinal study design is advantageous as it allows us to relate any observed mental and physical health effects to exposure to housing affordability concerns. Further, investigating change over time allows us to determine the impact of housing on mental health, physical health and healthcare use when participants move and as they become more settled in subsidized housing. However, a concern with longitudinal cohort studies is study retention. Some attrition is expected in a longitudinal cohort study. To reduce attrition, Scott's Engagement, Verification, Maintenance and Confirmation (EVMC) Protocol will be used [48]. Scott's use of this protocol resulted in a 95% retention rate in their study of individuals who experience high residential instability. The EVMC Protocol involves training research assistants to properly motivate study participants by informing them of the social benefits of their research participation; collecting and updating contact information; scheduling follow-up surveys at the end of each survey; and providing reminder cards with a number for the participants to call should they need to update their contact information.
The social benefits of study participation will be clearly conveyed to participants by research assistants who administer phone surveys, or in text through the electronic and mailed surveys. All participants will be asked to provide a mailing address, email address, and phone number each time they participate. Participants who are unhoused while waiting for public housing will be asked permission to contact them at a shelter, agency, or through another mechanism of their choice. All participants will be reminded at the end of each survey that they will be contacted in approximately six months for their next survey. If contact methods are not up to date at their follow-up dates (e.g. the phone number is out of service or emails bounce back), a reminder card will be mailed to let them know that it is time for their next survey. This letter will provide the research team's contact information and a request to contact the study team to update their information. DSD will update contact information monthly for all unreachable participants who agreed to have their information shared for the research.
Participation will be incentivized with a draw at the end of the study and a gift card following each survey, which may motivate some participants to maintain up-to-date contact information. A systematic review of study retention methods finds that offering incentives is an optimal practice to increase study retention [68].
Discussion
This research study has received Research Ethics Board certification (REB 2020-032) from the University of New Brunswick. Before each survey, participants will be asked to provide electronic (online surveys), written (mail surveys) or verbal (phone surveys) consent. They will be provided with or read a copy of the study information letter. Consent will be collected at each survey interval and consent to participation in the main study is mandatory.
At baseline, participants will be asked to provide consent for the research team to contact them for a qualitative follow-up study in the future. They will also be asked to consent to link their data with the NB-IRDT. At the six-month follow-up period, participants will be asked for consent to share their names and addresses with the DSD so it may provide the research team with updated information should they receive subsidized housing. Participants may still complete the survey even if they answer no to any of the optional consents.
Dissemination
The research team will regularly meet with DSD to discuss survey design, recruitment, data use, findings, dissemination, and recommendations arising from the research. For each round of surveys, a two-page plain language summary sheet with key findings will be produced. These sheets will be housed on the Principal Investigator's institutional website and provided to participants who request study feedback via mail or email. All deliverables will be available in French and English. Once the data are analyzed, the research team will work in partnership with DSD to develop recommendations and design evidence-based interventions. Peer reviewed publication of study findings will be sought.
The research team will host community meetings to share the results with members of the public. A meeting will be hosted in each of the three largest cities in New Brunswick: Moncton, Saint John and Fredericton. Virtual and conference call options will be offered for those who live in remote areas or are unable to attend in person. DSD will co-host these meetings. The research team and DSD will send email invitations to public housing providers, study participants, persons residing in subsidized housing, members of local, provincial, and federal government, and members of non-profit organizations who focus on housing instability, health, and/or poverty reduction. During these meetings, the study team will provide all attendees with a copy of the community report and the plain language summary sheets. The study team will deliver a presentation on our research findings and ask the attendees to share their thoughts on or reactions to our findings. The research team will ask attendees to provide their email addresses if they wish to join a community of practice to collaborate on any interventions that arise from our findings. | 2022-12-30T05:09:31.914Z | 2022-12-01T00:00:00.000 | {
"year": 2022,
"sha1": "f6dd4f93580635e542cc17492f1ae31db328e8be",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/counter/pdf/10.1186/s12889-022-14923-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f6dd4f93580635e542cc17492f1ae31db328e8be",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
260610798 | pes2o/s2orc | v3-fos-license | Hepatitis E Virus Outbreak among Tigray War Refugees from Ethiopia, Sudan
We report hepatitis E virus (HEV) outbreaks among refugees from Ethiopia in Sudan during June 2021–February 2022. We identified 1,589 cases of acute jaundice syndrome and used PCR to confirm HEV infection in 64% of cases. Implementing vaccination, water, sanitation, and hygiene programs might reduce HEV outbreak risk.
Hepatitis E is a hygiene- and sanitation-related disease caused by hepatitis E virus (HEV), a member of the Hepeviridae viral family (1,2). HEV has 4 genotypes: genotypes 1 and 2, predominantly found in humans, and genotypes 3 and 4, found in both humans and animals (1,2). Main zoonotic virus reservoirs include domestic pigs, wild boars, rodents, and sika deer (2). Risk factors for transmission differ depending on the genotype. However, genotype 1 is associated with maternal mortality, waterborne transmission, and outbreaks in Africa (3,4). In low- and middle-income countries, HEV is mainly transmitted through contaminated drinking water (2). The clinical manifestation of HEV infection is largely genotype-dependent (2)(3)(4).
HEV is a common cause of acute hepatitis and jaundice worldwide. The World Health Organization estimates that 20 million HEV infections (16.5% symptomatic) and 44,000 HEV-related fatalities occur annually (2). The public health threat of HEV infection is exceptionally high in Africa, and biennial outbreaks result in ≈35,300 cases of infection and 650 fatalities (3). Pregnant women in Africa are at higher risk for HEV infection than other persons and have an HEV-related mortality rate 10 times higher than the general population (4). Outbreaks of HEV infections in Africa are associated with camps for refugees and internally displaced persons (4). Limited knowledge of the disease is a major challenge for prevention and control of HEV infection in Africa (4).
Gedaref State is in the southeastern region of Sudan, along the borders of Ethiopia and Eritrea (Appendix Figure, https://wwwnc.cdc.gov/EID/article/28/8/22-0397-App1.pdf). In early 2022, the area was hosting >60,000 refugees who fled the Tigray War in Ethiopia. After arriving at the reception camp in Hamdayet, Sudan, the refugees were assigned to 1 of 3 long-term humanitarian camps: Tunaydbah, Um Rakuba, or Village 8 (5). During recent years, the region has had severe weather events, including heavy rains and flooding, that increased risks for infectious disease outbreaks (5,6).
On June 2, 2021, cases of acute jaundice syndrome appeared among the refugees in the Um Rakuba camp and were reported from the other humanitarian camps 2 weeks later. Patients were 3 months to 64 years of age, and most (50.1%) were 16-30 years of age; 81 (5.2%) patients were <5 years of age, and 95 (6.1%) were >50 years of age. The male to female ratio was 1.9:1. Of 1,589 patients, 100% had jaundice; 83% had yellowish urine; and 78% had anorexia, nausea, and fatigue. Other symptoms included fever (61%), abdominal pain (56%), and headache and vomiting (44%). Among 22 initial acute jaundice syndrome cases, samples from 14 (64%) patients tested positive for HEV at the National Public Health Laboratory in Khartoum, Sudan, by using real-time PCR kits (Altona Diagnostics, https://www.altona-diagnostics.com). The outbreak appeared to peak in July 2021, during which 395 cases were reported (Figure). By February 21, 2022, ≈1,589 cases, including 21 pregnant women and 1 fatality (a nonpregnant woman), had been identified by using the Rapid Anti-HEV-IgM Test (InTec Products, https://www.intecasi.com) (Figure). Most (75%) cases were reported from the Um Rakuba camp (Appendix).
The HEV outbreak in Sudan was associated with heavy rainstorms that flooded the humanitarian settlements and destroyed >1,231 latrines and >1,500 family shelters (5). A similar HEV outbreak occurred among refugees from South Sudan hosted in humanitarian camps in western Ethiopia, where >1,000 cases and a 2% mortality rate were reported (7). However, we report a relatively low mortality rate of <0.1% (1/1,589). Among pregnant women attending antenatal clinics in Tigray, Ethiopia, in 2018, lower hygiene and rural residency were associated with a high (43.4%) HEV seroprevalence, suggesting that a large outbreak could have been prevented by improving hygienic conditions (4).
HEV vaccination is recommended for preventing and controlling HEV outbreaks in humanitarian settings, particularly for pregnant women (1,3). However, the success of vaccination is dependent on the HEV genotype. Because of limited resources, we were unable to genotype the HEV that was circulating in the camps.
Recent outbreaks of Rift Valley fever in northern Sudan and dengue fever in western Sudan have occurred (8)(9)(10). These outbreaks highlight the association between massive population displacements because of war or armed conflict and the emergence of infectious diseases (5,6,(8)(9)(10). Most (50%) HEV outbreaks in sub-Saharan Africa have occurred among refugees and displaced persons living in humanitarian crisis settings (3,4). Open defecation and flooding, both of which occur in the camps, are additional risk factors for HEV emergence and can lead to contamination of nearby open sources of drinking water and food (5).
In summary, we report an outbreak of HEV infection among refugees from Ethiopia hosted in humanitarian camps in Gedaref State, Sudan. Implementing HEV vaccination, water, sanitation, and hygiene programs to improve the living conditions and drinking water among refugees and displaced persons in these camps might reduce the risk for HEV outbreaks. In addition, genotyping circulating HEV could clarify virus transmission routes and inform control measures. | 2022-07-23T15:13:22.735Z | 2022-08-01T00:00:00.000 | {
"year": 2022,
"sha1": "653bd3199ae648eb7ace9af68d7af76fe7128520",
"oa_license": "CCBY",
"oa_url": "https://wwwnc.cdc.gov/eid/article/28/8/pdfs/22-0397.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9432060032bda7b8f984b1100a36482fd902972d",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
204363148 | pes2o/s2orc | v3-fos-license | Effectiveness of Assessment, Diagnostic and Intervention ICT Tools for Children and Adolescents with ADHD
— The major technological leaps that have taken place over the last years, one of which is the creation and increasing use of ICT (Information and Communication Technology), require a reconsideration of the capability of computers to meet the expectations of modern education, especially in the field of Special Education. Research confirms that new technologies offer liberating and amazing opportunities to people with disabilities, as these are not just limited to simple information management but can also operate supportively, improving the learning ability, academic performance and functionality of people with special needs and those with special educational needs. In this review there is a brief reference to some of the ICT assessment, diagnostic and intervention tools of the past decade for children with attention and hyperactivity disorders (ADHD). It also refers to the direct connection and interaction between attention and memory capacity, as well as how, with the help of technology, we can evaluate and improve memory, and thus attention. The deficits of ADHD in its executive functions, and how these can be improved with the help of technology, are also brought up in this review.
Introduction -Description of Attention Deficit Hyperactivity Disorder (ADHD)
Attention Deficit Hyperactivity Disorder (ADHD) is a persistent pattern of inattention and/or hyperactivity-impulsivity that interferes with functioning or development. It is estimated that about 3%-7% of school-age children have ADHD [1]. However, according to several researchers, the percentage may range from 2.2% to 17.8% [2]. A similar variation in numbers is reported by Polanczyk et al. [3], ranging from 1% to 20% among schoolchildren aged 8-9. They interestingly point out that geographical and demographic determinants may be related to it. The same view is shared by Barkley in his survey [4], who also connects ADHD with gender, concluding that ADHD is more frequent and intense in boys than in girls. ADHD is regarded as a neurodevelopmental disorder that affects a child's functioning on every level (family, school, and social). Usually, persons with attention-deficit disorder do not complete their duties and/or often avoid them. They have difficulty following instructions and focusing on the person that is talking to them, and they quite often seem to lose personal belongings or other objects, thus showing disorganisation. Unceasing speech, anxiety and nervousness are prevalent. Their attention and concentration are easily disrupted by external environmental stimuli, resulting in impulsive behaviour and careless mistakes [5]. The person's impulsivity also manifests itself in a tendency to have difficulty managing time, being impatient, interrupting others and/or answering questions without any consideration beforehand. In order to diagnose this disorder, the symptoms should occur frequently and last for at least six months, starting from a very young age (3-6 years) and being apparent both at home and at school. Consequently, there must be significant effects on functionality and on school, social and professional life, which most of the time are determined by the age of the individual. There are two different groups of symptoms: the first focuses on the lack of attention and the second on hyperactive-impulsive behaviour. According to each person's symptoms, there are three different diagnostic categories that represent three types of ADHD respectively: A) ADHD that focuses mainly on the more "careless" kind of person (ADD), who shows signs of inattention with greater intensity and frequency, but no impulsivity and/or hyperactivity. B) ADHD that refers to the hyperactive-impulsive type (HD), where the symptoms of hyperactivity-impulsivity occur with greater intensity and frequency; in this category the attention problems are minimal. C) ADHD combined type, where the symptoms of distraction, hyperactivity and impulsivity manifest with the same intensity and frequency [1,2].
Idiosyncrasy and Requirements of ADHD Students
Students with ADHD have always comprised a challenge for education systems, since their typical behaviour obstructs and restricts teaching in the traditional way. Inclusion in general classrooms is the common practice in most Western countries, therefore the need for differentiated teaching is mandatory. As stated by Loe and Feldman [6], children with ADHD are four to five times more likely to get involved with special educational programs and benefits than those of typical development. They are also more disposed to afternoon tutoring and remedial support. The aforementioned difficulties that a child with ADHD faces on a daily basis usually result in poor school performance. Electronic distance learning tools can be used otherwise than initially designed and aimed at, as Fovet [7] indicates. Furthermore, as Wilkinson et al. [8] point out in their review, video games and off-line computer games have been of therapeutic value since the early 1980's, without overlooking the fact that restricted playing and interaction potential were offered. These restrictions have been surpassed by online gaming offered on the internet, which is regarded as a means of transferring therapeutic practice. Especially concerning children with ADHD, they claim that these children tend to control their hyperactivity when they are occupied with motivating games, provided that these are not highly demanding in working memory. People with richer WM are proved to be more able to focus on complex tasks than those with lower WM capacity. This might mean that proper training of a person's WM should improve their ADHD condition. Shaw et al. [9] proved that a group of young teenagers with ADHD managed performance equal to the control group at Conners' Continuous Performance Test 2 when it was presented as a video game, whereas the equivalent performance at the traditional form of the test was inferior to that of the control group [8].
According to Drigas and Ioannidou [10], education systems should create the appropriate conditions to improve learning and to ensure the transfer of skills and knowledge to pupils with special educational needs, such as students with ADHD. To achieve this, however, as recent research and studies show, the contribution of new technology is needed. The integration of ICT into school helps the child with educational, social, and cultural difficulties, by giving them the experience they need through the virtual reality it creates. However, it should be extended to use at home, but also in society. ADHD is described as a multidimensional phenomenon which has to be taken into consideration along with other cognitive skills and executive functions [11]. It is indicated that all ICT procedures described in their article have proved to be important to every function concerning attention, self-regulation, motivation, working memory and speech acquisition. At this point, all experts agree that Information and Communication Technology (ICT) gives the opportunity to all people with disabilities and special educational needs to have equal chances at learning, improving their daily routine, and increasing self-protection and independence.
Assessment and Diagnostic Tools
Following the consensus, as cited in Sanches-Ferreira et al. [12], in order to effectively address ADHD, a multimodal approach, such as a combination of behavioural intervention programs, specialist and parental training, is needed. Sometimes, depending on the severity of the condition, these programs are either executed individually or with the use of appropriate medication. Also, cooperation between parents, teachers and specialists dealing with the child with ADHD is very important to cope with the symptoms and bring the child into the wider social environment. Usually, teachers and parents use on-the-spot interventions that aim at suppressing the symptoms rather than preventing them from showing. This is why appropriate tools are needed to monitor an ADHD child in its interaction with the environment, so as to understand what the purpose or function of the problematic behaviour is and to get help when needed, through an intervention program. In the effort to upgrade the way ADHD is monitored, there has been a tendency towards switching from the conventional ways of evaluating behaviour changes to the more accurate and efficient mobile apps that are available to parents and teachers.
A great example of such apps is the pioneering software called "WHAAM". Its main focus is to comprise all the different behavioural aspects taken into account when attempting to paint a complete picture of a person's conduct. In addition, it provides the people that are involved in the ADHD person's care with the ability to share with each other the proper way to interact with the individual and also to create a productive mediatory plan [13]. WHAAM (WA) is accessible through both the web (PCs) and mobile devices (the mobile version is called "WMA"). A really important feature is the cross-platform communication that allows the two apps to share information. This network monitors the dysfunctional behaviours of the child at school and at home and shares information about the diagnosis, specific medication and schools that are suitable for the child. While the web version is aimed towards establishing the patient's profile by forming the network around him/her, gathering data and overall assessing their behaviour and adjusting the interventions accordingly, the mobile one (WMA) offers a much more direct approach. Given that mobile devices are at hand almost anytime and anyplace, they can collect the data instantly in a variety of ways, such as ABC charts, thus making the app an extremely handy tool [13]. Moreover, the behavioural intervention plan will be a few taps away from every person that should need it, diminishing the chances of adults (teachers, parents, therapists) mishandling situations where the child with ADHD might misbehave, or even having different approaches. This will also deal, to an extent, with the problems ADHD children face with their performance at school, which often leads to them dropping out early. WA also enables users to perform a functional evaluation that Horner describes as the use of "a set of strategies used to identify antecedents (those that preceded a negative behaviour) and consequences that control the problem behaviour" in order to reduce negative behaviours and replace them with positive ones [12]. In addition, WA calculates the TAU-U statistical index for behavioural data collected by network members. The TAU-U statistical index estimates the magnitude of the effect of a treatment on unwanted behaviour [12].
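To illustrate what such an index measures, here is a sketch of the nonoverlap core of Tau-U: every baseline (A) observation is compared with every intervention (B) observation. This is the basic A-versus-B Tau without the baseline-trend correction that full Tau-U adds, and it is not WHAAM's actual implementation.

```python
# Basic Tau: proportion of A-B pairs showing improvement minus worsening.
from typing import Sequence

def tau_ab(baseline: Sequence[float], intervention: Sequence[float]) -> float:
    pairs = len(baseline) * len(intervention)
    s = sum((b > a) - (b < a)  # +1 improvement, -1 worsening, 0 tie
            for a in baseline for b in intervention)
    return s / pairs

# Frequency of a problem behaviour per session (lower is better), so values
# are negated to make "improvement" positive.
baseline = [8, 7, 9, 8]
treatment = [5, 4, 6, 3, 4]
print(tau_ab([-x for x in baseline], [-x for x in treatment]))  # 1.0
```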
Another example of computerised tools that help diagnose issues with a person's WM is the Automated Working Memory Assessment (AWMA). Alloway et al. [5] claim that it is rather difficult to identify probable working memory problems in classrooms without using special screening tests or tools designed for this purpose. This standardised software allows not only specialists but teachers as well to easily estimate someone's memory skills with its three-level evaluation technique, which tests verbal short-term memory, visual-spatial short-term memory, and verbal and visual-spatial working memory. It is also divided into a Short Form (AWMAS) for people suspected of having memory problems and a Long Form (AWMAL) for people known to have such problems, in order to make a confirmation.
Craven et al. [14] utilise urban screens as a means of supporting communities and of creating new and collaborative observations concerning ADHD and its social "stigma". Their Snappy App was developed within the frame of reference of 'The Screens in The Wild (SITW)' project. They support the idea of using such platforms to increase public consciousness of ADHD among other conditions, referring to the use of serious games as a means of promoting healthy behaviours ("exergaming"). Initially, they integrated a psychometric Continuous Performance Test with an interactive application for smartphones, to enable the evaluation of the three prevalent symptoms of ADHD (i.e. inattention, impulsivity, hyperactivity). The procedure proved to be user friendly; moreover, it resulted in the idea of its gamification as an Android smartphone app. The application, called Snappy App, provides the user with a contingent arrangement of letters of the alphabet, following the format of a typical CP test; the visual or auditory prompts (letters) are presented to the users, demanding a response to the "target" and no response to the "non-target". Subsequently, the web-app version of the Snappy App was turned into a game, "Attention Grabber", on The Screens In The Wild (SITW) platform, placing emphasis on the detection of impulsivity and inattention. The original app was then re-designed using graphical objects such as fruit and other animations, aiming to make it more tempting, while the web-app was deployed on the urban screens. The research team aimed to play-test the game at the four Screens In The Wild locations in the UK in order to evaluate it.
Intervention Tools
Symptoms of ADHD, such as poor attention skills and/or hyperactive and impulsive behaviour, can be observed early in a child's school life. Following timely detection, the parents and teachers surrounding these kids are called to take cautious yet effective measures. Appropriate information, guidance and, of course, cooperation between these adults can make the difference between improved academic performance and an early drop-out. The use of ICT in both regular and special education is widely thought not only to upgrade the existing system and its components, but also to implement new ones. Specifically, Cognitive Assistive Technologies (CAT) use a variety of tools such as smartphones with adapted applications, cognitive training games, audio books, voice recognition software, ear plugs, minimalistic learning environments and graphical user interface (GUI) adjustments [15]. CAT stimulate learners, draw their attention to specific tasks and help them retain it. As far as people with ADHD are concerned, studies have confirmed that the new generation of software offers a whole new approach to the way diagnostics and interventions are carried out [16]. Research indicates that computer-based activities have a positive impact on a child's cognitive abilities. Children with ADHD in particular benefit greatly from these activities, as they combine both acoustic and visual stimulation, helping them to break down complicated concepts and comprehend them.
One of the first research teams who attempted to shed light on the abilities of children with ADHD when occupied with computer games was Shaw et al. [9]. They chose computer games available on the market and standardised electronic tools initially designed to measure executive functions in children with ADHD. A game-like version of the Conners' Continuous Performance Test 2 (CPT2) and two games, "The Revenge of Frogger" (set on a laptop) and "Crash Bandicoot 2" (set on a PlayStation console), were given to the children. Moreover, they presented a specially designed game-like adaptation of CPT2, called "The Pokémon Task". The involvement time for each of the games was fourteen minutes. When playing Frogger, the player had to guide a frog through traffic pathways and a river to the riverbank, where it would rest safely. There was no option of swimming; instead, the player had to patiently wait until moving wooden chunks and river turtles appeared, in order to move the frog by using them. In a different case (moving in traffic or wading into the river), the frog lost a life. In the second game, Crash (the hero) had to be moved around the screen to collect crystals and points. The movements had to take place at certain moments, though, in order to be considered successful and gain points. The procedure of CPT2 was done as normally indicated, by asking the participants to press on all the letters except for X. In the gamified version, the Pokémon Task, the player had to catch as many Pokémon as possible while avoiding pressing on Pikachu, which substituted the letter X. After the players had engaged in all the games, they showed a serious degree of reduction in impulsivity and spontaneous responses. They made noticeably fewer errors when occupied with the Pokémon Task, compared to their performance on the traditional CPT2. Consequently, the initial estimation of Shaw et al. concerning error reduction due to impulsive action in game-like activities was confirmed. They agree, based on previous studies and experimentation, that computer games are highly motivating, enhancing effort and maintaining interest for children with ADHD. According to them, further research involving a bigger sample of children with ADHD is required, together with more specific research on the positive effect of computer games on executive functions.
Children and adults with ADHD tend to be more focused and concentrated when they are engaged with digital activities, especially gaming [17]. They overcome lack of motivation and show a positive tendency towards these activities. After having realised the gap in the availability of game-like training programs focusing on skills for daily life situations, Bul et al. developed a new serious game called Plan-It Commander. The specific purpose of its design was to promote behavioural learning and everyday life skills, namely managing time, being organised, making friends and other skills intended to promote social acceptance, in which children with ADHD often lag behind. The team conducted a study whose findings showed great satisfaction among the participants after having been involved with the game. Plan-It Commander showed high potential to serve as a significant intervention tool, in accordance with the rationale of its designers; notwithstanding, a clinical trial is still necessary to ascertain the degree of its efficacy.
Craven and Groom [18] present in their survey three fields on which computer games and tests concerning ADHD focus: human activity in daily situations, education and medical practice. According to them, most of the existing software targets executive functions with a view to improving them. Throughout their study, they confirmed that frequent gamers develop better cognitive functions compared to infrequent or non-gamers. They present and propose new games based on tasks that involve monitoring and that improve both attention and inhibitory activity. The games were designed by incorporating key elements of Continuous Performance Tests and Go/No-Go and Stop Signal Tasks. Specifically, they created "Awkward Owls" and "Wormy Fruit". These were differentiated from existing games by colourful cartoon characters, thus making them more appealing to children with ADHD, while simultaneously aiming at training gaze control. Their research showed some potential for therapeutic intervention, but they also suggest that further research should be carried out.
A central element of the concept of ADHD is the deficit in executive functions [19]. The executive functions include inhibition (self-control, self-regulation), planning, working memory, reasoning, cognitive flexibility and problem solving. They are responsible for deliberate, continuous, goal-directed behaviour. The ADHD difficulties in organizing, managing time and planning are due to executive function deficits. If EFs do not improve, the difficulties burden the child's functionality and persist into adulthood. In order to improve EF, Weisberg et al. [19] designed TangiPlan, a set of tangible objects that represent the tasks that children with ADHD have to do in their morning routine. Parents together with children divide the morning tasks into smaller steps the previous night. The next day, each item is placed in the room next to the task to be done. The child activates the item when starting the task and turns it off when the task is completed. While the object is active, it also indicates the time spent doing the task, which helps the child manage time effectively. At the same time, TangiPlan is connected to a web-based interface, so parents can watch the completion of the morning tasks by their children in real time on their mobiles. In the future, TangiPlan could be improved by giving detailed information about the time the child could spend on a given task, by collecting the child's performance data.
Chacko et al. [20] used "Cogmed" as a program for memory training (Cogmed Working Memory Training - CWMT). Cogmed is a computerized training program designed to enhance working memory by increasing memory storage, targeting both its verbal and non-verbal aspects. The training takes place through a game-like computer interface. The training period lasted 5 weeks, and 25 sessions were offered, 5 per week. The participants were supported by coaches who provided support and reinforcement. Its efficiency was evaluated against a placebo version of it, in a sample of school-age children (aged 7-11) with ADHD. The working memory of the participants was evaluated using the Automated Working Memory Assessment (AWMA) [5]. All families first took part in a start-up session, in which the characteristics of CWMT were presented. Then, together with the coaches, they were provided with a system of reinforcement and rewards throughout the whole training period. After the training period, parents and teachers evaluated the program. They reported improvement in verbal and non-verbal working memory capacity. There was no evident improvement in measures of verbal and non-verbal complex working memory (which involves both capacity and processing), or in other ADHD features such as attention, impulsivity and hyperactivity. Concerning academic performance, Chacko et al. [20] suggest longer-term follow-up evaluation. They also mention that, probably because of methodological study restrictions, the extent to which CWMT offers positive results for training school children with ADHD is not certain.
Garcia-Zapirain et al. [21] claim that the learning ability of children with ADHD is enhanced through movement and gestures, so they experimented with a system that supports gestures and hand-eye coordination as well. They developed a technological platform using the .NET Framework. The aim was to support children with ADHD with their attention deficiency and to develop their learning ability, with the aid of two physiological sensors: "The Leap Motion", a hand movement recognition sensor, and the "Tobii X1 Light Eye Tracker". These sensors belong to Natural User Interfaces (NUIs), which comprise human-computer interaction devices that aim to use skills that already exist in order to provide interaction with specific content. The users of this dual system had to perform mathematical calculations on the surface of a digital flower (Math Flower Exercise). If the calculation had a correct outcome, the petals of the flower turned green; if not, they turned red. In this way the player-users were provided with immediate visual feedback. Audio feedback was available as well, as a beeping sound was heard on the choice of a petal. At the end of the procedure, the users were given two questionnaires to evaluate the system and the process. The results were unequivocal. Hand-eye coordination proved to be extremely conducive to raising and maintaining the users' attention to the given tasks; there was also an overall improvement in their performance. The gesture-based interaction also proved promising as another option, different from the traditional math-solving process, offering the users great entertainment among other benefits. Garcia-Zapirain et al. [21] believe that the dual sensory pattern they experimented with could serve as a successful basis for further games, exercises or puzzle activities, given that attention and learning ability were significantly improved.
A Brain-Computer Interface (BCI) is a system which uses brain signals (transferred via EEG) to enable the user to operate a peripheral device. Over the last years it has been used as an alternative therapeutic method for users with ADHD, especially children and adolescents, by providing guidance through feedback from the EEG. The main motivation for the development of BCI technology, as reported by X.Y. Lee et al. [22], was to enable patients suffering from amyotrophic lateral sclerosis to handle objects with the use of their brain, due to their limited motor ability. A second serious concern that gave a strong push towards BCI technology was the realisation that children with ADHD receive a considerable amount of medication to cope with lapses in concentration, the side effects of which cannot be precisely estimated [22]. A feature which makes BCI technology fully user friendly is that it has no side effects and is developed in a game-like fashion, so it retains a certain degree of motivation and benefit for each individual under training, who nonetheless considers himself a player. We refer below to several scientific studies and experiments in this field, together with their positive and promising effects on training attention in children with ADHD.
Based on existing biofeedback research and relaxation techniques, Amon and Campbell [23] examined in their study whether the biofeedback tool "The Journey To The Wild Divine" would prove effective in managing ADHD symptoms. Three sensors were put on the players' fingers to detect variations in heart rate and skin conductance. These variations were transformed through the game into the "pathways" necessary to proceed and finish the game itself. Any frustration or rise in anxiety on behalf of the player would immediately delay or block the "pathway", thus hindering the player from going on and finishing the game. Evidently, players with ADHD found out that only by being calm and concentrated would they proceed in the game. This realisation offered them a strong motive to participate in the whole game-like treatment. At the end of the study, questionnaires were given to the parents of all the children who took part in the survey. The parents of the experimental group (the children with ADHD) reported improvement of breathing and relaxation techniques through the biofeedback video game. The outcome of their study, together with support from other biofeedback-related research, showed that The Wild Divine video game can potentially develop positive attitudes and behaviours in children and adolescents with ADHD. However, according to Amon and Campbell [23], further research concerning the long-term effects of biofeedback needs to be conducted.
Having implemented a twenty-session BCI attention training with positive results on ADHD symptoms, Lim et al. [24] tested a new, more demanding BCI-based training game structure. They adopted EEG-based biofeedback practices to treat ADHD, based on evidence that prevalent ADHD symptoms, especially inattention, can be successfully trained through BCI-based games. Their new training game system consisted of a headband with dry EEG electrodes connected to a computer via Bluetooth. The major gaming activity was the video game CogoLand, specially designed with 3D graphics for the purpose. The player is required to move an avatar with the help of signals transferred by the EEG electrodes. The avatar's speed depends on the concentration level of the player. The game was developed in three levels, each demanding a different task from the avatar. This three-level intervention program was carried out for eight weeks (three times per week), with a follow-up of three monthly sessions. At the end of the sessions, the parents reported improvement in hyperactivity and impulsivity, as well as in attention. Moreover, the children who received the extra monthly training sessions maintained these improvements. According to Lim et al. [24], BCI-based attention training through gaming systems has proved to be successful for children with ADHD.
The same gaming activity, CogoLand, was used by Qian et al. [25]. Based on recent studies showing EEG-based neurofeedback systems to be successful, they worked on a BCI-based attention training program, aiming to examine the extent of reorganisation of large-scale brain networks in children with ADHD. The evaluation of the program included RS-fMRI imaging as well as clinical assessment. The whole procedure lasted eight weeks at a rate of three sessions per week. The setup consisted of a headband with dry EEG sensors connected to a computer via Bluetooth. The avatar of the game was powered by the player's attention, as in the previous study. The results extracted after the 8-week intervention period were positive, confirming that attention in children with ADHD was improved. This led to brain network reorganisation and was connected with further behaviour improvement, since the salience processing system and the efficient regulation between goal-directed and stimulus-driven attention were brought close to normal standards. Qian et al. [25] present several advantages of BCI-based attention treatment, including safety of use, convenience of the procedure and the place of utilisation, and no need for concurrent medical support. Despite the positive findings of their research, however, they agree that further studies are necessary to determine to what degree the results of BCI-based treatment are permanent.
Another BCI system focusing on intervention for children with ADHD was developed by Rohani et al. [26]. They installed prototype games in a highly motivating Virtual Reality (VR) classroom setting, with reproduction and control of usual, everyday auditory and visual distractions. Two feedback games were used, each requiring precise and timely detection of the relevant input. They were based on the P300 potential, 'a large positive voltage in the recorded EEG peaking around 300ms after a cognitively attended rare stimulus' [26], which is indicative of whether a person is attending or not. The first game, called "ANISPELL", was based on the existing P300 speller by Farwell and Donchin. It comprises sixteen animal images presented in random order, demanding specific attention to one of the animals and providing information about it at the end of the procedure. The second game, called "T-SEARCH", was created taking inspiration from Frintrop et al. It consists of twelve different pictures presenting a number of the English letters "X" and "T". They are presented in random order as well, at a rate of 5 per second. The player is asked to spot the blue "T" symbols and finally to select the correct classification square containing all the blue "T" letters. Both games demonstrated the effectiveness of the P300 potential in measuring the attention of children with ADHD. Moreover, with the addition of the distractions in the virtual classroom, improvement through repetition and training was achieved. Rohani et al. [26] recommend that those who develop neurofeedback devices implement the P300 potential and interactive BCI systems when focusing on ADHD therapeutic treatment.
Conclusion
As Visser et al. [27] report, one of the characteristic symptoms of ADHD is hyperactivity, which forces the child to get up many times from his place in the classroom. Barkley [4] also claims that deficits in inhibition and self-regulation have turned out to be important foci in theories concerning ADHD. Fortunately, with the help of ICT, the situation is changing as far as executive functions are concerned. The software relevant to each case and function provides tempting and motivating stimuli through audio-visual methods, while at the same time it improves the person's functionality in daily situations. By providing positive and/or negative feedback to the student, focus on school duties is maintained. Over the last years, a lot of attention has been paid to working memory (WM), the cognitive system responsible for behaviour among other functions, a lower-than-average level of which can often be associated with ADHD. In cases of WM problems, timely recognition and, therefore, intervention is crucial. If parents act in time and accordingly, it can make the difference between later academic success and failure. WM training software can be efficient even at early stages, as we have come to realise throughout our review. Bringing our conclusions to an end, we have to make special reference to the biofeedback and neurofeedback BCI systems developed in the last decade, which have proved effective so far, in both training working memory and decreasing inattention. | 2019-09-26T09:01:02.209Z | 2019-09-25T00:00:00.000 | {
"year": 2019,
"sha1": "712d8803d21804575bc8eb1b8d9f93cee85e5f42",
"oa_license": "CCBY",
"oa_url": "https://online-journals.org/index.php/i-jes/article/download/11178/5916",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ab56cbc83fe4fc97e28bb6dd3d151545e9c798d0",
"s2fieldsofstudy": [
"Education",
"Computer Science",
"Psychology"
],
"extfieldsofstudy": [
"Computer Science",
"Psychology"
]
} |
70056843 | pes2o/s2orc | v3-fos-license | Underwater Image De-noising using Discrete Wavelet Transform and Pre-whitening Filter
Image denoising and improvement are essential processes in many underwater applications. Various scientific studies, including marine science and territorial defence, require underwater exploration. Underwater, the noise power spectral density is not constant over frequency, and the noise autocorrelation function is not a delta function. Therefore, underwater noise is characterised as coloured noise. In this study, a novel image denoising technique is proposed using the discrete wavelet transform with different basis functions and a whitening filter, which converts the coloured noise characteristics to white noise prior to the denoising process. The results of the proposed method are evaluated using the following performance measures: peak signal-to-noise ratio (PSNR) and mean squared error (MSE). The results for different wavelet bases, such as Daubechies, biorthogonal and symlet, indicate that the denoising process that uses a pre-whitening filter produces more prominent images and better PSNR values than other methods.
Introduction
Efficient underwater image denoising is a critical aspect of many applications [1]. Underwater images present two main problems: light scattering that alters the direction of the light path, and colour change. The basic processes in underwater light propagation are scattering and absorption. Underwater noise generally originates from man-made (e.g. shipping and machinery sounds) and natural (e.g. wind, seismic and rain) sources. Underwater noise reduces image quality [1,2], and denoising has to be applied to improve it [3]. Underwater sound attenuation depends on frequency. Consequently, the power spectral density of the ambient noise is coloured [4]. Many image denoising techniques are described in [5][6][7][8][9]. A method based on adaptive wavelets with adaptive threshold selection was suggested in [5] to overcome the underwater image denoising problem. It assumes that an underwater image has a small signal-to-noise ratio (SNR) and poor image quality. The simulation results show that the proposed method successfully eliminates noise, improves the peak SNR (PSNR) of the image and produces a high-quality image. Light is repeatedly deflected and reflected by the particles present in the water due to the light scattering phenomenon, which degrades the visibility and contrast of underwater images. Therefore, underwater images exhibit poor quality. To process such images further, the wavelet transform and Weber's law were proposed in [8]. Firstly, several pre-processing methodologies were conducted prior to wavelet denoising thresholding. Then, Weber's law was used for image enhancement along with the wavelet transform. Consequently, the recovered images were enhanced and the noise level was reduced. In the current study, a novel image denoising method is proposed in the presence of underwater noise using a pre-whitening filter and the discrete wavelet transform (DWT) with single-level estimation.
Characteristics of Ambient Noise
The characteristics of underwater noise in seas have been discussed extensively [10]. Such noise has four components: turbulence, shipping, wind and thermal noise. Each component occupies a certain frequency band of the spectrum, and the PSD of each component is expressed by an empirical formula in which f represents the frequency in kHz [11][12][13]. The total PSD of underwater noise at a given frequency f (kHz) is the sum of the four component PSDs, N(f) = N_t(f) + N_s(f) + N_w(f) + N_th(f). Figure 1 presents the experimental noise PSD in deep water under various shipping activity conditions, with a fixed wind speed of 3.6 m/s. Each noise source is dominant in certain frequency bands, as indicated in Table 1.
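As an illustrative Python sketch, the four-component model can be evaluated as follows; the coefficients below follow the empirical deep-water ambient noise model commonly cited in the underwater acoustics literature and are an assumption to be checked against [11-13] rather than a reproduction of this paper's equations (s in [0, 1] is a shipping activity factor and w is the wind speed in m/s).

```python
import numpy as np

def ambient_noise_psd_db(f_khz, s=0.5, w=3.6):
    """Total ambient noise PSD (dB re uPa^2/Hz) as the power sum of the
    turbulence, shipping, wind and thermal components; f_khz is in kHz."""
    f = np.asarray(f_khz, dtype=float)
    n_turb = 17.0 - 30.0 * np.log10(f)
    n_ship = 40.0 + 20.0 * (s - 0.5) + 26.0 * np.log10(f) - 60.0 * np.log10(f + 0.03)
    n_wind = 50.0 + 7.5 * np.sqrt(w) + 20.0 * np.log10(f) - 40.0 * np.log10(f + 0.4)
    n_therm = -15.0 + 20.0 * np.log10(f)
    parts = np.stack([n_turb, n_ship, n_wind, n_therm])
    # sum the components in linear power, then convert back to dB
    return 10.0 * np.log10(np.sum(10.0 ** (parts / 10.0), axis=0))
```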
Image Model
Noise interference is a common problem in digital communication and image processing. An underwater noise model for image denoising in an additive coloured noise channel is presented in this section. Numerous applications assume that a received image can be expressed as r(x, y) = s(x, y) + v(x, y), (6) where s(x, y) is the original image and v(x, y) denotes underwater noise. Hence, denoising aims to eliminate the corruption of s(x, y) caused by v(x, y). The power spectrum and autocorrelation of additive white Gaussian noise (AWGN) are expressed as S_v(f) = σ_v² (7) and R_v(τ) = σ_v² δ(τ) (8) [14]. The PSD of AWGN remains constant across the entire frequency range, so that all frequencies have a magnitude of σ_v². The probability distribution function p(v) for AWGN is specified by p(v) = (1/(σ√(2π))) exp(−v²/(2σ²)) (9) [15], where σ represents the standard deviation. With regard to the autocorrelation function, the delta function indicates that adjacent samples are independent. Therefore, the observed samples are considered independent and identically distributed.
Underwater noise is frequency-dependent [16,17]; hence, the assumption that it is AWGN is invalid, and it is instead suitably modelled as coloured noise [1,2,18]. The PSD of coloured noise varies with frequency instead of remaining flat [19,20]. Likewise, the autocorrelation R_v[m] of coloured noise is not a delta function [14,19]; in contrast to AWGN, the noise samples are correlated [20].
Whitening Filter and Inverse Whitening Filter
A linear time-invariant whitening filter can be used to transform coloured noise into white noise [14,21]. Through its transfer function H(z), the prediction error filter (PEF) is used for whitening purposes [20,22]. The output of the PEF is the difference between the actual sequence and the sequence estimated by the linear predictor. The one-step-forward predictor is x̂(n) = −∑_{k=1}^{p} a(k)x(n − k), (11) where p is the length of the designed filter and a(k) represents the filter coefficients. The forward prediction error is defined as e(n) = x(n) − x̂(n) = x(n) + ∑_{k=1}^{p} a(k)x(n − k) (12) [20]. The filter coefficients can be estimated by minimising the mean squared error (MSE). The transfer function of the filter can then be defined as H(z) = 1 + ∑_{k=1}^{p} a(k)z^(−k). (13) If the order of the PEF is suitably large, then the output of the filter becomes white noise [20].
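As a minimal Python sketch of the PEF described above, the coefficients a(k) can be estimated by solving the Yule-Walker normal equations built from autocorrelation estimates; the order p = 10 matches the whitening filter order used later in the experiments, and the implementation details are illustrative rather than the paper's exact code.

```python
import numpy as np

def prediction_error_filter(x, p=10):
    """Estimate order-p PEF coefficients by minimising the MSE
    (Yule-Walker normal equations) and return the whitened signal e(n)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # biased autocorrelation estimates r[0..p]
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(p + 1)]) / n
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz
    a = np.linalg.solve(R, -r[1:p + 1])      # predictor coefficients a(1..p)
    h_w = np.concatenate(([1.0], a))         # H_w(z) = 1 + sum_k a(k) z^-k
    e = np.convolve(x, h_w)[:n]              # prediction error = whitened output
    return h_w, e
```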
The output of the PEF is the convolution of the noisy image with the impulse response h_w(n) of the filter used in the whitening process, y(n) = r(n) * h_w(n). (14) Therefore, the output of the PEF is a coloured version of the original image in white noise.
After the filter coefficients are determined, the noise term v(n) * h_w(n) is minimised through a denoising process, thereby producing a clean version of the transformed image, ŷ(n) ≈ s(n) * h_w(n). (15) An inverse whitening filter (IWF) can then be used to recover the original image [23]. Let h_iw(n) denote the impulse response of the IWF. The recovered image is ŝ(n) = ŷ(n) * h_iw(n), (16) where H_w(z)H_iw(z) = 1 represents the relationship between the whitening and inverse whitening filters. The image recovered in the z-domain can therefore be defined as Ŝ(z) = Ŷ(z)/H_w(z). (17)
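Because H_w(z) is FIR, the IWF H_iw(z) = 1/H_w(z) is an all-pole IIR filter; a short sketch using SciPy (assuming SciPy is available) is:

```python
from scipy.signal import lfilter

def inverse_whitening(y_denoised, h_w):
    # lfilter(b, a, x) realises B(z)/A(z); with b = [1] and a = h_w this
    # implements H_iw(z) = 1 / H_w(z), recovering the de-whitened signal.
    return lfilter([1.0], h_w, y_denoised)
```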
Image Denoising
Wavelets are used in image processing for edge detection, watermarking, compression, denoising and the coding of interesting features for subsequent classification [24,25]. The following subsections discuss image denoising by thresholding the DWT coefficients.
DWT of Image Data
An image is represented as a 2D array of coefficients, each coefficient representing the brightness at that point. Most natural images exhibit smooth colour variations, with fine details represented as sharp edges between the smooth variations. Smooth variations in colour can be classified as low-frequency components, whereas sharp variations can be classified as high-frequency components. The low-frequency components (i.e. the smooth variations) establish the base of an image, whereas the high-frequency components (i.e. the edges that provide the details) are superimposed on the low-frequency components to refine the image, thereby producing a detailed image. Therefore, the smooth variations are more important than the details. Numerous methods can be used to distinguish between smooth variations and image details. One example of these methods is image decomposition via the DWT. The different decomposition levels of the DWT are shown in Figure 2.
The Inverse DWT of an Image
The different classes of data are combined into a reconstructed image by using the inverse wavelet transform. A pair of high- and low-pass filters is also used during the reconstruction process; this pair of filters is referred to as the synthesis filter pair. The filtering procedure is simply the opposite of the decomposition; that is, the procedure starts from the highest level, and the filters are applied first column-wise and then row-wise, level by level, until the lowest level is reached.
Proposed Method
The following steps describe the image denoising procedure that uses a pre-whitening filter.
1) The pre-whitening process is performed on the noisy image using the PEF to convert coloured noise to white noise.
2) The DWT of the noisy image is computed.
3) The noise variance in level k is estimated using the robust median estimator σ_k = median(|Y_k(i, j)|)/0.6745, where Y_k(i, j) represents the coefficients of the wavelet detail in level k.
4) A soft threshold is applied to the sub-band coefficients of each sub-band, except for the low-pass (approximation) sub-band: Y_{T,k}(i, j) = sign(Y_k(i, j)) · max(|Y_k(i, j)| − T_k, 0), where T_k denotes the threshold value in level k and Y_{T,k}(i, j) represents the wavelet detail coefficients after the thresholding process in level k (see the sketch after this list).
5) The image is reconstructed by applying the inverse DWT to obtain the denoised image.
Figure 3 shows the data flow diagram of the image denoising process.
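A compact Python sketch of steps 2-5 using PyWavelets is given below; the universal threshold T_k = σ_k √(2 ln N) and the per-band noise estimate are assumed, illustrative choices rather than the paper's exact settings.

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db4", levels=4):
    """DWT soft-threshold denoising with the robust median noise estimator."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    out = [coeffs[0]]                                   # keep approximation band
    for detail in coeffs[1:]:                           # (cH, cV, cD) per level
        bands = []
        for band in detail:
            sigma = np.median(np.abs(band)) / 0.6745    # robust median estimator
            t = sigma * np.sqrt(2.0 * np.log(band.size))
            bands.append(pywt.threshold(band, t, mode="soft"))
        out.append(tuple(bands))
    return pywt.waverec2(out, wavelet)
```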
Performance Measures
Common measurement parameters for image fidelity include the mean absolute error, normalised MSE (NMSE), PSNR and MSE [26]. An SNR over 40 dB provides excellent image quality that is close to that of the original image; an SNR of 30-40 dB typically produces good image quality with acceptable distortion; an SNR of 20-30 dB indicates poor image quality; and an SNR below 20 dB indicates an unacceptable image [27].
PSNR and NMSE are calculated as follows [28]: PSNR = 10 log₁₀(255²/MSE) and NMSE = ∑_i ∑_j [x(i, j) − x̂(i, j)]² / ∑_i ∑_j x(i, j)², where MSE is the mean squared error between the original image x and the denoised image x̂ of size M×N: MSE = (1/(M·N)) ∑_{i=1}^{M} ∑_{j=1}^{N} [x(i, j) − x̂(i, j)]².
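A direct Python implementation of these measures (a sketch assuming 8-bit images with peak value 255):

```python
import numpy as np

def image_metrics(x, x_hat, peak=255.0):
    """Return (PSNR in dB, MSE, NMSE) between original x and denoised x_hat."""
    x, x_hat = np.asarray(x, dtype=float), np.asarray(x_hat, dtype=float)
    mse = np.mean((x - x_hat) ** 2)
    psnr = 10.0 * np.log10(peak ** 2 / mse)
    nmse = np.sum((x - x_hat) ** 2) / np.sum(x ** 2)
    return psnr, mse, nmse
```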
Results and Discussion
MATLAB is used as the experimental tool for simulation, and simulation experiments are performed on a diver image to confirm the validity of the algorithm. The simulations are run at PSNR values ranging from 30 dB to 60 dB by changing the noise power from 0 dB to 15 dB. The applied order of the whitening filter is 10. Different denoising wavelet bases (i.e. Daubechies, biorthogonal 1.5 and symlet) are tested on an image with underwater noise via numerical simulation. As shown in Figure 4, soft thresholding and four decomposition levels are used.
Tables 2, 3 and 4 show the performance of the proposed method at various noise power levels based on the Daubechies, symlet and biorthogonal wavelet bases, respectively. The PSNR and MSE values are calculated for each noise power value.
Conclusion
Underwater noise is mainly characterised as non-white and non-Gaussian noise. Therefore, traditional methods for underwater image denoising using the wavelet transform are inefficient, because these methods use only a single level for noise variance estimation and then apply it to the other levels. However, the noise variance at each level should be estimated independently in coloured noise. As demonstrated by the results, the traditional wavelet denoising method can be used efficiently, with PSNR and MSE within an acceptable range, by using a pre-whitening filter that converts underwater noise to white noise.
Figure 1. Overall diagram of the denoising method using whitening and pre-whitening processes.
Figure 3. Data flow diagram of image denoising using a pre-whitening filter.
Figure 4. Simulation results on the diver image using different wavelet bases.
Table 2. Performance results of PSNR and MSE on the diver image based on the Daubechies wavelet basis.
Table 3. Performance results of PSNR and MSE on the diver image based on the symlet wavelet basis.
Table 4. Performance results of PSNR and MSE on the diver image based on the biorthogonal wavelet basis. | 2019-02-19T14:08:30.227Z | 2018-12-01T00:00:00.000 | {
"year": 2018,
"sha1": "09f2099d6cacb2895730e44605a3d1384efd9fad",
"oa_license": "CCBYSA",
"oa_url": "http://journal.uad.ac.id/index.php/TELKOMNIKA/article/download/9236/5899",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "f63a40dac8564e29b5aabab3d81dd4a989efaca9",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
219065408 | pes2o/s2orc | v3-fos-license | Muslim Character in Dealing with Rumors in Light of Surat Al-Nur
Praise be to Allah, Lord of the Worlds, for the End is for those who are righteous; peace and blessings be upon our Prophet Muhammad, his family and companions. Rumor is a social phenomenon that exists in all societies, in both ancient and modern times. It is a syndrome that threatens individuals, groups, institutions, communities and nations. At the beginning of the Islamic era, the incident of invented falsehood was carried out by the hypocrites. It almost had a significant impact on the morale of some Muslims, until the Almighty Allah revealed the Noble Quran showing the innocence of Aisha (mother of believers). The research problem lies in the lack of knowledge on the moral approach to confront and restrict the spread of rumors. This study aims to determine the moral approach established by the Noble Quran in dealing with rumors through the incident of invented falsehood that appears in Surat al-Nur. An inductive and analytical approach was used to address this issue. The study concluded that, in order to protect society from this deadly syndrome, Islam has established a strong moral approach to confront rumors and restrict their spread and influence in society. As stated in Surah Al-Nur, the following steps are taken: first, good thought amongst the believers; second, non-circulation of false stories; third, establishing evidence and proof; fourth, tracing the source of the rumor; fifth, strict warning of punishment in this world and the Hereafter.
Keywords: Muslim Character, Rumors, Invented Falsehood.
DEFINITION OF RUMOR
Literal meaning: It is mentioned in Lisan al-Arab: Sha'a al-shayb (the grey hair spread), intashar (it spread), sha'a al-khabar (the news spread), dha'a dhikr al-shai (something became known), asha'ata al-mal (the money circulated). Therefore, isha' (rumor) refers to circulated stories, and mishya' means a person who does not keep secrets (Ibn Manzur, 1414). Intishar and dhuyou' mean the spread and circulation of a rumor.
Technical meaning: There are many definitions of rumor, including: stories, words, or news conveyed and repeated in society without verifying their accuracy (Noufal, 1987:16).
It is not an exaggeration to say that what the Prophet (peace be upon him) faced in the hadith of invented falsehood is one of the most difficult incidents in his biography. Muslims were seriously deceived during the incident, which was merely an obvious fabrication. If not for Allah's will, there could have been a great calamity and atrocity. The Muslim community in Medina spent a month in a terrible situation under a ruthless rumor, until the revelation came to end this tragedy. It is a wonderful, informative lesson for that society and for Muslim society in general (Noufal, 1987:128).
Objectives of rumors:
They spread fear, trouble, desire, and hatred, manipulate facts, and distort the opponent's image.
Rumors are used:
As a means of disrupting morale, concealing facts, questioning the sources of accurate stories, and distorting reality.
Rumors also have a negative impact on individuals and society, which is evident in the incident of invented falsehood:
The central issue in the story of the incident of invented falsehood is the accusation against Aisha (may Allah be pleased with her). It is an occurrence that might be repeated in every generation, and its major objective is the possession of leadership. If the leadership cannot be taken by force, the enemy is left with no option other than destroying the leadership through psychological warfare, using the methods of trickery, deception, and the fabrication of lies. It is a war of propaganda circulated by the enemy against the legitimate leadership (Ismail, 2001).
The incident aimed at
Challenging the integrity of the Prophet (peace and blessings of Allah be upon him) by spreading infidelity and falsehood among the believers, creating doubt and suspicion about the Islamic faith, fueling disparity within the Muslim community, and employing the weak believers and hypocrites in a battle in which all methods of psychological influence are used. Therefore, because of the seriousness of this matter, and to protect society from this deadly syndrome, Islam has established a strong moral approach represented in the following steps:
Good thought amongst the believers
This is evident in the Qur'an: "Why, when you heard it, did not the believing men and believing women think good of one another and say, 'This is an obvious falsehood'?" This is what al-Shaheed Sayyid Qutb (may Allah have mercy on him) called the "sub-emotional guide" (Qutb, 1412H: 2), and it is the first step in prevention and protection.
It was applied by some of the Prophet's companions during the incident of invented falsehood, such as Abu Ayyub al-Ansari and his wife (may Allah be pleased with them). Abu Ayyub's wife, Ummu Ayyub, told him: O Abu Ayyub! Do you hear what people say about Aisha (may Allah be pleased with her)? He said: Yes, and that is a falsehood; would you do it, Ummu Ayyub? She said: No, I swear by Allah, I am not going to do that. He said: And Aisha is better than you.
In another narration: She said to him: If you were in the position of Safwan, would you think something bad about the integrity of the Messenger of Allah? He said: No. She said: And if I were in the position of Aisha (may Allah be pleased with her), I would not betray the Messenger of Allah; thus Aisha is better than me, and Safwan is better than you (Al-Sabouni, 1981:591-592).
Non-circulation of false stories
The circulation of false stories, even when they are not believed, is the cause of widespread wrongdoing in society. It is a means by which the weak believers are used to cause trouble for the entire people. Therefore, Allah has forbidden the Muslim community to spread this falsehood in society, as stated in the Qur'an: "When you received it with your tongues and said with your mouths that of which you had no knowledge and thought it was insignificant while it was, in the sight of Allah, tremendous" (Al-Nur: 15).
Allah has declared the circulation of falsehood to be one of the greatest sins and crimes, and the Almighty warned about three things: first, to receive it with the tongues, i.e., to ask about it; second, to talk about it; and third, to belittle it, as they thought it was insignificant while it was tremendous in the sight of Allah. The rationale behind the mention of "with your tongues" and "with your mouths" is that the story was conveyed by tongues rather than hearts (Ibn Jauzi, 2013:1031).
Therefore, the Almighty Allah said: you "thought it was insignificant while it was, in the sight of Allah, tremendous". The Muslim community, which carries the message of Surah al-Nur to all mankind, must therefore be of virtuous tongue. They must know when to speak and what to speak about, and when to keep silent and why, because they know the danger of a statement and its implications. The Almighty Allah said: "Man does not utter any word except that with him is an observer prepared [to record]". Therefore, the Almighty Allah specified the solutions in these cases precisely, who amongst the Muslim community should be responsible for such incidents, and how to deal with those malicious rumors. Allah says: "And why, when you heard it, did you not say, 'It is not for us to speak of this. Exalted are You, [O Allah]; this is a great slander'?" (Al-Nur: 16). This is another chastisement encouraging positive thinking among people; i.e., if someone mentions what is not appropriate to say about righteous people, people should think something good, and if the person makes a comment after that, people should not speak of it (Hawi, 1424H: 3715).
A meaningful way to manage a rumor is to suppress it, because a rumor persists through its spread and circulation. Therefore, the Almighty Allah cleared the doubt with a detailed explanation and a severe warning, saying: "Allah warns you against returning to the likes of this [conduct], ever, if you should be believers" (Al-Nur: 17). Allah reminds and admonishes you not to return to such transgression forever if you are believers, because your faith could be weakened by such deeds (Al-Sabouni, 1979:329). Al-Nasafi said: It encourages them to be observant and reminds them to avoid any bad thing (Al-Nasafi, 1998:57).
Establishing evidence and proof
This is what al-Shaheed Sayyid Qutb (may Allah forgive him) called the "request for external evidence and proof of reality". Allah says: "Why did they [who slandered] not produce for it four witnesses? And when they do not produce the witnesses, then it is they, in the sight of Allah, who are the liars" (Al-Nur: 13). This means that those fabricators must produce four witnesses to what they said; if they fail to do so, then they are the transgressors and liars according to the law of Allah. There is also a warning to those who heard the falsehood and did not deny it from the beginning (Al-Sabouni, 1979, vol. 2, p. 329). Hence, the rule in this matter is that Muslims should investigate the evidence of reality, especially in the course of events.
Evidence comes first, and then the honesty of the person who conveyed the story. If the evidence is not clear, it is the responsibility of the Muslim community to stop it and adopt the approach of the Almighty Allah, who ordered us to investigate. Allah says: "O you who have believed, if there comes to you a disobedient one with information, investigate, lest you harm a people out of ignorance and become, over what you have done, regretful" (Al-Hujurat: 6).
Tracing the source of the rumor
This is done by tracking the source of the rumor and punishing its inventors and holding them accountable, because dealing with the source of rumors and exposing it is the first step in stopping responses to rumors. Therefore, it is necessary to expose the hypocrites who invented the falsehood. The Almighty Allah said: "Indeed, those who came with falsehood are a group among you. Do not think it bad for you; rather it is good for you. For every person among them is what [punishment] he has earned from the sin, and he who took upon himself the greater portion thereof - for him is a great punishment" (Al-Nur: 11).
Strict warning of punishment in this world and the Hereafter
The Almighty Allah has warned those who repeat false rumors and has shown that they are not an insignificant issue; they are tremendous in the sight of Allah. The Almighty Allah has also warned those who seek to spread atrocity in the Muslim community, promising them punishment in this world and the Hereafter. The Almighty Allah said: "Indeed, those who like that immorality should be spread [or publicized] among those who have believed will have a painful punishment in this world and the Hereafter. And Allah knows and you do not know". | 2019-08-03T01:36:28.186Z | 2018-11-16T00:00:00.000 | {
"year": 2018,
"sha1": "98c3cfddb8b90b209bcda305084030b057bcca22",
"oa_license": "CCBY",
"oa_url": "https://hrmars.com/papers_submitted/4996/Muslim_Character_in_Dealing_with_Rumors_in_Light_of_Surat_Al-Nur.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9375dd8ae965cb81bb3c584fb6893b9a6b7e86a6",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"History"
]
} |
219065408 | pes2o/s2orc | v3-fos-license | Research on Uyghur Pattern Matching Based on Syllable Features
Pattern matching is widely used in various fields such as information retrieval, natural language processing (NLP), data mining and network security. In Uyghur (a typical agglutinative, low-resource language with complex morphology, spoken by the ethnic Uyghur group in Xinjiang, China), research on pattern matching is also ongoing. Due to the language characteristics, pattern matching using characters or words as the basic unit has insufficient performance. There are two problems for pattern matching: (1) vowel weakening and (2) morphological changes caused by suffixes. In view of the above problems, this paper proposes a Boyer-Moore-U (BM-U) algorithm and a retrievable syllable coding format based on the syllable features of the Uyghur language and an improvement of the Boyer-Moore (BM) algorithm. This algorithm uses syllable features to perform pattern matching, which effectively solves the problem of weakened vowels, and it can better match words with stem shape changes. Finally, in the pattern matching experiments based on character-encoded text and syllable-encoded text for vowel-weakened words, the BM-U algorithm's precision, recall, F1-measure and accuracy are improved by 4%, 55%, 33%, 25% and 10%, 52%, 38%, 38% compared to the BM algorithm.
Introduction
Pattern matching refers to the following problem: given a string (hereinafter referred to as the text) T with length n and another string (hereinafter referred to as the pattern) P with length m (m ≤ n), it is necessary to find the starting position of the first occurrence, or of all occurrences, of pattern P in text T. Once found, the match is called a success; otherwise, the match fails. Pattern matching is one of the basic research topics of computer science [1]. As an important text processing technology, pattern matching has been applied in many related fields, such as data processing, data compression, text editing, machine translation, search engines, virus and network intrusion detection, content filtering and genetic detection [2][3][4][5][6][7][8][9]. The quality of pattern matching directly affects the quality of related research and the complexity of the algorithm.
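For reference, a minimal Python sketch of the classic Boyer-Moore idea, using the bad-character rule only (the good-suffix rule and the syllable-based extensions of the BM-U algorithm proposed later in this paper are omitted):

```python
def bm_search(text, pattern):
    """Return the start positions of all occurrences of pattern in text."""
    m, n = len(pattern), len(text)
    last = {c: i for i, c in enumerate(pattern)}   # rightmost index of each unit
    hits, i = [], 0
    while i <= n - m:
        j = m - 1
        while j >= 0 and pattern[j] == text[i + j]:
            j -= 1                                 # compare right to left
        if j < 0:
            hits.append(i)                         # full match at position i
            i += 1
        else:
            i += max(1, j - last.get(text[i + j], -1))  # bad-character shift
    return hits

print(bm_search("manchu mankind man men", "man"))  # [0, 7, 15]
```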
According to China's sixth census in 2010, the Uyghur population is 10 million, and Uyghur is a low-resource language. From a technical point of view, since the Windows Vista, iOS 8.0 and Android 4.0 operating systems began to fully support the Uyghur language at the system level, Uyghur network text resources and the information to be processed have expanded rapidly, which has also accelerated progress in Uyghur natural language processing (NLP). In April 2019, Tencent launched a machine translation tool including Uyghur-Chinese translation based on the WeChat platform. In February 2020, Google Translate also added Uyghur language support. At present, the agglutinative nature and morphological complexity of the Uyghur language are among the main difficulties in its pattern matching research.
In languages such as English, Chinese and Uyghur, characters and words are constituent units of different granularities and are often used as the basic units of pattern matching research. With the large-scale growth of textual information and content in the Uyghur language, higher requirements have been placed on pattern matching techniques for Uyghur. When researching word pattern matching in Uyghur, the language characteristics lead to rich morphological changes, and different suffixes can form new words by concatenation. Therefore, research on word pattern matching in Uyghur faces two problems: (1) in research using characters as the basic unit, each word is composed of multiple characters, leading to low matching efficiency; and (2) when the word is used as the basic unit for matching, the morphological complexity of words leads to low matching efficiency.
Our analysis of Uyghur word structure and morphological changes shows that almost all words can be composed of a certain number of syllables. Therefore, this paper designs a data format with syllables as the basic data unit to study the single-pattern matching task in Uyghur. The Boyer-Moore algorithm is improved by incorporating syllable feature information on the morphological changes of words, and pattern matching is performed on ordinary text and on the syllable-encoded text proposed in this paper. Experimental results show that the method has good performance.
The main contributions of this paper are as follows.
(1) Our research on the structural features of Uyghur words and syllables, and the proposed searchable compression format based on syllables, will help improve the performance of existing pattern matching algorithms. (2) We conducted in-depth research on the morphological changes of words caused by weakened vowels. Through a limited expansion of pattern matching sequences, the problem of mismatch caused by morphological changes is solved; the semantically similar matching effect and the recall, precision, accuracy and F1 values improve significantly. (3) The research on pattern matching in this paper is also applicable to other syllabic agglutinative languages and can serve as a useful reference for pattern matching research in other languages of the same type.
Related Research
Pattern matching algorithms have matured over the past few decades, and several classic algorithms have appeared [10,11]. Subsequent improvements have been made to these algorithms [12,13]. At present, research on pattern matching pays more attention to application innovation and improvement in specific tasks, such as NLP, information retrieval, text filtering and network security. In Uyghur, the study of pattern matching started late.
Syllables, as one of the main Uyghur features, have been widely studied in recent years. Research based on syllable feature information covers tasks such as speech recognition, speech synthesis, lexical analysis, named entity recognition and spell checking [14][15][16][17][18]. A multi-pattern matching algorithm for Uyghur using syllable information was researched for the first time in [19]. This method uses the number and structure of syllables as one of the matching conditions to improve matching efficiency. However, it can only match words with consistent morphological features and cannot match words with weakened vowels and changed syllable structures; syllable segmentation and analysis of the syllable structure are required during the matching process. The Uyghur text filtering task has also been studied [20]; the authors used extended stems and an additional suffix library to improve pattern matching performance and deal with vowel weakening.
There are also many studies on pattern matching in compressed formats. The corresponding pattern matching algorithms differ for different compression units and compression algorithms. Usually, the short text [21], the suffix [22,23], the word [24] or the character string [25,26] is used as the pattern matching unit for the compressed content. Some studies have used the BM algorithm as the pattern matching algorithm in a compressed format [27,28]. Narupiyakul [29] and Paul G [30] treat syllables as the retrieval unit.
Morphological Changes of Words
Uyghur is a typical agglutinative language. It has strong derivational ability and rich morphological variation. The complex morphology of words is the main feature of agglutinative languages [31][32][33][34][35]. As a typical complex agglutinative language, its morphological structure is word = stem + [suffix]. There are two types of suffixes: inflectional suffixes and derivational suffixes. Adding a derivational suffix to a root or stem generates a new word, similar to work + man = workman. An inflectional suffix only changes grammatical attributes of the original word, such as the person, plural and case, similar to book + s = books; this paper discusses inflectional suffixes. Uyghur noun stems can be connected with different suffixes and support continuous concatenation of multiple suffixes. For example, the noun "قويۇڭلارنىڭ" (qoyuŋlarniŋ) is generated by adding three layers of suffixes to the stem qoy (sheep): (1) qoy + uŋ (your sheep); (2) qoyuŋ + lar (your sheep, plural); (3) qoyuŋlar + niŋ (your sheep's, plural).
Vowel Weakening
Phonetic harmony is very common in modern Uyghur, and one of its main manifestations is the weakening of vowels. Vowel weakening refers to the weakening of vowels into other vowels when certain suffixes are added to stems containing specific vowels, for example: Är (man) + i (third person) = Eri (his man, Ä→E); karwat (bed) + im (first person) = karwitim (my bed, a→i); Taš (stone) + iŋ (second person) = tešiŋ (your stone, a→e).
Mireguli et al. [36] proposed an algorithm to identify Uyghur vowel weakening based on word and syllable structure. Other languages have similar phenomena [37][38][39][40][41][42][43][44]. Uyghur vowel weakening occurs frequently in written form. There are special exceptions, such as Taj (crown) + i (third person) = Taji (his crown, a→a). The weakening rules are complex, and not all phenomena can be described completely by rules. Of the 27,266 stem words collected from the orthographic dictionary, 13,843 (50.7%) are structurally subject to vowel weakening [45]. Although these words contain a certain number of irregular words, it can be seen that the weakening of vowels is a very common phenomenon in Uyghur.
Syllable-Encoded Text
There are no special marks between Uyghur syllables, and the pronunciation of a syllable is the same in isolation and within words [14]. There are 12 types of syllables in current Uyghur words, with C standing for consonant and V for vowel. The syllable types of native words are the six structures V, VC, CV, CVC, VCC and CVCC, while CCV, CCVC, CCVCC, CVV, CVVC and CCCV are structures for recording foreign words. The CVV and CVVC structures with two Vs are used for Chinese or other foreign words with two vowels. This paper uses the syllable segmentation method described by Wayit et al. [46]. Wayit et al. [47] found that the 2000 Uyghur syllables with the highest frequency can cover 99% of words, and proposed a syllable coding scheme, B16, which encodes each syllable with the same length as a Unicode character; the encoding area lies within the Unicode Private Use Area (ue000-uf8ff). This paper uses the coding scheme of Wayit et al. [47] to design a text format based on syllable encoding and changes the basic unit of string pattern matching in text from characters to syllables, compressing strings while achieving syllable-based pattern matching.
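As an illustrative Python sketch of this idea (the actual B16 codebook of 2000 high-frequency syllables from [47] is not reproduced here; the syllable list and code-point assignment below are hypothetical):

```python
PUA_BASE = 0xE000  # start of the Unicode Private Use Area (ue000-uf8ff)

def build_codebook(syllables):
    """Map each syllable string to one private-use code point."""
    return {s: chr(PUA_BASE + i) for i, s in enumerate(syllables)}

def encode_word(word_syllables, codebook):
    # one output character per syllable, so the matching unit becomes the syllable
    return "".join(codebook[s] for s in word_syllables)

# Toy syllable set for the segmented word qo-yuŋ-lar-niŋ
codebook = build_codebook(["qo", "yuŋ", "lar", "niŋ"])
encoded = encode_word(["qo", "yuŋ", "lar", "niŋ"], codebook)
print(len(encoded))  # 4 code points, one per syllable
```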
Basic Concepts
There are several search-related symbols and supplementary definitions for string matching used in this paper:
1. uChar is a Uyghur Unicode character; its encoding range is (u0600-u06ff).
2. Sb is a syllable composed of several uChar. When Sb is a syllable composed of three uChar, its structure is Sb [uChar1 uChar2 uChar3].
3. Sc is the syllable encoding of Sb in the B16 encoding scheme [47]. Each Sc has the same length as a Unicode character, and its encoding range is in the Unicode Private Use Area (ue000-uf8ff).
4. W is a Uyghur word composed of several Sb, W (Sb1Sb2…Sbn); its length equals the number of uChar in the word.
5. Wz is the result of syllable segmentation and encoding of word W. When W has three syllables, its structure is Wz (Sc1Sc2Sc3), and the length of Wz equals the number of syllables of W.
6. P is a pattern and noun stem; its structure is similar to W, and W = P + inflectional suffix.
7. Pz is the syllable code of pattern P; its structure is similar to Wz.
8. T is a text containing n words W, with structure T (W0, W1, …, W(n-1)).
9. Tz is a syllable-encoded compressed text generated from T by syllable encoding. Its structure is Tz (Wz0, Wz1, …, Wz(n-1)).
10. Structure matching: string S1 is completely contained in string S2 with its character sequence unchanged, and the length of S1 ≤ the length of S2.
11. Semantic matching: when W = P + inflectional suffixes, the semantics of pattern P are included in word W; then P and W match semantically. Sometimes vowel weakening changes the structure of W so that the structure of P no longer matches. The length of pattern P is less than or equal to the length of word W.
12. Matching result: when P matches a W semantically or structurally in T, the complete W is returned. For example, when P = man and T = {"other", "manchu", "mankind", "man", "men"}, the structure matching result of P is {"manchu", "mankind", "man"}, and the semantic matching result is {"man", "mankind", "men"}.
Retrieval Parameters and Calculation Formulas
The ideal search result for this paper is the set of words in T that semantically match pattern P. Retrieval quality is evaluated with the standard measures precision (P), recall (R), accuracy (A), and F1-measure (F), computed from the numbers of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN) of a matching result:
(1) P = TP / (TP + FP)
(2) R = TP / (TP + FN)
(3) A = (TP + TN) / (TP + FP + TN + FN)
(4) F = 2 × P × R / (P + R)
Preparation of Experimental Corpus
Because of the morphological complexity of words in agglutinative languages, the usual pattern matching experiment prepares a corpus T of a certain size and then randomly selects patterns P of different lengths, or selects a certain number of highly related words as P (e.g., the top 10 words with the highest correlation). This method does not ensure that all forms of a word can be matched, because some words change their structure more than once after suffixes are added, such as naxša + lar → naxšilar + iŋ → naxšiliriŋ (ša→ši, lar→lir). Moreover, some forms of a word rarely appear, and the experimental corpus T may not include them. This paper therefore prepares three types of experimental corpus.
1. Type A corpus generated by an algorithm. We first selected 22 high-frequency words based on word length, syllable structure, and number of syllables (Table 1); 11 of the words are subject to weakening (Word V.W), covering all four vowel weakening types (a→i, a→e, ä→i, ä→e). We designed a morphology-based word generator built on stemming; taking nouns as an example, this algorithm can generate all 312 forms of a pattern P in the dictionary [45] by adding one to four layers of suffixes. The experimental corpus T generated by this algorithm covers 22 × 312 = 6864 forms of the 22 words. If a matching algorithm can match all 312 forms of P, it can in theory recognize and match every form of pattern P in any natural language environment, and hence recall = 1. When recall = 1, corpora B and C can be used in the next experiment to test the pattern matching ability of the algorithm in a natural language environment.
2. Type B corpus made from natural language text. We collected a certain amount of real corpus and performed word segmentation to generate a word dataset containing the words appearing in the corpus and their occurrence frequencies F. The corpus is Unicode-encoded text with a size of 46.8 MB covering general news, agricultural technology, agency names, novels, natural sciences, dictionaries and encyclopedias, and short social media texts. It contains 136,523 unique words and 7434 unique syllables.
3. Type C corpus. Whereas the type A and type B corpora are lists of experimental words obtained through algorithmic derivation and database fuzzy queries, type C is a paragraph of natural language text composed of several typical sentences.
Table 2 shows the statistics of pattern P in the type B corpus. In the table, P is a test stem; Pm is the number of words that fuzzily match P in structure; Pv is the weakened form of pattern P; Pvm is the number of words that fuzzily match Pv in structure; F0 is the occurrence frequency of all Pm and Pvm in the corpus; and Pr is the number of words semantically related to pattern P. For example, when P = är (man), ärkäk (male) belongs to Pr while ärkin (freedom) does not; Pr is labeled manually and is a subset of Pm and Pvm. Fr is the frequency of all Pr in the corpus and is a part of F0. Pm and Pvm are obtained through fuzzy SQL queries of the form: select words, frequency from table where words like '%P%'.
Matching of Existing Algorithms
The Boyer-Moore (BM) algorithm was used to perform pattern matching on the type A experimental corpus, with Tz the syllable-encoded text of corpus T. Table 3 shows the matching results. In the table, M indicates a successful match, Mis indicates a failed match, and e.g. gives an example of a failed match. There are three matching cases.
1. Both P against T and Pz against Tz match exactly; for example, toxu and därya.
2. P against T matches exactly, while Pz against Tz matches only partially; for example, quš.
3. Both P against T and Pz against Tz fail to match; for example, naxša.
Analysis
According to the experiments, to improve the degree of structural matching between P and T and between Pz and Tz, we must first solve the matching failures caused by changes in syllable structure. Below we use '*' for any string, '#' for any string that forms a syllable structure, and sx for any syllable.
1. Changes in syllable structure caused by weakened vowels.
Take naxša in Table 3 as an example. When the third-person suffix si is added, vowel weakening changes the morphological structure: W = naxša + si = naxši + si (a→i). When P = naxša, the match with W = naxšisi (his song) fails. To retrieve such forms, an algorithm must determine from the morphological structure of P whether vowel weakening can occur and, if so, compute the weakened form Pv of pattern P and use Pv to find the forms that P itself cannot match.
2. Changes in syllable structure caused by the addition of suffixes.
Take Pz = quš in Table 3 as an example. When the first-person suffix um is added, the syllable structure changes as follows: Wz = quš + um → qu + šum (cvc + vc → cv + cvc) (bird → my bird). The syllable structure of Wz then cannot match Pz. If the structural changes of quš are represented as quš* and qu + š# + sx, then the matching problem is solved if the algorithm can recognize that the second syllable satisfies š#. In terms of syllable structure, š# belongs to C#. According to the Uyghur syllable types, five structures can appear: CV, CVC, CVCC, CVV, and CVVC. With the first C fixed as the character š, and given that Uyghur has 24 consonants and 8 vowels, the theoretically possible forms of the second syllable š# number: CV: 8; CVC: 8 × 24 = 192; CVCC: 8 × 24 × 24 = 4608; CVV: 8 × 8 = 64; and CVVC: 8 × 8 × 24 = 1536; in total 8 + 192 + 4608 + 64 + 1536 = 6408.
Solutions
We found that the change of syllable structure occurs between the last syllable of P and the first inflectional suffix. According to the rules [45] for attaching suffixes to nouns, adding the first layer of suffixes to P generates 18 word forms. For comparison and convenience, we selected alma (apple) and quš (bird) and added the first layer of suffixes to observe the changes in morphological structure; Table 4 shows the results. Here we need to determine the value range of š#. According to the calculation above, š# has 6408 possibilities. Observing š#, the following structures occur: quš + sx (stem, no person), qu + šum + sx (first person), qu + šuŋ + sx (second person), and qu + ši + sx (third person). All forms of quš can thus be represented with three structures, which is a very interesting phenomenon: the value range of š# can be reduced from 6408 to only 3 (šum, šuŋ, ši), and the remaining 6405 can be ignored. Another useful result is that if the current value of š# is not one of these three, the word Wz in Tz cannot meet the semantic matching condition Wz = Pz + inflectional suffix. For example, when Tz = {Wz1 = so + qu + šuš + niŋ (the war's…), Wz2 = tö + gi + qu + ši + niŋ (the ostrich's…)}, because the third syllable šuš of Wz1 is not in (šum, šuŋ, ši), this method automatically excludes Wz1 and matches Wz2. When Pz is used to search Tz, some words that are semantically unrelated to Pz can thus be excluded without performing any semantic analysis, which further improves precision and retrieval speed. The method is equally effective for words with weakening, such as alma.
If the structure of alma after a suffix is added cannot satisfy (al + mam, al + maŋ, al + mi), then the matching result is not related to alma; for example, almas (al + mas: diamond) is not semantically related to alma (apple). As Table 4 shows, two algorithms need to be designed to make recall = 1. The first algorithm determines whether P satisfies the weakening condition and, if so, computes the weakened form Pv of P. The second algorithm adds personal suffixes according to the structural characteristics of P. Together the two algorithms generate a pattern list for matching, PList = {P, P1, P2, P3 / Pv}, where P1, P2, and P3 result from adding the first- to third-person suffixes to P. The role of PList is to assist the BM algorithm and improve matching efficiency. Because vowel weakening is complicated and cannot be fully captured by rules, there are special cases not subject to the rules: for example, tağ + i → teği (a→e, subject to rules), taš + i → teši (a→e, subject to rules), and taj + i → taji (a→a, not subject to rules). These special cases are handled by adding a special-case library to the weakening algorithm.
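A hypothetical sketch of the PList construction just described follows. The person-suffix forms are the ones observed for quš in Table 4; in reality the suffix allomorphy (e.g., -m versus -um after vowel- versus consonant-final stems) is derived from the suffixation rules in [45] and is not modeled here.

```python
# Hypothetical sketch of PList construction, PList = {P, P1, P2, P3 / Pv}.
# The person-suffix forms below are the ones observed for quš; real suffix
# choice depends on the stem (vowel harmony, final sound) and follows the
# noun suffixation rules in [45].

PERSON_SUFFIXES = ["um", "uŋ", "i"]  # assumed 1st/2nd/3rd person forms for quš

def build_plist(p, weakened_form):
    """Return the pattern list for P; weakened_form(P) gives Pv or None."""
    pv = weakened_form(p)
    base = pv if pv is not None else p    # suffixes attach to the weakened stem
    plist = [p] + [base + s for s in PERSON_SUFFIXES]  # P, P1, P2, P3
    if pv is not None:
        plist.append(pv)
    return plist

print(build_plist("quš", lambda s: None))
# ['quš', 'qušum', 'qušuŋ', 'quši']
```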
Improvement of the BM Algorithm
According to the above analysis, if we use the weakening algorithm and the suffix-addition algorithm to compute the matching pattern list PList for P, we can take the common part Pcommon_part of the PList entries as the matching pattern for the BM algorithm. When the algorithm matches a Pcommon_part, it uses the remaining parts Premain_part to verify the match; if verification succeeds, it continues searching for the next Pcommon_part. For a single-syllable P with weakening, Pcommon_part may be empty (null); in that case the algorithm matches each pattern in PList independently. For example, when P = At (horse) and Pv = Eti (A→E), the patterns share only the symbol Hamza in T, and there is no common syllable in Tz. The improved algorithm, BM-U, is shown in Algorithm 1.
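Algorithm 1 itself is not reproduced in this text. The following is a hedged sketch of the matching strategy just described, with Python's built-in substring search standing in for the Boyer-Moore scan; in the paper the underlying matcher remains BM, and only the PList-based common-part / remaining-part verification is new.

```python
# Sketch of the BM-U strategy (Algorithm 1 is not reproduced here).
# str.find stands in for the Boyer-Moore scan; BM-U only adds the
# PList-based common-part / remaining-part verification on top of BM.
from os.path import commonprefix

def bm_u_search(text, plist):
    """Return start offsets in `text` where some pattern in PList matches."""
    hits = set()
    common = commonprefix(plist)  # the Pcommon_part
    if common:
        pos = text.find(common)
        while pos != -1:
            # Verify a remaining part: does some full pattern match here?
            if any(text.startswith(p, pos) for p in plist):
                hits.add(pos)
            pos = text.find(common, pos + 1)
    else:
        # No common part (e.g., P = At with Pv = Eti): match each
        # pattern in PList independently.
        for p in plist:
            pos = text.find(p)
            while pos != -1:
                hits.add(pos)
                pos = text.find(p, pos + 1)
    return sorted(hits)

print(bm_u_search("almam almas almisi", ["alma", "almam", "almaŋ", "almi"]))
# [0, 6, 12]: character-level structural hits; the syllable checks applied
# in Tz would additionally exclude almas (al + mas not in mam/maŋ/mi)
```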
Experiment and Analysis
A total of four experiments were performed.
1. We used the BM-U algorithm to test the word-form matching ability for pattern P on the type A corpus generated by the algorithm. If recall = 1, the new algorithm can recognize all word forms of pattern P, and the type B and type C corpora can then be used to test the precision, accuracy, and F1-measure of the algorithm and to observe its recall in a natural corpus environment.
2. We used the BM and BM-U algorithms to test the matching ability of patterns P and Pz on the natural language type B corpora T and Tz, observing the matching performance of the two algorithms on the two encoding formats and calculating the impact of the algorithm and matching-unit changes on the matching rate.
3. To facilitate observation of the matching performance of the new algorithm, the two algorithms were used to conduct demonstrative pattern matching experiments on natural language paragraphs using the type C corpus.
4. We conducted pattern matching on the syllable-encoded file format Tz for monosyllabic and non-syllabic strings, comparing character-based text T with syllable-based text Tz.
BM-U Word Morphology Matching Ability
The experimental method and pattern P are the same as those in Table 3, with the algorithm changed to BM-U. The result is that the BM-U algorithm, using patterns P and Pz, correctly matches all 312 forms of P and Pz in the type A corpora T and Tz, giving recall = 1. The new algorithm therefore satisfies the precondition and can be used for pattern matching experiments on the natural language corpora B and C.
Experimental Results
The experimental results of the BM and BM-U algorithms on the type B natural language corpora T and Tz are shown in Tables 5 and 6. In the tables, Pm and Pvm indicate the number of words that can be fuzzily matched with P and Pv within T; Pr is the number of words in (Pm + Pvm) that are semantically related to P (semantic match), labeled manually; and Alg is the algorithm. T_P / Tz_P, T_R / Tz_R, T_F / Tz_F, and T_A / Tz_A represent the precision, recall, F1-measure, and accuracy values of the algorithm on T and Tz.
Analysis of Experimental Results
The pattern matching capabilities on T and Tz (Table 5) were compared using the two algorithms, and the comparison results are shown in Table 6. In the table, VW indicates the calculation results for the vowel-weakened words (Table 6), and Not VW indicates the calculation results for words without vowel weakening (Table 5); No. is the formula number, and P, R, A, and F indicate the precision, recall, accuracy, and F1-measure values. Taking formula No. 1 as an example, PT_BM-U denotes the precision obtained when T is retrieved with the BM-U algorithm and PT_BM the precision when T is retrieved with the BM algorithm; ΔP is the sum, over the words in Table 5, of the BM-U P-value minus the BM P-value. ΔP > 0 indicates that the P-value of the BM-U algorithm is higher than that of the BM algorithm, and ΔP < 0 indicates that it is lower. Δ represents the sum of the increments over all n words (here n = 11); in formula No. 1, Δ = ΔP, and Avg(Δ) represents the average of the increments. Table 7 shows a comparison of the retrieval capabilities on T with the two algorithms; the basic matching unit of T is the character, and the basic matching unit of Tz is the syllable.
Table 7. Comparison of Boyer-Moore (BM) and Boyer-Moore-U (BM-U) retrieval of T.
The improvement of the algorithm has no effect on the matching efficiency of T for words without weakening: the values of T_P, T_F, T_A, and T_R are unchanged. Since the content of T is collected by the fuzzy search (%P%) method, T_R = 1. After the algorithm improvement, the retrieval efficiency for weakened words improved significantly. The improvements in R, F, and A are very obvious, especially the average increase of R by 55%, mainly because the new algorithm can retrieve the weakened words. For example, when P = alma and T = {alma, almisi (his apple)}, the BM match result is {alma} and the BM-U match result is {alma, almisi}. The new algorithm effectively increases the F and A values on T by 33% and 25%, respectively. Compared with the R, F, and A values, the increase in the P-value is modest (4% on average). Table 8 presents a comparison between retrieval of T with the existing BM algorithm and retrieval of Tz with the BM-U algorithm proposed in this paper. For weakened words, all parameters increased significantly; for non-weakened words, the P, F, and A values increased while the R-value decreased (for the BM algorithm, T_R = 1 always). The R-value of BM-U on Tz is clearly improved by the algorithm improvement, but Tz_R < 1 for some words. The decline in R arises mainly because partially misspelled words in T can still meet the character-based matching conditions, whereas in Tz they cannot meet the syllable-based matching conditions, resulting in Tz_R < 1 for BM-U. Taking No. 3 in Table 5 as an example, when P = yil, Tz_R = 0.75 (TP = 98, FN = 32) for the BM algorithm and Tz_R = 0.96 (TP = 125, FN = 5) for the BM-U algorithm; the algorithm improvement increases the R-value by 21%. However, five words whose syllable structure was changed by misspelling, {(FN = 5): (bir+yildn, yild+din, yill+din, yill+rdin, yi+le+si+ri)}, have not been retrieved. The correct spellings of these five words should be {bir+yil+din, yil+din, yil+din, yil+lir+din, yi+li+siri}. The gain in retrieval of weakened words is especially obvious: Figure 1 compares the R-values of the weakened words under the two algorithms, and Figure 2 compares their F and A values. Table 9 shows the pattern matching results of the two algorithms on character-based natural language sentences, with P = {alma, amerika}. The improvement allows the BM-U algorithm to match words with weakened vowels and improves the user search experience. Existing retrieval services, such as the business information network (uqur.cn) and the Kunlun network (uyghur.xjkunlun.gov.cn), show similar search performance.
Monosyllabic and Non-syllabic Retrieval
Because single syllables (independent syllables, not monosyllabic words) and non-syllable strings show no vowel weakening, there is no difference between the retrieval results of the BM and BM-U algorithms.
Monosyllabic retrieval
Retrieving single syllables in Tz is very convenient, and the two algorithms are equally efficient; when P = Sb, Pz = Sc. As shown in Table 10, P = "ma" and P = "to" are single-syllable search examples. The number of structural matches in T far exceeds the number of matches in Tz: a search in T is equivalent to a fuzzy match P = %Sb%, whereas a search in Tz is equivalent to an exact match. The T search results include other syllables such as or+man, mal, toğ+ra, and top. Implementing accurate monosyllabic retrieval in T would increase technical difficulty and time consumption, because after finding a match the string must be segmented into syllables to determine whether the match is an independent syllable rather than part of another syllable. Conversely, implementing fuzzy matching of single syllables in Tz would also increase difficulty and time, because each syllable in Tz would need to be decoded before fuzzy matching.
Non-syllable retrieval
Tz is syllable-encoded text. Since the basic unit of data storage is the syllable, non-syllable content cannot be retrieved. For example, the search result for P = "mm" in Table 10 is one, because the text contains an abbreviation mm that meets the matching conditions. Searching in T is very convenient: we can simply search directly.
1. [19] designed two functions, Bohum_Sani (number of syllables) and Bohum_Xekli (syllable type), and proposed a multi-pattern matching algorithm, Bohum-Ug, which first applied syllables to pattern matching research. The algorithm first splits pattern P and text T into syllables. During matching, the Bohum_Sani function compares the numbers of syllables; if they are the same, Bohum_Xekli compares the syllable types, and characters are compared only when the syllable types match. This algorithm requires syllable segmentation in advance, and when the text T is large, segmentation consumes additional time. Its final matching result is similar to that of the BM algorithm, and it cannot match weakened words. The BM-U algorithm does not require syllable segmentation and can match weakened words; because the matching mechanism of the BM algorithm is unchanged, BM-U can be ported to all variants of the BM algorithm.
2. Tohti [20] proposed WM-Uy (Wu-Manber-Uy), a multi-pattern matching algorithm. Stemming is performed on pattern P before matching. After the stem is matched, the word suffix is checked: if the suffix is derivational, the match fails; if the suffix is inflectional, the match succeeds. The WM-Uy algorithm differs from the single-pattern BM-U algorithm proposed in this paper. (1) The WM-Uy algorithm cannot match monosyllabic weakened words through stemming; for example, P = {Eti (his horse), Eri (his man), Eqi (the white)} cannot match the corresponding unweakened words W = {At, Ar, Ak}.
(2) According to Aizimaiti [48], the WM-Uy suffix library must include all 378 Uyghur suffixes (104 derivational and 274 inflectional suffixes). The BM-U algorithm does not need a suffix library; for nouns it compares weakened forms at most four times. (3) The matching requirements of the WM-Uy algorithm differ from those of BM-U. Under WM-Uy, when P = {Alma, Amerika}, words formed by adding derivational suffixes to the stem, W = {Almizar and Almiliq (apple orchard), Amerikiliq (American), Almimu (apple is also ..., is it apple?), Amerikimu, Almiči (a person who deals with apples), Almixan (a female name based on apple)}, are not matched; in the BM-U algorithm, these words satisfy the weakened forms of pattern P and can be matched. (4) The WM-Uy algorithm matches Almas (diamond), Almax (exchange), and other words that match P structurally but are not semantically related to Alma. This paper proposes a syllable-based searchable compressed text format, Tz; when the Tz format cooperates with BM-U, these semantically unrelated words can be excluded.
Conclusions
Uyghur is a typical phonetic language: each word is composed of syllables, and characters and syllables are pronounced the same in isolation as within words. The Tz format proposed in this paper is a searchable compressed text format based on syllable encoding: the original document doc (char), with the character as the basic unit, is changed into a document doc (Sb) with the syllable as the basic unit. If the Tz format is used as an auxiliary storage format for a text corpus, then, given that the average length of a Uyghur syllable is 2.4 characters, the theoretical matching speed is 2.4 times faster when matching with a brute-force algorithm. The Tz format is convenient for accurate retrieval and processing of natural language content in units of syllables, requires less space, and matches faster. It can exclude some semantically unrelated words without semantic analysis, but it requires a syllable encoding dictionary installed on the client. The design ideas of the Tz format can be applied to other languages that can be segmented into syllables and have complex word-form features [49]. Figure 3 shows the process of retrieving a text corpus using speech; vSb in the figure is the speech syllable corresponding to the text syllable Sb.

The BM-U algorithm proposed in this paper is designed on the basis of the original BM algorithm to address the complex morphology of Uyghur words; its retrieval object is Uyghur natural language content. The new design does not change the original search mechanism of the BM algorithm and upgrades the original matching method to an extended matching method based on PList, where PList can be calculated from pattern P; this improvement can therefore be transplanted to other versions of the BM algorithm. This paper considers only the relationship between the stem and the first-level syllables attached to the stem when designing PList, which is a syllable-based unigram method. If the content of PList is extended to the second- or third-level syllables attached to the stem, the problem becomes a syllable-based bigram or trigram problem. Moving from unigram to bigram and trigram will increase time consumption and technical difficulty but should improve the precision and accuracy values. This extended matching idea can, in theory, also be applied to multi-pattern matching methods such as Wu-Manber.

This paper mainly studies the pattern matching of nouns. Uyghur verbs have more suffix types and numbers than nouns, and their combination levels, structural changes, and attachment rules are more complicated; when designing a verb generator algorithm, the number of forms based on a single verb stem may in theory reach thousands. The BM-U algorithm proposed in this paper requires the pattern P to be a stem. Uyghur stemming is itself an important item of basic research, and the stemming of verbs is especially difficult. This study also found that spelling errors have a certain effect on pattern matching efficiency. These are our future research directions. | 2020-05-07T09:08:24.091Z | 2020-05-02T00:00:00.000 | {
"year": 2020,
"sha1": "1b7a823341b98f8aecc12ab4c246660fe4a5627e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2078-2489/11/5/248/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "0e1b8cd7f239e87563dc417861e24512a73bed56",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
53028301 | pes2o/s2orc | v3-fos-license | Accuracy and reliability of a low-cost, handheld 3D imaging system for child anthropometry
The usefulness of anthropometry to define childhood malnutrition is undermined by poor measurement quality, which has led to calls for new measurement approaches. We evaluated the ability of a 3D imaging system to correctly measure child stature (length or height), head circumference and arm circumference. In 2016-2017 we recruited and measured children at 20 facilities in and around metro Atlanta, Georgia, USA, including daycare, higher education, religious, and medical facilities. We selected recruitment sites to reflect a generally representative population of Atlanta and to oversample newborns and children under two years of age. Using convenience sampling, a total of 474 children 0-5 years of age who were apparently healthy and who were present at the time of data collection were included in the analysis. Two anthropometrists each took repeated manual measures and repeated 3D scans of each child. We evaluated the reliability and accuracy of 3D scan-derived measurements against manual measurements. The mean child age was 26 months, and 48% of children were female. Based on reported race and ethnicity, the sample was 42% Black, 28% White, 8% Asian, 21% multiple races, other or race not reported; and 16% Hispanic. Measurement reliability of repeated 3D scans was within 1 mm of manual measurement reliability for stature, head circumference and arm circumference. We found systematic bias when analyzing accuracy: on average, 3D imaging overestimated stature and head circumference by 6 mm and 3 mm respectively, and underestimated arm circumference by 2 mm. The 3D imaging system used in this study is reliable, low-cost, portable, and can handle movement, making it ideal for use in routine nutritional assessment. However, additional research, particularly on accuracy, and further development of the scanning and processing software are needed before making policy and clinical practice recommendations on the routine use of 3D imaging for child anthropometry.
Introduction
Body measurement, or anthropometry, can be compared to a reference population to define nutritional status and to monitor child growth. Length or height, weight, and head circumference (HC) are common anthropometric measures for infants and children under 5 years of age. Anthropometry is used clinically to diagnose malnutrition [1][2][3][4][5], to identify underlying conditions [3], to assess risk for future disease [6,7], and for clinical research [8]. At the population level, public health practitioners include anthropometry in research and surveys to identify causes and effects of abnormal nutritional status, to monitor trends through surveillance, and to target and evaluate interventions related to nutrition [7]. Anthropometry is also used to evaluate agricultural initiatives, and the global development community uses population-level anthropometry as an indicator of national economic development. Height-for-age is accepted as a more comprehensive indicator of poverty than income [9], and there is recognition that nutrition is essential for human capital development [10]. There is a target to improve stunting in the Sustainable Development Goals [11], and anthropometric indicators are used for allocation of Official Development Assistance [12].
Given that child growth has broad effects on health, nutrition, and development, it is important that anthropometric measurements are of high quality. Studies in primary care facilities of developed countries found that measurement error led to inaccurate and unreliable circumference measurement for adults [13,14] and unreliable length and circumference measurements for children [15,16]. There is also evidence that a lack of standardization and maintenance of anthropometric equipment in health facilities leads to misclassification of child weight status [17]. Three separate evaluations covering hundreds of large-scale, established surveys in developing countries found that on average more than 3% of weight or height measurements were biologically implausible [18][19][20]. According to a World Health Organization (WHO) Expert Committee, when more than one percent of measurements are considered biologically implausible, a survey is likely to be of poor quality [21].
The usefulness of anthropometry is undermined by poor measurement quality, which has led to calls for the use of technology to improve quality of child anthropometry [18,22]. This study evaluated the ability of a portable, three-dimensional (3D) imaging system to accurately and reliably measure child stature (length or height), head circumference, and mid-upper arm circumference (MUAC).
Study design and participants
We designed the Body Imaging for Nutritional Assessment Study (BINA) to evaluate the accuracy and reliability of a 3D imaging system in comparison to manual measurements for child anthropometry. We chose to compare to manual measurement because growth standards are based on manual measurement, and when manual measurement is done well the levels of precision and accuracy are sufficient for nutritional assessment [23,24]. The study was approved by the Emory Institutional Review Board (IRB), and included two phases. In the first phase we calibrated software to process 3D scans into measurements by scanning and measuring 36 children. In the second phase, the topic of this paper, we tested 3D imaging on a new sample of children. Children under five years of age who were apparently healthy and whose primary caregiver gave informed, written consent were eligible for the study. Caretakers received a nominal gift card ($15) for each child participating in the study. We recruited and measured children at 20 facilities in and around metro Atlanta, GA, USA, including daycare, higher education, religious, and medical facilities. We selected recruitment sites to reflect a generally representative population of Atlanta children and included a maternity ward to sample newborns.
The funders had no role in collection, management, or analysis of data, and no role in preparation, review, or approval of the manuscript or the decision to submit the manuscript for publication.
Competing interests: Dr. Eugene Alexander is employed by BST, Inc. and has a patent pending related to the study subject matter: Determining Anthropometric Measurements of a Non-Stationary Subject. All other authors do not have affiliations with or financial involvement with any organization or entity with a financial interest in the subject matter or materials discussed in the manuscript. We were able to adhere to PLOS ONE policy on sharing data, but could not share the data acquisition software code due to commercial interests of BST, Inc.
Daycare centers received gift cards for participating as a study site. We formed a convenience sample by recruiting children on-site, via email, and through facility administrative staff; recruitment was ongoing throughout data collection, which lasted from September 2016 to February 2017. The intended sample size for the study was set at 500, with a target sample size of 100 for each of the following age groups: 0-5 months, 6-11 months, 12-17 months, 18-23 months, and 24-59 months. We did not carry out a priori power calculations. We set sample size targets by age group to oversample children under two years of age, an age group that is particularly difficult to measure manually, and to allow for an assessment of variability of measurement error across the entire span of 0-4 years.
Test methods
Five trained anthropometrists with post-secondary education performed all manual measurements and 3D scans. Anthropometrists received training over a three-week period in August 2016 from expert anthropometrists at Emory University and passed a standardization test for manual anthropometry. Manual measurements followed the protocol used to develop the 2006 WHO Child Growth Standards (CGS) [25]; detailed methods for manual anthropometry in BINA are published elsewhere [23]. Staff from Body Surface Translations Inc. (BST) trained anthropometrists to take 3D scans in one day, and anthropometrists informally used 3D scanners throughout the three-week training period to familiarize themselves with the technology. During the standardization test, anthropometrists scanned children following the study protocol, and after visual assessment we determined the scans were of sufficient quality to proceed with the study. Each anthropometrist carried a 3D scanning device: a tablet with an attached Structure Sensor 3D scanner (Occipital, San Francisco, CA, USA) and custom software from BST, AutoAnthro, for scanning and for data entry of demographic information and manual measurements. AutoAnthro will be commercially available from BST. The 3D scanner we used was off-the-shelf commercial hardware, and it cost a fraction of the price of other scanners (USD $379). The scanner uses a Class 1 laser, which does not cause eye injury and is the same type of laser used in video game technology. We collected scans (Fig 1) and then manual measurements consecutively at the same time of day, usually in the morning. Each individual 3D imaging session comprised six scans: three of the front of the child and three of the back. The software was designed for automated processing of six scans into body measurements. Consistent with manual anthropometry procedures, we scanned children two years of age and over standing up, and younger children lying down (S1-S3 Figs). Each child was scanned and measured twice by two different people, resulting in four sessions of scans and four sessions of manual measurements per child. Multiple measurements allowed analysis of both inter- and intra-measurer reliability.
Analysis methods
In this study, an anthropometrist could be triggered to take a third manual measurement based on the maximum allowable difference [23,25], but not a third scan. To determine a best-estimate from manual measurements, we excluded the outlying measurement in the case of a triggered third measurement and took the mean of the four remaining measurements (two from each anthropometrist). In this paper we refer to the average of four measurements as "best-estimate" and "all scan" for manual and scan-derived measurements respectively, and consider the former the reference standard. For analyzing reliability we limited our analysis to the first two manual measurements, ignoring any triggered third measurement, which provided a like-for-like comparison with scan-derived measurements. In the text we refer to the mean of two measurements as "repeated-manual" and "repeated-scan," and to measurements derived from one measurement as "single-manual" and "single-scan".
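The best-estimate rule just described can be sketched as follows. This is illustrative only, not the study's analysis code, and the outlier rule (dropping the value farthest from the median) is an assumption, since the paper does not spell out how the outlying value was identified.

```python
# Sketch of the best-estimate rule: if a third manual measurement was
# triggered, drop the single most outlying value and average the remaining
# four. Synthetic values; the outlier criterion is an assumption.
import numpy as np

def best_estimate(measurements):
    """measurements: list of 4 (no trigger) or 5 (third triggered) values."""
    m = np.asarray(measurements, dtype=float)
    if len(m) == 5:
        # Drop the value farthest from the median of all measurements.
        m = np.delete(m, np.argmax(np.abs(m - np.median(m))))
    return m.mean()

print(best_estimate([85.1, 85.3, 85.2, 85.4]))        # plain mean of four
print(best_estimate([85.1, 86.9, 85.3, 85.2, 85.4]))  # 86.9 dropped as outlier
```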
We used SPSS 20 (IBM Corp., Armonk, NY, USA) to test the statistical significance of average bias with a two-sided, paired t-test with alpha of 0.05. Average bias is a metric of systematic bias. We also carried out sign tests, another metric of systematic bias, which use a binomial test to assess whether positive and negative differences occur equally often.
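For concreteness, here is a minimal sketch of the two systematic-bias tests just described. The study used SPSS; scipy stands in here, and all data below are synthetic.

```python
# Illustrative sketch (not the authors' SPSS code) of the two systematic-bias
# tests described above, using synthetic data. Variable names are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
manual = rng.normal(85.0, 10.0, size=200)        # best-estimate manual stature, cm
scan = manual + 0.6 + rng.normal(0, 0.7, 200)    # scan-derived stature with +0.6 cm bias

diff = scan - manual

# Two-sided paired t-test of average bias (alpha = 0.05).
t, p_t = stats.ttest_rel(scan, manual)
print(f"average bias = {diff.mean():.2f} cm, paired t-test p = {p_t:.3g}")

# Sign test: are positive and negative differences equally frequent?
n_pos = int((diff > 0).sum())
n = int((diff != 0).sum())
p_sign = stats.binomtest(n_pos, n, p=0.5).pvalue
print(f"sign test: {n_pos}/{n} positive differences, p = {p_sign:.3g}")
```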
Using the baplot module of StataSE 13 (StataCorp, College Station, TX, USA), we created Bland-Altman (BA) plots [26] to assess whether accuracy remained constant across different child body sizes and to examine random bias. For the y-axis of the BA plot we subtracted the best-estimate from the single-scan value, and for the x-axis we used the mean of the single-scan and best-estimate values. We used Pitman's Test of Difference in Variance [27] to test the correlation between accuracy and child size, and we calculated and plotted Limits of Agreement, the 95% precision interval for individual differences and a metric of random bias. We disaggregated the analysis by age groups corresponding to a division in the estimation software, which used two anatomic models: one for children less than one month of age and another for children 1-59 months. If accuracy was not consistent across sizes, indicated by a statistically significant Pitman's Test, we carried out the additional step of regressing the difference on the independent, second single-scan, as suggested by Bartlett and Frost, to rule out a difference in SD as the cause of a statistically significant Pitman's Test [27]. We used the Technical Error of Measurement (TEM) and the Coefficient of Reliability (R) as described by Ulijaszek [28] to measure reliability; these are the same reliability measures used to develop the WHO Child Growth Standards [25]. TEM represents one standard deviation, and a 95% precision margin can be calculated by multiplying TEM by two. R measures the strength of correlation [28]. We used SPSS 20 to calculate the Intraclass Correlation Coefficient based on absolute agreement, another measure of correlation that is familiar to a wider audience.

Results

S4 Fig shows the flow of participants in the study. We received informed consent for 555 children, of whom 26 were either not present or had aged out by the day of data collection. Of the remaining 529, we excluded 55 due to: refusal to be measured (n = 18), incomplete measurements (n = 8), health status (n = 5), loss of data due to technical errors during upload (the upload software has since been corrected) (n = 10), and use of the child in calibration of the 3D imaging system (n = 14); this resulted in a final sample size of 474. Table 1 presents sample characteristics. There was a low prevalence of wasting, stunting, underweight, and overweight. The mean child age was 26 months, and 48% of children were female. Based on reported race and ethnicity, the sample was 42% Black, 28% White, 8% Asian, 21% multiple races, other, or race not reported; and 16% Hispanic. Children under two years of age and newborns were overrepresented, and nearly all newborns were less than four days old.
Accuracy
When using all-scan, the average bias of scan-derived measurements in cm was +0.6 (95% confidence interval (CI): 0.56, 0.62) for stature, +0.3 (CI: 0.30, 0.34) for HC, and -0.2 (CI: -0.21, -0.17) for MUAC (S1 Table). Differences were consistent and statistically significant at p < .0001 whether measurements were derived from single-scan, repeated-scan, or all-scan. However, the number of scan sessions did affect the spread of differences, and repeated measurements reduced variance as expected. For stature, 97% of all-scan measurements were higher than manual measurements, or positive, and the 95% limit of agreement (LoA) showed that 95% of individual differences were within -0.1 to 1.2 cm; single-scan measurements were 78% positive with a LoA of -0.7 to 1.9 cm. We visually inspected the accuracy of scan-derived measurements using Bland-Altman plots (Fig 2). Compared to children 1-59 months of age, 3D imaging was less accurate for newborns on all measures (Table 2). After disaggregating by age group (corresponding to the two anatomic models), Pitman's Test was not significant for stature and HC, indicating no differential accuracy by size within the two age groups. For MUAC, Pitman's Test was statistically significant (p < .01), suggesting differential accuracy by size within both age groups. Subsequent regression analysis, as described in the analysis methods, was therefore carried out to rule out a difference in SD as the cause.
Among children 1-59 months of age there were no statistically significant or meaningful differences in accuracy by race or hairstyle (S2 Table). The largest difference was a 0.04 cm difference in average bias for head circumference between Black and White children.
Reliability
The intra-observer TEM for stature among children of all ages was 0.62 cm for scan-derived measurements, indicating that for a single observer the second scan-derived stature was within ±0.62 cm of the first for two out of three children, and that for 95% of children the difference was within ±1.2 cm (Fig 3A and S3 Table). Manual measurement intra-observer TEM for stature among children of all ages was within ±0.72 cm for 95% of children. Intra-observer TEM from scan-derived measurements was higher than that from manual measurements for all measures and across all age groups, but unlike manual measurements, there were no meaningful differences by age group for scan-derived measurements (Fig 3A).
For all children under 5 years of age inter-observer TEM from repeated scans was within 0.1 cm of TEM from repeated manual measurements for all measures (Fig 3B). We also examined inter-observer TEM based on single measurements. Single-scan inter-observer TEM was higher than single-manual inter-observer TEM (Fig 4).
When using single measurements, inter-observer TEM was higher than intra-observer TEM for manual measurement but not for scans (Fig 4), indicating that scanning produced similar results regardless of who repeated the scan. Total TEM combines the intra- and inter-observer TEM from S3 Table into a single metric. For manual measurements, Total TEM was 0.51 cm, 0.33 cm, and 0.31 cm for stature, HC, and MUAC respectively, compared with 0.77 cm, 0.51 cm, and 0.43 cm for scan-derived measurements.
The Coefficient of Reliability based on Total TEM was 1.00, 1.00, and 0.99 for stature, HC, and MUAC respectively from manual measurements, and 1.00, 0.99, and 0.98 for scan-derived measurements. The high R indicates excellent agreement for repeated measurements. Intraclass correlation coefficients, another measure of agreement, were also close to 1.00 for intra- and inter-observer repeated measurements (S3 Table), confirming the excellent correlation between repeated measurements for both manual and scan-derived measurements.
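For readers who want to reproduce these reliability metrics, here is a minimal sketch of TEM and R for duplicate measurements, following the standard formulas cited in the analysis methods [28]; the data are synthetic and this is not the study's analysis code.

```python
# Minimal sketch of the reliability metrics reported above, following the
# duplicate-measurement formulas described by Ulijaszek [28]. Synthetic data.
import numpy as np

def tem(m1, m2):
    """Technical Error of Measurement for paired repeats: sqrt(sum(d^2) / 2N)."""
    d = np.asarray(m1) - np.asarray(m2)
    return np.sqrt(np.sum(d ** 2) / (2 * len(d)))

def reliability(m1, m2):
    """Coefficient of Reliability R = 1 - TEM^2 / SD^2 (SD over all measurements)."""
    sd = np.std(np.concatenate([m1, m2]), ddof=1)
    return 1.0 - tem(m1, m2) ** 2 / sd ** 2

rng = np.random.default_rng(1)
true_stature = rng.normal(85.0, 10.0, 300)       # cm
first = true_stature + rng.normal(0, 0.5, 300)   # first measurement
second = true_stature + rng.normal(0, 0.5, 300)  # repeat by the same observer

print(f"intra-observer TEM = {tem(first, second):.2f} cm")
print(f"R = {reliability(first, second):.3f}")
```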
Discussion
We previously demonstrated that BINA collected gold-standard, manual anthropometry based on analysis of biological plausibility, reliability, and z-score standard deviations [23]. In this paper we compared measurements derived from 3D imaging to these gold-standard, manual measurements. For biological plausibility, 3D imaging and manual measurement were exactly the same, with both methods producing plausible measurements >99% of the time; this finding indicates acceptable quality based on WHO expert committee criteria for biological plausibility [21]. We also found that repeated-scan 3D imaging produced measurement reliability that was within 1 mm of manual measurement reliability for stature, HC and MUAC; this level of reliability puts 3D imaging on par with manual measurements collected in the Multicenter Growth Reference Study (MGRS) used to develop the 2006 WHO CGS [24]. Considering only biological plausibility and reliability, 3D imaging performed as well as gold-standard manual measurements for child anthropometry. However, 3D imaging systematically underestimated or overestimated child size when compared to our best-estimate of size from manual measurement.
Before reaching any conclusion on the readiness of 3D imaging for child anthropometry, we would need to determine whether the systematic inaccuracy found in this study is population specific. If the same under- and overestimation were found in a different sample with different anthropometrists, we could then identify and fix the cause of the bias in the model fit or simply build adjustments into the software. Knowing the cause of bias could facilitate adjustments. We hypothesized that the inaccuracies in our study resulted from difficulty in manual measurement. Research similar to BINA should be carried out, ideally in both developed countries and low- and middle-income countries, to help answer questions on systematic inaccuracy and to address some of the other limitations of our study. The 3D imaging system may perform differently under the harsher conditions of a household survey or community-based screening. Increased handling during transport, lack of access to electricity, lighting, dust, space constraints, and other environmental factors could all affect the functionality of the 3D scanner.
Additional limitations to our study stem from the sampling design and automated processing. The sample size was not specified during study design based on power calculations, and due to the limited sample size and the choice of population we did not fully explore differences in prevalence estimates and did not analyze sensitivity and specificity for clinically significant indicators such as obesity, wasting, and severe stunting. In addition, findings from our non-random sample cannot be generalized to any specific age group, and the processing of 3D scans was not fully automated as originally planned: anthropometrists took more scans than needed and manually selected the best-quality scans, and the orientation (front/back) of each scan was manually coded. Further software development is needed to achieve full automation, which could improve repeatability.
Our primary interest in researching 3D imaging for child anthropometry was to improve the quality of anthropometric data, and while not conclusive, our findings suggest that 3D imaging could play a role in quality improvement. Compared to manual measurement, we spent substantially less time on training and supervision for 3D scanning, and achieved similar reliability. Also, our findings on scan-derived measurement reliability suggest that scanning was not affected by child age, which can be viewed as a proxy for cooperation, or by anthropometrist technique; both cooperation and measuring technique are known to negatively affect anthropometric data quality. Qualitative research on BINA anthropometrists' experiences using 3D scanners is currently underway and may provide additional evidence on the potential of 3D imaging to improve anthropometric data quality. However, our study was not designed to determine whether 3D imaging led to better quality, and the anthropometrists in BINA, who were well educated, highly motivated, and well trained, achieved high-quality anthropometric data with both 3D imaging and manual measurement. Conclusive evidence on quality improvement will not be available until 3D imaging is tested in a setting of poor-quality manual measurement.
Results from our analysis of z-scores and classification (S4 and S5 Tables), along with an expanded discussion of reliability, bias hypotheses, and study limitations, are included in the supplementary online content.
Conclusions
3D imaging is not new for anthropometry [29][30][31][32][33], but the 3D scanner used in our study was inexpensive, brought unique functionality, and shows promise as a substitute for traditional anthropometric measurement. The scanning device is small and lightweight, and the software developed by BST requires only a series of snapshots, which allows some subject movement. The 3D imaging system used in our study, AutoAnthro, could be an ideal replacement for the bulky height boards used in surveys, and to our knowledge it is the first portable 3D system specifically designed for whole-body scanning of infants and young children. In conclusion, our findings indicate that AutoAnthro can produce reliable child anthropometry, but further research and development are needed before 3D imaging can be recommended as a solution for improving the quality of anthropometric data.
S2 Table. Accuracy by race and hairstyle. Considering best-estimate manual measurements and scan-derived measurements from all sessions among children 1 to 59.9 months of age. (DOCX)
S3 Table. Intra-observer reliability and inter-observer reliability. Based on repeated manual measurements and repeated scan sessions by age group. (DOCX)
S4 Table. Z-score mean, standard deviation (SD) and prevalence by selected z-score-for-age cutoffs. Among children 1-59.9 months of age. (DOCX)
S5 Table. Sensitivity and specificity of adjusted, scan-derived measures. Comparison to best-estimate manual measures among children 1-59.9 months of age. (DOCX)
S1 Text. Supplementary methods, results and discussion. (DOCX) | 2018-11-10T06:29:28.198Z | 2018-10-24T00:00:00.000 | {
"year": 2018,
"sha1": "a2df74c304bac5b116c3c631a0292ff68e29147f",
"oa_license": "CC0",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0205320&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bd84f28563c06d9db0f4a6abd26dafb85b8ba1b9",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268538180 | pes2o/s2orc | v3-fos-license | Elucidation of the genetic determination of body weight and size in Chinese local chicken breeds by large-scale genomic analyses
Background Body weight and size are important economic traits in chickens. While many growth-related quantitative trait loci (QTLs) and candidate genes have been identified, further research is needed to confirm and characterize these findings. In this study, we investigate genetic and genomic markers associated with chicken body weight and size. This study provides new insights into potential markers for genomic selection and breeding strategies to improve meat production in chickens. Methods We performed whole-genome resequencing of Wenshang Barred (WB) chickens (n = 596) and of three breeds with varying body sizes (Recessive White (RW), WB, and Luxi Mini (LM) chickens; n = 50). We then used selective sweeps of mutations coupled with a genome-wide association study (GWAS) to identify genomic markers associated with body weight and size. Results We identified over 9.4 million high-quality single nucleotide polymorphisms (SNPs) among the three chicken breeds/lines. Among these breeds, 287 protein-coding genes exhibited positive selection in the RW and WB populations, while 241 protein-coding genes showed positive selection in the LM and WB populations. Genomic heritability estimates were calculated for 26 body weight and size traits, including body weight, chest breadth, chest depth, thoracic horn, body oblique length, keel length, pelvic width, shank length, and shank circumference in the WB breed. The estimates ranged from 0.04 to 0.67. Our analysis also identified a total of 2,522 genome-wide significant SNPs, with 2,474 SNPs clustered around two genomic regions. The first region, located on chromosome 4 (7.41-7.64 Mb), was linked to body weight after ten weeks and to body size traits; LCORL, LDB2, and PPARGC1A were identified as candidate genes in this region. The other region, located on chromosome 1 (170.46-171.53 Mb), was associated with body weight from four to eighteen weeks and with body size traits; this region contained CAB39L and WDFY2 as candidate genes. Notably, LCORL, LDB2, and PPARGC1A showed highly selective signatures among the three breeds of chicken with varying body sizes. Conclusion Overall, this study provides a comprehensive map of genomic variants associated with body weight and size in chickens. We propose two genomic regions, one on chromosome 1 and the other on chromosome 4, that could be helpful for developing genome selection breeding strategies to enhance meat yield in chickens. Supplementary Information The online version contains supplementary material available at 10.1186/s12864-024-10185-6.
Background
Over thousands of years, hundreds of chicken breeds have evolved through natural and artificial selection across different environments [1], leading to significant phenotypic variations in body size, plumage, egg color, and flying ability [2]. Chicken meat has established itself as one of the most efficient protein sources, accounting for over 30% of global meat products and playing a critical role in food security worldwide [3]. Body weight (BW) is an important economic trait, primarily determined by many minor genes; these interact with functional genes that serve as molecular markers and are extensively studied for their association with weight gain.
Genetic analysis and pattern recognition have proven useful in identifying the origin of specific breeds or revealing their characteristic traits [4]. A study of selective sweeps in commercial broiler populations revealed numerous loci involved in selection for muscle mass [5]. Genetic variation among chicken breeds has been leveraged to characterize breed-specific traits [6][7][8][9][10]. As a result, there have been extensive genomic studies on the genetic conditioning of domestic animals such as chickens. Notably, several genes associated with growth and carcass traits in chickens have been identified, including insulin-like growth factors (IGFs) [5,11], growth hormone secretagogue receptor (GHSR) [12], lysozyme (LYZ), melanocortin 4 receptor (MC4R), and adhesion G protein-coupled receptor G6 (ADGRG6) [13]. Furthermore, insulin-like growth factor 2 mRNA binding protein 1 (IGF2BP1) has been shown to correlate positively with breast muscle weight and body size in various animals [14][15][16]. A genome-wide association study (GWAS) in F2 progenies of star and silky black-bone chickens also revealed that the LIM domain binding 2 (LDB2) gene was responsible for BW at seven to twelve weeks and weight gain at six to twelve weeks [17]. However, expanding these studies to include more breeds and larger populations is necessary for deeper insight into this research field, with the current focus on BW and size traits.
The chicken quantitative trait locus (QTL) database (release 45) includes 4,776 QTLs related to growth traits in chicken, such as BW at different ages and average daily gain [18]. However, many QTLs, particularly those identified in earlier studies, lack precise mapping, leading to broad confidence intervals that encompass several genes. Despite intensive research into the genetics of meat production traits, knowledge of the key genes causing significant phenotypic variation remains limited. This study used a large population of Wenshang Barred (WB) chickens (n = 596) and three other breeds (n = 50) to minimize the risk of bias. It combined a systematic comparison of the whole genome with a GWAS to identify the genes and genomic regions responsible for BW and size. This study provides new insights into the genetics of chicken selection and promises to facilitate the development of techniques for breeding native chickens.
Ethics statement
All handling and experimental procedures concerning the chickens used in this study were conducted in accordance with the ARRIVE guidelines. Ethical approval was granted by the Science Research Department of the Shandong Academy of Agricultural Sciences (SAAS) (Jinan, China) under reference number 2021001.
Birds and sample collection
A total of 596 WB chickens obtained from Jinqiu Agriculture and Animal Husbandry Co., Ltd.(Wenshang, Shandong, China) were used in this study.The chickens were raised in accordance with the breeding and management protocols for WB chickens.The experimental chickens remained in cages throughout the entire process, including the brooding stage from 0 to 7 weeks of age, the growing stage from 8 to 16 weeks of age, and were then transferred to the laying house at approximately 15 weeks of age.The chickens were housed individually in cages and had unrestricted access to food and water, along with regular immunization.Chickens were kept under natural light during the growing stage, followed by 16 h:8 h light:dark cycle after growing stage.The chicken coop maintained temperature control through the use of a fan humidification curtain, and feeding and manure cleaning processes were mechanized.The BW and body size were measured at 0-18 weeks.The measurement methods for some traits were as follows: Body oblique length (BOL): the distance between the shoulder joints and the sciatic tuberosity was measured along the animal's body surface with a leather ruler.Chest breadth (CB): the distance between the two shoulder joints measured on the body surface with a caliper.Chest depth (CD): the distance from the first thoracic vertebra to the anterior edge of the keel was measured with a caliper on the body surface.
Thoracic horn (TH): the angle of the thorax on both sides, measured with a thoracic corrector at the anterior edge of the keel. Keel length (KL): the distance from the anterior end of the keel eminence to the end of the keel, measured on the body surface with a caliper. Shank length (SL): the straight-line distance from the upper tibial joint to the third and fourth toes, measured with a caliper. Shank circumference (SC): the circumference of the middle of the tibia. Pelvic width (PW): the distance between the two sciatic tuberosities, measured with calipers. All phenotypic data fell within the range of the mean ± 3 standard deviations and passed quality control for the subsequent GWAS analysis.
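As a concrete illustration of the outlier rule above, the following minimal Python sketch (not the authors' code; the file and column names are hypothetical) keeps only records within mean ± 3 standard deviations for each trait:

```python
import pandas as pd

def qc_filter(df: pd.DataFrame, trait_cols: list) -> pd.DataFrame:
    """Keep rows whose trait values all lie within mean +/- 3 SD.

    Rows with missing values in any listed trait are also dropped,
    since between() evaluates to False for NaN.
    """
    keep = pd.Series(True, index=df.index)
    for col in trait_cols:
        mu, sd = df[col].mean(), df[col].std()
        keep &= df[col].between(mu - 3 * sd, mu + 3 * sd)
    return df[keep]

# Hypothetical usage with traits defined in the text:
# phenotypes = pd.read_csv("wb_phenotypes.csv")
# clean = qc_filter(phenotypes, ["BW", "BOL", "CB", "CD", "KL", "SL", "SC", "PW"])
```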
Genetic materials, DNA extraction and sequencing
We collected blood samples from the wing vein of 596 chickens (supplementary Table S1) and extracted genomic DNA using the phenol-chloroform method. DNA quality was assessed by agarose gel electrophoresis, and paired-end (2 × 150 bp) DNA libraries were constructed for each sample. The DNBSEQ sequencing platform (BGI Genomics, Shenzhen, China) was used to obtain sequence data for all libraries. Notably, sequencing data for three chicken breeds (n = 50) from our previous research on local chicken breeds (Recessive White (RW), WB, and Luxi Mini (LM) chickens) were also used in this study. RW broilers are a specialized line for meat production, known for their large body size and well-developed pectoral muscles. WB chickens are versatile and utilized for both meat and egg production, characterized by medium-sized bodies. The LM chicken is a small ornamental breed originating from China, known for its compact size; at 5 months of age, adult hens of this breed typically weigh around 0.86 kg, while adult cocks weigh approximately 1.2 kg. The accession number for these data is CRA006685 in the GSA database [19]. In total, this study analyzed 646 chickens from three breeds.
Variant calling, quality control
Raw sequencing data were filtered with SOAPnuke (v1.5.6) [20] by removing reads containing sequencing adapters, reads with quality value < 20, reads with an unknown-base (N) ratio > 10%, and reads in which low-quality bases accounted for more than 50%. The Burrows-Wheeler Aligner (BWA) [21] was used to align the clean data to the chicken reference genome (http://ftp.ensembl.org/pub/release-106/fasta/gallus_gallus/), and Samtools [22] was used to sort the aligned sequences according to their coordinates on the genome. The Qualimap 2 tool [23] was used to obtain summary statistics assessing read mapping effectiveness and alignment quality, and Samtools was used to filter out reads with quality values less than 30. Single nucleotide polymorphisms (SNPs) were called using the GATK HaplotypeCaller v3.3 [24], with SNPs excluded under the following conditions: Quality by Depth (QD) < 2.0, Fisher Strand (FS) > 60.0, root mean square of Mapping Quality (MQ) < 40.0, MQRankSum < -12.5, HaplotypeScore > 13.0, and ReadPosRankSum < -8.0. SNPs passing this preliminary filtration were selected for subsequent analysis according to the following quality control standards: minor allele frequency (MAF) > 0.05 and missing rate < 0.1. A total of 9,406,362 biallelic SNPs were retained for subsequent analysis.
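To make the two-stage SNP filtering concrete, here is a minimal Python sketch. It is an illustration, not the authors' pipeline: the hard-filter thresholds and the MAF/missingness cutoffs are taken from the text, while representing a variant as a dict of INFO annotations is a simplifying assumption.

```python
def passes_hard_filters(info: dict) -> bool:
    """GATK-style hard filters described in the text (True = retain)."""
    return (
        info["QD"] >= 2.0
        and info["FS"] <= 60.0
        and info["MQ"] >= 40.0
        and info["MQRankSum"] >= -12.5
        and info["HaplotypeScore"] <= 13.0
        and info["ReadPosRankSum"] >= -8.0
    )

def passes_population_qc(maf: float, missing_rate: float) -> bool:
    """Population-level QC: MAF > 0.05 and missing rate < 0.1."""
    return maf > 0.05 and missing_rate < 0.1
```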
Population genetics analysis
Principal component analysis (PCA) was performed using PLINK v1.9. Population structure under different admixture proportions was evaluated using ADMIXTURE v1.3; three solutions (K = 2, 3, and 4) were selected for genetic clustering. FigTree v1.4.0 (tree.bio.ed.ac.uk/software/figtree/) was used to visualize the phylogenetic trees.
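For readers unfamiliar with genotype PCA, the following Python sketch approximates what PLINK's --pca computes. It is an illustration under simplifying assumptions: a complete samples-by-SNPs dosage matrix with values 0/1/2 and no missing genotypes.

```python
import numpy as np

def genotype_pca(G: np.ndarray, n_pcs: int = 4) -> np.ndarray:
    """Top principal component scores of a samples-by-SNPs dosage matrix."""
    p = G.mean(axis=0) / 2.0                       # per-SNP allele frequency
    sd = np.sqrt(2.0 * p * (1.0 - p))              # expected binomial SD
    Z = (G - 2.0 * p) / np.where(sd > 0, sd, 1.0)  # standardize each SNP
    U, S, _ = np.linalg.svd(Z, full_matrices=False)
    return U[:, :n_pcs] * S[:n_pcs]                # PC scores per sample
```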
Analysis of nucleotide diversity, linkage disequilibrium (LD) decay
To further evaluate the genetic characteristics of the different breeds, we measured genetic differentiation using the fixation index (Fst) computed with VCFtools v0.1.13 [25]. The linkage disequilibrium (LD) decay level was calculated and plotted using the PopLDdecay software [26], with a maximum distance of 500 kb.
Detection of selective sweeps
We detected candidate divergent regions (CDRs) by searching the genome for windows with high Fst values (top 1%). First, we calculated Fst along the autosomes in sliding 40-kb windows with 10-kb steps using VCFtools and in-house scripts, comparing WB, RW, and LM chickens. We restricted our CDR descriptions to the top 1% of windows by Fst value, as these windows represented the extreme ends of the distributions.
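The windowing logic can be sketched in a few lines of Python (illustrative only; per-SNP Fst values are assumed to come from VCFtools output for a single chromosome):

```python
import numpy as np

def window_fst(positions, fst, win=40_000, step=10_000):
    """Mean Fst in 40-kb windows sliding by 10 kb along one chromosome."""
    positions, fst = np.asarray(positions), np.asarray(fst)
    out = []
    for start in range(0, int(positions.max()), step):
        mask = (positions >= start) & (positions < start + win)
        if mask.any():
            out.append((start, start + win, float(fst[mask].mean())))
    return out

def candidate_divergent_regions(windows):
    """Windows whose mean Fst falls in the top 1% of the distribution."""
    cutoff = np.quantile([w[2] for w in windows], 0.99)
    return [w for w in windows if w[2] >= cutoff]
```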
Estimation of genetic parameters
SNP-based heritability (h²SNP) was calculated using the GCTA v1.93.2 beta software [27] based on the genetic relationship matrix (GRM) between pairs of individuals [28]. The restricted maximum likelihood (REML) method was used for genetic parameter estimation. The genetic-statistical model was defined as follows: Y_i = X_i b_i + Z_i u_i + e_i, where Y_i is a vector of phenotypic observations; X_i and Z_i are incidence matrices for b_i and u_i, respectively; b_i is a vector of fixed effects; u_i is a vector of polygenic effects with a variance-covariance structure of u ~ N(0, Gσ²_u); G is the GRM between individuals; σ²_u is the polygenic variance; e_i is a vector of random residual effects with e_i ~ N(0, Iσ²_e); and I is an identity matrix of dimension n × n (with a sample size of n = 596).
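For reference, the model can be written in display form. The heritability ratio given below is the standard GREML definition and is my assumption, since the text does not state it explicitly:

```latex
\[
  \mathbf{Y}_i = \mathbf{X}_i \mathbf{b}_i + \mathbf{Z}_i \mathbf{u}_i + \mathbf{e}_i,
  \quad \mathbf{u} \sim N(\mathbf{0}, \mathbf{G}\sigma_u^2),
  \quad \mathbf{e}_i \sim N(\mathbf{0}, \mathbf{I}\sigma_e^2),
  \qquad
  h^2_{\mathrm{SNP}} = \frac{\sigma_u^2}{\sigma_u^2 + \sigma_e^2}
\]
```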
Genome-wide association study for body weight and body size in Wenshang Barred chicken
WB chickens were selected based on meat production traits over multiple generations, and SNP information and phenotypic records were comprehensively collected. To investigate the genetic basis of BW and body size, association analysis of BW, chest breadth (CB), chest depth (CD), thoracic horn (TH), body oblique length (BOL), keel length (KL), pelvic width (PW), shank length (SL), and shank circumference (SC) was performed using the linear mixed model in the Genome-wide Efficient Mixed Model Association (GEMMA) software (v0.98.4), based on chickens genotyped by whole-genome sequencing. After quality control (--mind 0.1, --maf 0.05) using PLINK v1.9, a total of 9,406,362 SNPs were retained, and the GWAS was performed with the following model: y = Wα + xβ + u + e, where y denotes the vector of phenotypic values; W is the matrix of covariates, including a column of 1s; α is the vector of the corresponding coefficients, including the intercept; x is the vector of marker genotypes; β is the effect size of the marker; u is the vector of random polygenic effects; and e is the vector of errors. The Wald test was used as the criterion for selecting trait-associated SNPs. The genome-wide and suggestive significance thresholds were set by Bonferroni correction (0.01/9,406,362 and 0.05/9,406,362, respectively). Additionally, Manhattan and quantile-quantile (Q-Q) plots were generated using the CMplot package in the R environment, and LD blocks of target regions were constructed using the Haploview v4.2 software.
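As a quick arithmetic check (my calculation, not from the paper), the Bonferroni thresholds implied by 9,406,362 tests match the values quoted in the figure legends, up to rounding:

```python
n_snps = 9_406_362
suggestive = 0.05 / n_snps   # ~5.32e-09, the suggestive line in Figs. 2-5
genomewide = 0.01 / n_snps   # ~1.06e-09, reported as 1.07e-09 in the legends
print(f"genome-wide: {genomewide:.2e}; suggestive: {suggestive:.2e}")
```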
Statistical analysis
Statistical analyses were performed using SPSS 25.0 software (IBM Corporation, Armonk, NY, USA) or the R environment.
Whole-genome sequencing and variation
Following standardized procedures for library construction and whole-genome sequencing using the BGISEQ platform, we obtained 4.15 Tb of raw data for 596 individuals, with a mean coverage of 7.03× (supplementary Table S1). After quality control, individuals averaged 51,113,663 reads each, with a mean mapping ratio of 99.73%. These data satisfied the requirements for the subsequent analyses.
Phylogenetic and demographic analyses
We conducted a comprehensive analysis of the genetic relationships among three chicken breeds with varied body sizes. First, we performed PCA on the three breeds using the QC-passed SNPs, which revealed clear genetic differences among RW, WB, and LM chickens (Fig. 1a). Second, we used pairwise genetic distances to construct a neighbor-joining tree (Fig. 1b). Third, genetic coancestry analysis was performed by assuming different numbers of ancestral populations (K = 2-4, Fig. 1c) to classify the chickens into groups. The LD decay distance differed among the three breeds (Fig. 1d). As expected, the RW, WB, and LM chickens were genetically distant from each other.
Genomic signatures in purebred WB chickens
We performed the Fst test based on allele frequency differentiation, with a 40-kb window size and a 10-kb step size, to identify genomic loci that underwent selective sweeps among the three chicken breeds (Fig. 1e). By overlapping the results of the Fst analysis, we identified 287 protein-coding genes in the RW and WB populations and 241 protein-coding genes in the LM and WB populations (Fig. 1e, supplementary Tables S2-3). Notably, we detected a lead signal on chromosome 5 in the RW breed that contained the INS and IGF2 genes, which play key roles in skeletal muscle development [29]. We also examined the IGF2BP1 gene and found significant differences in Fst values among the RW, WB, and LM chickens (Fig. 1f). Subsequently, we identified the genes harboring the top selective sweep windows (Table S4). For instance, the known growth factors HBEGF, VEGFA, FGF23, and FGF6 play crucial roles in body development. TBX20 acts as a transcriptional activator and repressor required for cardiac development and for maintaining functional and structural phenotypes in the adult heart. TOLLIP is a Toll-interacting protein and an essential component of the IL1B and Toll-like receptor signaling pathways. TBX5 is involved in heart development and limb pattern formation. In addition, we conducted KEGG pathway and GO term enrichment analyses on the 453 selective sweep genes across all chickens (Tables S5 and S6, Fig. S1). Based on phenotype or physiological process, the GO terms fell into several clusters, including autophagy (e.g., GO:0006995, GO:0006914, and GO:0016236), energy metabolism (e.g., GO:0009060, GO:0022900, GO:0006091, GO:0007005, GO:0055114), and growth (e.g., GO:0071363, GO:0007169, GO:0007167).
Descriptive statistics of traits
We calculated descriptive statistics for the traits related to BW and body size (Table 1). The coefficients of variation of these traits in the population ranged from 4.62% to 12.94%. The SNP-based heritability estimates were high for the BW traits (0.47-0.67) and shank traits (0.33-0.59), but relatively low (0.04-0.05) for the TH, breast muscle, and body size traits.
GWAS and fine-mapping for body weight and body size traits
This study mainly focused on analyzing BW and body size traits. Manhattan plots and significant SNPs are shown in Figs. 2, 3, 4, and 5 and the corresponding Tables 2, 3, 4, and 5. Q-Q plots are shown in Figs. S3 and S4, and Table S7 lists the significant SNPs for all phenotypes. The additive effects of lead SNPs estimated by GEMMA are shown in Table S8.
We identified two significant regions for the ten BW traits: one on chromosome 1 (170.4-171.5 Mb) and the other on chromosome 4 (74.1-76.4 Mb). The chromosome 1 region was correlated with BW throughout the growth period. In particular, the largest region, associated with BW at 16 weeks, contained 218 significant SNPs, implicating genes such as RB1, RCBTB2, CAB39L, SETDB2, PHF11, ARL11, KPNA3, SPRYD7, RNASEH2B, and WDFY2. In contrast, the chromosome 4 region was associated with BW traits after 10 weeks of age. The most significant interval was correlated with BW at 16 weeks of age, comprising 1,076 significant SNPs and involving genes such as PPARGC1A, KCNIP4, SLIT2, LCORL, LDB2, and LAP3.
We also identified associations for breast muscle size traits on chromosome 4 (74.4-76.2 Mb). Specifically, 167 significant SNPs were associated with the CB trait and 343 with the CD trait, involving the genes SLIT2, KCNIP4, LAP3, LCORL, and LDB2.
For body size traits, 99 significant SNPs associated with the 18-week BOL trait were located in the SLIT2 and LCORL genes. The 167, 19, and 175 significant SNPs located on chromosomes 1, 2, and 4, respectively, were associated with the 18-week KL trait and involved the genes ITM2B, CAB39L, SETDB2, PHF11, ARL11, KPNA3, SPRYD7, YES1, COLEC12, SLIT2, LCORL, and LDB2. A total of 24 significant SNPs located in the 75.07-76.24 Mb region of chromosome 4 were correlated with PW traits, with SLIT2, LCORL, and LDB2 as the annotated genes.
LCORL, LDB2, and PPARGC1A are potential causal genes for body weight and body size
The LCORL, LDB2, and PPARGC1A genes were significantly associated with 13, 12, and 8 traits, respectively, in the above analyses. In this section, we focus on these three genes and analyze their polymorphisms in the three chicken breeds (RW, WB, and LM) with varied body sizes. The Fst analysis indicated that the significant SNPs in these genes differed markedly among breeds with pronounced differences in BW and body size (Fig. 6a, b, and c). Furthermore, the related SNPs showed strong linkage in the WB chicken breed (Fig. 6d, e, and f). These findings suggest that the genomic regions containing these three genes may have undergone selection during the development of different chicken breeds.
Discussion
The domestic chicken is an ideal model for investigating the genetics of phenotypic evolution [30]. Evolving poultry genetics and breeding have produced a diverse range of phenotypes and demographic histories in local breeds [31,32]. Domestication has also limited phenotypic differences among local breeds by selecting for genetic variants that favor traits leading to improved production [33]. Among these traits, body size plays a critical role in the profitability of poultry meat, so optimizing this trait has been an important goal during domestication [33,34]. Selection for particular traits is the decisive factor behind the substantial rise in productivity, accounting for more than 90% of the improvement [35]. Against this background, we conducted a comprehensive genetic diversity analysis.

Chicken breeds exhibit great variation in size in response to natural and/or artificial selection [36], yet understanding of the genetic mechanisms underlying this variability remains inadequate. Chicken body size primarily reflects the growth of muscles and bones [37,38], making growth a crucial selection criterion in chicken breeding. Genetically, chicken body size is a complex trait influenced by several genes on autosomal and sex chromosomes. Hundreds of QTLs have been mapped on autosomes for body size-related traits, such as SL, KL, and BW [37,[39][40][41][42][43][44].
A genome-wide association study of body size in Asian pheasants and Asian bantams identified a region on chromosome 4 (GGA4:17.3-21.3 Mb) containing a total of 60 genes, among which myotubularin 1 (MTM1) and secreted frizzled-related protein 2 (SFRP2) are potential candidate genes for body size traits [44]. Another GWAS included 541 chickens from 23 regional breeds in Italy, each breed comprising 20 to 24 chickens; significant SNPs associated with dwarfism in the dwarf breeds were found within the LEMD3 and HMGA2 genes, in a candidate genomic region shared on chromosome 1 [45]. The present study examined genome-wide associations for 10 BW traits and 16 body size traits. We identified a total of 2,522 genome-wide significant SNPs, most of which were located on chromosomes 1 and 4. The 72-76 Mb region of chromosome 4 contained genes such as PPARGC1A, KCNIP4, SLIT2, LCORL, LDB2, and LAP3. A GWAS of F2 progenies showed that the LDB2 gene was associated with BW at 7-12 weeks and average daily gain at 6-12 weeks [17]. A previous GWAS using the chicken 60 K SNP panel on 1,328 Korean native chickens analyzed body weight (BW) traits and identified twelve single nucleotide polymorphisms (SNPs) associated with BW at the suggestive significance level; these SNPs were found near or within 11 candidate genes, including WDR37, KCNIP4, SLIT2, PPARGC1A, MYOCD, and ADGRA3 [46]. Some of these genes overlapped with the results of this study. The NCAPG-LCORL locus is widely believed to influence height in human studies, and in GWAS of cattle and horses and whole-genome selective sweep analyses of pigs and dogs, this locus was significantly associated with body length and BW [47]. Furthermore, PPARGC1A has been shown to facilitate mitochondrial biogenesis and modulate skeletal muscle metabolism by mediating the flux of glycolysis and the tricarboxylic acid (TCA) cycle, which drives the transformation of fast-twitch myofibers to slow-twitch myofibers, thus increasing chicken skeletal muscle mass [48]. In another region, the CAB39L and WDFY2 genes on chromosome 1 (170.5-171.5 Mb) were identified as candidates associated with BW from 4 to 18 weeks of age and with body size traits. CAB39L can be considered a novel candidate gene for chicken growth and development [49], and WDFY2 may be a candidate susceptibility gene located downstream of TP63 in the network of limb development [50]. Other genes, such as IGF2BP1 and GIP, were associated with the SL trait. The GIP gene encodes an incretin hormone that induces insulin secretion [51] and mediates appetite and energy intake [52].
In domestic animals, loci with a significant positive effect on favorable traits tend to undergo strong selection and fixation. In this study, we investigated the genomic variations of the LCORL, LDB2, and PPARGC1A genes in the chromosome 4:72-76 Mb region by performing selective sweep analysis. The results indicated that these genes not only were associated with BW and body size traits, as revealed by the GWAS, but also were strongly selected and fixed among breeds with differences in BW and size. Thus, it is likely that the chromosome 4:72-76 Mb region harbors loci that significantly impact BW and body size.
Observations of the same individual at multiple time points are called longitudinal traits, and they represent growth and production in farm animals better than single data records [53][54][55][56][57]. The BW of chickens at different weeks of age is a classic example of a longitudinal trait. In this study, we performed GWAS independently at each time point to identify the genetic basis of BW. A more effective strategy, however, would be to fit a growth curve and use the fitted parameters to conduct the association analysis; this would better reflect the growth trajectory and provide novel insight into the genetic underpinnings of BW in chickens.
Conclusion
In conclusion, our GWAS identified 2,522 genome-wide significant SNPs, the majority of which are reported here for the first time. Several SNP effects overlapped with previously reported QTL regions, supporting the validity of those QTL effects. Using a combination of GWAS and Fst-based approaches, we identified three genes (LCORL, LDB2, and PPARGC1A) associated with BW and body size traits in the Chinese WB chicken. Our study provides important insights into the evolution and genetic basis of Chinese local chickens, which may benefit both domestic and international chicken breeders, and it may also contribute to the development of genome-scale selective breeding strategies aimed at increasing chicken meat yield.
Fig. 1
Fig. 1 Population genetic diversity and demographic history inferences. (a) PCA plot of the three chicken breeds. (b) Neighbor-joining tree constructed from genetic distances among the three breeds. (c) Population structure analysis of the three breeds of different body sizes, with the number of ancestral clusters set from K = 2-4. (d) LD decay in the three breeds. (e) Detection of selective sweep windows in purebred chickens; the red dashed line indicates the top 1% threshold of Fst values. (f) Putative selected windows and genes in the chromosome 27:6.05-6.09 Mb region. RW: recessive white chickens; WB: Wenshang Barred chickens; LM: Luxi mini chickens
Fig. 2
Fig. 2 Manhattan plots of GWAS for BW traits in WB chickens. Each dot represents a SNP in the dataset. The horizontal red and blue lines indicate the thresholds for genome-wide significance (P value = 1.07e-09) and suggestive significance (P value = 5.32e-09), respectively. BW: body weight
Fig. 3
Fig. 3 Manhattan plots of GWAS for breast muscle size traits in WB chickens. Each dot represents a SNP in the dataset. The horizontal red and blue lines indicate the thresholds for genome-wide significance (P value = 1.07e-09) and suggestive significance (P value = 5.32e-09), respectively. CB: chest breadth; CD: chest depth; TH: thoracic horn
Fig. 4
Fig. 4 Manhattan plots of GWAS for body size traits in WB chickens. Each dot represents a SNP in the dataset. The horizontal red and blue lines indicate the thresholds for genome-wide significance (P value = 1.07e-09) and suggestive significance (P value = 5.32e-09), respectively. BOL: body oblique length; KL: keel length; PW: pelvic width
Fig. 5
Fig. 5 Manhattan plots of GWAS for SL and SC traits in WB chickens. Each dot represents a SNP in the dataset. The horizontal red and blue lines indicate the thresholds for genome-wide significance (P value = 1.07e-09) and suggestive significance (P value = 5.32e-09), respectively. SL: shank length; SC: shank circumference
Fig. 6
Fig. 6 Association results of the candidate region on chromosome 4 for BW and body size traits. (a, b, c) Putative selected SNPs in the LCORL, LDB2, and PPARGC1A genes. RW: recessive white chickens; WB: Wenshang Barred chickens; LM: Luxi mini chickens. (d, e, f) Linkage disequilibrium (LD) analysis of the overlapping significant SNPs in the LCORL, LDB2, and PPARGC1A genes
Table 1
Descriptive statistics for BW and body size traits of WB chicken
Table 2
Overview of the significant SNPs associated with BW traits in WB chicken *BW: body weight
Table 3
Overview of the significant SNPs associated with breast muscle size traits in WB chicken *CB: chest breadth; CD: chest depth
Table 4
Overview of the significant SNPs associated with body size traits in WB chicken *BOL: body oblique length; KL: keel length; PW: pelvic width
Table 5
Overview of the significant SNPs associated with SL and SC traits in WB chicken | 2024-03-21T13:16:12.825Z | 2024-03-20T00:00:00.000 | {
"year": 2024,
"sha1": "bb59dba27237fbcd8956ec037c10734341affd3c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12864-024-10185-6",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7a89acf003cfddf3cb878eb2b7bdafee5a411f02",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
260681024 | pes2o/s2orc | v3-fos-license | Lysophosphatidylglucoside/GPR55 signaling promotes foam cell formation in human M2c macrophages
Atherosclerosis is a major cause of cerebral and cardiovascular diseases. Intravascular plaques, a well-known pathological finding of atherosclerosis, have a necrotic core composed of macrophages and dead cells. Intraplaque macrophages, which are classified into various subtypes, play key roles in maintaining a normal cellular microenvironment. Excessive uptake of oxidized low-density lipoprotein causes conversion of macrophages to foam cells, and consequent progression/exacerbation of atherosclerosis. G-protein-coupled receptor 55 (GPR55) signaling has been reported to be associated with atherosclerosis progression. We demonstrated recently that lysophosphatidylglucoside (lysoPtdGlc) is a specific ligand of GPR55, although physiological ligands of GPR55 are in general poorly understood. Phosphatidylglucoside is expressed on human monocytes and can be converted to lysoPtdGlc. In the present study, we examined the possible involvement of lysoPtdGlc/GPR55 signaling in foam cell formation. In monocyte-derived M2c macrophages, lysoPtdGlc/GPR55 signaling inhibited translocation of ATP binding cassette subfamily A member 1 to the plasma membrane, and cholesterol efflux. This inhibitory effect was reversed by the GPR55 antagonist ML193. LysoPtdGlc/GPR55 signaling in M2c macrophages was involved in excessive lipid accumulation, thereby promoting foam cell formation. Our findings suggest that lysoPtdGlc/GPR55 signaling is a potential therapeutic target for inhibition of atherosclerosis progression.
Foam cell formation is associated with decreased phagocytic activity and increased proinflammatory cytokine production, leading to progression/exacerbation of atherosclerosis 7,8 .
Intraplaque macrophages are classified into several subtypes. Activated macrophages are traditionally divided into two categories, M1 and M2 macrophages, respectively involved in pro-inflammatory and anti-inflammatory responses. There are four subtypes of M2 macrophages: M2a, M2b, M2c, and M2d; other phenotypes relevant to atherosclerosis development have also been described 7 . M1 macrophages promote atherosclerotic plaque formation through sustained inflammation, whereas M2 macrophages cause atherosclerotic plaque regression by promoting tissue repair, anti-inflammatory cytokine release, and efferocytosis 9 . M2c macrophages, in particular, can suppress innate inflammation by migrating to affected areas, efferocytosing early apoptotic cells, and releasing anti-inflammatory cytokines 10 . However, the roles of the various macrophage subtypes in atherosclerosis are not well understood.
G-protein-coupled receptor 55 (GPR55), an orphan class A G protein-coupled receptor identified in 1999 11 , recognizes multiple ligands and is associated with many physiological and pathophysiological processes affecting the central nervous system, cardiovascular system, bone remodeling, immune system control, gastrointestinal and metabolic function, and adipose tissue control 11 . In a mouse model of enteritis, GPR55-deficient mice displayed reduced levels of inflammation and macrophage infiltration 12 . GPR55 is highly expressed in human inflammatory cells (monocytes and macrophages) during atherosclerosis initiation and progression. Studies of the pro-atherogenic functions of GPR55 in phorbol-12-myristate-13-acetate (PMA)-differentiated human THP-1 macrophages and endothelial cells suggested that GPR55-mediated alteration of macrophage gene expression is involved in chronic inflammation associated with lipid metabolism 13,14 .
Phosphatidylglucoside (PtdGlc), a cell surface glycophospholipid originally detected in human umbilical cord erythrocytes, is involved in erythrocyte differentiation and apoptosis 15,16 . It is also expressed on the cell surfaces of phagocytes (neutrophils, monocytes) and plays a role in neutrophil differentiation and apoptosis 17,18 . Lysophosphatidylglucoside (lysoPtdGlc), a degradation product of PtdGlc, functions as a chemotactic molecule for human monocytes/macrophages via the GPR55 receptor 19 . Stimulation of cells results in degradation of PtdGlc by secretory phospholipase A2 to produce water-soluble lysoPtdGlc 20 . Our 2022 study revealed that UDP-glucose glycoprotein glucosyltransferase 2 (UGGT2), the enzyme responsible for PtdGlc biosynthesis, plays a key role in the regulation of lipid homeostasis 21 . Pro-atherogenic effects have been attributed to saturated fatty acids 22 . We proposed that UGGT2-dependent PtdGlc biosynthesis leads to elimination of excessive saturated lipids and reduction of hypoxia-induced lipid bilayer stress in the endoplasmic reticulum (ER) 21 .
Excessive circulating oxLDL is associated with atherosclerosis progression. oxLDL is taken up by intraplaque macrophages via scavenger receptors, including CD36 and lectin-like oxLDL receptor-1 23 . CD36 is highly expressed on macrophages in general, with higher levels on M2 than on M1 macrophages 24,25 . Following uptake, oxLDL is degraded by lysosomal acidic lipase (LAL) in late endosomes. For storage of degradation products in lipid droplets, cholesterol and fatty acids are transported to the ER and re-esterified by sterol O-acyltransferase 1 (SOAT1) 8,26 . Stored esters are metabolized by neutral cholesterol ester hydrolase (NCEH1) to free cholesterol 26 , and the resulting products (acyl glycerides and cholesteryl esters) accumulate in lipid droplets. In contrast, cytotoxic free cholesterol and fatty acids are loaded onto apolipoproteins and excreted extracellularly by two transporters: ATP binding cassette subfamily A member 1 (ABCA1) and ATP binding cassette subfamily G member 1 (ABCG1). In ABCA1- and ABCG1-induced cholesterol efflux, liver X receptors (LXRs) recognize intracellular cholesterol, a product of oxLDL 27,28 , and activate the peroxisome proliferator-activated receptor (PPAR) pathway, thereby promoting ABCA1 and ABCG1 biosynthesis 29 . This process is disrupted in foam cells, which incorporate excessive amounts of lipids, with a consequent imbalance of lipid uptake, lipid metabolism, and cholesterol efflux.
In early atherosclerotic lesions, foam cells and dead cells are efficiently removed by M2c macrophages to maintain the intravascular environment 30,31 . In advanced atherosclerosis, excessive foam cell formation promotes apoptosis regardless of M2c macrophage activity, and failure to remove these cells accelerates plaque formation, leading to atherosclerosis progression 30,31 . The reasons why M2c macrophages do not display notable anti-inflammatory effects in advanced atherosclerotic lesions are unclear 31 . Most intraplaque cells, including macrophages and neutrophils, express PtdGlc 32 , and such PtdGlc-expressing cells release lysoPtdGlc into the surrounding environment 20 . LysoPtdGlc activates macrophages via GPR55 19 . These observations suggest that PtdGlc/lysoPtdGlc/GPR55 signaling may be involved in macrophage-mediated foam cell formation. Here, we report an in vitro pharmacological evaluation of the GPR55-mediated effect of lysoPtdGlc on M2c macrophage foam cell formation.
Results
GPR55 mRNA expression was higher in M2c than in other macrophage subtypes. The GPR55 agonist O-1602 was reported by V. Chiurchiu's group to upregulate CD36 expression in THP-1 macrophages, which take up oxLDL 13 . Human monocyte-derived M2a and M2c macrophages, polarized respectively by IL-4 and IL-10, displayed enhanced lipid uptake and foam cell formation 33 . The relationship between GPR55-mediated signaling and foam cell formation in polarized human monocyte-derived macrophages remains unclear. Under our experimental conditions, cell surface marker protein expression was higher for polarized M1, M2a, and M2c than for M0 macrophages (Fig. 1A-C). RT-qPCR analysis revealed expression of the cannabinoid receptors GPR55, type I (CNR1), and type II (CNR2) in all macrophage subtypes; such expression was higher for M2c than for the other subtypes (Fig. 1D-F).

Human monocytes express PtdGlc 17 , and lysoPtdGlc is a specific ligand for GPR55 19 . Flow cytometric analysis using anti-PtdGlc mAb DIM21 revealed that surface PtdGlc expression was higher for M1 and M2c than for M0 or M2a macrophages (Fig. 1G).
Human monocytes express PtdGlc 17 , and LysoPtdGlc is a specific ligand for GPR55 19 .Flow cytometric analysis using anti-PtdGlc mAb DIM21 revealed that surface PtdGlc expression was higher for M1 and M2c than for M0 or M2a macrophages (Fig. 1G).and M2c macrophages were evaluated by staining lipid droplets from cells with ORO following oxLDL uptake (Fig. 2A, Supplement Fig. 1).ORO staining area was significantly larger in oxLDL (50 µg/mL)-treated than in nontreated cells for both M0 and M2c (Fig. 2B).Flow cytometric analysis revealed that most M0 and M2c cells took up Alexa Fluor 647-conjugated oxLDL (Fig. 2C); they were therefore defined as foam cells.
Effects of lysoPtdGlc/GPR55 signaling on lipid uptake and efflux. The anti-inflammatory activity of M2c macrophages is fairly efficient in early atherosclerotic lesions but becomes inefficient in advanced lesions 31,34 . In view of the reported ability of lysoPtdGlc to activate macrophages 19,20 , we examined its effect on the oxLDL-processing ability of M2c macrophages. Lipid accumulation was assessed by ORO staining. The intracellular lipid level in M2c macrophages was elevated by oxLDL uptake but unaffected by lysoPtdGlc (Fig. 3A). The GPR55 antagonist ML193 had no effect on oxLDL uptake. These findings indicate that lysoPtdGlc/GPR55 signaling is not involved in lipid uptake. ABCA1 and ABCG1 are transporters that load degradation products of oxLDL, cholesterol, and phospholipids onto apolipoprotein AI (ApoA-I) and high-density lipoprotein (HDL) 35 . The possible role of lysoPtdGlc/GPR55 signaling in ABCA1- or ABCG1-mediated cholesterol efflux in foamy M2c macrophages was investigated by cholesterol efflux assay (Fig. 3B,C). ABCA1-mediated cholesterol efflux was significantly elevated in these macrophages (Fig. 3B); it was significantly reduced by lysoPtdGlc stimulation, and the reduction was reversed by ML193 treatment. In contrast, ABCG1-mediated cholesterol efflux showed no notable elevation in foamy M2c macrophages (Fig. 3C) and was unaffected by lysoPtdGlc stimulation or ML193 treatment. These findings indicate that ABCA1-mediated (but not ABCG1-mediated) cholesterol efflux is inhibited by lysoPtdGlc/GPR55 signaling.
Effects of lysoPtdGlc/GPR55 signaling on levels of various proteins.
The ability to regulate internal lipid content affects foam cell formation in macrophages. The enzyme LAL (see "Introduction" section) hydrolyzes oxLDL in late endosomes, SOAT1 esterifies cholesterol and fatty acids in the ER to store oxLDL degradation products in lipid droplets, and NCEH1 metabolizes stored esters in the ER to free cholesterol. These are important processes in the metabolism of lipids taken up by macrophages 23,25,36 . We examined the effects of lysoPtdGlc on protein expression of these three enzymes in M2c macrophages. SOAT1 expression was significantly increased by oxLDL uptake but was unaffected by lysoPtdGlc in the presence or absence of oxLDL uptake, and also unaffected by ML193 (Fig. 4A,B). Expression of LAL and NCEH1 was unaffected by oxLDL, lysoPtdGlc, ML193, or combination treatments (Fig. 4C-F). These findings indicate that lysoPtdGlc/GPR55 signaling is not involved in SOAT1-, NCEH1-, or LAL-mediated lipid metabolism.
ABCA1 protein expression in M2c macrophages was significantly increased by oxLDL uptake but was unaffected by lysoPtdGlc in the presence or absence of oxLDL uptake, and also unaffected by ML193 (Fig. 4G,H). ABCG1 expression was not significantly increased by oxLDL uptake and was unaffected by lysoPtdGlc or ML193 (Fig. 4I,J).
ABCA1 is localized in intracellular vesicles and translocated to the cell surface through palmitoylation by zinc finger DHHC-domain-containing protein 8 (ZDHHC8), a palmitoyltransferase 37 . We therefore examined ZDHHC8 protein expression and found that it was unchanged by oxLDL, lysoPtdGlc, ML193, or combination treatment (Fig. 4K,L). Taken together, these findings indicate that lysoPtdGlc-mediated downregulation of ABCA1-dependent cholesterol efflux in M2c macrophages is independent of the alteration of ABCA1 protein expression induced by oxLDL uptake, and that lysoPtdGlc plays a role in ABCA1- but not ABCG1-mediated cholesterol efflux.
LysoPtdGlc/GPR55 signaling inhibits translocation of ABCA1 to plasma membranes. In view of the experimental results described in the "Effects of lysoPtdGlc/GPR55 signaling on levels of various proteins" section, we examined the possibility that lysoPtdGlc/GPR55 signaling suppresses ABCA1 translocation from intracellular compartments to the cell surface [38][39][40][41] , through flow cytometric analysis of surface ABCA1 expression in M2c macrophages. ABCA1 expression was high in intracellular compartments (Fig. 5A) but much lower on cell surfaces (Fig. 5B). The number of ABCA1-positive cells was increased by oxLDL uptake, and ML193 had no effect on this increase (Fig. 5C). The oxLDL-induced increase in ABCA1-positive cell number was inhibited by lysoPtdGlc, and this inhibition was reversed by ML193. Surface expression of ABCG1 on M2c macrophages was not upregulated by oxLDL uptake or by lysoPtdGlc stimulation (Supplement Fig. 2). These findings indicate that lysoPtdGlc/GPR55 signaling suppresses ABCA1 translocation to the cell surface and thereby inhibits ABCA1-mediated cholesterol efflux in foamy M2c macrophages.
Discussion
In macrophages, oxLDL is taken up, internalized, metabolized, and exported as cholesterol. Based on the cholesterol efflux assay, we demonstrated the essential role of lysoPtdGlc/GPR55 signaling in cholesterol efflux in M2c macrophages; lysoPtdGlc did not affect cholesterol uptake in these cells. LysoPtdGlc/GPR55 signaling had no effect on the expression of LAL, SOAT1, or NCEH1 proteins, which are involved in the metabolism of oxLDL. Present and previous findings 13 , considered together, suggest that cholesterol efflux in atherosclerosis-associated M2c macrophages is downregulated by lysoPtdGlc stimulation. Pretreatment with lysoPtdGlc inhibits ABCA1-mediated cholesterol efflux in foamy M2c macrophages, which take up excessive amounts of oxLDL. ABCA1 surface expression is inhibited by lysoPtdGlc/GPR55 signaling, although such signaling is not directly involved in the biosynthesis of ABCA1 protein. Cholesterol induces the PPARγ-LXRα pathway and upregulates ABCA1 and ABCG1 expression 35 , and cholesterol efflux is promoted by cAMP/PKA-induced phosphorylation at the cell surface 42 . The present findings indicate that lysoPtdGlc/GPR55 signaling is associated with translocation of ABCA1 to the cell surface, but not with the PPARγ-LXRα or cAMP/PKA pathway. ABCA1 translocation is regulated by the palmitoyltransferase ZDHHC8 37 ; however, lysoPtdGlc/GPR55 signaling had no effect on ZDHHC8 protein level. The mechanisms by which lysoPtdGlc/GPR55 signaling regulates ABCA1 translocation via ZDHHC8 remain to be elucidated. Our experiments demonstrated cell surface PtdGlc expression on several macrophage subtypes. PtdGlc is expressed by most intraplaque cells 32 . Activated macrophages, vascular endothelial cells, and vascular smooth muscle cells all induce apoptosis in plaque 43 . Neutrophils, which induce apoptosis via PtdGlc 17 , are localized at plaque erosion sites 44 . It is therefore conceivable that lysoPtdGlc is released from intraplaque cells during their activation and apoptosis. V. Chiurchiu's group reported increased GPR55 expression in THP-1 macrophages during foam cell formation 13 . The GPR55 agonist O-1602 increased oxLDL-induced lipid accumulation in these cells by upregulating CD36 and scavenger receptor class B type I, but decreased cholesterol efflux by downregulating ABCA1 and ABCG1 13 . Adhesion of THP-1 monocytes to vascular endothelial cells was enhanced by O-1602, with consequent promotion of atherosclerosis 14 , upregulation of proinflammatory TNF-α protein, and downregulation of anti-inflammatory IL-10 13 .
On the other hand, Y. Yin's group reported that the GPR55 antagonist CID16020046 inhibited, in human aortic endothelial cells, oxLDL-induced apoptosis, secretion of the inflammatory cytokines IL-8 and monocyte chemoattractant protein-1 (MCP-1), and oxLDL-induced expression of the adhesion molecules vascular cell adhesion molecule-1 (VCAM-1) and E-selectin 45 . These pharmacological results are consistent with a pro-atherosclerotic role of GPR55, with our observation of promotion of atherosclerosis progression by GPR55, and with the proposed involvement of lysoPtdGlc/GPR55 signaling in lipid accumulation in macrophages and foam cells and in atherosclerosis progression. Palmitoylethanolamide (PEA), an endogenous fatty acid amide, is another ligand for GPR55; it promotes efferocytosis by M2c macrophages and reduction of plaque size in early atherosclerotic lesions 46 . In contrast, in our proposed advanced atherosclerosis model, foam cell formation is promoted by lysoPtdGlc/GPR55 signaling. Excessive lipid uptake suppresses efferocytosis by M2c macrophages, leading to their conversion to foam cells 34 . GPR55-mediated functions of M2c macrophages may thus shift from a PEA-dependent anti-inflammatory effect to a lysoPtdGlc-dependent effect that promotes atherosclerosis development. To elucidate how GPR55 signaling regulates lipid metabolism homeostasis and promotes atherosclerosis progression in human atherosclerotic lesions, it will be necessary to determine the levels of lysoPtdGlc and the variability of GPR55 expression in atherosclerotic tissue. However, lysoPtdGlc is thought to be present only in trace amounts in biological samples 20 . Because lysophospholipids are readily degraded by phospholipases, it is difficult to obtain sufficient amounts of undegraded lysoPtdGlc from human atherosclerotic lesions. In addition, no current analytical technique can quantitatively measure trace levels of lysoPtdGlc in biological samples in the presence of other lysophospholipids; such techniques will need to be established. Furthermore, to clarify the molecular mechanism by which lysoPtdGlc and GPR55 are involved in the progression of atherosclerosis, further detailed analysis is needed in an atherosclerosis model using GPR55-deficient mice.
In this study, we demonstrated the GPR55-mediated effects of lysoPtdGlc on M2c macrophage foam cell formation in vitro. The results show that lysoPtdGlc/GPR55 signaling inhibits ABCA1-mediated cholesterol efflux in foamy M2c macrophages via inhibition of ABCA1 surface expression. These results suggest that lysoPtdGlc/GPR55 signaling is a potential therapeutic target for inhibiting atherosclerosis progression. Further studies of the molecular mechanisms by which lysoPtdGlc/GPR55 signaling contributes to M2c macrophage-mediated atherosclerosis progression are warranted.
Cell culture. Ethical approval for obtaining blood from healthy human volunteers was provided by the Ethics Review Board of Juntendo University Faculty of Medicine (authorization number: 2017170). All research was performed in accordance with the Declaration of Helsinki and relevant guidelines/regulations.
Peripheral blood was obtained from healthy volunteer subjects, with written informed consent. Peripheral blood mononuclear cells (PBMCs) were isolated from blood samples using Lymphoprep (Stemcell Technologies; Cologne, Germany) as per the manufacturer's protocol. PBMCs were suspended in DMEM/F-12 (Life Technologies; Carlsbad, CA, USA), plated on 6-well tissue culture plates (density 2 × 10⁶ cells/mL), and incubated for 3 h at 37 °C. Adherent cells were cultured in RPMI 1640 supplemented with 10% FBS (Biowest; Hiroshima, Japan) and 20 ng/mL M-CSF in 6-well plates for 6 days to induce differentiation into M0 macrophages. M1 macrophages were produced by culturing M0 macrophages for 24 h in RPMI 1640/10% FBS in the presence of 1 ng/mL LPS and 20 ng/mL IFN-γ. M2a and M2c macrophages were produced by culturing M0 macrophages for 24 h in RPMI 1640/10% FBS in the presence of (respectively) 20 ng/mL IL-4 and 100 nM dexamethasone.
Foam cells. M0 and M2c macrophages were incubated with oxLDL at various concentrations for 24 h and stained with Oil Red O solution (ORO) (ScyTek Laboratories; Logan, UT, USA) as per the manufacturer's protocol. Cells were observed using a fluorescence microscope (model BZ-X800; Keyence; Osaka, Japan) equipped with a 40× objective lens, and lipid droplet area was calculated using the BZ-H4C software program (Keyence). Data were expressed as the ratio of lipid droplet area to cell number in 10 randomly selected fields. Lipid quantity was determined using a microplate reader (2030 ARVO X4; PerkinElmer Japan; Tokyo) to measure the absorbance at wavelength 490 nm of isopropanol-extracted supernatants from ORO-stained cells.
LysoPtdGlc stimulation and oxLDL uptake. To evaluate the effects of lysoPtdGlc on the functions of the four macrophage subtypes under the cholesterol homeostasis conditions of advanced atherosclerosis, polarized macrophages were
Figure 2.
Figure 2. oxLDL uptake induces foam cell formation in M0 and M2c macrophages. (A,B) oxLDL uptake in macrophages was determined based on ORO staining. Cells were observed under light microscopy (40× objective lens), and ORO-stained areas were analyzed. Stained areas in M0 and M2c macrophages were normalized to ORO-stained cell number (A), and representative images are shown (B). Bars represent mean ± SEM from five independent experiments. (C) Alexa Fluor 647-conjugated oxLDL uptake in M0 and M2c macrophages was analyzed by flow cytometry; representative histograms from three independent experiments are shown. Dotted and solid lines: oxLDL treatments at 0 and 50 µg/mL, respectively.
Figure 3.
Figure 3. Lipid uptake and cholesterol efflux activity in lysoPtdGlc-stimulated M2c macrophages. (A) M2c macrophages were treated with ML193 (GPR55 antagonist) for 2 h, stimulated with 10 nM lysoPtdGlc for 2 h, incubated with 50 µg/mL oxLDL for 24 h, stained with ORO, and dissolved in isopropanol. Absorbance of supernatants was measured at wavelength 490 nm. (B,C) ABCA1-mediated (B) and ABCG1-mediated (C) cholesterol efflux in M2c macrophages was analyzed using TopFluor cholesterol under the various conditions indicated. Cells were treated sequentially with ML193, lysoPtdGlc, and oxLDL as in A. Data were normalized to non-treated cells. Bars represent mean ± SEM from three independent experiments. Data analysis and notations as in Fig. 1.
Figure 5.
Figure 5. LysoPtdGlc/GPR55 signaling reduces surface expression of ABCA1 on M2c macrophages. ABCA1 expression in M2c macrophages was analyzed by flow cytometry. (A) Cells were fixed, permeabilized with digitonin, and stained with anti-ABCA1 antibody, then stained with an Alexa Fluor 488-conjugated secondary antibody. A representative histogram from three independent experiments is shown. Dotted line, isotype control; solid line, anti-ABCA1 antibody. (B,C) ABCA1 surface expression was analyzed by flow cytometry. Cells were treated with ML193 for 2 h, stimulated with 10 nM lysoPtdGlc for 2 h, and incubated with 50 µg/mL oxLDL for 24 h. Representative histograms from three independent experiments are shown (B). Percentages of ABCA1-positive cells were normalized to those of non-treated cells (C). Bars represent mean ± SEM from three independent experiments. | 2023-08-08T06:17:39.778Z | 2023-08-06T00:00:00.000 | {
"year": 2023,
"sha1": "f48871638c4bc3637194faf6101327d97ef5e32a",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-023-39904-x.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cef770b2374faa71ddc0ad330bfe5c8b9be67781",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15865441 | pes2o/s2orc | v3-fos-license | An ontology-based method for secondary use of electronic dental record data.
A key question for healthcare is how to operationalize the vision of the Learning Healthcare System, in which electronic health record data become a continuous information source for quality assurance and research. This project presents an initial, ontology-based, method for secondary use of electronic dental record (EDR) data. We defined a set of dental clinical research questions; constructed the Oral Health and Disease Ontology (OHD); analyzed data from a commercial EDR database; and created a knowledge base, with the OHD used to represent clinical data about 4,500 patients from a single dental practice. Currently, the OHD includes 213 classes and reuses 1,658 classes from other ontologies. We have developed an initial set of SPARQL queries to allow extraction of data about patients, teeth, surfaces, restorations and findings. Further work will establish a complete, open and reproducible workflow for extracting and aggregating data from a variety of EDRs for research and quality assurance.
Introduction
Every day, patients ask their dentist questions. "How long will this filling last?," "Why is my gum disease not getting better?" or "Should I have an implant or a bridge?" Gathering information prerequisite to answering these kinds of questions so it is available when needed, can be applied effectively and efficiently, and keeping it up-todate as a part of the care process is a national priority 1 .
How to operationalize the vision of the Learning Healthcare System is an important issue not only for medicine, but also for dentistry. Key questions include: What research questions can be answered by analyzing data in electronic dental records (EDR)? How can data from the multitude of current EDR systems be extracted in a standardized, reproducible, manner? How can dentistry create a cycle of continuous care improvement based on EDR data? And, finally, how can information gathered in dental visits be integrated into a Learning Healthcare System strategy to effectively contribute to total patient health 2 ?
Typically, studies reusing electronic patient records extract data using a custom-developed, "one-off" mechanism, which is inefficient for data reuse on a broad scale 3 . Like electronic health records, most EDRs store information in a proprietary format. Efficient extraction of data is impeded by several factors. First, incompatible database systems require idiosyncratic application programming interfaces to access data. Second, no two EDR databases are structured the same way. Third, even when the same kind of information is stored it may not be encoded in the same format, requiring conversions (for instance, when blood pressure measurements are stored as two integers vs. a single text string). Last, encodings may not map unambiguously to each other, such as when different EDRs record presence of "caries," "root caries" and "incipient caries." We need a standardized approach that enables efficient access to information in EDRs, and integration across different dental care providers and EDR systems. Our approach is to structure data from dental patient records using a realist approach. We interleave the construction of our Oral Health and Disease Ontology (OHD) with the reencoding of the EDR data using the OHD, which more directly represents what happens during dental visits. The OHD includes terms relevant to the diagnosis and treatment of dental maladies, and is publicly available 4,5 . Notably, we did not start from scratch. The OHD incorporates terms from a growing network of interoperable ontologies built using principles of the OBO Foundry 6 . In this paper, we report on initial efforts to represent dental patient data contained in an EDR and to build the supporting OHD. We describe a snapshot of the in-development ontology, selections from patient records represented using the OHD, and sample queries that retrieve relevant data. We conclude with a discussion of the benefits and challenges of our approach as concerns meaningful use of EDRs aggregated across practices, practice software, and with other sources of health information such as the EHR.
Methods
The data source for this project was a relational database of de-identified dental records for 7,337 patients from a single dental practice, spanning the years 1999-2011; only some 4,500 of these patients had treatment records. The practice used Eaglesoft (Patterson Dental, Effingham, IL), one of the leading EDR systems in the US (18% market share). The database contained 232,270 records pertaining to patients' dental health history, of which 54,000 dealt with restorative, endodontic, and surgical procedures.
Our interdisciplinary team is composed of dentists, informaticians, ontologists, and clinical dental researchers. We first developed a set of research questions that we felt could be answered with the data. We then met a number of times to bring each of us to a reasonable level of mutual understanding of the domain and common informatics issues. Subsequently, we acquired basic familiarity with the database structure by reviewing vendor-supplied documentation in the form of sample queries and explanations of what they did. Our clinical dental researcher wrote a target spreadsheet format to make the deliverable for our work as concrete as possible. Once this was in place, we worked iteratively to develop the framework presented in Figure 1, with each iteration developing the parts of (3), (4), and (5) focused on one or a small number of entities involved in restoration and subsequent dental work (1). We have implemented all steps except statistical analysis (6).
Development of guiding dental clinical research questions
The purpose of developing these questions was to (1) enable studies of interest to general practitioners; (2) reuse a small selection of data commonly stored in EDRs; and (3) focus the development of the OHD on a clearly defined, tractable subset of clinical data. Questions included: What is the time from one restoration to its replacement on the same tooth? Does the time between successive restorations depend on the restorative material, such as amalgam and composite? What findings, e.g. caries and fracture, are present on a tooth over time and how do these relate to restorations (e.g. cause for placing the restoration)?
First pass development of the Oral Health and Disease Ontology (OHD)
OHD terms were added as needed to represent entities involved in the procedures for which data were requested. Once a tentative definition was sketched, we looked for superclasses in a subset of existing OBO ontologies that aspire to follow the OBO Foundry principles. The clinical processes of interest were restorative procedures, dental procedures that indicated failure of those restorations, and clinical examinations that produced relevant information. Participants were patients, their teeth, the surface layers of those teeth, and materials used for restorations. The primary information entities used were the CDT billing codes, as well as clinical findings generated during examinations.

Figure 1. … (1) is information that is hard to aggregate from different EDRs (2). In our approach, we iteratively extract data out of EDRs (3) by writing scripts to translate data to an ontology-structured knowledge base (4). In each step we also develop queries (5) that incrementally provide the necessary data to do the statistical analysis (6).
We identified the following ontologies for reuse and/or specialization: the Ontology for General Medical Science (OGMS) 7 (entities related to health care [patient role, visit, disorder, symptom]); the Foundational Model of Anatomy ontology (FMA) 8 and the Common Anatomy Reference Ontology (CARO) 9 (anatomical descriptions of teeth, tooth surfaces, jaws, etc.); the Ontology for Biomedical Investigations (OBI) 10 (properties relating processes to entities, for instance a restoration material to the restoration procedure); and the Information Artifact Ontology (IAO) 11 (CDT billing codes 12 , clinical findings, the relationship between them and what happened, and provenance information regarding the development of the OHD). The OHD, therefore, is an aggregation of terms imported from other ontologies, together with terms our team defined as subclasses or specializations of those terms. Other potentially helpful resources include the dentistry-focused subset of UMLS and emerging dental diagnostic coding systems such as SNODENT, which can be leveraged as the scope of the OHD increases to include more aspects of clinical dental care.
Extracting from and translating information in the EDR
Each step of this phase focused on translating information about a single kind of entity, e.g. patient, tooth, procedure, tooth surface layer, and relations that connected this entity to others, e.g. surface layers being part of teeth, teeth participating in (in the ontological sense) procedures.
Although we clearly understood what information we needed to answer our research questions, in many cases it was not clear how this information was encoded. We supplemented our initial understanding of the database by reviewing documentation about stored procedures, triggers, relationships between tables, and the types of data in each table's fields. When necessary, we consulted the vendor in order to understand the table structure and obtain SQL queries that would return data for the use cases of interest. Once we understood what entities and relationships the data represented, we developed computer programs to extract the data in preparation for the next step.
The ontology-based knowledge base we constructed consists of the OHD, instances of its classes, and relations among those instances. In each round of development, the data we extracted provided partial information; in one round, for example, we extracted patient information, including birth date and gender. Each patient was represented by an instance of a gender-specific subclass and related to their birth date. At this point we sometimes needed to add new terms to the OHD. After adding any necessary terms, we translated the information retrieved from Eaglesoft into OWL statements, which were added to our knowledge base.
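To illustrate this translation step, here is a minimal Python/rdflib sketch. It is not the authors' code: the OHD base URI, the class name, and the property name are placeholders invented for illustration.

```python
from rdflib import Graph, Literal, Namespace, RDF, XSD

OHD = Namespace("http://purl.obolibrary.org/obo/ohd/")  # assumed base URI
EX = Namespace("http://example.org/patient/")           # practice-local IDs

g = Graph()
g.bind("ohd", OHD)

patient = EX["p0001"]
# Type the individual with a (hypothetical) gender-specific OHD class,
# and attach the birth date extracted from the EDR row.
g.add((patient, RDF.type, OHD["female_dental_patient"]))
g.add((patient, OHD["birth_date"], Literal("1964-07-02", datatype=XSD.date)))

print(g.serialize(format="turtle"))
```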
We developed SPARQL queries in tandem with this process, attempting, in each iteration, to get closer to generating the information specified by our clinical researcher. In a number of cases, using actual patient data made us realize that aspects of this specification were unclear, underspecified, or that the underlying EDR could not supply the information requested. This, in turn, led to adjustments of the specification. In this way, we constructed both the OHD and the OHD-structured representations in manageable increments, a process that is continuing.
Results
Here we show the results of our first integration of data from an EDR system with the OHD ontology and some preliminary analysis of the data. At this time, the OHD comprises roughly 150 classes whose URIs are in the OHD namespace, about 200 CDT code classes (a subset of the complete set), 12 classes from OBI (selected terms), all 82 OGMS classes, 14 selected terms from IAO, 1 term from the NCBI taxonomy (Homo sapiens), about 1,500 terms from the FMA (all parts of the jaw and maxilla, dentition and tooth sockets), 3 terms from CARO, all 32 terms from an early draft of BFO2, and about a dozen relations. Some of our work was to develop a way of using the CDT codes within our ontological framework. Most of the classes created de novo were related to procedures and roles specific to dentistry, materials used in dental restorations, and clinical findings relevant in dental treatment that were not available in existing biomedical ontologies. Illustrative examples:
tooth restoration procedure (class): A dental procedure in which parts of teeth that have been lost due to disease or other causes are replaced by alternative materials in order to reform the teeth and reestablish anatomical and functional form and health.
dental visit (class): An outpatient encounter during which a dental health care professional and a patient meet for the purpose of evaluating, treating, or preventing deterioration of the patient's teeth and supporting structures.
The knowledge base provides the capability to query across all of the data in new and meaningful ways. For instance, we can query for the material used in filling restorations in a given practice during a certain time frame. Figure 3 shows the result for the materials used between 1999 and 2011, grouped by year. Interesting here is that the overall number of restorations increased between 2003 and 2008, and that the percentage of amalgam was higher in the earlier years. Figure 4 is the SPARQL query used to retrieve the data for this chart.
To the best of our knowledge, the approach presented in this paper is the first that leverages Semantic Web technologies for structuring and mining EDR data.
Figure 4: SPARQL query to retrieve data for use in Figure 3. The query asks for filling restoration procedures and the dates on which they occurred, determining the material type by asking which participant in the procedure was a dental restoration material.
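Since the query in Figure 4 is not reproduced here, the following is a hedged reconstruction of a query of that shape, written against the placeholder IRIs of the earlier sketch; the real OHD uses different, published term IRIs.

```python
# Sketch: count filling restorations per year and material class.
from rdflib import Graph

g = Graph()
g.parse("ohd-kb.ttl", format="turtle")  # knowledge base from the earlier sketch

query = """
PREFIX ohd:  <http://example.org/ohd/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?year ?material (COUNT(?proc) AS ?n)
WHERE {
  ?proc a ohd:filling_restoration_procedure ;
        ohd:occurrence_date ?date .
  ?mat  a ?material ;
        ohd:participates_in ?proc .
  ?material rdfs:subClassOf ohd:dental_restoration_material .
  BIND (YEAR(?date) AS ?year)
  FILTER (?year >= 1999 && ?year <= 2011)
}
GROUP BY ?year ?material
ORDER BY ?year
"""

for year, material, n in g.query(query):
    print(year, material, n)
```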
These preliminary results show the feasibility of extracting semantically structured data from an EDR system. The main advantages are the flexibility and complexity of the queries that can be performed and the advanced analysis of patient dental records that this will enable. In addition, having dental data semantically structured facilitates the integration of data coming from different EDRs, and potentially EHRs, and enables the use of other semantic biomedical data (available, for instance, as Linked Data 13 ) for data analysis.
An important contribution of our approach relates to data quality and reliability. During the mapping process we identified redundant, meaningless and even incorrect data (related, for instance, to field values that had been altered in order to facilitate display of the data, such as by adding a prefix). While our method will not fix all possible issues with the source data, it can enhance data quality by fixing duplications and errors, by identifying incorrect practices in data entry, and by removing redundant legacy data.
Our work also led to improvements of the ontologies we reused. Our observation that dentists were concerned with surface layers of teeth prompted the FMA to include such entities in a subsequent version. We also identified some errors, such as the assertion that maxillary dentition is part_of secondary dentition (which would, incorrectly, include primary maxillary dentition in secondary dentition). Identifying such errors and fixing them in the source ontologies benefits all others using these ontologies.
Together with these benefits, the proposed approach also poses some challenges. First of all, there is the difficulty of interpreting the structure and content of the data sources. The involvement of Eaglesoft vendor personnel and the dental practice was fundamental for ensuring that the source data were translated correctly into our knowledge base. The same level of involvement should be expected when replicating our approach for a different vendor/practice combination. Still, by publishing our work as open source (http://code.google.com/p/ohd-ontology) we hope to ensure that this effort needs to be made only once, instead of each time a different group wants access to such data.
Another challenge is identifying and properly translating into our knowledge base events or findings that rely on what might be missing data, or on complex patterns of findings or procedures. For instance, it is usually not possible to determine whether a "missing tooth" finding on a patient's first visit is the result of a concurrent extraction, of the tooth never having formed, or of the tooth having been lost to advanced periodontal disease.
Our immediate plans are to carry out more extensive analysis of the data, using a SPARQL extension to R to enable our dental clinical researchers to answer more complex clinical questions without the assistance of an intermediary. We also plan to make de-identified analysis data available as Linked Data, and to continue developing the OHD and the translation methods until the full content of the EDR is available. | 2018-04-03T04:48:15.680Z | 2013-03-18T00:00:00.000 | {
"year": 2013,
"sha1": "43748fe060cff26c5b4953a185c2b75782950c0e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "43748fe060cff26c5b4953a185c2b75782950c0e",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14151593 | pes2o/s2orc | v3-fos-license | Early Eating Behaviours and Food Acceptance Revisited: Breastfeeding and Introduction of Complementary Foods as Predictive of Food Acceptance
Current dietary advice for children is that they should eat at least five portions of fruit and vegetables a day (Department of Health. National Diet and Nutrition Survey, 2014). However, many parents report that children are reluctant to eat vegetables and often fail to comply with the five-a-day rule. In fact, in surveys carried out in areas in the UK, the number of children eating according to the five-a-day rule has been found to be as low as 16 % (Cockroft et al. Public Health Nutr 8(7):861–69, 2005). This narrative review looks at those factors which contribute to food acceptance, especially fruit and vegetables, and how acceptance might be enhanced to contribute to a wider dietary range in infancy and later childhood. The questions we address are whether the range of foods accepted is determined by the following: innate predispositions interacting with early experience with taste and textures, sensitive periods in infancy for introduction, breastfeeding and the pattern of introduction of complementary foods. Our conclusions are that all of these factors affect dietary range, and that both breastfeeding and the timely introduction of complementary foods predict subsequent food acceptance.
Introduction

Early Food and Taste Acceptance
Infants are born with specific taste preferences and aversions; however, specific food preferences cannot be hardwired: humankind needs to be flexible about which foods can be accepted because different cultures depend upon a wide range of foodstuffs. It would therefore be useful for infants to rapidly accept the predominant tastes that define the foods of their culture or subculture so that they are able to learn to like the food available in their environment. In general, the foods we learn to like in infancy and early childhood do predict those that we eat in later childhood and adulthood [2]. However, although it would seem that these preferences are mostly learned postnatally, it would also seem that there are innate preferences which ensure acceptance of sweet, smooth and high energy density foods [3, 4•], which predict good sources of energy and which are easily consumed.
In addition, there are marked differences in the willingness to accept food tastes and textures, and to try new foods that have not already been introduced to the diet, both in children and adults. To some extent, most people are 'fussy' in that they have a few foods that they will not eat, and most are reluctant to eat very novel foods from different cultures. In the UK, from clinical experience, the foods that adults usually find aversive are those of difficult texture, such as shellfish, bananas and mushrooms, or of difficult taste and smell, such as olives and fish.
This 'fussiness', which is a rather poorly defined term, is however more apparent in children. There is also a key stage in development during which children show extreme new food refusal, the neophobic stage, and it is not always clear how distinct 'fussiness' is from neophobia [5]. This stage, which peaks at around the age of 20 months, is more extreme in some children than in others [6] and gradually fades away by the age of 5 to 8 years [7]. There is then an interaction between the innate reluctance of some children to accept tastes, textures and new foods and the effect of early exposure to new tastes and textures. A child might not accept a food because of innate predisposition or because they have not been given that food during a possible 'sensitive' period for introduction and familiarisation.
Research studies in this area report on different methodologies, some looking at intake of a food in early infancy before the neophobic stage has been reached [8, 9, 10••, 11•], others looking at acceptance of a new food during the neophobic stage [12,13], and further studies, usually longitudinal, looking at dietary range in older children [14,15]. The research studies reviewed here, therefore, cover a range of methodologies and look at short term or long term influences on food acceptance and dietary range and cover acceptance of and preference for odours, tastes, textures and foods.
Amniotic Fluid
There is some evidence that the experience of amniotic fluid, in turn affected by maternal diet, determines some preferences observed in the newborn infant, but these observed preferences are for a specific odour. Infants orient towards the odour of flavours experienced in the amniotic fluid from foods eaten by their pregnant mothers [16]. Not all food flavours can pass through to this medium, and those that do tend to be strong and rather idiosyncratic tastes, such as garlic or anise [16][17][18]. Prenatal learning through exposure to the uterine environment occurs, but the demonstrated preference is not one of intake but an orienting response, and the reported influence on subsequent food acceptance in the infant is not well founded. The one study that has found an association with later preference, for example prenatal garlic exposure predicting consumption of a gratin containing garlic at 8-9 years, does not control for the interim exposure period and, therefore, does not provide evidence for the long-term effects of prenatal flavour exposure [19].
Birth
The infant is born with a preference for a sweet taste [3,20] and with a relatively neutral or positive response to salt and sour tastes, and possibly to umami, depending on the concentration used when testing. There is, however, a distinct aversive response to a bitter taste [3,21]. This preference is thought to be adaptive in that sweet tastes are usually associated with good sources of energy [22]. The aversion to a bitter taste is adaptive in that this taste is often associated with toxicity [23], and this is why many plants (or green vegetables) have developed a bitter taste, to prevent being eaten by mammals. There is, however, variation in the extent of bitter taste aversion. Bitter 'supertasters' can be found in both adult and child populations [24,25], which makes the exposure to and acceptance of such tastes especially difficult. Supertasters have been found to have a reduced liking for cruciferous and Brassica vegetables, such as broccoli [26]. There is also a possibly genetically determined response to other tastes such as geosmin, the earthy quality present in certain foods such as beetroot and mushroom [27].
Heritability in the acceptance and rejection of foods has also been observed, both in a general neophobic response [28] and in the rejection of specific foods (meat, fish, fruit and vegetables), but not of fatty foods of smooth texture such as yoghurts. However, this rejection could be one of texture, rather than taste [29]. It has been reported that infants in the transition to solid foods do not accept foods such as leafy vegetables and sliced meat well because of the texture of these foods [30].
Early Milk Feeding
Formulae

Some flavour preferences might be learned from the intra-uterine environment, but they are also learned during the early stage of milk feeding; they are modified by exposure and to some extent predict subsequent acceptance of foods. However, whilst this learned preference does not seem to be for specific foods (or flavours) eaten by the mother whilst breastfeeding, specific taste modification has been reported in formula-fed infants who have been exposed to bitter hydrolysate formulae, and this modification of taste acceptance is most marked if started shortly after birth [31]. This easy acceptance of a bitter-tasting formula after early exposure continues into later childhood [32] and, to some extent, generalises to other similar tastes. Children fed with bitter hydrolysates during infancy preferred sour-flavoured juices at 4-5 years [33]. However, the preference would seem to be context specific: a higher intake of a bitter food (broccoli) rather than a sweet food (carrot) was not found in infants fed with vegetable hydrolysates compared with those fed normal formula [34].
Learned Preference for Specific Foods
The transmission of taste compounds from the mother's diet through breast milk to the infant has been observed, but it can vary widely from mother to mother and differ according to the food eaten, and the compounds are transferred in relatively small amounts. The change to the taste of the mother's milk is therefore likely to be subtle and variable [35]. Some specific and rather idiosyncratic transmission has been noted, such as that of garlic, caraway, cigarettes and alcohol [36][37][38][39]. This changeable nature of breast milk does seem to facilitate the acceptance of complementary foods when these are introduced. However, research does not support the idea that increased acceptance of a specific pureed food fed to infants is linked to specific foods in the maternal diet [39]. Mennella, Jagnow and Beauchamp [40] found ratings of greater enjoyment of a target food (carrot), but no difference in intake, between the infants of mothers who had been exposed to carrot and those who had not. Similarly, pureed green beans and peaches were given to infants of mothers who had breastfed, were still breastfeeding or had formula fed [41]. Infants who were breastfed ate more of the peaches, and mothers of the breastfed infants had eaten more fruit during the week prior to testing, but not peaches specifically. There was no difference in intake of green beans between formula-fed (FF) and breastfed (BF) infants. Infants increased their intake of green beans after a period of 8 days of exposure, regardless of whether they were FF or BF, suggesting that exposure to the actual foods themselves is a much more robust effect.
Generalised Food Acceptance
Therefore, although breastfeeding would seem to confer some advantage over formula feeding in subsequent food acceptance, the effect is more one of acceptance of taste change or taste variety. The mother's consumption of a specific food does not produce in the infant a preference for that food rather than for an isolated taste or flavour; rather, the taste of breast milk fluctuates according to changes in maternal diet, whereas infant formula milk does not vary in taste. The enhanced acceptance is therefore based on a generalisation effect: the greater the varied experience of tastes, the better the acceptance of a new taste; a generalisation effect also observed throughout the introduction of complementary foods (ICF). What is common to each of these studies looking at the effect of breastfeeding is that each study includes some exposure to complementary foods fed to infants via the spoon, and that even infants who are formula fed respond quickly to this exposure in the ICF period. Infants exposed to the flavour of caraway through breast milk showed a subsequent higher intake of caraway-flavoured puree, but this heightened preference in comparison with formula-fed infants was no longer evident after a 10-day exposure period for all infants [42].
Long-Term Effects of Breastfeeding
A beneficial effect of breastfeeding has been noted in studies looking at food acceptance in older children and later infancy [5,43]; however, it is not always clear which intervening factors might be operating and whether or not factors such as maternal SES and early feeding practice have been controlled for [10••, 15]. Both parental educational level and breastfeeding predict higher consumption of vegetables [44]. Higher-SES mothers were more likely to have foods such as fruit and vegetables in the house, certainly if they were eating these themselves [45]; the infant will therefore be exposed to the sight and smell of the foods, as well as the taste via breast milk, and these in turn will affect food intake [46, 47•]. It could also be that higher-SES mothers who breastfeed are more likely to give the infant home-prepared foods rather than to rely on commercial baby food, and this trend in itself has been shown to predict subsequent fruit and vegetable intake in older children [14,48,49].
Tastants Added to Foods
There is a clear learned acceptance of a specific taste in first foods given to an infant. In a sample of 6-month-old infants, who had already been started on solids, there was a relationship between the infant's experience of a taste (salt) and their acceptance of the taste in a bland rice base [50]. This acceptance and preference was quickly learned and was higher in infants aged 16-17 weeks than in infants 18-25 weeks [51]. Single tastes are therefore rapidly accepted; real foods, however, have a more complex combination of flavours, and so we cannot assume that infants learn to accept more complex tastes as rapidly, nor do we know whether preference in complex tastes would be for the predominant taste or for all of the taste compounds.
Preference for Specific Foods
The advantage of exposure to foods, rather than to flavours which pass through breast milk, is that the tastes that are experienced are usually in the context and combination that will be carried on into adulthood. This may not be true, however, if the infant is fed a diet predominantly comprising commercial baby food, in which tastes are often masked by other more acceptable sweet tastes [14]. There has been one study [8] which has attempted to bridge this gap between milk flavour and first food acceptance. Mothers of infants with an average age of 5 months were asked to feed their infant expressed breast milk or infant formula with added vegetable puree for 12 days, baby rice with the added vegetable puree for a further 12 days, followed by 11 days of exposure to the vegetable puree alone. At follow-up, vegetable puree intake was measured and there was an effect of exposure: the intervention group showed increased vegetable intake specific to those vegetables introduced. However, the infants were not assessed at the end of the milk feeding intervention, so this effect could be merely due to the early experience of vegetable puree and rice.
In a similar study [52], foods were introduced to bottle-fed infants at a mean age of 4 months and tested 3 weeks later. After a 9-day exposure period during which infants were fed either carrots, potatoes or a variety of vegetables, infants ate more carrots after exposure to carrots. When chicken was introduced as a new taste, infants in the variety group ate more of it than the other groups. Variety of early exposure does seem to influence the acceptance of new foods. In both of these studies, however, it was noted that infants always seem to prefer vegetables such as carrots, which have an inherently sweet taste, to green beans or potatoes. It would seem then that it is relatively easy to induce a food preference with repeated exposures where there is a similarity to an innate taste preference or to an already accepted food [8,9,39,52].
Generalisation Effect
Two studies have looked specifically at generalisation effects; that is, whether new foods are more likely to be accepted if a variety of foods is offered initially. In the first study, infants (mean age 5.2 months) were exposed for 9 days to a single vegetable, a variety of vegetables with daily change, or a variety with change every 3 days. Where the food had been rotated daily, infants showed an enhanced acceptance of a new food (zucchini-tomato, peas, meat and fish) [10••]. However, this finding was not replicated in a recent study [11•] looking at early and late introduction of vegetables within the 4-6-month period. Acceptance of a novel vegetable was measured after a 9-day exposure period in two groups of infants. During the exposure period, one group was given a single vegetable and the other group a variety pack of three vegetables. Although there was no main effect of vegetable variety on new food acceptance, there was an interaction between age of introduction and variety: acceptance at the later age (5.5-6 months) was better if a variety of vegetables rather than a single vegetable had been given. This suggests a sensitive period for the acceptance of new tastes, similar to previous taste studies [50], early within the introductory period.
Long-Term Effects
Long-term effects of the timing and type of complementary foods introduced have been reported in various studies looking at children of different ages. More frequent acceptance of new foods during the neophobic period has been reported in those children who were introduced to complementary food earlier within the usual period of introduction [12] (4-6 months is commonly reported in the UK [1]). Furthermore, the earlier the age at which children had been introduced to fruit and vegetables (mean age of introduction 4.8 months for fruit and 6.2 months for vegetables), the greater the child's intake at 2-6 years [44]. These findings are supported by a longitudinal study of older children, in which the frequency of consumption of home-cooked fruit and vegetables at 6 months of age predicted a higher proportional intake of fruit and vegetables at 7 years [14].
The studies involving exposure to vegetables and fruit in infancy can be quite complex and confusing with attempted crossover exposures and new foods which might be either fruit or vegetables [53]. However, what they show in general is that some vegetables are more difficult than others (green beans versus carrots), with longer exposure periods needed for the more aversive, usually bitter, tastes. On the whole, it can be concluded that it is relatively easy to induce a preference within the usual period of the introduction of complementary foods, that earlier introduction within the time period facilitates acceptance, and the greater the variety of foods introduced, the more likely the infant is to readily accept other foods.
Texture
The concept of a sensitive period for the introduction of food of a texture other than puree was first suggested by Illingworth [54] and was based on case studies of hospitalised infants. Past and current research supports this observation, suggesting that it might be easier to get infants to accept new textures, and to progress with texture acceptance, if they are introduced earlier within the accepted time frames for introduction. It is usual practice within the UK for pureed food to be offered between 4 and 6 months of age. A survey carried out in the UK in 2011 reported that approximately 80 % of infants had been given their first foods by the age of 5 months [1]. Subsequent to this, it is advised that more 'lumpy' solids be given from around 6 months of age [55], and over 50 % of infants in one study had been given foods that required chewing by the age of 7 months [56]. The acceptance of a wider range of textures by the end of the first year is important when we consider the onset of neophobia in the second year of life and the type of foods that commonly present with complex and/or multiple textures. The 'mouth feel' of textured food is difficult for many children, and they typically prefer smooth foods to foods with 'bits' in them [4•]. Most fruit and vegetables, unless pureed, are foods which have complex textures. A tomato, for example, has a firm skin, a pulp and seeds, all of which require different oral-motor skills to process them. These oral-motor skills are usually learnt between the ages of 6 and 12 months, the period in which the tongue learns to move solid food around the mouth in preparation for swallowing, and this ability is dependent upon the experience of textured food within the mouth [55], rather than on any particular age or developmental stage.
It has been observed, again in hospitalised infants, that those who are introduced late in the first year to textures other than smooth or puree are less likely to accept difficult textures in later childhood. Indeed, children who are introduced after the first year are more likely to become orally defensive and to refuse anything other than a smooth texture. They are more likely to gag and vomit when given solid foods, and in response to this, parents become more reluctant to persevere with solid food introduction [56][57][58].
There is, however, only one experimental study which looks at texture progression and acceptance in infancy.
Twelve-month-old infants were given pureed and chopped carrots; infants consumed more of the pureed carrots, but there was variability in the infants' willingness to take the chopped carrot. The strongest predictor of the acceptance of chopped carrot at 12 months, other than the presence of teeth, was earlier experience with textured foods [59]. In addition, children who had been used to a high variety of different foods in their diet ate more of the chopped carrot; this again reflects the generalisation effect: the greater the experience, the greater the willingness to try. A small advantage associated with breastfeeding was observed in these children; longer duration of breastfeeding was associated with higher variety in the diet and greater acceptance of chopped carrot.
Two analyses of longitudinal databases show a similar advantage of early experience. In the first study [60], children introduced to lumpy solids after the age of 10 months were more difficult to feed and were fussier at 15 months than children introduced earlier to lumpy solids. Those introduced to complementary foods after 10 months also ate fewer family foods and more baby foods such as baby cereals. In a second analysis of these data, children introduced to lumpy solids after the age of 10 months were reported as having more feeding problems at 7 years. They were also reported as eating fewer portions of fruit and vegetables, eating less across all ten categories of fruit and vegetables assessed at 7 years. Those introduced to complementary foods by 6 months ate more green leafy vegetables, green vegetables, tomatoes and citrus fruits than those who were introduced later, even when breastfeeding duration was controlled for within the analysis [61].
Given that these data are based on longitudinal reports, it could be that those introduced later to lumpy solids were introduced later precisely because they were already more difficult to feed and more reluctant to accept textured foods.
A further longitudinal questionnaire study [48] did observe a relationship between acceptance of a range of textured foods and feeding style, whether breastfed or formula fed. Interestingly, breastfeeding and bottle feeding with a 'chewing-style' teat were both reported as promoting feeding progress. However, it was also noted that food acceptance was greater where family foods were given more often to the infant. There is then a relationship between longer breastfeeding duration and the extent to which family foods, rather than pureed or commercially available baby foods, are fed to the infant as first foods [62], and this early and prolonged introductory period to real food tastes and textures generally influences subsequent texture acceptance.
Sensory Sensitivity
One of the newer areas of interest in food acceptance is that of sensory hypersensitivity, or hyper-reactivity to sensory arousal. This denotes an over-awareness of and over-responsivity to stimuli, an over-arousal which can give rise to an aversive reaction to normally non-threatening factors in the environment [63]. Specifically, oral/visual/tactile/olfactory hypersensitivity can lead to a limited range of foods accepted within the diet, a limited acceptance of textures and a fear of trying new foods [58].
It has been found that preschool children who are tactile defensive have more problems with food of various textures [64]; that boys with higher smell reactivity are more neophobic [47•]; and that preschool children with taste, smell and tactile sensitivity are more neophobic and less likely to model their mother's fruit and vegetable consumption [65]. The effects of this sensory sensitivity can also be observed in the food choices of older children; taste/smell sensitivity was found to be associated with a limited-range diet in children from 5 to 10 years [66]. A relationship has also been found between neophobia, or limited acceptance of range, and the hedonic evaluation of tactile substances in children aged 2-4 years [67] and 4-7 years [68].
As this hypersensitivity would seem to be an innate trait, it might also contribute to the reluctance of some infants in the early introductory period to accept new flavours or, more specifically, textures; and such an interaction has been observed between early experience and infant sensory sensitivity. In infants introduced to complementary foods early or late within the 4- to 6-month period of introduction, and screened using the Dunn Infant Sensory Profile [69], it was found that infant sensory sensitivity predicted consumption of a new food: the higher the sensory reactivity, the lower the consumption of the new food taste. In addition, the relationship between tactile hypersensitivity and acceptance of the new food was moderated by the age of introduction to complementary food. Those infants who were introduced later within the 4- to 6-month period were less likely to accept the new food if they scored highly for sensory reactivity [70].
Conclusion
A combination of breastfeeding with the timely introduction of complementary foods may confer a generalisation effect on the acceptance of new foods, and would seem to be the strategy which best predicts the subsequent acceptance of foods such as fruit and vegetables. However, it is clear that whereas breastfeeding is not a necessary prerequisite for wide food acceptance, the timely and frequent introduction of complementary foods of differing tastes and textures is.
There are some data which would seem to support the idea of sensitive periods for the introduction of complementary foods according to both taste and texture, and this effect would appear to be more marked for those infants who are sensory hypersensitive. We also know that there are innate differences between children which make some tastes and textures more difficult to accept and that these tastes and textures are those that are associated with vegetables and especially green leafy vegetables.
A generalisation effect has been noticed at all stages: the more variation in tastes and textures that is experienced, the more willing the child is to try new foods. This gives rise to the advantage conferred by breastfeeding over formula feeding, but it also means that complementary foods should be given with frequent taste variation, and that the early introduction of textured complementary foods (other than smooth puree) confers an advantage on the subsequent acceptance of other more complex textures, such as those found in most fruits and vegetables.
In conclusion, then, it would seem that both breastfeeding and the timely introduction of a variety of tastes and food textures best predict acceptance and the subsequent inclusion of a wide range of foods, especially fruit and vegetables, within the child's diet.
Compliance with Ethical Standards
Conflict of Interest Gillian Harris has received financial support through a grant from Cow & Gate, and has received compensation from Danone for serving on a forum and from Cow & Gate for giving occasional talks on toddler food refusal.
Helen Coulthard declares that she has no conflict of interest.
Human and Animal Rights and Informed Consent This article does not contain any studies with human or animal subjects performed by any of the authors.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 2016-05-12T22:15:10.714Z | 2016-03-08T00:00:00.000 | {
"year": 2016,
"sha1": "3fd82f5e42e4b4cf5feb4131561e04edc3e9e6a1",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13679-016-0202-2.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "3fd82f5e42e4b4cf5feb4131561e04edc3e9e6a1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56541286 | pes2o/s2orc | v3-fos-license | NEW LIGHT SOURCES FOR in-vitro POTATO MICROPROPAGATION NOVAS FONTES DE LUZ PARA MICROPROPAGAÇÃO in vitro DE BATATA
The objective of this research work was to optimize the micropropagation of potato cultivars through the use of new light sources in the growth rooms. Treatments consisted of three potato cultivars (Asterix, Catucha and Macaca) and five light sources (blue, green and red LEDs; Growlux and white fluorescent lamps). The explants consisted of nodal segments containing one bud, isolated from plantlets grown in vitro. The experimental design was completely randomized, arranged in a 3×5 factorial, with eight replications. Each experimental unit consisted of a flask with five explants. Three 28-day consecutive subcultures were carried out in MS semi-solid medium, in a growth room under controlled conditions (temperature = 25±2 °C; photoperiod = 16 hours; light intensity = 20 μmol m⁻² s⁻¹). At the end of each subculture, the bud number per plantlet, plantlet length and internode length were evaluated. After the third subculture, the concentrations of carotenoids and a- and b-chlorophylls were also determined. Different micropropagation efficiencies were found among potato cultivars grown under in vitro conditions: 'Macaca' was the most and 'Catucha' the least responsive cultivar. The growth room light sources affected potato plantlet development differently: red and green LEDs were, respectively, the most and least recommended for plantlet development, based on the results of bud number per plantlet, plantlet length, and leaflet concentrations of a- and b-chlorophylls and carotenoids.
INTRODUCTION
Potato (Solanum tuberosum L.) is the most consumed vegetable worldwide and an important source of carbohydrates, fibers and potassium for human feeding (ABBA, 2014). In Brazil, potato is an economically and socially relevant crop, with a yearly average production of 3.5 million t over a harvest area of 140 thousand hectares (AGRIANUAL, 2013). The main available cultivars are: 'Asterix', characterized by rough pink tuber epidermis, light yellow pulp, nematode resistance and common scab (Streptomyces spp.) tolerance; 'Catucha', characterized by smooth yellow epidermis, yellow pulp, high yield potential and late blight (Phytophthora infestans) resistance; and 'Macaca', characterized by rough dark purple epidermis, white pulp, short dormancy cycle and low susceptibility to tuber greening (ABBA, 2014).
Potato is vegetatively propagated using virus-free seed potatoes, which are fundamental to reach high yields. Virus-free seed potatoes are obtained by means of tissue culture techniques, which represent 17 to 21% of the total crop costs (AGRIANUAL, 2013). Several studies have been made to optimize the micropropagation process, mainly related to culture medium composition (ABDULLATEEF et al., 2009). However, tissue culture environments might be improved by using newly available light sources that might lower production costs, especially for seed potato production (SEABROOK, 2005; LI et al., 2010).
Light is the radiant energy source for the photosynthesis process, which regulates plant development. Chlorophyll is the most important pigment in photosynthesis, since it is responsible for capturing light energy and transforming it into chemical energy (WU et al., 2007). Plants use the photosynthetically active radiation, that is, light with wavelengths between 390 and 760 nm (visible light) (STREIT et al., 2005). Therefore, photosynthesis efficiency is influenced by light quality, that is, light wavelength, as well as by light duration and intensity (NHUT et al., 2002).
Although natural sunlight can be used as a source of energy for plants grown in micropropagation laboratories, white fluorescent lamps are the most used worldwide. Recently developed light-emitting diodes (LEDs) have been pointed out as potential light sources for in-vitro cultivation environments. LEDs are characterized by specific wavelengths, small mass and volume, long useful life, low heating and a highly efficient light generation process (60%), and they do not contain mercury or other elements hazardous to the environment (YEH; CHUNG, 2009). For this reason, interesting research works have been carried out with several plant species grown under LEDs, such as cherry (MULEO; THOMAS, 1997), banana (NHUT et al., 2002), strawberry (ROCHA et al., 2010) and cotton (LI et al., 2010), among others.
This work aimed to optimize the in-vitro propagation of three potato cultivars, using new light sources in the culture environment.
MATERIAL AND METHODS
The experiment was carried out in the Tissue Culture Laboratory of Embrapa Temperate Climate, at Pelotas, Rio Grande do Sul, Brazil, using explants derived from meristems. Nodal segments 10 mm long with one bud were used, originating from three successive subcultures in MS medium (MURASHIGE; SKOOG, 1962) supplemented with 100 mg L⁻¹ of myo-inositol, 30 g L⁻¹ of sucrose and 7 g L⁻¹ of agar, pH 5.8, cultivated in a growth room under white fluorescent lamps.
Explants were transferred to 250 mL glass flasks (6.5 cm diameter × 13 cm high) containing 40 mL of the culture medium described by Pereira et al. (2005). Under such conditions, three successive 28-day subcultures were carried out at constant temperature (25 ± 2 °C) and light intensity (20 µmol m⁻² s⁻¹) with a photoperiod of 16 hours. After each subculture period, explants with characteristics similar to the ones used in the first subculture were transferred to fresh culture medium of the same treatment. This procedure (three successive subcultures) was done to minimize the effect of the white fluorescent lamps used in the initial explant production.
Treatments consisted of three potato cultivars (Asterix, Catucha and Macaca) and five light sources, as follows: blue LEDs (EDEB 3LA1, 470 nm), green LEDs (EDET 3LA1, 530 nm), red LEDs (EDER 3LA3, 630 nm), Growlux fluorescent lamps and white fluorescent lamps (control). Treatments followed a completely randomized design arranged in a 3 × 5 factorial (cultivars × light sources) with eight replications. Each experimental unit consisted of a flask with five explants.
At the end of each subculture, the number of buds per plantlet, average plantlet length (mm) and average internode length (mm) were evaluated. The averages over the three subcultures were used as data entries for statistical analysis. At the end of the third subculture, plantlet leaf samples (100 mg of fresh leaf tissue) were collected from the different treatments for determination of the concentrations of carotenoids and chlorophylls (a and b) in 80% acetone extracts. Pigment quantification was done by spectrophotometry (a-chlorophyll at 663 nm, b-chlorophyll at 645 nm and carotenoids at 470 nm), according to Lichtenthaler (1987).
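A sketch of the pigment calculation is given below. The coefficients are commonly cited Lichtenthaler values for 80% (v/v) acetone extracts and should be treated as approximate here, to be confirmed against Lichtenthaler (1987) before reuse; the conversion to mg g⁻¹ of fresh tissue assumes a known extract volume and tissue mass.

```python
# Sketch: pigment concentrations from absorbance readings (80% acetone).
# Coefficients are the commonly cited Lichtenthaler values; treat them as
# approximate and verify against Lichtenthaler (1987) before reuse.
def pigments_mg_per_g(a663, a645, a470, extract_ml, tissue_mg):
    chl_a = 12.25 * a663 - 2.79 * a645                  # µg per mL of extract
    chl_b = 21.50 * a645 - 5.10 * a663                  # µg per mL of extract
    car = (1000 * a470 - 1.82 * chl_a - 85.02 * chl_b) / 198.0
    to_mg_g = extract_ml / tissue_mg    # µg/mL * mL / mg tissue = mg per g
    return {"chlorophyll_a": chl_a * to_mg_g,
            "chlorophyll_b": chl_b * to_mg_g,
            "carotenoids": car * to_mg_g}

# Example: 100 mg fresh leaf tissue extracted in 10 mL of 80% acetone;
# absorbance values are invented for illustration.
print(pigments_mg_per_g(a663=0.52, a645=0.18, a470=0.41,
                        extract_ml=10.0, tissue_mg=100.0))
```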
The results were submitted to analysis of variance, and means were compared by Duncan's test (p < 0.05). The bud number per explant data were transformed to the square root of (x + 0.5), that is, (x + 0.5)^1/2. The other variables were not transformed.
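The analysis can be sketched as follows. The data file and column names are hypothetical, and because Duncan's multiple range test is not available in statsmodels, a Tukey HSD comparison stands in for the post-hoc step, so that part differs from the test actually used in the paper.

```python
# Sketch: square-root transform plus two-way ANOVA for the 3 x 5 factorial.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("potato_invitro.csv")   # hypothetical data file
df["buds_t"] = np.sqrt(df["buds_per_plantlet"] + 0.5)

model = smf.ols("buds_t ~ C(cultivar) * C(light_source)", data=df).fit()
print(anova_lm(model, typ=2))            # cultivar, light source, interaction

# Post-hoc comparison of light sources (Tukey HSD as a stand-in for Duncan).
print(pairwise_tukeyhsd(df["buds_t"], df["light_source"], alpha=0.05))
```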
RESULTS AND DISCUSSION
All three potato cultivars (Asterix, Catucha and Macaca) showed good in-vitro growth under the five studied light sources (blue, green and red LEDs, Growlux and white fluorescent lamps), without problems of contamination, oxidation or callus formation. According to Pereira et al. (2005) and Abdullateef et al. (2009), such results might be attributed to the easy in-vitro propagation of the Solanum tuberosum species. Also, no plantlet tuber formation was observed in any treatment; tuber formation usually impairs explant multiplication, since it occurs at the expense of energy reserves (SEABROOK, 2005). Similarly, no abnormal plantlet morphological characteristics were observed during in vitro propagation. Changes in plant morphology would indicate physiological impairments or somaclonal variation.
In the present research, the analysis of variance showed interactions between potato cultivars and light sources for the variables: average bud number per plantlet, average plantlet length and average internode length (Table 1).
A higher bud number per plantlet was obtained with 'Macaca' under most of the light sources studied, except for the blue LEDs, under which no differences were found among cultivars. Also, 'Macaca' did not differ from 'Asterix' under white fluorescent and Growlux lamps (p < 0.05). On the other hand, 'Catucha' was the least responsive cultivar in vitro, since it showed the lowest bud number per plantlet under most light sources, except for the blue LEDs (Table 1). Wilson et al. (1993) also demonstrated genetic differences among four potato cultivars (Kennebec, Norland, Denali and Superior) in their responses (bud number per plantlet) under different sources of light.
In general, 'Macaca' showed the longest plantlets, but it did not differ from 'Catucha' under blue and green LEDs and Growlux lamps (p < 0.05), while 'Asterix' showed the shortest plantlets under all sources of light (Table 1). 'Catucha' showed the longest internodes under all studied light sources, but it did not differ from 'Macaca' under blue LEDs and white fluorescent lamps (p < 0.05). 'Asterix' showed the worst in-vitro performance, with the shortest internodes (Table 1).
Regarding the light source effect on in-vitro plantlet development, it was observed that red LEDs, white fluorescent and Growlux lamps induced a higher bud number per plantlet in 'Macaca'; blue LEDs, white fluorescent and Growlux lamps induced a higher bud number per plantlet in 'Catucha'; and blue and red LEDs plus white fluorescent and Growlux lamps induced a higher bud number per plantlet in 'Asterix'. Green LEDs negatively affected the bud number per plantlet in all three cultivars (Table 1).
Such results demonstrate the positive effect of white fluorescent and Growlux lamps and of red and blue LEDs on the in-vitro bud number per plantlet of the studied potato cultivars. According to Folta and Maruhnich (2007), red and blue light induce faster plantlet growth, even faster than white light does, whereas green light, which is absorbed by phytochromes and cryptochromes, influences events that reduce vegetative development. Furthermore, Wu et al. (2007) reported that the red light spectrum emission is near the point of maximum absorption by chlorophylls and phytochromes and is important for photosynthetic apparatus development and for starch accumulation, and that blue light is relevant for chloroplast development, chlorophyll formation and stomatal opening.
Red LEDs induced longer plantlets in all three studied potato cultivars; plantlet length, together with bud number per plantlet, is the most important variable to be evaluated in the in-vitro propagation process. Average plantlet length showed intermediate values under Growlux fluorescent lamps and green LEDs, and the lowest values under white fluorescent lamps and blue LEDs (Table 1). Kim et al. (2004) and Rocha et al. (2010) had already observed longer plantlets of chrysanthemum and strawberry when grown under red LEDs than under other sources of light. With other potato cultivars, Wilson et al. (1993) obtained longer plantlets under red LEDs, compared with plantlets grown under white fluorescent lamps or under blue LEDs. Petiole and plantlet length have been associated with red light, which was observed to stimulate and enhance cell elongation in plant species (WILSON et al., 1993). In general, longer plantlets (provided they do not result from etiolation) are considered the best in the micropropagation process, because they are easily separated and acclimatized and, besides, contain a higher bud number. Villavicencio et al. (2007) observed that potato plantlets 51 to 70 mm long presented a 91% survival rate after acclimatization, whereas plantlets less than 30 mm long showed only a 77% survival rate.
White fluorescent lamps and blue LEDs induced shorter plantlet internodes in all three potato cultivars studied. Longer internodes were found under red and green LEDs for 'Catucha', under red LEDs for 'Macaca', and under green LEDs for 'Asterix'.
Muleo and Thomas (1997) also observed longer internodes in cherry plantlets grown under red LEDs than under white fluorescent lamps and blue LEDs. In the same way, Kim et al. (2004) found longer internodes in chrysanthemum plantlets grown under red LEDs.
LEDs provide radiant energy for better potato explant and plantlet development in laboratory growth rooms, and they also have the advantage of a longer useful life, which may reach 100,000 hours, whereas fluorescent lamps have an average useful life of 8,000 hours and incandescent lamps of 1,000 hours (ROCHA et al., 2010). According to these authors, another advantage is the energy saving, since LEDs present high energetic efficiency (50%) compared with fluorescent lamps (20%) and incandescent lamps (5%). This directly reduces plantlet production costs, since growth room illumination is responsible for 65% of the energy costs of a tissue culture laboratory (YEH; CHUNG, 2009); a rough calculation below illustrates the scale of this saving.
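A back-of-envelope check, under the simplifying assumption that the same light output is required and that lamp electrical power therefore scales inversely with efficiency:

```python
# Rough estimate of the energy saving from replacing fluorescent lamps
# with LEDs, assuming the same light output is required.
led_eff, fluor_eff = 0.50, 0.20     # efficiencies quoted in the text
lighting_share = 0.65               # lighting share of lab energy costs (text)

power_ratio = fluor_eff / led_eff   # LED power needed relative to fluorescent
lighting_saving = 1 - power_ratio   # 60% less lighting power
total_saving = lighting_share * lighting_saving
print(f"Lighting power cut by {lighting_saving:.0%}; "
      f"roughly {total_saving:.0%} of total lab energy cost saved.")
```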
An interaction between potato cultivars and light sources was found for the (a and b) chlorophyll concentrations (p < 0.05). Red LEDs induced higher (a and b) chlorophyll concentrations in all three studied cultivars. However, no chlorophyll differences were found for 'Catucha' and 'Macaca' between red LEDs and white fluorescent lamps, nor for 'Catucha' among blue LEDs, red LEDs and Growlux lamps. On the other hand, green LEDs provided the lowest chlorophyll concentrations in all cultivars (Table 2). Furthermore, although distinct a- and b-chlorophyll concentrations were found among potato cultivars under different light sources, the response of each chlorophyll type to a given light source was always the same (Table 2). Besides, almost all treatments presented an a:b chlorophyll ratio higher than 2 (a:b > 2), which is close to the ratio found in plants grown under natural sunlight (3:1) (STREIT et al., 2005). According to these authors, a-chlorophylls are important for the first stage of photosynthesis (photochemistry), whereas b-chlorophylls act in capturing radiant energy and transferring it to the reaction centers.
In the present work, significant a- and b-chlorophyll differences (p < 0.05) were observed among cultivars under each source of light studied, with 'Macaca' always ranked highest, except under white fluorescent lamps (Table 2). Carotenoid results are shown in Table 3. The highest carotenoid concentrations (0.64 and 0.57 mg g⁻¹ of fresh tissue) were found in plantlets grown under white fluorescent lamps and red LEDs, respectively. However, red LEDs did not differ significantly from blue LEDs and Growlux lamps for this variable. Wu et al. (2007) had already observed higher carotenoid accumulation in peas grown under red light. In the present work, the lowest carotenoid concentrations were obtained under green LEDs, corroborating the results of Rocha et al. (2010) for the in vitro culture of strawberry. In general, green light is considered a less relevant type of energy for photosynthesis, mainly due to its low absorption coefficient (KIM et al., 2004). It is worth highlighting that the chlorophylls and carotenoids synthesized by plants are essential pigments: in the photosynthesis process (light absorption), in preventing photooxidation, in plant coloration and, besides, as precursors of vitamins and antioxidants (WU et al., 2007).
CONCLUSIONS
A genetic effect of potato cultivar on the process of in-vitro propagation was evident, since 'Macaca' showed the highest bud number per plantlet, followed by 'Asterix' and lastly 'Catucha'.
LEDs (light-emitting diodes) can be used as a light source in substitution for white fluorescent lamps in tissue-culture growth rooms for in-vitro potato propagation.
The light source of the growth room influenced potato explant development in vitro: red LEDs were the most and green LEDs the least recommended for plantlet vegetative development and pigment synthesis (a- and b-chlorophylls and carotenoids).
Table 1. In vitro explant development of three potato (Solanum tuberosum L.) cultivars under different sources of light.
Table 2. Leaf chlorophyll (a and b) concentrations (mg g⁻¹) in plantlets of three potato (Solanum tuberosum L.) cultivars, grown in vitro under different sources of light.
Table 3. Carotenoid concentrations (mg g⁻¹) in plantlets of three potato (Solanum tuberosum L.) cultivars, grown in vitro under different sources of light. | 2018-12-17T20:53:20.957Z | 2015-09-10T00:00:00.000 | {
"year": 2015,
"sha1": "ee83e5b21e554cbd6312c04a5d4827dab32202df",
"oa_license": "CCBY",
"oa_url": "http://www.seer.ufu.br/index.php/biosciencejournal/article/download/26601/17115/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "61de3a85e1d5cf577f680858a9ea9d1fc3161be5",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
119411053 | pes2o/s2orc | v3-fos-license | Multiplicity and disks within the high-mass core NGC7538IRS1: Resolving cm line and continuum emission at ~0.06"x0.05"resolution
Context: High-mass stars have a high degree of multiplicity and most likely form via disk accretion processes. The detailed physics of the binary and disk formation is still poorly constrained. Methods: Using the VLA in its most extended configuration at ~24 GHz toward the prototypical high-mass star-forming region NGC7538IRS1 has allowed us to study the NH3 and thermal CH3OH emission and absorption as well as the cm continuum emission at an unprecedented spatial resolution of 0.06"x0.05", corresponding to a linear resolution of ~150 AU at a distance of 2.7 kpc. Results: A comparison of these new cm continuum data with previous VLA observations from 23 years ago reveals no recognizable proper motions. If the emission were caused by a protostellar jet, proper motion signatures should have been easily identified. In combination with the high spectral indices S ∝ ν^α (α between 1 and 2), this allows us to conclude that the continuum emission is from two hypercompact HII regions separated in projection by about 430 AU. The NH3 spectral line data reveal a common rotating envelope, indicating a bound high-mass binary system. In addition to this, the thermal CH3OH data show two separate velocity gradients across the two hypercompact HII regions. This indicates two disk-like structures within the same rotating circumbinary envelope. Disk and envelope structures are inclined by ~33°, which can be explained by initially varying angular momentum distributions within the natal, turbulent cloud. Conclusions: Studying high-mass star formation at sub-0.1" resolution allows us to isolate multiple sources as well as to separate circumbinary from disk-like rotating structures.
Introduction
The formation processes leading to the most massive stars are still puzzling in many ways. While there is a clear consensus that high-mass stars shape the interstellar medium, whole clusters, and even entire galaxies, questions related to the physical processes at early evolutionary stages, in particular associated with the fragmentation processes of young, dense, and rotating cores, formation of multiple entities, and embedded accretion disks, are still poorly explored (e.g., Beuther et al. 2007a;Zinnecker & Yorke 2007;Tan et al. 2014;Beltrán & de Wit 2016).
How do massive accretion disks form and what are their properties? These are central questions for high-mass star formation research (e.g., Henning et al. 2000; Kratter & Matzner 2006; Beuther et al. 2009; Cesaroni et al. 2007; Vaidya et al. 2009; Kraus et al. 2010, 2016; Beltrán et al. 2011; Ilee et al. 2013; Boley et al. 2013, 2016; Sánchez-Monge et al. 2014; Johnston et al. 2015). The main indirect evidence stems from observations of massive and collimated outflows that are qualitatively similar to low-mass jets (e.g., Beuther et al. 2002; Zhang et al. 2005; Arce et al. 2007; López-Sepulcre et al. 2009; Duarte-Cabral et al. 2013). Such jet-like outflows are best explained via magnetocentrifugal acceleration from an accretion disk and subsequent Lorentz collimation. Radiation (M)HD simulations of massive collapsing cores produce accretion disks as well (e.g., Yorke & Sonnhalter 2002; Krumholz et al. 2009; Kuiper et al. 2010, 2011; Kuiper & Yorke 2013; Peters et al. 2010; Commerçon et al. 2011). Are massive disks similar to their low-mass counterparts, hence dominated by the protostar and Keplerian rotation, or are they self-gravitating non-Keplerian entities (e.g., Sánchez-Monge et al. 2013; Cesaroni et al. 2007)? The answer may be a small inner Keplerian accretion disk that is fed from a larger-scale non-Keplerian structure (toroid or pseudo-disk). This picture is supported by analytic and numeric models with Keplerian disks growing with time from the infalling rotating structure (e.g., Stahler & Palla 2005; Kuiper et al. 2011). The transition from molecular to ionized infall is an additional important characteristic (e.g., Keto 2002, 2003; Klaassen et al. 2009, 2013). While indirect evidence for massive disks is very strong, direct observations are still sparse (see AFGL4176 for one of the best recent examples, Johnston et al. 2015, and G11.92-0.61, Ilee et al. 2016). This discrepancy is mainly due to the clustered mode of massive star formation at large distances. High spatial resolution is crucial to disentangle these structures.
What are the fragmentation properties of massive gas clumps during the formation of high-mass stars and their surrounding clusters? High-mass stars form in clusters with a high degree of multiplicity, and Chini et al. (2012) argue that this multiplicity likely stems from the formation processes (see also Peter et al. 2012). Peter et al. (2012) find companion separations between 400 and several thousand AU, stressing the necessity of high spatial resolution. Furthermore, Sana et al. (2012) infer that multiple-system interactions dominate the evolution of massive stars. Interferometer studies have revealed that most high-mass star-forming regions fragment into multiple objects, suggesting that massive monolithic cores larger than several 1000 AU are rare; however, the degree of fragmentation varies (e.g., Cesaroni et al. 2005; Beuther et al. 2007b, 2012; Zhang et al. 2009; Bontemps et al. 2010; Wang et al. 2011; Rodón et al. 2012; Palau et al. 2013; Sánchez-Monge et al. 2014; Johnston et al. 2015). Even regions that remain single continuum sources down to arcsec resolution mostly fragment on even smaller scales.

Fig. 1. Centimeter continuum emission from NGC7538IRS1. The left panel shows in color scale the new 1.2 cm continuum data imaged using only baselines between 10 and 37 km, achieving a spatial resolution of 0.06"x0.05". The contours present for comparison the old VLA data from 1992 discussed previously in Gaume et al. (1995a), Sandell et al. (2009), Moscadelli & Goddi (2014), and Goddi et al. (2015), starting at the 4σ contour and continuing in 8σ intervals (1σ ∼ 0.05 mJy beam⁻¹). The two stars indicate the CH3OH maser positions by Moscadelli & Goddi (2014); see section 3.1 for more details. The middle panel shows in color the spectral index map derived from the new full dataset with robust weighting −2, and the contours show the corresponding continuum image using all data at a resolution of 0.07"x0.05". The contour levels start at the 4σ level of 0.16 mJy beam⁻¹ and continue in coarser 32σ steps. The right panel again shows the new 0.06"x0.05" data, now converted in the Rayleigh-Jeans approximation to brightness temperature.
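The spectral index map in the middle panel of Fig. 1 is derived from the continuum data across the observed band; in its simplest two-point form, the index α in S ∝ ν^α follows directly from flux densities at two frequencies. A minimal sketch, with made-up flux values rather than measurements from the paper:

```python
# Two-point spectral index alpha in S ~ nu**alpha.
import math

def spectral_index(s1_mjy, nu1_ghz, s2_mjy, nu2_ghz):
    return math.log(s2_mjy / s1_mjy) / math.log(nu2_ghz / nu1_ghz)

# Illustrative values only: 2.0 mJy at 23.6 GHz rising to 2.3 mJy at 25.8 GHz.
alpha = spectral_index(2.0, 23.6, 2.3, 25.8)
print(f"alpha = {alpha:.2f}")  # ~1.57, within the range of 1-2 cited for
                               # hypercompact HII regions
```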
A particularly revealing example is the famous high-mass star-forming region NGC7538IRS1. At a distance of ∼2.7 kpc (Moscadelli et al. 2009), the luminosity of the central energy source corresponds to a ∼30 M⊙ O6 star (e.g., Willner 1976; Gaume et al. 1995a; Moscadelli et al. 2009). The strong emission of this source from near- to mid-infrared wavelengths out to the cm and mm regimes has made it a prominent high-mass protostar for several decades; a summary of the literature can be found in Beuther et al. (2012). Recent 0.2″ observations at submm wavelengths with the Northern Extended Millimeter Array (NOEMA) revealed fragmentation of the envelope; however, these observations did not allow us to identify a Keplerian accretion disk (Beuther et al. 2013; see also the high-resolution data by Zhu et al. 2013). Most likely, such a Keplerian structure is hidden on still smaller scales below 500 AU (e.g., Krumholz et al. 2007; Kuiper et al. 2010, 2011). Furthermore, this region reveals two cm continuum sources at approximately 0.2″ separation that may be either two hypercompact Hii regions or an ionized jet (Gaume et al. 1995b; Sandell et al. 2009; Moscadelli & Goddi 2014; Goddi et al. 2015), where the association of the potential ionized jet with the molecular outflow is debated (Knez et al. 2009; Beuther et al. 2013).
Observations
We observed NGC7538IRS1 on July 21, 2015, during a four-hour track with the Karl G. Jansky Very Large Array (VLA) in its most extended A-configuration (baselines extending out to 37 km). The proposal ID is 15A-115. With the flexible VLA correlator we covered many spectral lines as well as the cm continuum emission in the radio K band. Specifically, we covered seven NH3 inversion lines, seven CH3OH lines, and two Hα recombination lines. The line parameters are given in Table 1. The following analysis concentrates on the cm continuum, NH3, and CH3OH emission. Although the recombination lines are detected, the emission is comparably weak and is not discussed further here. The intrinsic spectral resolution for the molecular line data varied between 15.625 and 31.25 kHz, corresponding to velocity resolutions of ∼0.19 and ∼0.38 km s⁻¹ at the given frequencies, respectively (Table 1). Since this region is very strong in absorption and emission, we reduced almost all lines at the native correlator resolution. Only the three lowest-energy CH3OH lines (located within a single spectral window) were reduced separately at 0.4 km s⁻¹ resolution. To create the continuum image, 16 spectral windows with a width of 112 MHz each, between 23.6 and 25.8 GHz, were combined.
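As a quick sanity check of the channel-width to velocity-resolution conversion quoted above, the non-relativistic relation Δv = c Δν/ν can be evaluated directly. The sketch below uses a representative 24 GHz K-band frequency rather than the exact line rest frequencies, and reproduces the quoted ∼0.19-0.38 km s⁻¹ range.

```python
# Convert correlator channel widths to velocity resolution via
# dv = c * dnu / nu (non-relativistic Doppler). The 24 GHz frequency is
# a representative K-band value, not an exact line rest frequency.
C_KMS = 299792.458  # speed of light [km/s]

def velocity_resolution(dnu_hz, nu_hz):
    """Velocity resolution [km/s] for channel width dnu at frequency nu."""
    return C_KMS * dnu_hz / nu_hz

for dnu in (15.625e3, 31.25e3):  # channel widths [Hz]
    print(f"{dnu/1e3:7.3f} kHz -> {velocity_resolution(dnu, 24.0e9):.2f} km/s")
# ~0.20 and ~0.39 km/s at 24 GHz, matching the quoted 0.19-0.38 km/s range
```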
The flux and bandpass were calibrated with the two strong quasars 3C48 and J0319+4130 (also known as 3C84), respectively. The phase and amplitude gain calibration for these long baselines requires comparatively fast switching between the target source and the gain calibrator J2339+6010. Our loop typically stayed 1 min 50 s on source and 1 min 20 s on the gain calibrator. We visited the source 58 times during the 4 h track, ensuring excellent uv coverage. The phase center of the VLA for our target source NGC7538IRS1 was R.A. (J2000.0) 23:13:45.36 and Dec. (J2000.0) +61:28:10.55. The data calibration was conducted with the VLA pipeline 1.3.1 in CASA 4.2.2. All solutions were carefully checked, and the bandpass for the NH3(1,1) line turned out to be bad during half of the track. Flagging this second half for this one spectral window and then rerunning the pipeline gave excellent results.
Further imaging and analysis of the data was also conducted in CASA. The continuum data were imaged with two different approaches: once using all data with a robust weighting value of −2, and once excluding baselines shorter than 10 km (i.e., covering only baselines between 10 and 37 km) to further improve the spatial resolution. While the normal robust −2 dataset with all data, resulting in a beam of 0.07″ × 0.05″ (PA −25°), was better suited for studying the spectral index, the highest-spatial-resolution image with the restricted baseline range and a spatial resolution of 0.06″ × 0.05″ (PA −32°) was used for morphological comparison. The largest scales typically recoverable with the VLA at this frequency in the A-array are ∼2.4″. The 1σ rms for both images is ∼0.05 mJy beam⁻¹. The molecular line data were all imaged with a robust weighting scheme and a robust value of 0. Only the NH3(1,1) data were additionally imaged with natural weighting (robust value 2) for comparison (see Sect. 3.2). While the naturally weighted NH3(1,1) image has a beam of 0.11″ × 0.09″ (PA −31°), the other images with robust weighting 0 have a synthesized beam of 0.08″ × 0.06″ (PA varying between −28° and −30°). The 1σ rms measured in an emission- and absorption-free channel varies between 2.1 and 3.9 mJy beam⁻¹.

Fig. 3. CH3OH example spectra extracted toward the northern cm continuum peak position. The spectra are shifted along the y-axis for presentation purposes. All lines are labeled.
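For orientation, a minimal CASA sketch of the two continuum imaging runs described above is given below, using the older `clean` task that matches the CASA 4.x workflow of this paper (modern CASA would use `tclean` with essentially the same parameters). The measurement set name, cell size, image size, and iteration count are placeholders, not values from the paper.

```python
# Minimal CASA sketch (CASA 4.x 'clean' task) of the two continuum imaging
# runs described in the text. MS name, cell, imsize and niter are placeholders.

# Run 1: all baselines, Briggs robust=-2 (close to uniform weighting),
# with two Taylor terms so that a spectral index map is produced as well.
clean(vis='ngc7538irs1.ms', imagename='cont_allbl_robust-2',
      mode='mfs', nterms=2,
      weighting='briggs', robust=-2.0,
      cell='0.01arcsec', imsize=2048, niter=10000)

# Run 2: restrict the uv range to the 10-37 km baselines for the
# highest-resolution morphological image.
clean(vis='ngc7538irs1.ms', imagename='cont_long_baselines',
      mode='mfs', uvrange='10~37km',
      weighting='briggs', robust=-2.0,
      cell='0.01arcsec', imsize=2048, niter=10000)
```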
Centimeter continuum emission
The double-peaked structure of the cm continuum emission presented in Gaume et al. (1995a), Sandell et al. (2009), and Moscadelli & Goddi (2014) can be interpreted in two ways: on the one hand, this structure may consist of two separate protostellar sources; on the other hand, the double-peaked structure could also be part of one underlying jet. Our new data now allow us to address this question with two approaches: (a) a search for proper motions over the time baseline from December 1992 (Gaume et al. 1995a; observed in the same configuration and at the same wavelength as our new data) to our new data from July 2015 presented here; and (b) an analysis of the spectral index based on the broad bandpass of the new data.

Figure 1 presents an overlay of the old and new data as well as a spectral index map. The spectral index map was derived from all 16 continuum windows between 23.6 and 25.8 GHz in CASA within the clean task, using higher-order Taylor terms (parameter nterms=2) to model the frequency dependence of the sky emission. The spectral index map is computed as the ratio of the first two Taylor terms. This task also computes an error map of the spectral index, treating the Taylor-coefficient residuals as errors and propagating them through the spectral index determination. The spectral index map in Figure 1 is clipped where the errors are larger than 0.4.

To search for proper motions, the highest possible positional accuracy is required. Table 2 presents the peak positions and peak fluxes at 24.6 GHz. When investigating the data for NGC7538IRS1 in detail, two issues arose. First, the positional accuracy of the phase calibrator was incorrect by ΔR.A. ∼ 0.01″ and ΔDec. ∼ 0.16″ (Moscadelli & Goddi 2014; Goddi et al. 2015). We corrected for this positional shift after the imaging process. Furthermore, Moscadelli et al. (2009) and Moscadelli & Goddi (2014) inferred proper motions for the region of −2.45 mas yr⁻¹/−2.45 mas yr⁻¹ from CH3OH maser observations at 12 GHz. For the 22.58 yr time difference between the two observational epochs, this corresponds to a shift of ∼0.055″ in R.A. and Dec., respectively. To bring the 1992 and 2015 data into the same reference frame, we shifted the 1992 data according to these proper motions. Fig. 1 takes both shifts into account.

Fig. 4. The color scale presents the 1st moment maps (intensity-weighted velocities) of the NH3 inversion transitions from (1,1) to (7,7), as indicated in each panel. The first two left panels show the data for the NH3(1,1) line with different weighting schemes (natural weighting and robust weighting, which is a hybrid between natural and uniform weighting). The other NH3 lines are always presented with robust weighting 0. The 1st moment maps are clipped at a ∼4σ level. In all panels the contours show the 1.2 cm continuum emission in levels of 8% to 98% of the peak emission (7.4 mJy beam⁻¹). The two stars indicate the CH3OH maser positions of Moscadelli & Goddi (2014). The line in the bottom right panel indicates the position-velocity cut shown below.

Fig. 5 (caption fragment). Position-velocity cuts along the line indicated in Fig. 4. The top left and top second panels show this cut for natural and robust weighting for the NH3(1,1) lines, respectively. All other cuts are carried out for the robust weighting case.

Table 1 (caption fragment). Line parameters (Müller et al. 2001; Lovas 2004): lower level energy E_l/k, spectral resolution Δν, and critical density n_crit^(a). (a) Calculated as n_crit = A/C with the Einstein coefficient A and the collision rate C at 150 K. (b) No collision rates in the LAMBDA database.
Furthermore, we show the central positions of the CH3OH maser groups IRS1a and IRS1b as presented in Fig. 11 of Moscadelli & Goddi (2014). We find that the overall structure of the two central peaks associated with the two CH3OH maser positions has not moved significantly within the spatial resolution and uncertainties of the two observational datasets. The northern source cm1 is elongated in the northeastern direction, which is already visible in the old data. No positional shift can be identified for the northern of the two main peaks, cm1, whereas the second main source, cm2, exhibits a tiny shift of ∼0.025″. However, this is less than half of the synthesized beam, and we refrain from further interpretation of this apparently very small shift. The additional emission feature ∼0.4″ to the south appears to have shifted slightly in the southeastern direction. Since we are mainly interested in the two main emission peaks cm1 and cm2, we do not further analyze the separate southern structure.
At the given distance of 2.7 kpc, our approximate average spatial resolution element of 0.055″ corresponds to a linear resolution of ∼150 AU. Assuming a jet velocity of ∼250 km s⁻¹ and an inclination angle of 45°, the 23-year time baseline between the two observational epochs would result in proper motions of ∼857 AU, corresponding to an angular shift of ∼0.32″, which is well resolvable by our observations. Although the inclination angle is unknown, jets may be even faster (Martí et al. 1998; Frank et al. 2014; Guzmán et al. 2016), and hence these data are strong evidence that a jet is unlikely to be the underlying cause of the cm continuum emission.
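The expected jet displacement can be checked with a few lines of arithmetic; the sketch below uses the illustrative velocity, inclination, time baseline, and distance adopted in the text, and reproduces the quoted ∼857 AU and ∼0.32″.

```python
import math

# Expected proper motion of a hypothetical jet knot, using the assumed
# values from the text: v_jet = 250 km/s, i = 45 deg, 23 yr, d = 2.7 kpc.
KMS_PER_AU_YR = 4.74047        # 1 AU/yr expressed in km/s
v_jet_kms, incl_deg, dt_yr, d_pc = 250.0, 45.0, 23.0, 2700.0

travel_au = (v_jet_kms / KMS_PER_AU_YR) * dt_yr          # total path [AU]
proj_au = travel_au * math.sin(math.radians(incl_deg))   # projected on sky
shift_arcsec = proj_au / d_pc                            # small-angle approx.
print(f"projected motion: {proj_au:.0f} AU -> {shift_arcsec:.2f} arcsec")
# ~857 AU and ~0.32", i.e. well above the ~0.06" beam if a jet were present
```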
Regarding the spectral index (S ∝ ν^α): for a hypercompact Hii region, α can vary between −0.1 for optically thin and +2 for optically thick emission. While large Hii regions are typically in the optically thin regime at the given frequency around 24 GHz, a hypercompact Hii region such as NGC7538IRS1 can easily be in the (partly) optically thick regime (e.g., Franco et al. 2000). For comparison, while ionized jets can in principle cover the same spectral index range, typical emission from an ionized jet has a spectral index α of around +0.6 (e.g., Reynolds 1986; Purser et al. 2016). The observed spectral index α shown in Figure 1 varies mostly between 1 and 2. Therefore, the spectral index analysis of NGC7538IRS1 also indicates that the cm continuum emission is not caused by an ionized jet but is more likely dominated by hypercompact Hii region(s).
Given the high optical depth indicated by the spectral index, converting the cm continuum fluxes to brightness temperatures in the Rayleigh-Jeans limit additionally hints at the temperatures of the ionized gas. Fig. 1 (right panel) shows that the brightness temperatures of the inner region vary between approximately 2000 and 4900 K. These can be considered lower limits for the ionized gas temperatures because the spectral index, as a proxy of the optical depth, varies throughout the region.
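The flux-to-brightness-temperature conversion is standard; a small sketch follows, with an illustrative peak flux and the quoted beam (the exact per-pixel values differ across the map). For a ∼7 mJy beam⁻¹ peak at 24.6 GHz in a 0.06″ × 0.05″ beam, it yields T_B ≈ 4700 K, consistent with the quoted range.

```python
import math

# Rayleigh-Jeans brightness temperature from a flux density per beam:
# T_B = S_nu * c^2 / (2 k nu^2 Omega_beam), with a Gaussian beam solid angle
# Omega_beam = pi * theta_maj * theta_min / (4 ln 2).
C = 2.99792458e8          # speed of light [m/s]
K_B = 1.380649e-23        # Boltzmann constant [J/K]

def brightness_temp(s_jy_per_beam, nu_hz, bmaj_as, bmin_as):
    as2rad = math.pi / (180.0 * 3600.0)
    omega = math.pi * (bmaj_as * as2rad) * (bmin_as * as2rad) / (4.0 * math.log(2.0))
    s_si = s_jy_per_beam * 1e-26   # Jy -> W m^-2 Hz^-1
    return s_si * C**2 / (2.0 * K_B * nu_hz**2 * omega)

# Illustrative peak of ~7 mJy/beam at 24.6 GHz in a 0.06" x 0.05" beam:
print(f"T_B ~ {brightness_temp(7e-3, 24.6e9, 0.06, 0.05):.0f} K")  # ~4700 K
```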
Combining the multi-epoch and multi-wavelength analyses above, the central double-lobed cm continuum emission in NGC7538IRS1 is most likely emitted by at least two embedded protostars within their associated hypercompact Hii regions.

Fig. 7. The color scale presents the 1st moment maps (intensity-weighted velocities) of the CH3OH lines, as labeled in each panel. In all panels the contours show the 1.2 cm continuum emission in levels of 8% to 98% of the peak emission (7.4 mJy beam⁻¹). The two stars indicate the CH3OH maser positions of Moscadelli & Goddi (2014). The two lines in the bottom right panel indicate the cuts for the position-velocity diagrams shown below.
NH 3 and CH 3 OH
At this high spatial resolution and with the given very strong continuum emission, all molecular line features are observed only in absorption against the continuum sources. Figures 2 and 3 show example spectra of all NH3 and CH3OH lines extracted toward the northern cm continuum peak position. For NH3 the hyperfine structure is detected for all lines. The integrated, 1st and 2nd moment maps and the position-velocity diagrams discussed below (Figs. 4 to 8) were created after inverting the data (simply multiplying by −1), because the corresponding algorithms in CASA only work on positive data. This inversion does not affect the kinematic signatures at all. The lower level energies E_l/k and the critical densities, calculated as n_crit = A/C (with the Einstein coefficient A and the collision rate C from the LAMBDA database; Schöier et al. 2005), are given in Table 1 as well. While NH3 covers an energy range between 22 and 537 K, the range for CH3OH is slightly smaller, between 28 and 149 K. However, while the critical densities n_crit for NH3 are around 2000 cm⁻³, they are more than an order of magnitude larger for CH3OH, around a few times 10⁴ cm⁻³ (a short numerical illustration is given below). In addition, the chosen J₂−J₁ 25 GHz transitions of CH3OH have been found to emit as masers in several high-mass star-forming regions (e.g., Menten et al. 1986; Voronkov et al. 2007; Brogan et al. 2012). These masers are likely collisionally excited (e.g., Sobolev & Strelnitskii 1983) and thus form their own subgroup of Class I methanol masers (e.g., Leurini et al. 2016). Compared to other types of Class I masers, they need higher gas volume densities for the population inversion to occur (n > 10⁶ cm⁻³; e.g., Sobolev et al. 1998; Leurini et al. 2016). The fact that we see the 25 GHz transitions in absorption might be an indication that the majority of the methanol in NGC 7538 IRS1 relevant for our absorption resides in somewhat lower-density gas.

Fig. 4 presents the 1st moment maps (intensity-weighted velocities) of the NH3 inversion lines from (1,1) to (7,7). The first two maps of the (1,1) transition are produced with different weighting schemes (natural weighting and robust weighting 0), recovering different spatial structures. While the naturally weighted NH3(1,1) map shows somewhat more extended emission, one sees a larger-scale velocity gradient approximately in the east-west direction. In comparison, the higher-resolution (robust weighting 0) image rather reveals a velocity gradient in the northeast-southwest direction, which is consistent with the previous findings by Beuther et al. (2012, 2013), Moscadelli & Goddi (2014), and Goddi et al. (2015). Since we are mainly interested in the kinematics of the innermost central sources, we concentrate in the following on the higher-resolution data. All other transitions are also presented in this higher-resolution imaging mode (robust weighting 0).
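To make the critical-density comparison above concrete, a minimal sketch follows; the A and C values are order-of-magnitude placeholders chosen to reproduce the quoted regimes, not the actual Table 1 entries.

```python
# Critical density n_crit = A / C, with Einstein A coefficient [s^-1] and
# collisional rate coefficient C [cm^3 s^-1] at ~150 K. The A and C values
# below are order-of-magnitude placeholders, not the Table 1 entries.
def n_crit(a_einstein, c_collision):
    return a_einstein / c_collision   # [cm^-3]

print(f"NH3-like line:   n_crit ~ {n_crit(1.7e-7, 8.5e-11):.1e} cm^-3")  # ~2e3
print(f"CH3OH-like line: n_crit ~ {n_crit(8.0e-7, 2.0e-11):.1e} cm^-3")  # ~4e4
```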
Interestingly, all seven NH3 inversion lines, with excitation levels E_l/k between 22 and 537 K (Table 1), exhibit almost the same kinematic structure of the central core, with one velocity gradient in approximately the northeast-southwest direction, but without any differentiation between the two continuum sources seen in the overlaid contours (Fig. 4). These NH3 velocity structures appear similar to the previous findings for the rotational structure of the rotating envelope by Beuther et al. (2013) and Goddi et al. (2015). Performing spectral cuts across the main velocity gradient direction (the NH3(7,7) panel in Fig. 4 indicates the exact orientation), Figure 5 presents the corresponding position-velocity diagrams. The two panels corresponding to the (1,1) inversion lines exhibit three features because of hyperfine components that are close in frequency. The other six lines only show the central, strongest hyperfine component. The velocity gradient is almost linear across the source. It does not show any hint of Keplerian motion but rather resembles a solid-body rotation diagram.
For comparison, Fig. 6 also presents the integrated absorption and the 2nd moment maps (intensity-weighted velocity dispersion) for two selected NH3 lines and one CH3OH line. The spatial structure of these integrated and velocity dispersion maps does not reflect the overall velocity gradient seen in NH3, but all maps are double-peaked toward the two cm continuum peak positions. This shows that the largest gas column densities and strongest line broadening are indeed associated with the two main protostellar condensations. The fact that NH3 exhibits two centers of line broadening, in spite of only a single larger-scale velocity gradient, indicates the potential existence of two smaller-scale embedded rotating structures.
The important new information stems from the thermal CH3OH absorption data. Similar to NH3, we also present for CH3OH the 1st moment maps and position-velocity diagrams, in Figures 7 and 8. While the CH3OH 1st moment maps also exhibit the general trend of velocities running from the northeast to the southwest, the data show a clear structural change between the northern and southern continuum sources. In these data one can discern, for all lines with excitation levels between 28 and 149 K (Table 1), one velocity gradient across the northern continuum source and one across the southern continuum source. These two velocity gradients are almost parallel, in the east-northeast to west-southwest direction. While we identify these velocity gradients in thermal CH3OH absorption, the previously studied CH3OH Class II masers also show velocity gradients approximately in the east-west direction (Moscadelli & Goddi 2014). Although the angles derived from the maser data and the thermal absorption are not exactly the same, both are approximately in the east-west direction and both have the same orientation with respect to the blue- and redshifted structure. Hence, while the maser and thermal emission and absorption trace different spatial scales, both appear to stem from the same rotating structures.
Figure 8 presents the corresponding position-velocity cuts along the two axes outlined in the bottom right panel of Fig. 7. For both regions we identify clear velocity gradients across the sources; however, in both cases again without any Keplerian signature. The underlying physical reasons for these kinematic signatures are discussed in Sect. 4.2.
While the measured velocity dispersion of the two NH3 lines varies only slightly, the velocity dispersion of CH3OH is considerably narrower (Fig. 6). Only the overlap region between the two continuum peaks shows a larger velocity dispersion, but this can be attributed to beam-smearing effects between the two peak positions. Inspecting individual absorption spectra against the main northern continuum peak position (Fig. 9), the spectral profiles show why the measured line widths in NH3 and CH3OH differ. While the peak and the redshifted side of the absorption spectra are similar, NH3 shows a pronounced blueshifted wing. In absorption spectra, that is a clear sign of outflowing gas. Because the critical density of NH3 is an order of magnitude lower than that of CH3OH (Table 1), it appears that NH3 also traces outflowing gas within the envelope, whereas the CH3OH signatures are dominated more by the rotating disk-like structures.
Fragmentation and multiplicity
The high-mass star-forming region NGC7538IRS1 is intriguing because it does not show significant fragmentation signatures in the cold dust and gas emission at (sub)mm wavelengths. At ∼0.3″ resolution and 1.3 mm wavelength, Beuther et al. (2012) still identified only a single source, whereas at ∼0.2″ resolution and 843 µm wavelength the first fragmentation signatures of the innermost rotating structure could be identified (Beuther et al. 2013). Our new (non-)proper-motion analysis, spectral index study, and the kinematic signatures in CH3OH clearly confirm that at least two very young protostars are embedded within the innermost core. The elongation of the northern source cm1 in the northeastern direction may harbor further subsources. However, this elongation may also be due to an underlying ionized disk-like structure, since it is approximately aligned with the CH3OH velocity gradient. The projected separation of the two main protostars cm1 and cm2 is ∼0.16″ or ∼430 AU. Since the two sources are embedded in a large-scale rotating structure (Fig. 4), they are most likely a bound binary system.

Fig. 9. Example absorption spectra of NH3(7,7) and CH3OH(10_{2,8}−10_{1,9}) taken toward the northern cm continuum peak position. The CH3OH spectrum is shifted slightly to positive flux density values for better presentation.
While the rotation axes of the disk-like structures around these two protostars are almost parallel (Fig. 7), the rotation axis of the surrounding gas envelope is inclined with respect to the disk-like structures (Fig. 4). As the NH3 velocity gradient lies approximately 41° east of north and the CH3OH velocity gradient approximately 74° east of north, the relative inclination between the two axes is 33°. Why are the axes of the disk-like structures around the two protostars and of the surrounding envelope not better aligned? In turbulent molecular clouds one can easily envisage a collapse scenario in which the different collapsing shells within the envelope have varying angular momentum distributions already at the beginning of the collapse. Gas that falls earlier and deeper into the gravitational potential well of the forming cluster can then have different angular momentum vectors than the remnant envelope that may still feed the inner disk-like entities. In this scenario, misalignment between axes on different spatial scales can therefore be qualitatively well understood. Similar results were recently obtained by Kraus et al. (2016) with VLTI observations toward the high-mass binary system IRAS 17216-3801.
(Non-)Keplerian motions
Where is the high-velocity gas one would expect from an embedded Keplerian disk? Two main options exist as potential answers. First, it may be a spatial resolution issue, with the Keplerian velocities hidden below our angular resolution. Our spatial resolution of ∼0.07″ corresponds to a linear resolution of ∼190 AU, which would limit the size of a potential Keplerian structure to below that scale. However, in that picture the observations would in principle still detect the high-velocity gas, just smeared out over the beam size. Hence, some remaining high-velocity gas could even be observable at this spatial resolution.
Second, it may also be a physical effect, because we recall that the presented data are absorption-line observations. Hence, we only observe the gas in front of the hypercompact Hii region, and we explicitly miss the innermost ionized gas within the hypercompact Hii region itself. Considering the scenario outlined first by Keto (2002, 2003), in which the accretion flow changes from molecular to ionized form within the inner hypercompact Hii region, we would miss the highest-velocity gas in such molecular observations in any case. Estimating the source size of the central hypercompact Hii regions is difficult in such a crowded area. However, based on Fig. 1, we can estimate the projected size to be ∼0.1″−0.2″. Assuming a spherical source structure and that only the front half is probed by our observations, we are missing ∼0.05″−0.1″ along the line of sight in our molecular gas data. That corresponds to linear scales of ∼135−270 AU. Therefore, in both scenarios, we cannot resolve the central highest-velocity gas structures. However, while the first, simple spatial resolution argument would still "see" the high-velocity gas (just smeared out over the beam size), the second, physical argument of an inner ionized Hii region does not allow us to see that high-velocity gas at all in such molecular absorption-line data.

What scales are predicted by simulations for Keplerian structures around high-mass accretion disks? For example, Krumholz et al. (2007) present position-velocity diagrams for simulations of a forming high-mass star (8.3 M⊙ at the presented time step) in which the Keplerian signatures should be visible at least out to radii of 250 AU. Kuiper et al. (2011) show the time evolution of Keplerian structures around forming massive stars, and the Keplerian size increases with time. In their model of a collapsing 60 M⊙ core, the Keplerian structure grows from below 100 AU at times earlier than 10⁴ yr to more than 1000 AU after 5 × 10⁴ yr. While these are only individual simulated case studies, they already outline the range of potential Keplerian disk sizes. Given the observed infall (e.g., Beuther et al. 2013), NGC7538IRS1 should still be at a comparably early evolutionary stage; hence, small disk sizes are possible. Furthermore, our data clearly show the multiplicity of the region, which can truncate disks even further.
These observations clearly outline the complicated nature of studying high-mass accretion disks. On the one hand, extremely high spatial resolution at sub-0.1″ scales is required. That resolution is now achievable with observations at cm wavelengths with the VLA, such as those presented here, or with new observations with the Atacama Large Millimeter Array (ALMA), which can reach even higher resolution; however, this target, NGC7538IRS1, is too far north and not accessible with ALMA.
On the other hand, it is also crucial to identify the right sources and disk tracers. If we are dealing with hypercompact Hii regions, ionized tracers such as radio recombination lines could be very useful (e.g., Keto & Klaassen 2008; Klaassen et al. 2009). For NGC7538IRS1, we did observe such recombination lines simultaneously; however, the sensitivity was insufficient for further analysis. Furthermore, as shown by previous radio recombination line data toward NGC7538IRS1 at 22 and 43 GHz, these lines at cm wavelengths are typically very broad. A significant fraction of the line width at these wavelengths is caused by thermal and pressure broadening, and disentangling kinematic signatures from these components is not trivial. Going to (sub)mm wavelengths may improve the situation, because there at least the pressure broadening is significantly reduced. Therefore, it may be more promising to select sources at even earlier evolutionary phases, where no hypercompact Hii region has formed yet; hence one can study the kinematics at much smaller scales in the molecular gas.
Conclusions
Resolving the famous high-mass star-forming region NGC7538IRS1 at the highest spatial resolution possible at cm wavelengths with the VLA (0.06″ × 0.05″, corresponding to ∼150 AU) reveals several new insights into the physics of this archetypical high-mass star-forming region. Comparing the new data to previous-epoch observations from ∼23 yr ago, no proper motions can be identified. In combination with a high spectral index, mostly varying between 1 and 2, we infer that the cm continuum emission does not stem from an underlying jet but is rather dominated by two hypercompact Hii regions that are likely formed by two separate high-mass protostars. Based on the kinematics, these protostars appear to form a bound system within a circumbinary envelope.
The CH3OH and NH3 spectral line data reveal different velocity structures in absorption against the strong continuum emission. The thermal CH3OH data show two velocity gradients across the two continuum sources, indicating the existence of two embedded disk-like structures. The approximate orientation and velocity structure of these thermal CH3OH measurements agree well with the much higher resolution CH3OH maser data (Moscadelli & Goddi 2014). While the two disk-like structures are almost parallel, the NH3 data trace a rotating circumbinary envelope that is inclined to the two disk-like structures by ∼33°. Such variations in rotation axis between envelope and disk structures can be caused by varying initial angular momentum distributions in the natal, turbulent molecular cloud.
The fact that we do not identify Keplerian signatures in the disk-tracing CH3OH data is mostly caused by the nature of these molecular absorption-line data, which do not trace the innermost gas that is already ionized. A closer investigation of the kinematics toward the center will require recombination line observations, best conducted at (sub)mm wavelengths, where the pressure broadening of the lines becomes negligible.
"year": 2017,
"sha1": "123572ad5f398a86a20bd1c3e26c0c8bcd996acd",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2017/09/aa30575-17.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "6ff360349bc62a54d7d7e6942670b9c7de5debba",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Physical properties of galactic RV Tauri stars from Gaia DR2 data
We present the first period-luminosity and period-radius relations of Galactic RV Tauri variable stars. We have surveyed the literature for all variable stars belonging to this class and compiled the full set of their photometric and spectroscopic measurements. We cross-matched the final list of stars with the Gaia DR2 database and took the parallaxes, G-band magnitudes and effective temperatures to calculate the distances, luminosities and radii using a probabilistic approach. As it turned out, the sample was heavily contaminated, and we thus restricted our study to those objects for which the RV Tau nature was securely confirmed. We found that several stars are located outside the red edge of the classical instability strip, which implies a wider pulsational region for RV Tau stars. The period-luminosity relation of galactic RV Tauri stars is steeper than that of the shorter-period Type II Cepheids, in agreement with previous results obtained for the Magellanic Clouds and globular clusters. The median masses of the RVa and RVb stars were calculated to be 0.45-0.52 M$_{\odot}$ and 0.83 M$_{\odot}$, respectively.
INTRODUCTION
RV Tauri-type variables form the long-period extension of the Population II Cepheids, which are metal-poor, low- and intermediate-mass F-, G- and K-type supergiant stars, older than the classical Cepheids (see Wallerstein 2002 for a review). RV Tauris are among the most luminous of these stars (10³-10⁴ L⊙), having already left the red giant branch (RGB) or the asymptotic giant branch (AGB) and rapidly evolving through the post-RGB/AGB instability strip to become planetary nebulae (Manick et al. 2018; Kamath, Wood & Van Winckel 2014, 2015; Jura 1986). Hence, they provide important information about the less well-known late phases of stellar evolution, in which pulsations and mass-loss processes can interact and influence stellar evolution.
During their evolution, post-AGB stars cross the instability strip, where they become unstable against radial pulsation. The observed periods of RV Tau stars usually fall between 20 and 90 days (Soszyński et al. 2008, 2010). The main characteristic of the light curve is the presence of alternating minima (i.e., every second minimum is shallower). The periodicity is not strict, as the cycle-to-cycle variations can be quite significant, and in some cases they have been shown to be caused by low-dimensional chaos (e.g., Buchler et al. 1996; Plachy, Bódi & Kolláth 2018). In addition to the Cepheid-like pulsations, some RV Tau stars show long-term modulation of the mean brightness with periods of 700-2500 days, associated with time-variable dust obscuration (Kiss & Bódi 2017). The absence or presence of the slow modulation is the basis for classifying the stars into the RVa and RVb subclasses.

Previous studies on the period-luminosity (PL) relations of RV Tauri stars were almost exclusively based on various samples of Population II Cepheids in the Magellanic Clouds or globular clusters. There were hints of a different slope for longer-period Type II Cepheids (McNamara 1995), which was found to depend on the wavelength of the observations, with negligible effects in the JHK_S bands (Matsunaga et al. 2006). Recently, Groenewegen & Jurković (2017a) and Manick et al. (2017) presented supporting indications of a steeper RV Tau PL relation for the dusty objects in the Magellanic Clouds and the Milky Way, respectively.
Until now, no PL relation has been published for nearby, bright, and in all other respects well-observed Galactic RV Tauri stars. Gaia DR2 has opened, for the first time, the possibility of geometric distance measurements of Galactic RV Tau stars. The main motivation of our study is to compare the RV Tau populations of the Milky Way and the Magellanic Clouds and to test the universality of their PL relations.
DATA AND METHODS
To identify all the known stars in the Galaxy that are thought to be of RV Tau type, we searched the Variable Star Index (VSX) database, the General Catalog of Variable Stars (GCVS; Samus et al. 2017) and the SIMBAD database (Wenger et al. 2000). We then cross-correlated this sample with the Gaia archive (Gaia Collaboration et al. 2016, 2018) and downloaded all available measurements for stars that have a relative parallax error (σ_π/π) smaller than 0.2 (well-determined parallaxes; Astraatmadja & Bailer-Jones 2016). We found 56 stars. We note that in most cases extinction values are missing from the Gaia table. This turned out to be a consequence of the data filtering done by the Gaia team (see Sect. 6.5, Eqs. (8-11) and Figs. 31 and 19d in Andrae et al. 2018), which practically removed all stars between the upper main sequence and the RGB/AGB in the Hertzsprung-Russell diagram. Because of this and the strong degeneracy between extinction and temperature, the Gaia T_eff values were also found to be systematically biased (see details below).
Bailer-Jones (2015) showed that distance estimation from parallaxes becomes an inference problem when measurement errors are present. The traditional inversion approach gives an incorrect (symmetric) error estimate, which can be avoided by using a properly normalized prior. Astraatmadja & Bailer-Jones (2016) investigated the performance of various priors for estimating distances and found that the exponentially decreasing space density (EDSD) prior performs well with a length scale of 1.35 kpc. To determine the Gaia distances we followed the prescription of the EDSD method.
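For illustration, a minimal sketch of the EDSD distance estimate is given below; it finds the mode of the posterior by solving the cubic from Bailer-Jones (2015), with parallaxes in mas, distances in kpc, and the 1.35 kpc length scale adopted above. The example parallax is a placeholder; the full analysis characterizes the entire posterior, not just its mode.

```python
import numpy as np

# Mode of the exponentially decreasing space density (EDSD) distance
# posterior (Bailer-Jones 2015; Astraatmadja & Bailer-Jones 2016):
#   P(r | plx, sigma) ~ r^2 exp(-r/L) exp(-(plx - 1/r)^2 / (2 sigma^2)),
# whose mode solves  r^3/L - 2 r^2 + (plx/sigma^2) r - 1/sigma^2 = 0.
# Units: parallax in mas, distance in kpc (so plx = 1/r), L = 1.35 kpc.

def edsd_mode_distance(plx_mas, sigma_mas, length_kpc=1.35):
    coeffs = [1.0 / length_kpc, -2.0,
              plx_mas / sigma_mas**2, -1.0 / sigma_mas**2]
    roots = np.roots(coeffs)
    real = roots.real[np.abs(roots.imag) < 1e-9]
    # in the well-constrained regime used here (sigma/plx < 0.2) the
    # smallest positive real root is the posterior mode
    return real[real > 0].min()

# Example: plx = 0.8 +/- 0.1 mas (sigma/plx = 0.125, within the 0.2 cut)
print(f"r_mode = {edsd_mode_distance(0.8, 0.1):.2f} kpc")
# ~1.27 kpc, slightly beyond 1/plx = 1.25 kpc, as expected for this prior
```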
To calculate the absolute magnitudes and luminosities, we used photometric measurements, extinction and bolometric correction (BC) values. 2MASS J, H, K_s and Johnson V, I band photometric values were taken from the SIMBAD catalog. However, during the calculation of luminosities, the Gaia magnitudes were preferred, if available. Extinctions A_V were taken from the combination of the 3D reddening maps of Marshall et al. (2006), Green et al. (2015), and Drimmel, Cabrera-Lavers & López-Corredoira (2003), as implemented in the python package mwdust (Bovy et al. 2016). Absolute magnitude, luminosity and radius values were determined in a probabilistic approach using the direct mode of the slightly modified isoclassify code of Huber et al. (2017), which uses a Monte-Carlo sampling scheme and derives posterior distributions of all parameters. The parameter estimation was performed star by star as follows. First, the distance is determined, then A_V is estimated from the reddening map, which is used to calculate the absolute magnitude in an available photometric band. Using the BC, the luminosity is determined, which is then converted into a radius using the effective temperature and the Stefan-Boltzmann law. The final values and errors are the medians and 1σ confidence intervals of the distributions.
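The chain of steps above can be sketched in a few lines. The Monte-Carlo example below propagates illustrative (not measured) inputs through distance, extinction, absolute magnitude, luminosity, and radius, treating A_V as the extinction in the observed band for simplicity; the real analysis relies on the isoclassify code.

```python
import numpy as np

# Minimal Monte-Carlo sketch of the parameter chain described above
# (distance -> A_V -> absolute magnitude -> luminosity -> radius).
# All input values below are illustrative placeholders.
rng = np.random.default_rng(42)
N = 100_000

d = rng.normal(1.3, 0.1, N) * 1e3            # distance [pc]
m_G = rng.normal(9.5, 0.02, N)               # apparent G magnitude
A_V = rng.normal(0.6, 0.1, N)                # extinction from 3D dust map
BC = rng.normal(-0.2, 0.05, N)               # bolometric correction
teff = rng.normal(4800.0, 150.0, N)          # spectroscopic T_eff [K]

M_abs = m_G - 5.0 * np.log10(d) + 5.0 - A_V  # absolute magnitude
M_bol = M_abs + BC
logL = 0.4 * (4.74 - M_bol)                  # L in L_sun (M_bol,sun = 4.74)
R = np.sqrt(10**logL) * (teff / 5772.0)**-2  # Stefan-Boltzmann, R in R_sun

for name, x in [("L [L_sun]", 10**logL), ("R [R_sun]", R)]:
    lo, med, hi = np.percentile(x, [16, 50, 84])
    print(f"{name}: {med:.0f} (+{hi - med:.0f} / -{med - lo:.0f})")
```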
The periods of the pulsation and of the mean-brightness variation (in the case of the RVb stars) were taken from the literature. For stars without published periods, we downloaded the AAVSO (American Association of Variable Star Observers) or ASAS (All Sky Automated Survey; Pojmanski 2002) light curves and determined the pulsation periods from the Fourier spectra, where possible. In this paper, we consistently use the double periods, i.e., the duration between two consecutive shallow or deep minima, as the length of the pulsation cycles. When we plotted the Hertzsprung-Russell diagram and the period-luminosity relation, we found that the observational scatter was huge. The initial sample contained several low-luminosity stars, and also objects located far from the theoretical instability strip. We interpreted the large scatter as contamination by misclassified objects, hence we performed a strict revision of the sample as follows.
First, we thoroughly surveyed the literature for all stars in the initial sample to remove all objects for which there was the slightest doubt about the RV Tau nature. Second, we took into account the systematic investigation of misclassified RV Tauris by Zsoldos (1991). Third, we also excluded stars with poorly determined pulsation periods (meaning that we may have left out genuine RV Tauris, too, which need further photometric observations to measure their periods).
In the next step, we reviewed the AAVSO and ASAS light curves for each remaining star and checked the variability characteristics by visual inspection. Stars with few observations and those that do not clearly show the alternating behavior (i.e., all minima have almost the same depth) were filtered out. During the inspection, we found incorrectly determined magnitudes in the commonly used catalogues.
RVb-type stars needed a specific treatment in terms of calculating their luminosities. As it turned out, the catalogued mean magnitude values were previously determined by averaging the brightness over the whole light curve, including the long-term RVb cycles. However, given that the RVb phenomenon is indeed connected to the dust extinction around the stars (Kiss & Bódi 2017), we re-determined the mean brightnesses of these objects by averaging the light curves only in the vicinity of the maximum of the long-term variation. This resulted in much higher luminosities, which in turn decreased the scatter of the PL relation.
Finally, we decided to restrict our detailed analysis to those stars that: (i) were included in the spectral energy distribution (SED) study of Gezer et al. (2015) as objects that had been chemically studied before; and (ii) the RVb stars of Kiss & Bódi (2017) that have spectroscopically determined parameters. This is the most reliable, high-confidence collection of galactic RV Tauri-type variables with well-determined Gaia DR2 distances, containing 12 RVa- and 6 RVb-type stars in the Galaxy.
Gaia versus spectroscopic T_eff values
The second Gaia data release (Gaia DR2) contains photometry in three different bands. G is a broad band, while BP and RP were obtained by integrating the blue and red sides of the low-resolution prism spectra, respectively. From the differences of these Gaia bands, stellar effective temperatures were inferred for stars brighter than G = 17 mag with T_eff between 3000 and 10000 K. These results are reliable within an accuracy of 324 K (Andrae et al. 2018). This value represents the random errors and does not take into account the systematic uncertainties.
As we found spectroscopically determined (hence expected to be more reliable with respect to interstellar reddening) effective temperatures for several RV Tauri-type stars, we can compare the Gaia-inferred T_eff values with the literature to test the reliability of the quoted error bars. Fig. 1 shows T_eff^Gaia versus T_eff^lit, overplotted with the distribution of the differences in the inset. As can be seen in the plot, the temperatures are in reasonable agreement within the given error bars below T_eff^lit ∼ 4500 K. Above this point, the deviation increases with increasing T_eff^lit (except for one point, which is covered by the inset). This increasing deviation is expected from Fig. 11(c) of Andrae et al. (2018), but it should be symmetric. From the distribution in the inset, we estimate a mean deviation of 445.4 K, which significantly decreases the accuracy of the Gaia temperatures. This effect strongly influences the position of the RV Tau stars in the HR diagram (next subsection) and the mass estimation (Sect. 3.4) if no spectroscopic effective temperatures are available. That is why we restricted our investigation to those stars for which spectroscopic temperatures were available from the literature.
As has been pointed out by the referee, the Gaia parallaxes are based on a single-star solution, while many of the RV Tauris (especially the RVbs and the disk stars) are found in binaries, which might affect the observed discrepancy between the Gaia and spectroscopic effective temperatures. However, Manick et al. (2017) showed that the spectra of RVb stars are dominated by the highly luminous primary star and no signature of the companion is seen in the spectra. Thus, we expect that the influence of the secondary on the determination of the effective temperatures is negligible.
The finally adopted fundamental physical parameters are listed in Table 1. These form the basis of the detailed discussion in the next Section.
The empirical Hertzsprung-Russell diagram
In Fig. 2 we show two versions of the empirical Hertzsprung-Russell diagram. The only difference is the temperature used on the horizontal axis: the top panel is based on the Gaia DR2 temperatures, while the bottom panel was plotted with the spectroscopic effective temperatures. The effect is quite dramatic, given that the Gaia DR2 temperatures lead to only a single RV Tau star falling into the expected instability strip. This clearly indicates that the lack of extinction correction in the Gaia DR2 data makes these temperatures systematically offset. When taking the more reliable spectroscopic temperatures, the majority of the stars are shifted into the instability strip or close to its red edge.
To put both plots in Fig. 2 into the context of stellar evolution, we overplotted evolutionary tracks of single low-mass stars from the zero-age main sequence to the post-AGB phase (Charbonnel et al. 2017; Bertolami 2016). The blue and red edges of the classical instability strip were adopted from Christensen-Dalsgaard (2003). The different symbols for RVa, RVb, dusty and non-dusty stars were used to reveal any dependence of the pulsational characteristics on the presence of a disk. Furthermore, we also highlight the luminosity of the tip of the RGB (TRGB) of the 1 M⊙ and 4 M⊙ models with Z = 0.008.

Table 1. The physical parameters of the high-confidence galactic RV Tauri stars (see text for details). The errors represent the 1σ confidence level of the posterior distributions. The effective temperatures were taken predominantly from Gezer et al. (2015) and are all based on spectroscopic measurements. The periods of pulsation and mean-brightness variation were calculated by us or taken from the literature.
Recently, Manick et al. (2018) discussed the evolutionary status of the SMC and LMC RV Tauri stars, based on a comparison with single-star evolutionary models and on the positions of the stars relative to the TRGB luminosities. Here we adopt their argumentation regarding the nature of the galactic RV Tauri stars.
Looking at the boundaries of the data, it is apparent that the disk and RVb stars have luminosities between ∼700 L⊙ and ∼5500 L⊙, all falling below the 1.5 M⊙ post-AGB track. Manick et al. (2018) argued that dusty stars that have higher luminosities than the 1 M⊙ TRGB (upper horizontal dashed line) are probably post-AGB objects with initial masses higher than ∼1 M⊙. Stars between the two horizontal lines would be post-AGB objects if they are indeed descendants of ∼2-4 M⊙ stars; otherwise they were likely formed from lower-mass binary post-RGB progenitors. At lower luminosities, the objects are presumably post-RGB stars with lower progenitor masses. These are probably binaries, as all confirmed binary RV Tauri stars are most likely disk sources (Gezer et al. 2015), which was recently further strengthened by the RVb analysis of Kiss & Bódi (2017).
Most of the non-IR galactic RV Tauri stars fall below the theoretical TRGB of a 1 M⊙ star and lie near the post-AGB track of a 1 M⊙ star. Based on their position in the HRD, they should have gone through a mass-loss phase, yet no sign of dust is detected. Manick et al. (2018) speculate that these non-dusty RV Tauri stars are single low-luminosity post-AGB stars with initial masses lower than 1.25 M⊙, where the disk has dispersed on a timescale of 1000 years, making it impossible to detect with recent IR space telescopes (e.g., at 22 micron in the case of WISE). However, if they are binaries, then the dusty disk has been dispersed during the slow evolution of the low-mass primary.
There is one outlier with a much higher luminosity than the others. This star is SS Gem, which is presumably a Pop. I Cepheid, as it lies well above the PL relation defined by the RV Tauri stars (see Sect. 3.2).
We also note that several of the lower-luminosity stars with T_eff ≤ 5000 K fall close to the blue loops of the 3-4 M⊙ very metal-poor models (corresponding to Magellanic Cloud-like metallicities). However, such massive stars would rather be Pop. I Cepheids than Pop. II variables, in strong contradiction to their other properties. For example, one of the cool low-luminosity stars is the well-studied DF Cygni, which has a very representative RV Tau nature (see Bódi, Szatmáry & Kiss 2016). Manick et al. (2018) and Groenewegen & Jurković (2017b) noted that the luminosity of the dusty RV Tauri stars is on average higher, which the latter authors attributed to the flux contribution from a companion. The small number of stars in Fig. 2 prevents drawing a similar conclusion, although a slight supporting tendency may be discerned in the distribution of the points (labelled as "Disk" and "RVb" in Fig. 2).
Overall, the position of the galactic RV Tauri stars in the HR diagram is consistent with that of those in the Magellanic Clouds. Hence, we can conclude that galactic RV Tauri stars share a very similar evolutionary nature despite the different galactic environments.
From a pulsational point of view, all disk stars are located in the theoretical instability strip (IS) within the uncertainties, while a significant fraction of the non-IR and RVb RV Tauris lie outside the red edge of the theoretical IS. The fact that some post-AGB stars are located further redward of the IS was already noted by Kiss et al. (2007), but they only found three stars with slightly lower temperatures than expected. Here we see that this phenomenon is more pronounced, which most likely reflects the structural difference between classical Cepheids and RV Tau stars or a difference between the excitation mechanisms.
The period-luminosity relation
There is an extensive literature on the PL relations of classical pulsating stars, such as RR Lyraes and Cepheids, which we do not attempt to review here. We only refer to the recent work of Groenewegen (2018), who presented a detailed analysis of the PL relations of Magellanic Cloud Cepheids and related variable stars, including RV Tauris. Our main goal here is to establish the first parallax-based PL relation of galactic RV Tau stars. Figure 3 shows the period-luminosity relationship for the high-confidence galactic RV Tau-type stars, with the data taken from Table 1. There is a noticeable scatter, but most of the points clearly define a linear relationship in the period range of ∼40-100 days. However, there is still an outlier, the already mentioned overluminous star SS Gem. Moreover, the luminosities of SX Cen and V820 Cen are too high and too low, respectively, compared to the overall scatter of the relation.
We fitted a linear function to the logarithmic quantities for the stars with periods less than 100 days, using an iterative approach with 2σ clipping. In the resulting relation, the errors represent the 1σ uncertainties, and 2 of the 17 points were excluded. This result is plotted in Fig. 3 as a black line, where we also show the inferred PL relations of Population II Cepheids (black dashed line) and RV Tauri stars (black dot-dashed line) in the Magellanic Clouds from Groenewegen & Jurković (2017a). Recently, Groenewegen & Jurković (2017a) found that RV Tauri-type stars are brighter than expected from the shorter-period Population II Cepheids (BL Her and W Vir objects), i.e., they follow a steeper PL relation. This kind of behaviour for longer-period stars has been known for a long time (see, e.g., Harris 1985). Here we find an even steeper relation for the galactic RV Tauri stars than in the MCs. McNamara (1995) suggested that the reason behind the steeper PL relation is the increase of mass with the period of pulsation. This is, however, in contradiction with the model calculations (see the equations in Sect. 3.4).
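The iterative σ-clipped fitting used for this and the following relations can be sketched as follows; the data arrays below are placeholders, not the Table 1 values.

```python
import numpy as np

# Iterative 2-sigma-clipped linear fit of log L against log P, of the kind
# used for the PL relation; placeholder arrays stand in for Table 1 data.
def clipped_linfit(x, y, nsigma=2.0, max_iter=10):
    keep = np.ones_like(x, dtype=bool)
    for _ in range(max_iter):
        slope, intercept = np.polyfit(x[keep], y[keep], 1)
        resid = y - (slope * x + intercept)
        new_keep = np.abs(resid) < nsigma * resid[keep].std()
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return slope, intercept, keep

logP = np.log10(np.array([45.0, 60.0, 75.0, 90.0]))   # placeholder periods [d]
logL = np.array([2.9, 3.2, 3.4, 3.6])                 # placeholder log L/L_sun
slope, intercept, used = clipped_linfit(logP, logL)
print(f"log L = {slope:.2f} log P + {intercept:.2f}  ({used.sum()} points kept)")
```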
Considering the outliers, we searched the literature for any information that could imply that these objects may not be of RV Tau type after all, but we could not find anything conclusive. The physical parameters and the light curve of SX Cen support its evolutionary status. As the PL relation of Pop. I Cepheids lies above that of the Pop. II stars, one could naturally conclude that SS Gem belongs to the classical Cepheids rather than to the RV Tauris. However, its light curve is more similar to those of RV Taus, contradicting the suggestion that comes from the outlying luminosity alone. Finally, V820 Cen follows the PL relation of all Pop. II Cepheids in the Magellanic Clouds, hence its position may be a metallicity-related effect (i.e., it may be more metal-poor than the Milky Way average), rather than an indication of belonging to a different class of stars.
In addition to the period-luminosity relation, we have also determined the period-absolute magnitude diagram in the V band in Fig. 4. Here we fitted a linear function to the stars with pulsation periods less than 100 days (χ² = 0.96), with 3 of the 17 points excluded. Interestingly, the scatter of the points seems to be smaller, and the outliers are the same as in the PL relation. This result can be more easily compared to previous studies because more investigations in the V band are available in the literature. Some of the previous period-M_V relation studies of the Magellanic Cloud and globular cluster variables found that the longer-period Pop. II Cepheids follow a steeper slope than the BL Her and W Vir stars. The derived slopes are around −4 (−4.35: McNamara 1995; −3.91: Alcock et al. 1998; −3.60: Harris 1981). This mean value is significantly different from ours, which makes our period-M_V slope the steepest one ever found. Soszyński et al. (2018) published the most recent period versus Wesenheit index diagram of Pop. II Cepheids of the Magellanic Clouds. Although the authors did not publish any fits, just the scatter plots in their Fig. 4, a closer look at the data suggests that there is indeed a break in the period-absolute magnitude relation around 20 days. Unfortunately, our sample is too small to draw a firm conclusion, and the next Gaia data release will be needed to expand the galactic sample.
In addition to the traditional period-absolute magnitude relation, we noticed an interesting correlation between the RVb period and the absolute magnitude. We plot the V-band absolute magnitude against the period of the slow variation in Fig. 5. We found that 5 out of the 7 RVb stars of our sample follow a strikingly well-determined linear relationship. Interestingly, SX Cen, with the shortest period, is the same outlier as in the PL relations in Figs. 3-4; the other one is TW Cam. For the sake of completeness, we fitted a linear function to these points using the same iterative 2σ clipping approach. Kiss & Bódi (2017) studied the nature of the RVb phenomenon and found supporting evidence for the model of periodic obscuration by a circumbinary dusty disk as an explanation of the slow variations. In this context, the central object is a binary star and the RVb period corresponds to the orbital period of the system. It is not yet clear why there should be an orbital period-absolute magnitude relation for post-AGB binaries, which, if proven, could provide an important clue about the evolution of these heavily mass-losing binary systems.
3.3. The period-radius relation

Fernie (1984) collected all radii of classical Cepheids available up to 1982 that were determined using the Baade-Wesselink method and defined a relation between the pulsation period and the radius (P-R). This relation was confronted with theoretical expectations (Fernie 1984; Bono, Caputo & Marconi 1999), and agreement between theory and the empirical results was found over a wide range of periods. Woolley & Carter (1973) showed that a similar (parallel) relation exists for Pop. II Cepheids (BL Her and W Vir stars). Since then, several studies have been conducted on the P-R relation of Type II Cepheids; Burki & Meylan (1986) and Balog, Vinkó & Kaszás (1997) presented such relations in the Milky Way, and recently Groenewegen & Jurković (2017a) in the Magellanic Clouds.
In Fig. 6 we plot the logarithmic period-radius relation for our sample. The dependence of the radius on the pulsation period is not very tight, as the points show a relatively large scatter. Nonetheless, a positive correlation is clearly visible, which can be quantified by an iterative 3σ-clipped linear fit to these points. This fit is shown as a black line with the 1σ confidence level in Fig. 6. The labeled outliers are the same as in the previous PL plots. As the radii were calculated from the luminosities and effective temperatures, it is not surprising that SS Gem has the largest radius. Within the given errors, the radius of V820 Cen happens to follow the fitted relationship. However, if we exclude this star from the fitting process, we obtain a slightly steeper slope.
The P-R relations of the BL Her and W Vir stars of Burki & Meylan (1986) and of the RV Tauris in the Magellanic Clouds by Groenewegen & Jurković (2017a) are shown by black dashed and dash-dotted lines in Fig. 6, respectively. Balog, Vinkó & Kaszás (1997) did not publish their results quantitatively, so we cannot directly compare theirs to ours. As can be seen in Fig. 6, the P-R relations of Burki & Meylan (1986) and Groenewegen & Jurković (2017a) lie close to each other. Contrary to this, our sample of galactic RV Tauri stars appears to follow a steeper relation, with a deviation larger than the uncertainties. As the plotted line of Burki & Meylan (1986) is only an extrapolation of a fit to BL Her and W Vir stars, the deviation may arise from the different types of Pop. II Cepheids, even keeping in mind that their study is also based on a sample of galactic stars. The fit of Groenewegen & Jurković (2017a) covers the RV Tau regime, which makes it challenging to explain the observed deviation. As metallicity is the main difference between the Milky Way and the Magellanic Clouds, a natural explanation could be an [Fe/H]-dependent P-R relation; however, Groenewegen & Jurković (2017a) did not find any sign of this, hence this issue also awaits further data in the next Gaia data release.

3.4. Mass estimation

Marconi et al. (2015) computed a large grid of nonlinear, time-dependent convective hydrodynamical models of fundamental and first-overtone pulsators assuming a broad range of metal abundances (Z = 0.0001-0.02). Based on these models they constructed new metal-dependent pulsation relations, i.e., correlations between pulsational and evolutionary observables. They omitted the most luminous models (called Sequence D; L/L⊙ ∼ 100), because such high values are atypical for RR Lyraes, the main targets of their study. However, Type II Cepheids lie in this higher luminosity range. Groenewegen & Jurković (2017a) re-derived these relations considering all models with log(L/L⊙) > 1.65 and found the following equation for fundamental-mode pulsators: log P = (11.468 ± 0.049) + (0.8627 ± 0.0028) log L − (0.617 ± 0.015) log M − (3.463 ± 0.012) log T_eff + (0.0207 ± 0.0013) log Z (N = 195, σ = 0.0044).
If we know the pulsation period, luminosity, effective temperature and metallicity, these equations can be used to estimate the stellar mass. Groenewegen & Jurković (2017a) tested the method on a known classical Cepheid (OGLE-LMC-CEP-0227) and found agreement with the literature within the error bars. To estimate the masses using Eqs. 5 and 6, we assumed Z = 0.014 (solar metallicity). The results are listed in Table 2.

Table 2. The estimated masses of the high-confidence galactic RV Tauri stars. The RRL and Cep subscripts refer to the papers of Marconi et al. (2015) and Bono, Caputo & Marconi (2000), respectively, which were used for the calculations (see text for details). The errors were estimated from the uncertainties of the physical parameters.
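As an illustration of the procedure, the fundamental-mode relation quoted above can be inverted for the mass; the sketch below does this for a single set of placeholder input values. (The second relation, from Bono, Caputo & Marconi 2000, is not reproduced in the text above and is therefore omitted here.)

```python
import math

# Inverting the fundamental-mode pulsation relation quoted above
# (Groenewegen & Jurkovic 2017a, re-derived from Marconi et al. 2015):
#   log M = [11.468 + 0.8627 log L - 3.463 log Teff
#            + 0.0207 log Z - log P] / 0.617
# with P in days, L in L_sun, M in M_sun, and Z = 0.014 as in the text.

def pulsation_mass(period_d, lum_lsun, teff_k, z=0.014):
    logM = (11.468 + 0.8627 * math.log10(lum_lsun)
            - 3.463 * math.log10(teff_k)
            + 0.0207 * math.log10(z)
            - math.log10(period_d)) / 0.617
    return 10**logM

# Placeholder inputs (half the formal double period, as adopted in the text):
print(f"M ~ {pulsation_mass(period_d=30.0, lum_lsun=2500.0, teff_k=4800.0):.2f} M_sun")
```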
The estimated masses are in the range of ∼0.1-2.2 M⊙, independently of the period. There is only one outlier with a significantly higher value, SS Gem. The masses resulting from the two methods agree within the given errors; they mostly differ from each other by 0.1-0.2 M⊙. If we take the mean of the two kinds of masses and separate the non-IR, disk and RVb stars, we find the following. The masses of the non-IR RV Tauri stars are in the range of 0.33-5.48 M⊙ with a median of 0.52 M⊙, the masses of the disk stars are in the range of 0.18-0.9 M⊙ with a median of 0.45 M⊙, while the masses of the RVb stars are in the range of 0.28-2.28 M⊙ with a median of 0.83 M⊙. We note that these values are based on half of the formal periods, because the use of double periods resulted in unphysically low masses. This may imply that the real pulsation period of RV Tauri stars is the time elapsed between two consecutive minima, regardless of their depths.
The estimated masses span a wide range (see Fig. 7). Our results are generally consistent with those of Groenewegen & Jurković (2017a) in the Magellanic Clouds. Looking at the median masses of the different types of RV Tau stars, we recognize a significant difference between the non-IR/IR and RVb stars: the non-IR and dusty ones have similar median values (0.45-0.52 M⊙), while the RVb stars have approximately double that (0.83 M⊙). However, it is important to note that the RVb masses follow a bimodal distribution (∼0.7 M⊙ and ∼1.8 M⊙), which prevents us from drawing a firm conclusion. As above, the sample size is critical here, and further stars will be needed once the distance limit of the Gaia data is pushed further out.
Stars with masses greater than 1 M⊙ nearly follow the expectations from their position in the HRD compared to the single post-AGB theoretical evolutionary tracks, as the relevant models have initial masses of 0.8-1.5 M⊙. However, this comparison would be more relevant if we used binary model calculations (see Sect. 3.1). Regarding the lower-mass stars, those with masses around 0.5-0.6 M⊙ agree with the model calculations of fundamental-mode pulsators by Bono, Caputo & Santolamazza (1997). Taken together, we find that the derived physical parameters are broadly consistent with the theoretical expectations.
Finally, we note that the only star with a significantly higher mass is SS Gem, at ∼5.48 M⊙. Such a large value is again typical of Pop. I Cepheids (Turner 1996), providing further support for the previous conclusion that SS Gem is likely a massive young supergiant star rather than a post-AGB pulsator.
SUMMARY
We have compiled a carefully selected list of galactic RV Tauri stars. We took the dominant period values from the literature or, when needed, determined them ourselves from the available light curve data. We then cross-matched our list of coordinates with the Gaia DR2 database. To infer distances, bolometric magnitudes, luminosities, and radii, we used a slightly modified version of the isoclassify code of Huber et al. (2017), which uses a Monte Carlo sampling scheme and derives posterior distributions.
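The distance-to-luminosity step can be illustrated with a naive Monte Carlo propagation. This is only a sketch of the idea (isoclassify additionally applies a distance prior, extinction maps, and bolometric corrections), and all function and variable names here are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def luminosity_samples(plx_mas, sigma_plx, m_app, A_V=0.0, BC=0.0, n=100_000):
    """Parallax -> distance -> absolute bolometric magnitude -> L/Lsun."""
    plx = rng.normal(plx_mas, sigma_plx, n)
    plx = plx[plx > 0]                       # crude cut; a proper prior handles this
    d_pc = 1000.0 / plx                      # parallax in mas to distance in pc
    M_bol = m_app - 5 * np.log10(d_pc / 10) - A_V + BC
    return 10.0 ** (-0.4 * (M_bol - 4.74))   # adopting M_bol,sun = 4.74

L = luminosity_samples(plx_mas=0.8, sigma_plx=0.05, m_app=9.5, A_V=0.3)
print(np.percentile(L, [16, 50, 84]))        # median and 1-sigma interval
```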
As the evolutionary status of several objects has been questioned in the literature or is uncertain, we restricted our sample to well-studied stars. To do so, we used the chemically studied sample of Gezer et al. (2015) and the RVb variables of Kiss & Bódi (2017) as our high-confidence sample. For our analysis, we thus assembled the most reliable, high-confidence collection of galactic RV Tauri-type variables with well-determined Gaia DR2 distances, which contains 12 RVa- and 6 RVb-type stars in the Galaxy.
The main contribution of this paper is the derivation of parallax-based period-luminosity and period-radius relations for galactic RV Tauri-type variable stars. The most important results of our analysis are as follows: 1. We showed that Gaia DR2 effective temperatures for RV Tau-type stars deviate significantly from the spectroscopically determined values, being lower by a median shift of ∼436 K. The reason for this systematic offset is the lack of reddening correction for stars lying in the RV Tau region of the Hertzsprung-Russell diagram.
2. We discussed the evolutionary status of galactic RV Tau-type stars, which is fairly ambiguous. The most luminous ones, brighter than the TRGB of the 1 M⊙ model, are presumably post-AGB objects descended from stars with masses higher than 1 M⊙. Fainter ones are probably post-AGB objects if they have an initial mass between ∼2-4 M⊙; otherwise they were likely formed as post-RGB binary stars from lower-mass progenitors.
3. From the positions of the stars in the HR diagram we conclude that the instability strip of RV Tauri stars extends further to the cool side than the classical instability strip of classical Cepheids.
4. The galactic RV Tauri stars follow steeper period-luminosity and period-radius relations than the Population II Cepheids with shorter pulsation periods.
5. For the first time, we derived a period-absolute-magnitude relation between the period of the mean-brightness variation of RVb stars and their V-band absolute magnitude. However, this relation is based on a very small number of stars; further observations will be needed to confirm this correlation.
6. We found that the median mass of RVa stars is around 0.45-0.52 M⊙, in agreement with Type II Cepheid model calculations. The mass distribution of our very small sample of RVb stars appears bimodal, with masses around ∼0.7 M⊙ and ∼1.8 M⊙.
Further understanding of galactic RV Tau-type stars will require the more accurate next Gaia data release, which is expected to increase the sample size significantly. This work has been supported by the Lendület LP2018-7/2018, the NKFIH K-115709 and the GINOP-2.3.2-15-2016-00003 grants of the Hungarian National Research, Development and Innovation Office, and the Hungarian Academy of Sciences. This research has made use of the International Variable Star Index (VSX) database, operated at AAVSO, Cambridge, Massachusetts, USA. We acknowledge with thanks the variable star observations from the AAVSO International Database contributed by observers worldwide and used in this research. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
Table A1
Derived physical parameters of other galactic RV Tau stars that are listed in Gezer et al. (2015). The errors represent the 1σ confidence level of the posterior distributions. The effective temperatures were taken from the literature. Pulsation periods were determined by us or were taken from the literature. | 2019-01-22T14:40:20.000Z | 2019-01-05T00:00:00.000 | {
"year": 2019,
"sha1": "5281fac8456aa9795753eb70d4ca4e37a60dc190",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1901.01409",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5281fac8456aa9795753eb70d4ca4e37a60dc190",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
226645452 | pes2o/s2orc | v3-fos-license | THE BOUNDEDNESS AND UPPER SEMICONTINUITY OF THE PULLBACK ATTRACTORS FOR A 2D MICROPOLAR FLUID FLOWS WITH DELAY
In this paper, two properties of the pullback attractor for 2D non-autonomous micropolar fluid flows with delay on unbounded domains are investigated. First, we establish the H¹-boundedness of the pullback attractor. Further, under an additional regularity assumption on the force and moment with respect to time t, we remark on the H²-boundedness of the pullback attractor. Then, we verify the upper semicontinuity of the pullback attractor with respect to the domains.
1.
Introduction. The micropolar fluid model is a qualitative generalization of the well-known Navier-Stokes model in the sense that it takes into account the microstructure of the fluid [7]. The model was first derived in 1966 by Eringen [4] to describe the motion of a class of non-Newtonian fluids with micro-rotational effects and inertia. It can be expressed by the following equations:

∂u/∂t − (ν + ν_r)Δu − 2ν_r rot ω + (u·∇)u + ∇p = f,
∂ω/∂t − (c_a + c_d)Δω + 4ν_r ω + (u·∇)ω − (c_0 + c_d − c_a)∇ div ω − 2ν_r rot u = f̃,

where u = (u_1, u_2, u_3) is the velocity, ω = (ω_1, ω_2, ω_3) is the angular velocity field of rotation of particles, p represents the pressure, and f = (f_1, f_2, f_3) and f̃ = (f̃_1, f̃_2, f̃_3) stand for the external force and moment, respectively. The positive parameters ν, ν_r, c_0, c_a and c_d are viscosity coefficients: ν is the usual Newtonian viscosity and ν_r is called the microrotation viscosity. Micropolar fluid models play an important role in applied and computational mathematics, and there is a rich literature on their mathematical theory. In particular, the existence, uniqueness and regularity of solutions for micropolar fluid flows were investigated in [6]. Extensive studies on the long-time behavior of solutions have also been carried out. In the case of 2D bounded domains: Lukaszewicz [7] established the existence of L²-global attractors together with estimates of their Hausdorff and fractal dimensions; Chen, Chen and Dong proved the existence of the H²-global attractor and of the uniform attractor in [1] and [2], respectively; Lukaszewicz and Tarasińska [9] investigated the existence of the H¹-pullback attractor; Zhao, Sun and Hsu [18] established the existence of the L²-pullback attractor and the H¹-pullback attractor of solutions for a universe given by a tempered condition. In the case of 2D unbounded domains: Dong and Chen [3] investigated the existence and regularity of global attractors; Zhao, Zhou and Lian [19] established the existence of the H¹-uniform attractor and gave the inclusion relation between the L²-uniform attractor and the H¹-uniform attractor; Sun and Li [15] verified the existence of the pullback attractor and further investigated its tempered behavior and upper semicontinuity. More recently, Sun, Cheng and Han [14] investigated the existence of random attractors for 2D stochastic micropolar fluid flows.
Delay terms appear naturally in real-world systems, for instance as effects in wind tunnel experiments (see [10]). Delays may also occur when one wants to control a system by applying a force that takes into account not only the present state but also the history of the system. Delays in partial differential equations (PDEs) include finite delays (constant, variable, distributed, etc.) and infinite delays, and different types of delays need to be treated by different approaches.
In this paper, we consider the situation in which the velocity component u_3 in the x_3-direction is zero and the axes of rotation of the particles are parallel to the x_3-axis. Let Ω ⊆ R² be an open set with boundary Γ that is not necessarily bounded but satisfies the following Poincaré inequality: there exists λ_1 > 0 such that

λ_1 ‖ϕ‖²_{L²(Ω)} ≤ ‖∇ϕ‖²_{L²(Ω)}, ∀ϕ ∈ H¹₀(Ω). (2)

Then we discuss the 2D non-autonomous incompressible micropolar fluid flow with finite delay given by system (3), where ᾱ := c_0 + 2c_d > 0, x := (x_1, x_2) ∈ Ω ⊆ R², u := (u_1, u_2), and g and g̃ stand for the external forces containing the hereditary characteristics u_t and ω_t, defined on (−h, 0) by u_t(s) := u(t + s) and ω_t(s) := ω(t + s), where h is a positive fixed number. To complete the formulation of the initial boundary value problem for system (3), we supplement it with the initial and boundary conditions (4)-(5). Recently, Sun and Liu [16] established the existence of a pullback attractor for problem (3)-(5).
The first purpose of this work is to investigate the boundedness of the pullback attractor obtained in [16]. We note that García-Luengo, Marín-Rubio and Real [5] proved the H²-boundedness of the pullback attractors of the 2D Navier-Stokes equations in bounded domains. Motivated by [5] and following its main idea, we generalize their results to 2D micropolar fluid flows with finite delay in unbounded domains. Compared with the Navier-Stokes equations (ω = 0, ν_r = 0), the micropolar fluid flow includes the angular velocity field ω, which leads to a different nonlinear term B(u, w) and an additional term N(u) in the abstract equation (13). In addition, the time-delay term considered in this work further increases the difficulty.
The second purpose of this work is to investigate the upper semicontinuity of the pullback attractor with respect to the domain Ω. To this end, following the arguments in [15,17], we first let {Ω_m}_{m=1}^∞ be an expanding sequence of simply connected, bounded and smooth subdomains of Ω such that ∪_{m=1}^∞ Ω_m = Ω. Then we consider the Cauchy problem (3)-(5) in Ω_m. We will conclude that there exists a pullback attractor A_{H(Ω_m)} for problem (3)-(5) in each Ω_m. Finally, we establish the upper semicontinuity by showing that the Hausdorff semidistance from A_{H(Ω_m)}(t) to A_H(t) tends to zero as m → ∞. Throughout this paper, we denote the usual Lebesgue and Sobolev spaces by L^p(Ω) and W^{m,p}(Ω), endowed with the norms ‖·‖_p and ‖·‖_{m,p}, respectively. In particular, we write H^m(Ω) := W^{m,2}(Ω).
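For reference, the Hausdorff semidistance implicit in this convergence statement is the standard one from the attractor literature (a sketch; the ambient norm X in which the comparison is carried out is an assumption here):

\[
\operatorname{dist}_X(A,B) := \sup_{a\in A}\,\inf_{b\in B}\,\|a-b\|_X,
\qquad
\lim_{m\to\infty}\operatorname{dist}_X\!\bigl(A_{H(\Omega_m)}(t),\,A_H(t)\bigr)=0
\quad\text{for each } t\in\mathbb{R}.
\]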
(·, ·) denotes the inner product in L²(Ω) or H, and ⟨·, ·⟩ denotes the dual pairing between V and V*. Throughout this article, we abbreviate the norms ‖·‖_2 and ‖·‖_H by the same notation ‖·‖ when there is no confusion. Following the above notations, we introduce some additional notation as needed below.
2.
Preliminaries. In this section, we first introduce some useful operators and recast problem (3)-(5) in abstract form. Then we recall some important known results on non-autonomous micropolar fluid flows.
To begin with, we define the operators A, B(·, ·) and N(·) by (6). What follows are some useful estimates and properties of the operators A, B(·, ·) and N(·), which were established in [11,13]. (2) The operator B(·, ·) is continuous from V × V to V*. Moreover, for any u ∈ V and w ∈ V, the corresponding bilinear estimate holds. Lemma 2.2.
(1) There are two positive constants c_1 and c_2 such that the corresponding bounds hold. (2) There exists a positive constant α_0, which depends only on Ω, such that the corresponding estimate holds. (3) There exists a positive constant c(ν_r) such that the corresponding bound holds. In addition, δ_1 := min{ν, ᾱ}.
Using the definitions of the operators A, B(·, ·) and N(·), equations (3)-(5) can be written in the abstract form (13). Before recalling the known results for problem (13), we need the following assumptions on F and G.
There exists a constant L_G > 0 such that, for any t ∈ R and any ξ, η ∈ L²(−h, 0; H), the corresponding Lipschitz-type estimate holds. (iv) There exists C_G ∈ (0, δ_1) such that, for any t ≥ τ and any w, v ∈ L²(τ − h, t; H), the corresponding estimate holds. To facilitate the discussion, we denote by P(X) the family of all nonempty subsets of X. Let D be a nonempty class of families parameterized in time, D = {D(t) : t ∈ R} ⊆ P(X), which will be called a universe in P(X). Based on these notations, we construct the universe D_γ as follows.
We denote by D_γ the class of all families whose sections are contained in balls centered at zero with radius ρ_D(t).
Based on the above assumptions, we can recall the global well-posedness of solutions and the existence of the pullback attractor for problem (13).
Remark 2.1. According to Proposition 2.1, the biparametric mapping defined by

U(t, τ): (w^in, φ^in(s)) ↦ (w(t; τ, w^in, φ^in(s)), w_t(s; τ, w^in, φ^in(s))), ∀ t ≥ τ, (15)

generates a continuous process in E²_H and in E²_V, respectively, which satisfies the standard process properties. Proposition 2.2 (Existence of the pullback attractor, see [16]). Under Assumption 2.1 and Assumption 2.2, there exists a pullback attractor A_H = {A_H(t) : t ∈ R} for the process {U(t, τ)}_{t≥τ} that satisfies, in particular, the following property: • Compactness: for any t ∈ R, A_H(t) is a nonempty compact subset of E²_H. Finally, we introduce a useful lemma, which plays an important role in the proof of the higher regularity of the pullback attractor.
Lemma 2.4 (see [12]). Let X, Y be Banach spaces such that X is reflexive and the inclusion X ⊂ Y is continuous. Assume that {w_n}_{n≥1} is a bounded sequence in L^∞(τ, t; X) such that w_n ⇀ w weakly in L^q(τ, t; X) for some q ∈ [1, +∞) and w ∈ C([τ, t]; Y). Then w(t) ∈ X and ‖w(t)‖_X ≤ lim inf_{n→∞} ‖w_n‖_{L^∞(τ,t;X)}.

3. Boundedness of the pullback attractor for the universe D_γ. This section is devoted to investigating the boundedness of the pullback attractor for the universe D_γ, given by a tempered condition, in the space E²_H. To this end, we consider the Galerkin approximation of the solution w(t) of system (13), denoted by

w_n(t) = w_n(t; τ, w^in, φ^in(s)) = Σ_{j=1}^{n} ξ_{nj}(t) e_j, w_{nt}(·) = w_n(t + ·), (16)

where the sequence {e_j}_{j=1}^∞ is an orthonormal basis of H formed by eigenvectors of the operator A, that is, Ae_j = λ_j e_j for all j ≥ 1, where the eigenvalues {λ_j}_{j≥1} of A are real numbers that can be ordered so that 0 < λ_1 ≤ λ_2 ≤ ··· ≤ λ_j ≤ ···, with λ_j → +∞ as j → ∞.
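To illustrate the type of eigenfunction expansion in (16) on a toy problem, the sketch below applies a spectral Galerkin scheme to the 1D heat equation, whose sine eigenfunctions decouple the modal ODEs exactly. This is only an analogy, not the micropolar system, and all names are ours.

```python
import numpy as np

def galerkin_heat(u0, nu, t, n_modes=32, n_grid=256):
    """Spectral Galerkin for u_t = nu*u_xx on (0, pi), u(0)=u(pi)=0.
    Eigenpairs: e_j(x) = sin(j x), lambda_j = j**2, so the Galerkin
    coefficients xi_j satisfy xi_j' = -nu*j**2*xi_j and decay exactly."""
    x = np.linspace(0.0, np.pi, n_grid)
    dx = x[1] - x[0]
    j = np.arange(1, n_modes + 1)
    basis = np.sin(np.outer(j, x))                          # (n_modes, n_grid)
    # L2 projections xi_j(0) = (2/pi) * <u0, sin(j x)>, via a Riemann sum
    xi0 = (2.0 / np.pi) * (u0(x) * basis).sum(axis=1) * dx
    xi_t = xi0 * np.exp(-nu * j**2 * t)                     # exact modal evolution
    return x, xi_t @ basis                                  # u_n(t,x) = sum_j xi_j e_j

x, u = galerkin_heat(lambda x: x * (np.pi - x), nu=0.5, t=0.1)
```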
Next we verify the following estimates of the Galerkin approximate solutions defined by (16).
With the above lemma, we are ready to conclude this section with the following H¹-boundedness of the pullback attractor A_H for the universe D_γ.
4.
Upper semicontinuity of the pullback attractor. In this section, we concentrate on verifying the upper semicontinuity, with respect to the spatial domain, of the pullback attractor A_H obtained in Proposition 2.2. To this end, we first let {Ω_m}_{m=1}^∞ be an expanding sequence of simply connected, bounded and smooth subdomains of Ω such that ∪_{m=1}^∞ Ω_m = Ω. Then we consider system (3) in each Ω_m and define the operators A, B(·, ·) and N(·) as before (in (6)), with the spatial domain Ω replaced by Ω_m. Further, we can formulate the weak version of problem (3)-(5) as (30). On each bounded domain Ω_m, the well-posedness of solutions can be established by the Galerkin and energy methods; see [7]. Moreover, the solution w_m(·) depends continuously on the initial value w^in_m with respect to the H(Ω_m) norm.
In the following, we investigate the relationship between the solutions of systems (30) and (13). Specifically, we are devoted to proving that the solutions w_m of system (30) converge to the solution of system (13) as m → ∞. To this end, for w_m ∈ H(Ω_m), we extend its domain from Ω_m to Ω by setting it equal to zero on Ω \ Ω_m. Next, using the same proof as that of Lemma 8. | 2020-07-23T09:09:33.754Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "662aafa7a940ba602ca37abf8c78b70db71e886d",
"oa_license": "CCBY",
"oa_url": "https://www.aimsciences.org/article/exportPdf?id=d25d8d9a-c2dc-4430-afda-83e3d5e65186",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "bcddf342b40d7c05212578086887d313b7f59198",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
119304723 | pes2o/s2orc | v3-fos-license | Far Infrared Edge Photoresponse and Persistent Edge Transport in an Inverted InAs/GaSb Heterostructure
Direct current (DC) transport and far infrared photoresponse were studied in an InAs/GaSb double quantum well with an inverted band structure. The DC transport depends systematically on the DC bias configuration and operating temperature. Surprisingly, it reveals robust edge conduction despite prevalent bulk transport in our device of macroscopic size. Under 180 GHz far infrared illumination at oblique incidence, we measured a strong photovoltaic response. We conclude that quantum spin Hall edge transport produces the observed transverse photovoltages. Overall, our experimental results support the hypothesis that the photoresponse arises from direct coupling of the incident radiation field to edge states.
InAs/GaSb double quantum well (DQW) structures with inverted type-II band alignment
have attracted a great deal of current interest because they support the quantum spin Hall (QSH) effect, 1 in which these two-dimensional (2D) topological insulators (TI) display conductive edge channels and an insulating bulk state. 2,3 The QSH edge states are helical in nature, with each edge channel carrying a pair of spin-polarized, counter-propagating components that are topologically protected from backscattering by time-reversal symmetry. Experimentally, research on InAs/GaSb devices has thus far focused primarily on direct current (DC) transport phenomenology 2-10 at sub-Kelvin cryogenic temperatures in order to minimize residual bulk conductivity and accentuate QSH edge transport. To this point, there have been few efforts directed towards far infrared characterization of the InAs/GaSb system. 11 On the other hand, photon-induced redistribution of carriers can be expected to strongly affect edge transport in the QSH effect, and may reveal optical techniques for manipulating spin-polarized carriers.
In this Letter, we present evidence of an incident far infrared field directly coupling to QSH edge states in an InAs/GaSb DQW structure, and develop a phenomenological description through examination of robust DC edge transport in the presence of dominant bulk conduction.
The characterized InAs/GaSb Hall bar device, pictured in Fig. 1(a), has three gate terminals, denoted G1, G2 and G3, and eight Ohmic contacts, labeled C0 to C7 moving clockwise from the far left. Each of the Ohmic probes extending from the channel is 5 μm wide, with each probe separated from adjacent probes by 5 μm along the upper edge of the device. The 4 μm wide gates are situated, within alignment tolerance, centrally between contacts C1, C2, C3 and C4.
From C0 to C5, the total length of the channel is 60 μm, with a width of 10 μm where Ohmic probes are absent.
The fabricated InAs/GaSb device is based upon a 14 nm InAs, 4 nm GaSb DQW structure 5,12 bookended by 50 nm AlSb layers with a 2 nm InAs cap. Assuming a priori the possibility of edge transport, the equivalent circuit representation in Fig. 1(b) includes nonidentical upper and lower edge channels in parallel with bulk 2D conduction. 13 The electronic band structure of the studied InAs/GaSb DQW is plotted in Fig. 1(c), calculated using a 14-band k·p model. 14,15 This calculation clearly shows the hybridization gap between the electron ground state in the InAs quantum well and the heavy-hole ground state in the GaSb quantum well, and highlights appreciable spin splitting in the bulk bands due to spin-orbit interaction (SOI). This material is thus a candidate to display robust intrinsic spin Hall edge transport when electrostatically doped into a bulk conducting state, [16][17][18][19] in addition to supporting the QSH effect in the TI phase when the Fermi level is tuned into the hybridization gap. [1][2][3][4][5][6][7][8][9][10]
The four-terminal resistances measured at T = 8 K in Fig. 2 , it follows that 3 50,32 ∼ 50,41 because 1 ≅ 2 ≅ 3 . 50,32 also tunes with 1 and 3 (not plotted), a characteristic in agreement with the circuit in Fig. 1(b) and again inconsistent with purely bulk transport.
Consideration of the temperature dependence of the four-terminal DC resistances at 2 = -2.8 V in Fig. 2(b) aids in elucidating the underlying phenomenology. The longitudinal resistances 50,67 and 50,41 systematically decrease with temperature while both transverse resistances 50,17 and 50,46 increase in magnitude with temperature to around 80 K. First, this cannot be attributed to bulk transport alone because any measured transverse resistance due to a Hall probe misalignment of displacement is proportional to the bulk resistivity , ≈ ⁄ , where is the channel width. Assuming the circuit in Fig. 1 From the expressions for the transverse resistances, it is apparent that transport measurements require > , in contrast with a ballistic Landauer-Büttiker QSH transport description 20 in the absence of bulk conduction where 50,41 > 50,67 and < . The phasebreaking probes C2 and C3 effectively short circuit a portion of the upper edge, resulting in three 5 m channels in series with ≥ 3 ℎ 2 ⁄ . However, incoherent QSH transport where the phase-breaking mean free path is significantly shorter than the 25 m length 21 along the lower edge channel can nonetheless produce the observed transport behavior. Thus, we conclude that the measured transport characteristics result from incoherent transport along at least the lower edge channel, as illustrated in the magnified portion of Fig. 1(b) as a series of short helical QSH edge channels with broken phase coherence. [22][23][24] To characterize the far infrared photoresponse of the device, we measured the photovoltages ≡ ( − ) with 180 GHz radiation normally incident on the sample through z-cut quartz cryostat windows. A set of Virginia Diodes Inc. Schottky multipliers driven by an RF local oscillator and modulated at 75 Hz with 50% duty cycle provided a peak power of 0 = 3.9 mW. We applied no DC current ( 50 = 0) and tuned G2 because it presents a mirror symmetry line with respect to the device bulk. Furthermore, the sample underwent alignment to reduce signal artifacts resulting from spatial inhomogeneity 25 all on the order several V, in contrast with the four-terminal resistances that span an order or magnitude in Fig. 2(a). Furthermore, the photoresponse 67 in Figs.
3(c) and 3(d) decreases
precipitously with temperature at all gate biases, also in contrast with the more gentle temperature dependence of the four-terminal DC resistances in Fig. 2(b). These conspicuous differences suggest that the measured photovoltages are not driven by a bulk response, otherwise the far infrared photoresponse would largely mirror the DC transport at any fixed operating point.
plasmonic 26 homodyne mixing [27][28][29] and a photo-thermoelectric response, 30 Fig. 3(e) as a function of both gate voltage and temperature. In clear contrast with the photoresponse in Figs. 3(a)-(d) that drops rapidly with temperature and peaks at the CNP, χ has weak temperature dependence, approaches zero at the CNP, and changes polarity when majority carriers below G2 shift from electrons to holes. Given a characterization methodology that minimized experimental asymmetries, it is unsurprising that there is no evidence to support a bulk photoresponse predicated on asymmetry.
The photovoltaic response under tuning of G2 is further explored through polarizationdependent measurements in Fig. 4. A pair of wire grid polarizers, one fixed to project half of the total incident power along 0 = 135 o and the other freely rotating to project a fraction of this power along , enabled characterization of the polarization dependent response. In Fig. 4(a), the 180 GHz relative electric field amplitudes and orientations relative to Fig. 1(a) The excitation frequency of 180 GHz corresponds to a 0.7 meV photon energy, comparable to both the hybridization gap ∆ (~ 1-4 meV) 2,5,12 and kT at 10 K (~ 0.9 meV). In light of the strong temperature dependence of the photovoltage, this suggests that hybridization physics may play a role in the response. Given that ℎ < Δ, virtual photoconductivity 34 driven by the incident alternating current (AC) field along = 0 could contribute to a bulk rectified current. The lack of experimental asymmetry permits no net time averaged virtual photocurrent, though, and argues against this type of bulk response.
Instead, we posit that direct coupling of the AC radiation field to edge modes below G2 produces the photoresponse. Two possible mechanisms include the generation of a DC photocurrent 35 in non-ballistic, edge channels and the rectification of QSH edge plasmons. 36 The latter mechanism would be accompanied by spin rectification of the spin-polarized plasma density fluctuations. 37 Because of the broken translational symmetry along non-identical upper and lower edges, linearly polarized radiation along = 0 is appropriate to both couple with the device edge channels and produce a non-zero net response.
The longitudinal signals 67 and 41 observed in Fig. 3(a) arise because the two components of the spin-polarized helical currents propagate away from the edges below G2 in opposing directions. Whether viewed as single particle DC currents or rectification of collective AC currents, the spin up and down components have respective induced DC current densities ↑ = ± and ↓ = ∓ that traverse symmetric impedances such that ↑ = − ↓ at opposing sides of G2. The chemical potential must shift by + ↑ on one side of G2 and by − ↑ on the other side, thus conserving charge along both the upper and lower device edges. This also may explain the temperature dependence of the photoresponse since reduction in the QSH mean free path should degrade the induced coherent current . In summary, we have observed a far infrared photoresponse consistent with direct AC driving of QSH edge currents in an InAs/GaSb DQW field effect device that supports a TI phase.
Additionally, DC transport measurements have demonstrated that edge conductance remains important in our device of macroscopic size, even in the presence of significant bulk conduction and outside of the ballistic transport limit. QSH conductance, both in the TI and bulk conductive states, offers a potential explanation for the combined transport and photoresponse phenomenology. Our results point towards an open and potentially rich path of inquiry analogous to work already begun on optical 25 and far infrared 38 surface excitation in 3D TIs. | 2019-04-13T14:52:57.204Z | 2016-01-07T00:00:00.000 | {
"year": 2016,
"sha1": "4ba91f31506a9a0353d475a8861f6f1fe0313d71",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1605.02789",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3427ca782650b27f93fa8cb65a467994c7d0d387",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
226206126 | pes2o/s2orc | v3-fos-license | Ribosome Biogenesis Alterations in Colorectal Cancer
Many studies have focused on understanding the regulation and functions of aberrant protein synthesis in colorectal cancer (CRC), leaving the ribosome, its main effector, relatively underappreciated in CRC. The production of functional ribosomes is initiated in the nucleolus, requires coordinated ribosomal RNA (rRNA) processing and ribosomal protein (RP) assembly, and is frequently hyperactivated to support the needs in protein synthesis essential to withstand unremitting cancer cell growth. This elevated ribosome production in cancer cells includes a strong alteration of ribosome biogenesis homeostasis that represents one of the hallmarks of cancer cells. None of the ribosome production steps escape this cancer-specific dysregulation. This review summarizes the early and late steps of ribosome biogenesis dysregulations described in CRC cell lines, intestinal organoids, CRC stem cells and mouse models, and their possible clinical implications. We highlight how this cancer-related ribosome biogenesis, both at quantitative and qualitative levels, can lead to the synthesis of ribosomes favoring the translation of mRNAs encoding hyperproliferative and survival factors. We also discuss whether cancer-related ribosome biogenesis is a mere consequence of cancer progression or is a causal factor in CRC, and how altered ribosome biogenesis pathways can represent effective targets to kill CRC cells. The association between exacerbated CRC cell growth and alteration of specific steps of ribosome biogenesis is highlighted as a key driver of tumorigenesis, providing promising perspectives for the implementation of predictive biomarkers and the development of new therapeutic drugs.
Introduction
In 2018, colorectal cancer (CRC) was ranked the third most frequent cancer worldwide, after lung and prostate cancers in men and lung and breast cancers in women, by the International Agency for Research on Cancer (IARC) [1]. CRC is the most frequent form of gastrointestinal cancer and represents around 10% of all cancers, affecting 1.8 million new individuals each year [1]. The CRC mortality rate is slightly higher in men than in women and results in almost 880,000 deaths per year, making CRC the second-leading cause of cancer-related deaths [2,3]. CRC is a major health burden, as indicated by the steady increase in the age-standardized CRC incidence rate over the period 1990-2017 [4] and by more alarming data showing a significant rise in CRC cases in patients under 50 years of age [5].
An Overview of Ribosome Biogenesis in Human Cancers
Numerous studies, mainly focusing on the control of rDNA gene expression, have highlighted the link between dysregulated rDNA gene expression and cancer. Hyperproliferative cancer cells are by definition cells with perturbed energy homeostasis and increased protein synthesis activity [34][35][36][44], and thus an increase in ribosome biogenesis participates in maintaining such a high rate of protein synthesis [45][46][47][48].
Regulation of Ribosome Biogenesis by Oncogenes and Tumor Suppressors
Cancer cells are characterized by constitutive activation of growth signals which alter the activity of major cell cycle-regulating transcription factors [49,50], and the increase in ribosome biogenesis in cancer cells has been extensively linked to the sustained activation of RNA polymerases I and III by these cell cycle transcription factors [51,52]. However, in addition to regulating rDNA gene transcription, some oncogenic transcription factors have been shown to coordinate the overall ribosome biogenesis process. For example, the proto-oncogene MYC represents an active hub toward tumorigenesis [50]. Indeed, in cancer cells, MYC plays a major role in the production of ribosomes, through the direct activation of rRNA synthesis but also of a gene network which includes genes encoding all rRNA processing factors, RPs, and factors implicated in the translation machinery [53][54][55][56]. The direct binding of MYC to rDNA activates RNA pol I-mediated synthesis of the 47S rRNA precursor [57]. MYC also activates RNA pol II transcription of genes coding for ribosomal assembly factors and for RPs, as well as RNA pol III transcription of 5S rRNA [58]. Therefore, the increase in ribosome biogenesis by MYC is not solely restricted to the activation of rDNA transcription but is also mediated by MYC-induced upregulation of RP gene expression. The crucial causal tumorigenic impact of these coordinated transcriptional activations has been elegantly demonstrated in mouse models exhibiting haploinsufficiency for RP genes [59]. As an example, increased expression of RPL14 and RPL28 induced by MYC over-expression in vivo is a major event in lymphoma progression, leading to impaired translation of the cell cycle regulator CDK11, genomic instability, and defective mitosis that directly contribute to lymphomagenesis [59,60]. Interestingly, the combination of chemical inhibitors of ribosome biogenesis and of mechanistic target of rapamycin (mTOR) signaling, the main driver of lymphomagenesis [61], has proven to be very powerful at inhibiting tumor growth of MYC-driven B-cell lymphoma in vivo in mouse models [62]. Thus, the uncontrolled activity of MYC observed in many types of cancers exacerbates ribosome biogenesis and promotes aberrant translation that sustains tumor progression [63,64]. Additional oncogenes such as mTOR, PI3K, and Akt were also shown to activate rRNA synthesis, partly by directly interacting with the formation of the rDNA preinitiation complex (PIC) [63]. Conversely, tumor suppressors including TP53, PTEN, and RB are very potent ribosome biogenesis suppressors [51,52], acting at various levels from rDNA transcription and processing to the expression of RPs and accessory factors [64].
Regulation of Ribosome Biogenesis by RPs in Cancers
The causal tumorigenic effect of overexpressed RPL15 and RPL35 was similarly demonstrated in immune-deficient NOD SCID mouse models of human breast cancer metastasis to the lung and ovary [65]. RPL15 and RPL35 are assembled at an early stage during the processing of the pre-60S subunit, and their stoichiometry is crucial for the structure and function of the human ribosome [37][38][39]. Circulating tumor cells (CTCs) from human breast cancer patients that promote metastasis formation were characterized by high RPL15/RPL35 expression associated with increased expression of regulators of translation (i.e., eukaryotic initiation factor 2F targets), RP expression, and global protein synthesis [65]. Importantly, this study also showed that in metastatic breast cancer patients expressing sex hormone receptors, characterization of CTCs with high RPL15/RPL35 expression could discriminate the patients with the worst overall survival [65]. This study suggests that increased protein synthesis induced by high RPL15/RPL35 expression in CTCs contributes to breast cancer progression. However, the impact of high RPL15/RPL35 expression on ribosome biogenesis remains to be examined, as does the correlation between inhibition of ribosome biogenesis and prolonged survival of metastatic breast cancer patients. Similarly, we showed that ribosome biogenesis is increased in a model of MCF-7 mammary cancer cell progression [66], and the work by Prakash et al. [67] demonstrated that genetic inhibition of ribosome biogenesis reduced lung seeding of breast cancer metastases in a syngeneic mouse model [67].
The study of RP expression and activity has gained growing attention in cancer research, notably following the observation that inherited RP gene mutations producing dysfunctional ribosomes are associated with tissue-specific human pathologies named ribosomopathies and with a strong cancer predisposition [68]. Interestingly from the clinical point of view, and in the context of translational research, it was demonstrated long ago that alterations in ribosome biogenesis occurring in cancer cells can easily be visualized by silver staining of the AgNORs, which represent several argyrophilic nucleolar proteins that are master regulators of ribosome biogenesis, including, for example, nucleolin (NCL), fibrillarin (FBL), and nucleophosmin (B23) [64,69]. A plethora of studies performed on a large variety of tumor cells from human biopsies unambiguously showed that AgNOR distribution reflected unusual morphology, hypertrophy, and/or an abnormally elevated number of nucleoli, which could serve as a pathological gold standard for cancer cell recognition in routine diagnosis [46,64,70]. However, although it is well known that nucleoli-derived features (i.e., AgNOR) are a hallmark of cancer cell transformation, they are also present in numerous non-cancerous conditions, which is why they cannot be used as a diagnostic tool. Nevertheless, some of these features are used as prognostic tools for certain cancers, such as hepatocarcinoma [71] or clear cell and papillary carcinoma, the most frequent kidney cancers [72]. Moreover, a recent report on computer-assisted scoring systems, such as the index of nuclear disruption (iNO score), highlighted that such tools may help to establish a fine description of nucleoli exploitable for cancer diagnosis [73].
Regulation and Roles of Ribosome Biogenesis in Human CRC
This section presents the current knowledge regarding ribosome biogenesis regulation in intestinal physiology and CRC. The various steps and actors of ribosome biogenesis that are dysregulated in CRC could provide original targets or biomarkers to improve management of CRC patients.
Regulation of Ribosome Biogenesis in Intestinal Stem Cells
The growth and regeneration of the gastrointestinal tract is uninterrupted throughout life; in adulthood it leads to the replenishment of 1-10 billion epithelial cells per day, with the epithelium entirely renewed in ~4-7 days [74]. This constant renewal of gastrointestinal tissue relies on intense stem cell self-renewal, differentiation, and proliferation activities that are sustained by the activation of all the machineries involved in cell growth, division, and protein synthesis, including ribosomes [75]. In mouse organoids, intestinal stem cell differentiation towards enterocytes is accompanied by an increased ribosome biogenesis signature at the transcriptional level [76], and chemical inhibition of ribosome biogenesis using the rDNA transcription inhibitor CX-5461 induces disruption of the epithelial lining, most likely by targeting stem cells [77]. In addition to contributing to the normal renewal of the gastrointestinal tract, ribosome biogenesis also appears to promote colorectal tumorigenesis. In a mouse model of WNT-driven colorectal tumorigenesis, evidence has been provided that defective Notchless-dependent ribosome biogenesis blocked epithelial cell proliferation, imposed cell cycle arrest, and promoted enterocyte differentiation [78]. In humans, evidence of a link between dysregulated ribosome biogenesis and CRC arose from the demonstration that cells from chronic ulcerative diseases associated with a major risk of developing CRC exhibit hypertrophic nucleoli with upregulation of rDNA transcriptional activity [79]. In addition, it was demonstrated that the non-cancerous tissue surrounding colorectal tumor cells constitutes a field of cancerization strongly enriched in a ribosome biogenesis signature, which is considered to be an early molecular event in human CRC progression [80]. Thus, it appears that dynamic and controlled activation of ribosome biogenesis ensures the homeostatic equilibrium of normal gastrointestinal cells, whereas colorectal cancer cells, defined by faulty homeostasis, display ribosome biogenesis alterations [47]. These alterations can impact the early stages of ribosome biogenesis (i.e., RNA polymerase activation, rDNA transcription), but also the later steps in ribosome production (rRNA processing, rRNA modifications, rRNA export), extending the possibilities of new ribosome biogenesis-centered targeted therapies in CRC.
Regulation of Early Stages of Ribosome Biogenesis in CRC
Several studies have revealed that ribosome biogenesis could be the integrator and/or final effector of major altered signaling pathways which are often observed in colorectal tumorigenesis. These dysregulations impact every stage of ribosome biogenesis. The tumor suppressors TP53 and RB and the oncogenes MYC and KRAS have been shown to mainly modulate the early steps of ribosome biogenesis (i.e., rDNA transcription), whereas other factors such as nucleolin, RRS1, pescadillo, or BOP1 mostly alter the late stages of ribosome biogenesis (i.e., pre-rRNA cleavage and ribosomal subunits' maturation).
A series of studies illustrated that ribosome biogenesis plays a key role in the initial steps of colorectal tumorigenesis. As described above, activation of ribosome biogenesis mostly relies on hyperactivation of the RNA polymerases. Frequent overexpression and/or mutations of co-factors or subunits of the RNA polymerases have been described in human CRC and are associated with colorectal tumorigenesis by activating ribosome production [81][82][83][84]. The very first step of rDNA transcription, induced by RNA pol I in cooperation with the UBF and SL1 factors, leads to the production of the 47S pre-rRNA and constitutes the rate-limiting step in ribosome biogenesis [85] (Figure 1). Figure 1. Regulation of early transcriptional steps of ribosome biogenesis in CRC. MYC is overexpressed in colorectal tumor cells, binds to rDNA sequences, and in cooperation with UBF and SL1 factors, leads to the hyperactivation of RNA pol I-mediated synthesis of the 47S rRNA precursor [84]. MYC, in cooperation with RPF2 and RRS1, activates RNA pol II-mediated transcription of genes coding for ribosomal assembly factors and RPs, as well as RNA pol III-mediated synthesis of 5S rRNA [57]. Overexpression of the three RNA polymerases and their associated factors is frequent in CRC [80][81][82][83]. Together with RPF2 (protein ribosome production factor 2 homolog), RRS1 cooperates with POL III and drives 5S rRNA synthesis, while POL II drives the synthesis of mRNAs for ribosomal proteins and assembly factors, as well as of snoRNAs.
Then, the 47S pre-rRNA is cleaved at both ends of the molecule to generate the 45S pre-rRNA, which is further cleaved into the 5.8S, 18S, and 28S pre-rRNA species [86]. Recently, it was shown that the expression level of the 45S pre-rRNA is a CRC prognostic marker significantly associated with poor overall survival in two independent cohorts of eighty primary CRC patients [87]. Moreover, the pharmacological (UBF-interacting Ca²⁺ chelators) or genetic (siRNA) inhibition of 45S pre-rRNA synthesis in nine human CRC cell lines was associated with p53 protein stabilization, cell cycle arrest, and apoptosis, suggesting that inhibition of rRNA synthesis promotes the tumor suppressive response, and that enhanced rRNA synthesis might thus be an early step in colorectal tumorigenesis [87]. Although the increased expression of the 45S pre-rRNA parallels increased protein synthesis in CRC cells, the rate of synthesis of other rRNA precursors was not determined in this study [87]. Whether high expression of the 45S pre-rRNA is associated with the production of a specific translatome that would drive colorectal tumorigenesis remains an open question.
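As a sketch of how such a prognostic association is typically tested, the snippet below runs a Kaplan-Meier/log-rank comparison between high and low 45S expression groups with the lifelines package; the data frame here is entirely hypothetical and is not the cohort of [87].

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical survival data: follow-up time (months), death event flag,
# and a high/low 45S pre-rRNA label (e.g., split at the cohort median).
df = pd.DataFrame({
    "time":     [12, 30, 45,  8, 60, 22, 15, 50],
    "event":    [ 1,  0,  0,  1,  0,  1,  1,  0],
    "high_45S": [ 1,  0,  0,  1,  0,  1,  1,  0],
})
hi, lo = df[df.high_45S == 1], df[df.high_45S == 0]

kmf = KaplanMeierFitter()
kmf.fit(hi["time"], hi["event"], label="45S high")    # survival curve for one group
res = logrank_test(hi["time"], lo["time"], hi["event"], lo["event"])
print(res.p_value)                                    # significance of the split
```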
MYC and RAS Regulations of Ribosome Biogenesis in CRC
CRC cells often display a significant increase in MYC expression and/or activity [88], which directly and indirectly stimulates the expression of all components required for protein synthesis, including rDNA, mRNAs, and tRNAs, as well as processing and maturation factors contributing to enhanced ribosome production [82][83][84][85][89]. Indeed, in human CRC, most of the essential factors involved in ribosome biogenesis are overexpressed due to high MYC expression levels, as exemplified by the MYC target gene aryl hydrocarbon receptor, which is co-upregulated with MYC in CRC and promotes HCT-116 CRC cell proliferation through activation of a ribosome biogenesis transcriptional signature [90].
Although it is well documented that ribosome biogenesis can be controlled by growth factor receptors through RAS/MAPK signaling [52,91], and although constitutive activation of KRAS represents a major determinant of some CRC subtypes, only a few studies have explored the contribution of the oncogenic RAS pathway to ribosome biogenesis-mediated colorectal tumorigenesis. Human colorectal cancer cell lines with constitutive KRAS activation display a gene signature enriched in ribosome biogenesis and translation factors [92]. A recent report showed that the cysteine protease calpain-2 is involved in the inhibition of 47S pre-rRNA biogenesis as well as in the disruption of nucleolar integrity in human CRC DLD-1 cells [93]. The inhibitory effect of calpain-2 on 47S pre-rRNA synthesis is abrogated in a cellular model of DLD-1 cells transfected with the constitutively active KRAS (G13D) mutant [93], indicating that KRAS regulation of ribosome biogenesis could be a major event in CRC. During colorectal tumorigenesis, subsequent to APC loss, the tumor suppressor TP53 gene is frequently lost or mutated, accelerating the onset of tumor formation partly through the loss of inhibition of ribosome biogenesis [45,94]. Activation of the tumor suppressor RB is also a strong contributor to the inhibition of RNA pol I activity at the PIC, inducing a rapid reduction in ribosome biogenesis [95].
Nucleolin and Regulatory Protein Homolog RRS1 in CRC
Nucleolin (NCL) is a multifunctional RNA-binding protein which activates rRNA transcription and pre-rRNA processing [96]. Recently, a novel interaction partner of NCL was discovered, the long non-coding RNA cytoskeleton regulator RNA (CYTOR), which is functionally involved in human CRC tumorigenesis [97]. The CYTOR-NCL interaction activates the nuclear factor (NF)-κB pathway in CRC, and CYTOR and NCL, both overexpressed in human CRC, are associated with poor patient prognosis [97]. It would be of great interest to examine the ribosome biogenesis status of these tumors. Additional experiments are now required to validate NCL as a potential ribosome biogenesis target in CRC.
In eukaryotes, the ribosome biogenesis regulatory protein homolog (RRS1) cooperates with RNA pol III and the protein ribosome production factor 2 homolog (RPF2) to drive 5S rRNA synthesis, and it also participates in the maturation steps of the pre-60S particle [98]. The biological role of RRS1 was shown by experimental depletion of RRS1 in mouse embryonic fibroblasts, resulting in impaired ribosome biogenesis-associated nucleolar stress (see Section 4.6) and senescence [99]. The expression of RRS1 is significantly upregulated in a clinical CRC cohort and associated with tumor aggressiveness and poor overall survival [100]. RRS1 is correspondingly overexpressed in The Cancer Genome Atlas (http://cancergenome.nih.gov) database. Silencing of the RRS1 gene in two CRC cell lines, HCT-116 and RKO, inhibited cell proliferation and induced cell cycle arrest and apoptosis. These effects were also demonstrated in vivo on subcutaneously implanted colorectal tumor cells, indicating a key role for the ribosome biogenesis factor RRS1 in promoting tumorigenic properties [100]. Although the consequences of RRS1 overexpression on the regulation of ribosome biogenesis per se were not determined, other reports showing that high RRS1 is oncogenic in breast [101,102], thyroid [103], and liver [104] cancers clearly indicate that RRS1 represents a promising new target to be considered in CRC therapy.
Late Stages of Ribosome Biogenesis in CRC
As stated above, pre-rRNA processing is characterized by an elaborate succession of cleavages, folding, protein associations, and chemical modifications of rRNAs [39,105]. One important step of pre-rRNA cleavage involves the formation of the trimeric PeBoW complex, composed of three proteins: the Pescadillo homolog 1 (PES1), the block of proliferation protein (BOP1), and the WD-repeat domain 12 protein (WDR12) [106]. In mammalian cells, the PeBoW complex is essential for ITS-2 processing of the 32S pre-rRNA into the 28S and 5.8S rRNAs, and is mandatory for the formation and assembly of the 60S subunit [106][107][108][109][110]. Additionally, two RNA helicases from the DEAD-box helicase family, DDX21 and DDX27, associate with the PeBoW complex and participate in the structural rearrangements that accompany the formation of the pre-60S subunit [111]. The activity and distribution of the PeBoW complex are synchronized with cell cycle progression, and perturbations in its organization were reported to be key mediators of nucleolar stress, cell cycle arrest, and p53-dependent apoptosis [109,110].
PES1 and Interacting Partners in CRC
PES1 is a MYC target gene [112] whose expression is significantly increased in clinical samples of human CRC tumor cells compared with normal adjacent epithelial cells or normal colon [113]. The suppression of PES1 in various human CRC cell lines significantly reduced their proliferation and colony-forming ability on soft agar, as well as their growth in vivo following xenotransplantation into immune-deficient mice [113]. Xie et al. also showed that increased PES1 expression in CRC cells was associated with resistance to chemotherapeutic treatments (i.e., etoposide, 5-FU, doxorubicin, vincristine) and provided protection against DNA damage [114]. The regulation of PES1 transcription in HCT-116 cells is mediated via activation of the c-Jun NH2-terminal kinase (JNK) signaling pathway, indicating that the inhibition of colorectal tumorigenesis by JNK inhibitors could also be mediated by a downregulation of PES1 leading to decreased ribosome biogenesis [113].
PES1 and DDX21, together with the G protein nucleolar 3 (GNL3), form a complex involved in the late processing steps of the 32S pre-rRNA to 28S rRNA, before the incorporation of the latter into the 60S particle [115]. GNL3 is overexpressed in CRC tumor cells compared to normal colon tissue and is significantly associated with poor patient overall survival [116]. The overexpression of GNL3 in HT29 CRC cells activates the WNT signaling pathway, cell proliferation, colony formation, epithelial-mesenchymal transition (EMT), migration, invasion, and in vivo tumor growth, whereas its suppression by siRNA reverses these effects [116]. This study indicates that the contribution of GNL3 to colorectal tumorigenesis could be mediated by altered ribosome production [116]; further analyses of ribosome biogenesis and protein synthesis rates under high GNL3 expression are now required.
The long non-coding RNA circular antisense non-coding RNA in the INK4 locus (circANRIL) is a newly identified negative regulator of PES1 activity, which binds to the C-terminal domain of PES1 and inhibits its action on the cleavage of the 32S pre-rRNA [117]. circANRIL appears to play a broader role in ribosome biogenesis, since it is also an interacting partner of the nucleolar protein NOP14, which is essential for the formation of the 40S subunit [117]. In human embryonic kidney HEK-293 cells, the overexpression of circANRIL impairs ribosome biogenesis, induces an accumulation of premature 32S and 36S rRNAs, inhibits cell proliferation, and increases cell death [117]. Interestingly, using primary cultures of human smooth muscle cells and macrophages as a model of atherogenesis, the authors demonstrated that impaired ribosome biogenesis and nucleolar stress induction due to high levels of circANRIL confer a protective response against atherosclerosis [117]. This mechanism of regulation of ribosome biogenesis through a circular lncRNA provides exciting opportunities for investigating ribosome biogenesis in a context different from, yet complementary to, its implication in the regulation of translation in CRC [118].
In addition to the role of PES1 and its partners in regulating ribosome biogenesis, an extra-ribosomal role for PES1 was recently described as a direct activator of human telomerase reverse transcriptase (hTERT), resulting in telomere length maintenance and regulation of senescence in breast and liver cancer cells [119]. It would be interesting to examine the possible association between the high hTERT activity in CRC [120] and PES1. Hence, the central position of PES1 within a protein-rRNA network integrating ribosome biogenesis, telomerase activity, proliferation, and apoptosis makes it a target of choice for the future development of targeted CRC therapies.
Contribution of BOP1 to CRC Tumorigenesis
The PeBoW component BOP1 was also shown to mediate activation of ribosome biogenesis, particularly during the MYC-activated processing of the 47S pre-rRNA into mature 18S and 28S rRNAs [121]. It has been reported that the copy number of the BOP1 gene, present on the 8q24 chromosomal region, is amplified in ~40% of human primary CRC and associated with consecutive overexpression of BOP1 mRNA [122]. Interestingly, the BOP1 gene is close to the MYC oncogene, but its overexpression is more frequent and independent of MYC amplification, suggesting that BOP1 overexpression may be a major cause of 8q24 chromosomal region amplification in human colorectal tumorigenesis [122]. Another study confirmed the gain in the 8q24 chromosomal region of the BOP1 gene and its increased protein expression in frozen micro-dissected human rectal cancer cells [123]. In addition, liver metastases formed in immune-deficient SCID/NOD mice after spleen injection of various CRC cell lines with a constitutively activated Wnt/β-catenin pathway show a specific enrichment in BOP1 expression [124]. In human CRC clinical samples, BOP1 is overexpressed in cancer cells compared to normal adjacent epithelial cells and represents a biological marker associated with tumor progression and formation of distant metastases [125]. Very interestingly, the experimental ablation of BOP1 in the human DLD-1 CRC cell line resulted in altered chromosome segregation and aberrant mitosis that induce the chromosomal instability (CIN) distinctive of CRC cells [126], indicating the importance of BOP1 in maintaining intestinal cell physiology. The overexpression of BOP1 in SW480 CRC cells stimulated extracellular matrix invasion and two-dimensional (2D) migratory properties and was accompanied by the activation of the EMT program [124]. In immune-deficient SCID/NOD mice, the splenic injection of SW620 CRC cells lacking BOP1 resulted in a significant decrease in the number of liver metastases [124]. Similarly, the overexpression of BOP1 in the HCT-116 CRC cell line stimulated their migratory and invasive properties concomitant with the activation of matrix metalloproteases MMP2 and MMP9, whereas BOP1 ablation in HT29 cells blocked their migratory and invasive capacities [125]. Moreover, the lncRNA colon cancer-associated transcript 2 gene (CCAT2) was recently linked to BOP1 activation, chemoresistance to 5-fluorouracil and oxaliplatin, and colorectal tumorigenesis [127]. Microarray and mass spectrometric analyses determined that ribosome biogenesis and translation factors were upregulated in CCAT2-overexpressing CRC cells; however, the precise status of rRNA synthesis remains to be characterized [127]. The mechanism through which BOP1 activates colorectal tumorigenesis is mediated by the increased expression of active aurora kinase B [127], while the phenotypic changes associated with migration and invasion are mediated by the activation of the JNK signaling pathway [124,125].
The contribution of BOP1 to colorectal tumorigenesis could be further explored in CRC cells that have lost p53 activity, since the inactivation of BOP1 in mouse TP53-KO 3T3 fibroblasts is associated with increased sensitivity to camptothecin cytotoxic treatment [128]. This highlights the potential usefulness of combining camptothecin-derived molecules such as irinotecan, which is frequently used to treat CRC, with drugs that would target the PeBoW complex and/or ribosome biogenesis.
WDR12: The Third PeBoW Constituent Deserving Further Attention in CRC
Likewise, WD repeat domain 12 (WDR12), the third stable constituent of the PeBoW complex that participates in 32S pre-rRNA processing, was shown to drive tumor progression in glioblastoma cell lines and is a clinical marker associated with poor prognosis in glioblastoma patients [129]. Sun and Qian screened the NCBI GEO database (https://www.ncbi.nlm.nih.gov/geo/) and identified WDR12 as a key factor upregulated in CRC cells vs. normal adjacent tissue [130]. Therefore, the role of WDR12-mediated alterations in ribosome biogenesis in human colorectal tumorigenesis warrants further investigation.
Regulation of Ribosome Biogenesis by Ribosomal Proteins in CRC
The formation of mature ribosomes is dependent on the orderly assembly of seventy-nine RPs [37,38]. In addition to their function in rRNA processing, some RPs also play extra-ribosomal roles during development, immune responses, and tumorigenesis [131]. The quantitative variations in RP expression levels described in CRC were mainly associated with extra-ribosomal effects leading to what is recognized as nucleolar or ribosomal stress [132]. Nucleolar stress is caused by genetic mutations or altered expression levels of RPs, which are consequently not incorporated into ribosomes but bind to the E3 ubiquitin ligase MDM2, thereby inducing stabilization of p53 and apoptosis activation [52,133]. Diseases associated with RP gene mutations are known as ribosomopathies and are often linked to cancer predisposition [46,68,94,131,[134][135][136][137]. Here, we examine the expression and rRNA processing activity of some RPs and how they could be related to experimental and clinical CRC.
Indeed, some RPs are not only implicated in ribosome structural building but are also critical for rRNA processing [37,38]. For instance, RPS20 is involved in maturation of the small pre-40S particle, in cytoplasmic export of the 20S rRNA precursor, and in 18S rRNA processing [135,138]. Genetic analysis and exome sequencing identified an inactivating germline mutation in RPS20 that strongly predisposes humans to some forms of nonpolyposis CRC [139]. The authors demonstrated that experimental inactivation of RPS20 in HeLa cells recapitulated the late pre-rRNA processing and 18S rRNA maturation defects initially characterized in nonpolyposis CRC clinical samples, providing a causal link between disturbed ribosome biogenesis and CRC predisposition [139].
RPL14 controls the processing of the 45S pre-rRNA and 12S rRNA leading to the production of the mature 5.8S rRNA, and RPS17 controls the late processing stage of the 21S rRNA to nuclear 18S pre-rRNA [140]. Interestingly, RPL14 and RPS17 both potentially contribute to colorectal tumorigenesis [141]. Indeed, RPL14 and RPS17 are two of the few genes activated in CRC and associated with microsatellite instability (MSI) markers and inactivation of mismatch repair genes [141]. However, whether MSI in human CRC is dependent on a dysregulation of ribosome biogenesis through upregulation of RPL14 and RPS17 is unknown.
Additional RPs, RPS6 and RPS7, both involved in 5′ETS cleavage to generate the 30S rRNA [140], contribute to human colorectal tumorigenesis in vitro [25,26]. RPS6 expression is upregulated in CRC [132], stimulates cell proliferation and colony formation, and mediates resistance to the MEK1/2 kinase inhibitor selumetinib in a large panel of human CRC cells [142]. RPS7 is also overexpressed in human CRC cells and mainly appears to exert extra-ribosomal activation of genes linked to tumor hypoxia and glycolysis [141]. The impact of overexpressed RPS6 and RPS7 on CRC ribosome biogenesis has so far not been examined and warrants further investigation. RPS24, which is necessary for the formation of the 18S rRNA [140], is also an RP gene significantly overexpressed in human CRC [143]. RPS24 has been shown to stimulate the proliferative and migratory capacities of the CRC cell lines HT-29 and HCT-116 [144], and determining the status of ribosome biogenesis will be pivotal in understanding the role of RPS24 in colorectal tumorigenesis.
RPL15 acts as a ribosomal assembly factor essential for the formation of the 60S subunit [145] and is also directly involved in pre-rRNA processing at the internal transcribed spacer 1 (ITS1) site of the 47S pre-rRNA [146,147]. RPL15 is significantly upregulated in LoVo, HCT-116, SW-480, and SW-620 CRC cell lines compared to non-transformed epithelial cells, and screening of the ONCOMINE 3.0 database (www.oncomine.org) indicated that RPL15 is overexpressed in CRC and associated with disease progression [148]. RPL15 inhibition by siRNAs induced a striking reduction of the pre-60S subunit and is associated with cell cycle arrest at the G1-G1/S phase and apoptosis in HCT-116 CRC cells [148]. It would be interesting to define the translational regulation associated with increased synthesis of RPL15 and the 60S subunit in colorectal tumorigenesis in order to find novel CRC targets. Similarly to CRC, high levels of RPL15 expression were found in human gastric cancer and shown to be involved in gastric tumor progression [149]. These data indicate that some RPs represent new potential targets to counteract the hyperactivation of ribosome biogenesis in CRC. The implication of the PeBoW complex, RPs, and ribosome biogenesis processing factors upregulated in CRC is summarized in Figure 2.

Figure 2. In colorectal cancer cells, overexpression of the 45S rRNA is a biomarker of poor prognosis [87]. RPL15 is involved in pre-rRNA processing at the internal transcribed spacer 1 (ITS1) site of the 47S pre-rRNA and is overexpressed in CRC [148]. RPL14 controls the processing of the 45S pre-rRNA and 12S rRNA and is highly expressed in CRC [140]. RPS6, RPS7, RPS17, RPS20, and RPS24 are involved in the formation of the 18S rRNA and are overexpressed in CRC [132,139,141,142,150]. Pescadillo homolog 1 (PES1), block of proliferation (BOP1), and WD-repeat domain 12 protein (WDR12) are involved in the formation of the 12S rRNA and are overexpressed in CRC [113,122,130]. PES1 is also involved, with DDX21 and GNL3, in the processing of the 32S to the 28S rRNA, and GNL3 is overexpressed in CRC [116]. RPL14, which is overexpressed in CRC [141], further activates the processing of the 12S rRNA to the mature 5.8S rRNA. 18S pre-rRNA processing is activated by the NIN1 (RPN12) binding protein 1 homolog (NOB1) in cooperation with "partner of NOB1" (PNO1), which is overexpressed in CRC [151]. 18S pre-rRNA processing is also activated by human U3 protein (UTP) 14a (hUTP14a), which is overexpressed and constitutes a marker of poor prognosis in CRC [152]. Base and nucleotide modifications are important modifications that control late steps of rRNA maturation. The human nucleolar enzyme NSUN5 catalyzes the C5 methylation of cytosine residue C3782 of the 28S rRNA (m5C3782) and is upregulated in CRC and associated with disease progression [153]. The C/D-box small nucleolar RNA 16 (SNORD16) guides fibrillarin (FBL) to methylate the 2′-O-ribose at the 18S-Am484 site and constitutes a molecular marker of CRC and a driver of colorectal tumorigenesis [154]. The ribosome biogenesis protein TSR3 installs the 1-methyl-3-(α-amino-α-carboxyl-propyl) pseudouridine (m1acp3Ψ) modification on uridine U1248 of the 18S rRNA and is overexpressed in CRC and associated with colorectal tumorigenesis [155].
Ribosome Biogenesis Processing Factors in CRC
Several studies have shown that human CRC progression is associated with the dysregulation of proteins other than RPs that are also involved in ribosomal processing. These factors play an important role in the correct processing of rRNAs and ribosome assembly alongside RPs to achieve the production of a functional ribosome [156]. For example, the NIN1 (RPN12) binding protein 1 homolog, also known as NOB1, is an endonuclease which cleaves the 3′ end of the 18S rRNA and controls the final maturation step of the 18S rRNA [39]. Additionally, the cleavage of the 3′ end of the 18S rRNA by NOB1 is potentiated by the specific binding of the ribosomal biogenesis factor named "partner of NOB1" or PNO1, which induces a conformational change of NOB1 that increases its binding affinity and activity on the 18S rRNA (Figure 2) [157]. The expression of PNO1 was recently investigated in human CRC by microarray assays, RT-qPCR, and tissue microarray (TMA), and was shown to be overexpressed in cancer cells vs. adjacent normal tissue and associated with poor patient prognosis [151]. The overexpression of PNO1 in HT-29 and HCT-8 CRC cells prevented apoptosis and stimulated their proliferation in vitro. Moreover, the knock-down of PNO1 in HCT-116 and RKO cells significantly reduced tumor growth in vivo [151]. In this study, the link between the oncogenic effects of PNO1 and disturbed ribosome biogenesis was subsequently demonstrated by polysome profiling of rRNAs from PNO1-ablated HCT-116 cells, which indicated a significant decrease in the amount of 18S rRNA, 40S subunit, 60S subunit, and mature 80S ribosome [151]. The ablation of PNO1 also resulted in the reduction of global protein synthesis and restored p53 functionality [151]. Further exciting work should now clarify the translational mRNA targets regulated by high levels of PNO1/NOB1 expression in CRC cells and determine their clinical relevance.
Human U3 protein (UTP) 14a (hUTP14a) is a nucleolar protein associated with the U3 snoRNA and the DEAH-box RNA helicase DHX37 and is required for 18S rRNA processing and 40S subunit synthesis [158]. hUTP14a has been shown to participate in the formation of a nucleolar complex that inhibits MYC degradation, thereby promoting its activity during colorectal tumorigenesis [152]. In parallel, nucleolar hUTP14a binds to p53 and RB and stimulates their degradation [159]. It was reported that nucleolar hUTP14a is significantly overexpressed in CRC TMA sections compared to adjacent normal epithelia, and the co-overexpression of both MYC and hUTP14a is a marker of poor prognosis in CRC [152]. The formation of a stable complex between hUTP14a and MYC supports the proliferation of HCT-116 CRC cells, whereas suppression of hUTP14a inhibits HCT-116 cell proliferation in vitro and in vivo after skin implantation in immune-deficient NOD/SCID mice [152]. Further analysis of the effects of high hUTP14a expression on ribosome biogenesis and general translational regulation should determine whether hUTP14a is a potentially meaningful target in human CRC.
The Shwachman-Bodian-Diamond syndrome (SBDS) protein plays a dynamic structural and functional role in the late processing of the large 60S ribosomal precursor by association with the 28S rRNA, and is involved in the production of the mature 80S ribosome [160]. Using CRC TMA sections, it has been shown that SBDS is overexpressed in tumor cells compared to normal adjacent cells and that high SBDS expression is associated with an unfavorable prognosis [161]. The suppression of SBDS induced a significant p53-mediated decrease in HCT-116 cell growth and invasion [161], and further investigations may shed light on the link between SBDS expression and ribosome biogenesis dysregulation in colorectal tumorigenesis. Ribosome biogenesis processing factors that are upregulated and that could potentially be targeted in CRC are indicated in Figure 2.
Chemical Modifications of rRNA in CRC
rRNAs constitute the translational platform on which the mRNA decoding and peptidyl transferase activities are physically connected and functionally controlled [43]. Several types of base or nucleotide modifications accompany various late steps of eukaryotic rRNA biosynthesis and stabilize the three-dimensional (3D) structure of functional ribosomes [43]. However, evidence that altered chemical modifications affect a targeted set of translated mRNAs during development and disease has provided further insight into the impact of qualitative rRNA modifications on ribosome function [40].
The addition of a methyl group to the 2′-hydroxyl group of a ribose (2′-O-ribose methylation), catalyzed by fibrillarin (FBL) at 106 possible sites, and the isomerization of uridine to pseudouridine (Ψ), catalyzed by dyskerin (DKC1) at 97 possible sites, are the most frequent chemical modifications of the 18S, 5.8S, and 28S rRNAs [43]. Interestingly, the level of ribose methylation and uridine pseudouridylation at individual sites within the decoding and peptidyl transferase centers has been linked to major mRNA-specific translational defects that could drive tumorigenesis [40,43,162]. We have previously shown that the level of 2′-O-ribose methylation at some given sites is sensitive to variation in FBL expression and influences the preferential translation of oncogenic internal ribosome entry site (IRES)-containing mRNAs, such as that of the IGF-IR, in MCF-7 breast cancer and HCT-116 colorectal cancer cells [163]. The level of 2′-O-ribose methylation of human rRNAs in HCT-116 cells with suppressed FBL has been established, resulting in the identification of sites of 2′-O-ribose methylation vulnerability on the 18S, 5.8S, and 28S rRNAs that are strongly dependent on FBL activity [164]. This work provides a great opportunity to further study the impact of rRNA 2′-O-ribose methylation in experimental colorectal tumorigenesis and in human clinical CRC samples.
The C/D-box small nucleolar RNAs (SNORDs) are a conserved family of non-coding snoRNAs which guide, for example, the enzyme FBL to specific 2′-O-ribose methylation sites on rRNAs [165,166]. In human HeLa cells, SNORD16 is the snoRNA which directly interacts with the 18S rRNA and guides FBL to methylate the 2′-O-ribose at the 18S-Am484 site [167]. It was recently shown that SNORD16 is a molecular marker of human CRC and a driver of colorectal tumorigenesis [154]. SNORD16 overexpression is significantly correlated with age, cancer cell invasion, and patient history of colon polyps, and is associated with poor patient overall survival [154]. HCT-116 and SW-620 cells transduced with lentiviral-SNORD16 exhibited significant increases in cell proliferation, colony formation, and migratory and invasive capacities [154]. Although not determined in this study, the impact of SNORD16 on colorectal tumorigenesis is likely to be mediated through altered rRNA 2′-O-ribose methylation profiles and a translational reprogramming which could provide new targets for developing CRC therapies.
Base methylations represent another type of rRNA chemical modification which occurs at late stages of ribosome biogenesis and which is mostly involved in maintaining ribosome translational fidelity [43]. The human nucleolar enzyme NSUN5 is the methyltransferase that catalyzes cytosine methylation at the C5 position of residue C3782 of the 28S rRNA (m5C3782), and this chemical modification is necessary for stabilizing the peptidyl (P) transferase site [168]. Interestingly, the expression of NSUN5 is upregulated in human CRC and associated with disease progression [153]. Experiments using the HT29 and RKO CRC cell lines demonstrated that NSUN5 expression promoted cell proliferation by controlling the expression and activity of major cell cycle regulators in vitro as well as in vivo [153]. The alteration of the m5C3782 level in CRC samples was, however, not determined, nor was the translational profile mediated by overexpressed NSUN5 [153]. Future experiments with CRC cells overexpressing NSUN5 may provide important insights into the implication of rRNA base modifications in colorectal tumorigenesis.
Crucial evidence of the importance of rRNA modifications in CRC was recently unveiled by the discovery of the reduced frequency of a single nucleotide variation in the 18S rRNA, present in 46% of CRC samples from four independent large cohorts (~10,000 patients) compared to patient-matched normal epithelium (n = 708) [155]. The nucleotide alteration was found by screening changes in the average variant allele frequency on rRNAs and occurs at uridine U1248 of the 18S rRNA, which can carry a chemical modification, 1-methyl-3-(α-amino-α-carboxyl-propyl) pseudouridine (m1acp3Ψ); the reduced variation thus reflects a decrease in the 18S m1acp3Ψ U1248 level in CRC patients [155]. The ribosome biogenesis protein TSR3 is the enzyme that installs this nucleotide modification, which lies in the peptidyl transferase center [169]. Suppression of TSR3 in HCT-116 cells reduced the level of m1acp3Ψ to that found in CRC samples [155]. The reduced level of m1acp3Ψ is predicted to alter the structure of the P site and resulted in an enrichment of a proliferative and translational gene signature at the transcriptional or translational level that could drive colorectal tumorigenesis [155]. All these data show that subtle rRNA modifications originally thought to be mere structural elements of the ribosome can generate specific ribosomes with preferential translation of mRNAs coding for proliferative factors. Therefore, targeting the enzymes which catalyze these rRNA chemical modifications may represent a valuable therapeutic tool. The chemical modifications and associated enzymes that are distinctive of CRC are indicated in Figure 2.
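The variant-allele-frequency screen mentioned above reduces to simple read counting. The Python sketch below is a minimal illustration using hypothetical read counts (the study's actual pipeline and thresholds are not described here); as stated in the text, a drop in the apparent variant frequency at 18S U1248 is interpreted as loss of the m1acp3Ψ modification.

    # Minimal sketch of a variant-allele-frequency (VAF) screen such as the one
    # described above for 18S U1248. All counts are hypothetical; a real analysis
    # would derive them from aligned sequencing reads (e.g., a pileup).
    def vaf(alt_reads: int, total_reads: int) -> float:
        """Fraction of reads carrying a non-reference base at one rRNA position."""
        return alt_reads / total_reads if total_reads else 0.0

    # Paired tumor/normal pileup counts at the same rRNA coordinate (hypothetical).
    tumor = vaf(alt_reads=120, total_reads=1000)   # 0.12
    normal = vaf(alt_reads=310, total_reads=1000)  # 0.31

    # The modified base causes characteristic misincorporation during reverse
    # transcription, so a *lower* apparent VAF in tumor vs. matched normal
    # is read as a loss of the modification, as described in the text.
    if tumor < normal:
        print(f"candidate hypomodified site: tumor VAF {tumor:.2f} < normal VAF {normal:.2f}")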
Targeting Ribosome RNA Synthesis in Colorectal Cancer
The rationale for targeting ribosome biogenesis in cancer is based on experimental and clinical evidence showing that tumorigenesis is associated with quantitative increases in ribosome production and/or production of qualitatively altered ribosome species [44,45,47,48,52,[170][171][172][173]. These characteristics have formed the basis for the development of new drugs that disrupt the activation of rRNA synthesis by directly targeting the formation and activity of the RNA pol I transcriptional complex on rDNA [51,[174][175][176]. Effective ribosome biogenesis inhibition in cancer cells has gained considerable attention with the development of two inhibitors, CX-5461 and CX-3543, which selectively bind to rDNA-enriched G-quadruplex regions and halt ribosome production [51,175]. These new drugs that target rRNA transcription and induce cell cycle arrest could represent a novel approach in the treatment of human CRC. Compared to existing chemotherapies such as oxaliplatin, 5-fluorouracil, and camptothecin, which target rRNA production in an unselective manner, or actinomycin D, which binds GC-rich regions of rDNA [177,178], CX-5461 and CX-3543 appear to preferentially kill cancer cells or cancer stem cells with a highly activated ribosome biogenesis. It is also important to note that, in contrast to most other chemotherapeutic molecules used in CRC treatment, oxaliplatin efficacy depends more on the activation of nucleolar ribosome stress than on the induction of a DNA damage response (DDR), further indicating the relevance of targeting ribosome biogenesis in colorectal tumorigenesis [178].
In the p53-wild-type HCT-116 CRC cell line, CX-5461 was shown to inhibit RNA pol I transcriptional activity by disrupting the binding of the SL1/TIF-IB transcription factor to the rDNA promoter within the pre-initiation complex (PIC), without affecting general transcription and global protein synthesis, thereby promoting nucleolar stress, stabilization of p53, and cell death [179,180]. Interestingly, the cytotoxic effect of CX-5461 in HCT-116 CRC cells is potentiated by the cellular DDR induced by ionizing radiation treatment [181]. The recent demonstration in HCT-116 cells that CX-5461 induces prominent intracellular DNA damage through inhibition of topoisomerase II activity indicates that the DDR could be the major mechanism of CRC cell death induction [182]. Similarly, in the p53-mutant HT-29 and COLO-205 CRC cell lines, CX-5461 treatment induces apoptosis [179], but possibly through a mechanism that triggers replication stress and DDR activation, as reported in high-grade serous ovarian cancer [183]. CX-5461 is in phase I/II clinical trials for hematological cancers [184], but further investigations are necessary to understand the crosstalk between ribosome biogenesis inhibition and DDR activation induced by CX-5461 treatment of CRC cells.
The other small molecule which inhibits RNA pol I activity, CX-3543 (quarfloxin), is a fluoroquinolone derivative which interferes with the binding of nucleolin to rDNA G-quadruplex regions, thereby inhibiting RNA pol I-driven transcription [185]. CX-3543 induced in vitro apoptosis of various human CRC cell lines, including p53-mutant COLO-205, HCC-2998, HCT-15, and KM12 cells and p53-wild-type HCT-116 cells, and inhibited in vivo HCT-116 tumor xenograft growth [185,186]. CX-3543 has entered a phase II clinical trial in patients with low- to intermediate-grade neuroendocrine tumors [186,187]. Moreover, the demonstration that it also causes a strong inhibition of MYC expression in CRC cells [186] should stimulate more comprehensive work to assess its impact on colorectal tumorigenesis. It has also been shown that the death of HCT-116 and DLD1 colorectal cancer cells induced by CX-5461 and CX-3543 treatment in vitro and in tumor xenografts is mediated by the activation of a robust DNA damage response [181]. This type of genotoxic cell response indicates that CX-5461 and CX-3543 could be particularly potent in killing CRC cells with a defective homologous recombination pathway. This defect is generally due to mutations in DNA damage repair enzymes or in the BRCA1/2 genes and is frequently observed in the subgroup of CRC with MSI [10]. Patients with high-MSI CRC show a positive response to immunotherapeutic treatment [188], and it will be meaningful to investigate whether CX-5461 and CX-3543 potentiate immunotherapies in CRC.
Another molecule interfering with rDNA transcription is BMH-21. BMH-21 is a DNA intercalator which impairs RNA pol I activity by binding to rDNA GC-rich regions; it simultaneously disengages RNA pol I from rDNA chromatin and activates its proteasome-mediated degradation [188,189]. BMH-21 activates a rapid p53-dependent cytotoxic effect with little associated DNA damage in many human cancer cell lines, including HCT-116 CRC cells, and is also very potent in inhibiting HCT-116 xenograft growth in mice [190]. Very interestingly, it was recently reported that CRC patient-derived xenografts contain a sub-population of cancer stem cells characterized by high expression levels of the RNA pol I subunit A (POLR1A) and elevated biosynthetic capacities [191]. Moreover, POLR1A was shown to be one of the prerequisites for in vivo tumor growth [191]. CRC stem cells with high POLR1A expression were classified at the top of the tumor stem cell hierarchy, and intraperitoneal injection of BMH-21 induced a significant decrease in POLR1A-high stem cells and in tumor xenograft growth [191]. In addition, the FDA-approved antimalarial drug amodiaquine was recently shown to block rDNA transcription and the proliferation of various human CRC cell lines by a mechanism very close to the induction of ribosome biogenesis stress and cell death by BMH-21 [192]. Similarly, the natural plant-derived alkaloid haemanthamine was shown to specifically inhibit pre-rRNA processing, leading to the accumulation of the 47S pre-rRNA and impeding the formation of the mature 28S and 5.8S species [193]. Interestingly, haemanthamine was reported to trigger p53-associated nucleolar stress and apoptosis in HCT-116 CRC cells [193], arguing in favor of its use in CRC treatment. Collectively, these data indicate that inhibitors of ribosome biogenesis belonging to the CX-5461 family [194], BMH-21 and its derivatives [195], and plant alkaloids [193] hold great potential for CRC treatment, and evidence of their efficacy in phase I/II clinical trials for human CRC treatment is eagerly awaited.
Conclusions
Evidence that the ribosome biogenesis pathway is altered in CRC has markedly increased in recent years. Studies using cellular and animal models have largely contributed to establishing that the alterations leading to a quantitative increase in ribosome production are linked to colorectal tumorigenesis initiation and/or progression. At present, clinical studies have also highlighted several regulators of ribosome biogenesis as novel biomarkers of human CRC, reinforcing the role of ribosome biogenesis in CRC. However, while a few studies have initiated the long process of demonstrating that the ribosome biogenesis pathway is an innovative target in CRC management, pre-clinical studies in animal models with various chemical or natural inhibitors of ribosome biogenesis are now needed to determine their objective effectiveness and to address their potential benefit(s) for CRC patients. Moreover, research is ongoing in many laboratories worldwide to unravel new ribosome biogenesis inhibitors following various drug discovery strategies, from drug repurposing to the development of high-throughput screening. These strategies may identify molecules that target not only the increase in ribosome biogenesis observed in CRC cells but also the cancer-modified ribosomes, whose recent discovery owes to the tremendous progress made in obtaining high-resolution structural features of the ribosome. It is compelling that several molecules initially discovered for their potent anti-CRC effects (i.e., catalpol, calcimycin, flavonoid derivatives, oxaliplatin) in fact target ribosome biogenesis [196]. Thus, a new phase in CRC patient management can be envisioned, in which characterization of ribosome biogenesis pathways, together with qualitative analysis of rRNAs, will help to create personalized anticancer molecules with much lower genotoxicity.
"year": 2020,
"sha1": "6201120153736468ac90d858d2675a974022c33f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4409/9/11/2361/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dc9a1aecc7c51992831472624d3b5c9d4bd3476a",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Genomic and biological control of Sclerotinia sclerotiorum using an extracellular extract from Bacillus velezensis 20507
Introduction: Sclerotinia sclerotiorum is a known pathogen that harms crops and vegetables. Unfortunately, there is a lack of effective biological control measures for this pathogen. Bacillus velezensis 20507 has a strong antagonistic effect on S. sclerotiorum; however, the biological basis of its antifungal effect is not fully understood.

Methods: In this study, the broad-spectrum antagonistic activity of B. velezensis 20507 was investigated; the active antifungal ingredients of this strain were isolated, purified, and identified; and thermal stability experiments were carried out to explore its antifungal mechanism.

Results: The B. velezensis 20507 genome comprised one circular chromosome with a length of 4,043,341 bp, including 3,879 genes, 185 tandem repeats, 87 tRNAs, and 27 rRNAs. Comparative genomic analysis revealed that our sequenced strain had the closest genetic relationship with Bacillus velezensis (GenBank ID: NC_009725.2); however, there were significant differences in the positions of genes within the two genomes. B. velezensis 20507 is predicted to encode 12 secondary metabolites, including difficidin, macrolactin H, fengycin, surfactin, bacillibactin, bacillothiazole A-N, butirosin A/B, and bacillaene. B. velezensis 20507 produced antagonistic effects of varying strength on six plant pathogen strains: Exserohilum turcicum, Pyricularia oryzae, Fusarium graminearum, Sclerotinia sclerotiorum, Fusarium oxysporum, and Fusarium verticillioides. Acid precipitation followed by 80% methanol leaching is an effective method for isolating the antifungal component ME80 of B. velezensis 20507, which can damage the membranes of S. sclerotiorum hyphae and has good heat resistance. Based on high-performance liquid chromatography and mass spectrometry analyses, fengycin (C72H110N12O20) is believed to be the main active antifungal substance.

Discussion: This study provides new resources for the biological control of S. sclerotiorum in soybean and a theoretical basis for further clarification of the mechanism of action of B. velezensis 20507.
Introduction
Bacillus spp. are plant growth-promoting rhizobacteria (PGPR) that promote plant growth, absorb and utilize mineral nutrients, and inhibit harmful organisms. Bacillus spp. are also the most studied and applied group of biocontrol bacteria and are characterized by broad-spectrum efficiency, easy cultivation, stress tolerance, and storage tolerance. Secondary metabolites are important for the biocontrol activity of Bacillus spp. (Djordje et al., 2018; Santos et al., 2023; Zhang et al., 2023). It has been found that Bacillus spp. produce various secondary metabolites beneficial to plants, including lipopeptide compounds (Chen et al., 2009) synthesized by non-ribosomal peptide synthesis (NRPS), polyketide compounds synthesized by polyketide synthase (PKS) (Ruckert et al., 2011), and linear azol(in)e-containing peptides (LAP) (Baniulis, 2021), bacteriocins (Diep and Nes, 2002), thiopeptides (Bleich et al., 2015), and terpenes (Kontnik et al., 2008) synthesized by ribosomal peptide synthesis (RPS). Given the various beneficial secondary metabolites secreted by Bacillus spp., whole-genome sequencing is important for understanding and utilizing biocontrol strains.
Sclerotinia sclerotiorum causes a common disease of soybean crops that mainly produces stem rot and spreads quickly (Cheng et al., 2022; Liu J. et al., 2022). In severe cases, it can lead to crop failure (Chen et al., 2005). B. amyloliquefaciens CH-2 not only inhibits the growth of S. sclerotiorum but also inhibits the formation of sclerotia. When managing the disease during the crop production cycle, the main form of control is synthetic fungicides (Rocha et al., 2023). The biocontrol strain Bacillus velezensis was isolated in our laboratory. B. velezensis transports antifungal substances out of the cell and has a strong antagonistic effect on the pathogenic microorganism S. sclerotiorum in pot experiments. It can inhibit the expression of genes encoding ribosomal subunits of S. sclerotiorum, indicating its potential for biocontrol applications. Owing to the high degree of genetic conservation, analysis of the 16S rRNA gene cannot differentiate between B. velezensis and B. amyloliquefaciens (Chun et al., 2019; Wang et al., 2022). Therefore, we previously believed that this biocontrol bacterium was B. amyloliquefaciens. The antifungal substances produced by this strain are mainly secreted outside the cell. We found that this strain inhibited the expression of genes encoding ribosomal subunits of S. sclerotiorum, resulting in the inhibition of protein synthesis. As a biocontrol bacterium with potential application value, many of its characteristics need to be explored further, including its genomic sequence and annotation, potential metabolites, the breadth of its antifungal spectrum, and the precise active antifungal ingredients and their thermal stability. A deeper understanding of B. velezensis is crucial for its application in production.
In this study, we sequenced the entire genome of B. velezensis and predicted the types of secondary metabolites it produces based on the genetic information (NCBI accession no. PRJNA981422). Furthermore, purification of the extracellular products was carried out using various methods, such as ammonium sulfate precipitation, and research was conducted on the broad-spectrum properties and thermal stability of the antifungal components. This study provides a theoretical basis for the utilization of B. velezensis in practice.
Genome sequencing, assembly, and gene annotation
In total, 12,592,676 raw reads with a total length of 1,888,901,400 bp were obtained by Illumina sequencing. After filtering low-quality reads, 12,574,834 clean reads with a total length of 1,828,325,506 bp and 46.17% GC content were acquired. After sequence assembly, the genome of B. velezensis 20507 was found to comprise one circular chromosome with a length of 4,043,341 bp and 46.34% GC content, including 3,879 genes, 185 tandem repeats, 87 tRNAs, and 27 rRNAs (Figure 1). The main parameters of the genes in the genome were as follows: the total length of the genes was 3,582,372 bp, with an average length of 923 bp per gene. Genic and intergenic regions accounted for 88.60% and 11.40% of the genome, respectively.
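For orientation, a few quantities implied by these statistics but not stated explicitly (mean read length, sequencing depth, genic fraction) can be derived directly from the reported numbers; the short Python sketch below does so.

    # Back-of-the-envelope checks on the assembly statistics reported above.
    # All inputs are taken directly from the text; the derived values are
    # computed here for orientation only.
    raw_reads = 12_592_676
    raw_bases = 1_888_901_400
    clean_bases = 1_828_325_506
    genome_len = 4_043_341
    gene_total_len = 3_582_372

    mean_read_len = raw_bases / raw_reads          # ~150 bp, consistent with Illumina PE150
    depth = clean_bases / genome_len               # ~452x coverage
    genic_fraction = gene_total_len / genome_len   # ~0.886, matching the 88.60% in the text

    print(f"mean read length: {mean_read_len:.1f} bp")
    print(f"sequencing depth: {depth:.0f}x")
    print(f"genic fraction:   {genic_fraction:.2%}")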
Gene Ontology (GO) annotation analysis was conducted on the BLAST results using the Blast2GO software, and 3,059 genes were annotated. GO annotations covered three subcategories: biological processes (30 branches), cellular components (20 branches), and molecular functions (13 branches). Within biological processes, the three largest branches were cellular processes (GO:0009987), metabolic processes (GO:0008152), and responses to stimuli (GO:0050896) (Figure 2). Within cellular components, the three largest branches were cells (GO:0005623), cell parts (GO:0044464), and membranes (GO:0016020) (Figure 2). Within molecular functions, the three largest branches were catalytic activity (GO:0003824), binding (GO:0005488), and transporter activity (GO:0005215) (Figure 2). Among the 3,059 identified genes, 2,250 and 2,904 were annotated using the KEGG and COG functional databases, respectively. For the COG categories, the number of genes with unknown function (354) was the highest, followed by amino acid transport and metabolism (278), transcription (248), general function prediction only (240), and carbohydrate transport and metabolism (238); all other subcategories contained fewer than 200 genes (Figure 3A). The KEGG pathway category with the highest number of genes was global and overview maps (649), followed by carbohydrate metabolism (248), amino acid metabolism (206), metabolism of cofactors and vitamins (160), membrane transport (154), signal transduction (131), and energy metabolism (118); all other subcategories contained fewer than 100 genes (Figure 3B).
Average nucleotide identity (ANI) is an indicator of the phylogenetic relationship between two genomes at the nucleotide level. ANI is defined as the average base similarity between homologous fragments of two microbial genomes and is characterized by a high degree of discrimination between closely related species. The ANI between the JDF genome and the Bacillus velezensis (NC_009725) genome was 98%, suggesting high sequence similarity. Collinearity analysis of these two genomes was conducted using MCScan software, and the results confirmed that the sequences of the two genomes were highly similar, although there were significant differences in the positions of genes within the genome (Figure 4C).
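The Methods section below names FastANI as the tool behind this comparison. As a minimal sketch of how such a run looks, assuming FastANI is installed and using placeholder file names (not the authors' actual paths):

    # Minimal sketch of the ANI comparison described above, using the FastANI
    # tool named in the Methods. File names are placeholders; fastANI must be
    # on PATH. FastANI reports ANI as the mean identity of reciprocally
    # mapped genome fragments.
    import subprocess

    subprocess.run(
        ["fastANI",
         "-q", "B_velezensis_20507.fna",   # query assembly (placeholder name)
         "-r", "NC_009725.fna",            # reference genome (placeholder name)
         "-o", "ani_result.txt"],
        check=True,
    )

    # Output is tab-separated: query, reference, ANI, mapped fragments, total fragments.
    with open("ani_result.txt") as fh:
        query, ref, ani, mapped, total = fh.readline().strip().split("\t")
        print(f"ANI = {float(ani):.2f}% over {mapped}/{total} fragments")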
Metabolite prediction
Through online prediction using antiSMASH and alignment analysis with NCBI BLAST, it was found that B. velezensis 20507 encodes 12 secondary metabolite synthesis gene clusters. Compared with known gene clusters, the difficidin, fengycin, bacillaene, macrolactin H, bacilysin, bacillibactin, and bacillothiazole A-N gene clusters showed 100% similarity. The similarity of the gene clusters encoding butirosin and surfactin was lower, at 7% and 82%, respectively. In addition, B. velezensis 20507 may encode three unknown secondary metabolites, initially predicted to be one polyketide and two terpene compounds.
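As a rough sketch of how this prediction step can be reproduced, the antiSMASH tool named above can also be run locally from the command line; the input file name and option values here are illustrative assumptions, not the authors' exact invocation.

    # Minimal sketch of the secondary-metabolite cluster prediction described
    # above, using the antiSMASH command-line tool (it also runs as a web
    # service). Input file name and options are illustrative.
    import subprocess

    subprocess.run(
        ["antismash",
         "--taxon", "bacteria",            # use bacterial cluster-detection rules
         "--output-dir", "antismash_out",  # per-cluster GenBank files + HTML report
         "B_velezensis_20507.gbk"],        # annotated assembly (placeholder name)
        check=True,
    )
    # Each predicted cluster in the report carries a similarity percentage to its
    # closest known cluster (e.g., 100% for fengycin, 82% for surfactin here).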
Antagonistic effects of B. velezensis 20507 on six plant pathogenic fungi
A plate confrontation experiment was conducted to determine the relationship between B. velezensis 20507 and six pathogenic plant fungi (Figure 5): Exserohilum turcicum, Pyricularia oryzae, Fusarium graminearum, Sclerotinia sclerotiorum, Fusarium oxysporum, and Fusarium verticillioides. Among these, S. sclerotiorum and E. turcicum formed the smallest colonies, growing only in a small area in the center of the plate (Figures 5A1, A2, D1, D2). We therefore conclude that B. velezensis 20507 strongly inhibits the growth of S. sclerotiorum and E. turcicum. The colonies of P. oryzae (Figures 5B1, B2), F. graminearum (Figures 5C1, C2), F. oxysporum (Figures 5E1, E2), and F. verticillioides (Figures 5F1, F2) were relatively large, suggesting that B. velezensis 20507 could also effectively, though less strongly, inhibit the growth of these four fungi. The colony radii of all six pathogenic fungi in the antagonistic experiment were measured, and the inhibition rate was calculated. The results showed that the inhibition rate of B. velezensis 20507 against all six pathogens exceeded 60%. S. sclerotiorum and E. turcicum showed the highest inhibition rates of 91% and 79%, respectively, which were significantly higher than those of the other four pathogenic fungi (P < 0.05) (Figure 5G). These results showed that B. velezensis 20507 inhibited the growth of all six fungi to varying degrees.
Inhibitory effect of crude solution on the growth of S. sclerotiorum
Two methods, ammonium sulfate precipitation and methanol leaching, were used to purify the antimicrobial components in the fermentation broth of B. velezensis 20507. The differences in antimicrobial activity of the precipitates obtained under different ammonium sulfate saturation conditions and at different methanol leaching concentrations were evaluated. Within the ammonium sulfate saturation range of 20-40%, as ammonium sulfate saturation increased, the colony diameter of S. sclerotiorum decreased (Figures 6A-C). However, above 40% saturation, the diameter of the inhibition zone decreased with increasing ammonium sulfate saturation over the 50-70% range (Figures 6D-F). Consistent with this, the inhibition rate of AS40 against S. sclerotiorum was significantly higher than that of the other five treatments (P < 0.05) (Figure 6M). These results suggest that 40% ammonium sulfate saturation is most favorable for the precipitation of antimicrobial components.
The antifungal activity of the precipitates leached at different methanol concentrations was then evaluated. The results showed that the diameter of the S. sclerotiorum colony decreased as the methanol concentration increased over the range of 50-80% (Figures 6G-J). However, over the range of 90-100%, the diameter of the S. sclerotiorum colony increased with increasing methanol concentration (Figures 6K, L). Consistent with this, the inhibition rates of ME80 and ME90 against S. sclerotiorum were significantly higher than those of the other four treatments (P < 0.05) (Figure 6N). Among all six treatments, ME80 had the highest inhibition rate. Therefore, an 80% methanol concentration is most conducive to eluting the antifungal substances. In summary, we further purified and identified the antifungal substances using the precipitates obtained at 40% ammonium sulfate saturation (AS40) and by 80% methanol leaching (ME80).
Effect of ME80 treatment on the integrity of S. sclerotiorum cell membranes
In this study, two methods were used for the separation and purification of antimicrobial substances: ammonium sulfate precipitation, and acid precipitation followed by methanol leaching. The amount of antimicrobial substance obtained using the latter method was approximately ten times that obtained using the former. The antimicrobial effects of the two initial extracts were compared and found to be similar. Therefore, we focused on further analysis and testing of the antimicrobial substance (ME80) obtained using the latter method. ME80 aqueous solution was used to treat the mycelium of S. sclerotiorum, and PI staining was performed to observe damage to the cell membrane. PI can penetrate the cell membranes of dead cells and stain the nucleus red; however, PI cannot penetrate the membranes of living cells. After PI staining, most control mycelia of S. sclerotiorum were unstained, and only a few mycelia were stained red (Figures 7A1, A2). The red color was believed to be caused by physical damage to individual mycelia during the staining and washing processes, indicating that the control mycelial structure was generally intact. When S. sclerotiorum mycelia were treated with ME80 aqueous solution for 20 min, all mycelia were stained red with PI (Figures 7B1, B2), indicating that ME80 treatment damaged the S. sclerotiorum mycelium. The damage to the control and ME80-treated mycelia was further observed using scanning electron microscopy, and the results were consistent with those observed under fluorescence microscopy (Figures 7C1, C2). In summary, ME80 treatment can damage S. sclerotiorum; the ME80 extract mixture therefore contains antimicrobial components that cause cell membrane leakage in S. sclerotiorum.
Isolation and identification of antifungal components
The composition of ME80 was analyzed using HPLC. After separation on a chromatographic column, two components were obtained, with retention times of 14.47 and 21.22 min, respectively. The peak area corresponding to the first component was 2.33 times that of the second (Figure 8A). We pooled the collection fractions from 12 to 15 min to obtain component 1 and repeated the HPLC analysis; the results confirmed that pure component 1 had been obtained (Figure 8B). We pooled the collection fractions from 21 to 23 min to obtain component 2. Antagonistic experiments were conducted with components 1 and 2 against S. sclerotiorum, confirming that component 1 indeed has antifungal activity (Figure 8C). Moreover, the chemical stability of component 1 is remarkably good: after high-pressure sterilization at 121 °C, its antifungal activity did not decrease at all (Figure 8C). Triple TOF-MS analysis was performed on component 1, and the mass-to-charge ratio of component 1 was consistent with fengycin (C72H110N12O20), identified as the main active antifungal substance (see Discussion).

[Figure 5. Inhibitory effect of the antagonistic bacterium Bacillus velezensis 20507 on the growth of six plant pathogens. For (A2,B2,C2,D2,E2,F2), the plant pathogenic fungi were inoculated in the center of the Petri dish, and the biocontrol bacteria were inoculated at the four corners around the pathogen.]
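For orientation, the mass implied by the molecular formula reported for fengycin can be checked in a few lines of Python using standard atomic weights; the experimental m/z value itself is not given in the text, so this only indicates the mass range a singly protonated ion would fall in.

    # Sanity check on the fengycin formula C72H110N12O20 reported for component 1.
    # Standard average atomic weights; the measured m/z is not stated in the text.
    ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
    FORMULA = {"C": 72, "H": 110, "N": 12, "O": 20}

    neutral_mass = sum(ATOMIC_MASS[el] * n for el, n in FORMULA.items())
    mh_plus = neutral_mass + 1.007  # approximate [M+H]+ for a singly charged ion

    print(f"average neutral mass: {neutral_mass:.1f} Da")  # ~1463.7 Da
    print(f"approx. [M+H]+ m/z:   {mh_plus:.1f}")          # ~1464.7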
Discussion
Through a series of analyses, including genome sequencing, prediction of secondary metabolites, precipitation of antifungal substances from the fermentation broth, analysis of antifungal activity, HPLC purification, and mass spectrometric identification of antifungal components, the results suggested that the main antifungal component of Bacillus velezensis 20507 is fengycin. Fengycin cyclic lipopeptides comprise a series of homologs (Honma et al., 2012) and are broad-spectrum antifungal agents that are particularly effective against filamentous fungi (Vanittankom et al., 1986; Koumoutsi et al., 2004; Ongena and Jacques, 2008; Villegas-Escobar et al., 2013). They have antagonistic effects on pathogenic fungi of rapeseed and wheat and are recommended for use in agriculture (Ramarathnam et al., 2007). Fengycin is synthesized by five non-ribosomal fengycin synthetases: FenC, FenD, FenE, FenA, and FenB (Devine and Hancock, 2002; Zasloff, 2002). Fengycins exert antifungal effects by disrupting the cell membrane and damaging the cell structure, and the selectivity of this function is related to the composition of fungal cell membranes. The comparative genomic results indicated that the strain we isolated is Bacillus velezensis, with 98% sequence similarity to the reported strain Bacillus velezensis (NC_009725), although with significant differences in gene arrangement order. Through antiSMASH and alignment analysis with NCBI BLAST, it was found that strain Bacillus velezensis 20507 may encode 12 secondary metabolite synthesis gene clusters, including all genes of the fengycin synthetase family: FenC, FenD, FenE, FenA, and FenB (Supplementary Table 1). Some studies have suggested that the components related to fengycin synthetase are arranged in a modular manner, in the order FenC-FenD-FenE-FenA-FenB, and exhibit collinearity with the arrangement of amino acid residues in fengycins (Wu et al., 2007). In this study, tig00000001_pilon_468, tig00000001_pilon_469, tig00000001_pilon_470, tig00000001_pilon_471, and tig00000001_pilon_472 sequentially encode FenA, FenB, FenC, FenD, and FenE. This implies that there is a significant difference in the amino acid residue arrangement between the fengycins of our strain and the reported Bacillus velezensis FZB42 fengycins (Hanif et al., 2019). When fengycins bind to the fungal cell membrane, they form large aggregates, disrupting the normal ordering of phospholipid molecules in the cell membrane and causing cytoplasmic efflux, resulting in cell death (Sur et al., 2018). Similar to other lipopeptide antibiotics, fengycins exhibit broad-spectrum antifungal activity, low toxicity, and a low propensity for resistance development. They are a new type of antibiotic with developmental potential in medical, agricultural, and animal husbandry applications (Medeot et al., 2020). Our results indicate that Bacillus velezensis 20507 can also cause cell membrane damage, which is consistent with a previous study (Sur et al., 2018). Generally, antibiotics are not resistant to high temperatures: when added to culture media for plant or microbial cultivation, they must be filter-sterilized and added only after the medium has cooled. Astonishingly, the fengycins produced by Bacillus velezensis 20507 retained antifungal activity after high-temperature sterilization at 121 °C. This characteristic is far superior to that of ordinary antibiotics, and it is believed that this ingredient has broad
application prospects in the prevention and control of plant pathogens, in food additives, and even in the medical field.

[Figure caption: Triple TOF-MS/MS analysis of the main antifungal component of Bacillus velezensis 20507. Component 1 of ME80 was used as the sample; the liquid chromatography collection fractions from 12 to 15 min were pooled to obtain component 1.]
Experimental strain
The experimental B. velezensis strain was isolated and purified in our laboratory and is now stored at the China General Microbiological Culture Collection Center under strain number 20507. Six plant pathogenic fungi were used to test the antifungal substances of B. velezensis 20507: Exserohilum turcicum, Pyricularia oryzae, Fusarium graminearum, Sclerotinia sclerotiorum, Fusarium oxysporum, and Fusarium verticillioides. The plant pathogens were isolated and preserved in our laboratory.
Genome sequencing
Bacillus velezensis 20507 was cultured in LB medium at 37 °C and 200 r/min for 12 h. After centrifugation at 5,000 × g for 10 min at 4 °C, the bacteria were collected and total genomic DNA was extracted. Genomic DNA quality was checked using 1% agarose gel electrophoresis. High-purity DNA samples were sent to Shanghai Yuanshen Biomedical Technology Co., Ltd. for sequencing analysis. First, a Covaris M220 Focused Ultrasonicator (Covaris, Inc.) was used for genomic DNA fragmentation (300-500 bp). Then, the TruSeq DNA Sample Prep Kit (Illumina Inc., San Diego, CA, USA) was used to construct a sequencing library. Bridge PCR amplification of the sequencing library was performed using the TruSeq PE Cluster Kit v3-cBot-HS (Illumina). Finally, the bridge PCR amplification products were processed using the TruSeq SBS Kit v3-HS (200 cycles) (Illumina), followed by sequencing on the Illumina NovaSeq 6000 platform.
Comparative genomics analysis
(1) Phylogenetic analysis: After downloading protein sequences from 15 species, homologous gene analysis was performed using OrthoFinder software (Emms and Kelly, 2019). To avoid interference from paralogous proteins, homologous genes for which all 15 species had a single copy were selected for multiple sequence alignment, and a single-copy gene matrix was constructed using MUSCLE v3.7 software, followed by construction of a species phylogenetic tree using RAxML software (a sketch of this pipeline is given after this list).
(2) Gene family analysis: Based on the above phylogenetic tree, four closely related species were identified and selected for further gene family analysis: Bacillus velezensis, Bacillus amyloliquefaciens, Bacillus subtilis subsp. subtilis str. 168, and Bacillus spizizenii. OrthoFinder software was used to classify the predicted protein sequences of the sequenced strain and the protein sequences of the reference genomes into families. The gene families were then subjected to further analysis, yielding information including the gene families unique to each strain, the gene families common to all strains, and the single-copy gene families of each strain. Finally, a Venn diagram was constructed from the gene family statistics. (3) Collinearity analysis: Genome average nucleotide identity analysis between our sequenced genome and Bacillus velezensis (NC_009725.2) was performed using FastANI software. MCScan software was used to draw collinearity diagrams based on the collinearity relationships.
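The following is a skeleton of the phylogenomic pipeline described in step (1), chaining the three tools named above; paths, file names, and option values are illustrative assumptions, not the authors' exact command lines.

    # Skeleton of the phylogenomic pipeline described above
    # (OrthoFinder -> MUSCLE -> RAxML). Paths and options are illustrative.
    import subprocess

    # 1) Infer orthogroups from a directory of per-species protein FASTA files.
    subprocess.run(["orthofinder", "-f", "proteomes/"], check=True)

    # 2) Align each single-copy orthogroup (MUSCLE v3 command-line syntax).
    subprocess.run(["muscle", "-in", "OG0000001.fa", "-out", "OG0000001.aln"], check=True)

    # 3) After concatenating the alignments into a supermatrix, build a
    #    maximum-likelihood species tree (the substitution model is an assumption).
    subprocess.run(
        ["raxmlHPC", "-s", "supermatrix.phy", "-n", "species_tree",
         "-m", "PROTGAMMAJTT", "-p", "12345"],
        check=True,
    )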
Plate confrontation experiment
To determine whether B. velezensis 20507 has broad antifungal effects, plate confrontation experiments were conducted. Six plant pathogens (Exserohilum turcicum, Pyricularia oryzae, Fusarium graminearum, Sclerotinia sclerotiorum, Fusarium oxysporum, and Fusarium verticillioides) and the biocontrol bacterium B. velezensis 20507 were cultured and activated at 28 °C for 5 days on PDA medium. Five-millimeter plugs of plant pathogens were taken from actively growing cultures and inoculated into the center of the PDA plate, followed by inoculation of B. velezensis 20507 (1 × 10^9 CFU/mL, 20 µL) at four corners around the pathogen at a distance of 3 cm. Plant pathogen cultures without B. velezensis 20507 were used as blank controls. Three biological replicates were used for each control and antagonistic plant-pathogen culture. After 5 days of culture at 28 °C, the growth of the colonies was photographed and the inhibition rate was calculated using the following formula: Inhibition rate (%) = (control colony radius − treatment colony radius) / control colony radius × 100%.
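The inhibition-rate formula above translates directly into code; the sketch below uses hypothetical radii (no raw measurements are reported in the text) chosen to reproduce the ~91% figure quoted for S. sclerotiorum.

    # Direct transcription of the inhibition-rate formula above; radii in mm.
    # The example values are hypothetical, not measurements from the paper.
    def inhibition_rate(control_radius_mm: float, treatment_radius_mm: float) -> float:
        """Percent growth inhibition of a pathogen colony relative to the control."""
        return (control_radius_mm - treatment_radius_mm) / control_radius_mm * 100.0

    print(inhibition_rate(40.0, 3.6))  # 91.0 -> the level reported for S. sclerotiorum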
Purification and activity detection of antifungal components
Bacillus velezensis 20507 was cultured in LB medium at 37 °C and 200 r/min for 48 h. After centrifugation at 5,000 × g for 10 min at 4 °C, the fermentation broth was used for antifungal substance extraction. Antifungal substances in the fermentation broth were purified using two methods: (1) ammonium sulfate precipitation and (2) acid precipitation followed by methanol leaching. (1) Approximately 500 mL of fermentation liquid was poured into a 1 L conical flask, and solid ammonium sulfate was slowly added to the flask until the ammonium sulfate saturation reached 10%. The solution was then stirred overnight on a magnetic stirrer. Centrifugation was performed the next day at 5,200 × g for 20 min at 4 °C, and the precipitate and supernatant were collected separately. More solid ammonium sulfate was added to the supernatant until the saturation reached 20%. Stirring and centrifugation were then repeated to obtain crude protein from the fermentation broth at ammonium sulfate saturations of 20, 30, 40, 50, 60, and 70%. The precipitates were dissolved in 25 mmol/L Tris-HCl solution at pH 8.0, yielding crude protein solutions covering 10-100% ammonium sulfate saturation of the fermentation broth. The crude protein solutions were dialyzed for 2 days against 25 mmol/L Tris-HCl in a dialysis bag to remove ammonium sulfate. (2) Approximately 500 mL of fermentation broth was poured into a 1 L conical flask, and the pH was adjusted to 1.9 with concentrated hydrochloric acid. The fermentation broth was then placed in a 4 °C refrigerator overnight. Centrifugation was performed the next day at 5,200 × g for 20 min at 4 °C to collect the sediment (Zhang and Sun, 2018), which was dried in a 60 °C oven. The obtained solid matter was ground into a powder. Then, 250 mg of the powder was dissolved in 50 mL ddH2O to obtain a crude extract solution. A C18 solid-phase extraction column was activated sequentially with chromatography-grade methanol and ddH2O, and 30 mL of the solution was passed through the column. The C18 column was then rinsed with 50, 60, 70, 80, 90, and 100% methanol, and each eluent was collected. A rotary evaporator was used to remove water and methanol from the eluents at 40 °C. The solid matter was dissolved in 4 mL of 25 mmol/L Tris-HCl. The agar well diffusion method was used to determine the antagonistic activity of B. velezensis 20507 against S. sclerotiorum.
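The stepwise ammonium sulfate cuts described above require knowing how much solid salt to add per litre to move between saturation levels. The sketch below uses a commonly cited approximation for 20 °C (not taken from this paper), so the exact amounts the authors used may differ.

    # Grams of solid (NH4)2SO4 to add per litre to move from s1% to s2% saturation,
    # using a standard approximation valid near 20 degrees C (an assumption here;
    # the paper does not state how the additions were calculated).
    def ammonium_sulfate_grams_per_litre(s1_percent: float, s2_percent: float) -> float:
        return 533.0 * (s2_percent - s1_percent) / (100.0 - 0.3 * s2_percent)

    # Example: the 30% -> 40% step used when collecting the AS40 fraction.
    print(f"{ammonium_sulfate_grams_per_litre(30, 40):.0f} g/L")  # ~61 g/L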
The destructive effect of antifungal substances on cell membranes
Five-millimeter plugs of S. sclerotiorum were obtained from an actively growing culture and inoculated into 6 cm diameter Petri dishes containing PDA medium. After 7 days of culture, 20 five-millimeter plugs were taken from the PDA medium, inoculated into 500 mL of potato dextrose broth (PDB), and cultured on a shaker for 48 h at 28 °C and 180 rpm. During shaking cultivation, the mycelia of S. sclerotiorum intertwined to form mycelium balls approximately 0.6 cm in diameter. Based on gradient pre-experiments, a small amount of mycelium was taken from a mycelium ball using needle-nose pliers, placed in 100 µg/mL ME80 aqueous solution for 20 min, and rinsed twice with PBS buffer. Mycelium soaked in distilled water for 20 min was used as the control. The fluorescent dye propidium iodide (PI, 2.5 µg/mL) dissolved in PBS was used to stain the mycelia, followed by two rinses with PBS to remove unbound PI (Liu W. et al., 2022). The mycelia were observed under a Nikon Ti-S inverted fluorescence microscope at an excitation wavelength of 535 nm.
Purification and identification of antifungal substances
Full-wavelength scanning of the antifungal substances in ME80 was performed using high-performance liquid chromatography (HPLC) to obtain the UV absorption peaks of each antifungal substance. The mobile phase was pure water, with a flow rate of 1 mL/min, a detection wavelength of 203 nm, and an injection volume of 50 µL. Based on the peak-time chromatogram, the eluate collected from 12 to 15 min after the substance started to elute was pooled to obtain component 1, and the eluate collected from 21 to 23 min was pooled to obtain component 2. After freeze-drying, components 1 and 2 were dissolved in ddH2O for antagonistic activity testing. Component 1 was subjected to mass spectrometry, which was commissioned to the Scientific Compass Analysis and Testing Center (Hangzhou, China).
Thermal stability evaluation
Potato dextrose agar (PDA) medium, sterilized at 121 °C for 20 min, was prepared for evaluating the thermal stability of components 1 and 2 isolated from B. velezensis. Five-millimeter plugs of S. sclerotiorum, taken from an actively growing culture, were inoculated at the center of the PDA medium and cultured for 2 days. Components 1 and 2 were dissolved in ddH2O to a final concentration of 100 mg/L. Each component received two treatments: heat treatment (HT) and ordinary temperature treatment (CK). For HT, the test substance was treated at 121 °C for 20 min, the same conditions used to sterilize the PDA medium. A punch was then used to drill four holes with a diameter of 6 mm around the inoculation site in the culture medium, and 50 µL of 1CK, 1HT, 2CK, and 2HT was added to one of the four holes each. The cultures were maintained in a 25 °C incubator and observed to determine whether the heat treatment had an impact on components 1 and 2.
FIGURE 1
FIGURE 1 Genomic circle diagram of Bacillus velezensis 20507. The outermost circle indicates genome size, with each tick representing 0.5 Mb. The second and third circles represent CDS on the positive and negative strands, with different colors indicating the COG functional classifications of the CDS. The fourth circle contains rRNA and tRNA. The fifth circle shows the GC content. The innermost circle is the GC skew value.
FIGURE 3
FIGURE 3 Cluster of orthologous groups of proteins [COG, (A)] and Kyoto Encyclopedia of Genes and Genomes (B) annotation of the Bacillus velezensis 20507 genome.
FIGURE 4
FIGURE 4 Comparative genomics analysis of Bacillus velezensis 20507. (A) Phylogenetic analysis of Bacillus velezensis 20507. (B) Venn diagram based on gene family analysis of Bacillus velezensis 20507 and four closely related species. (C) Collinearity analysis at the gene level. The lines in the figure represent the positional connections of homologous genes between two species; the colors carry no specific meaning. Colored areas have a span greater than 100 in the contiguous region. JDF, Bacillus velezensis 20507; NC_009725.2, Bacillus velezensis; NZ_CP082278.1, Bacillus amyloliquefaciens; NZ_CP019663.1, Bacillus subtilis subsp. subtilis str. 168; NC_016047.1, Bacillus spizizenii.
1463.8, corresponding to a molecular weight of 1463.8 Da, which was derived from the molecular weight of fengycin C72H110N12O20 (Yu et al., 2024; Figure 9).
FIGURE 7
FIGURE 7 Effect of the 80% methanol leaching treatment on the integrity of the S. sclerotiorum cell membrane. (A1) Control hyphae of S. sclerotiorum observed with a fluorescence microscope under visible light. (A2) Control hyphae observed under fluorescence. (B1) Treated hyphae observed under visible light. (B2) Treated hyphae observed under fluorescence. (C1) Control hyphae observed with a scanning electron microscope. (C2) Treated hyphae observed with a scanning electron microscope. White arrows indicate damaged hyphae. Control: hyphae of S. sclerotiorum treated with distilled water for 20 min. Treatment: hyphae of S. sclerotiorum treated with 100 µg/mL ME80 aqueous solution for 20 min. The excitation wavelength was 535 nm.
TABLE 1
Classification and statistics of gene families.
NC_009725.2, Bacillus velezensis; NZ_CP082278.1, Bacillus amyloliquefaciens; NZ_CP019663.1, Bacillus subtilis subsp. subtilis str. 168; NC_016047.1, Bacillus spizizenii. Number of genes: total number of genes in the strain. Number of genes in orthogroups: number of genes assigned to a gene family. Number of unassigned genes: number of genes that could not be clustered with other genes. Percentage of genes in orthogroups: proportion of the strain's genes assigned to gene families. Percentage of unassigned genes: proportion of the strain's genes that could not be clustered with other genes. Number of orthogroups containing species: number of gene families represented in the strain. Percentage of orthogroups containing species: proportion of gene families represented in the strain. Number of species-specific orthogroups: number of gene families specific to the strain. Number of genes in species-specific orthogroups: number of genes in the strain's specific gene families. Percentage of genes in species-specific orthogroups: proportion of genes in the strain's specific gene families.
TABLE 2
Predicted secondary metabolite synthesis gene cluster in genome of Bacillus velezensis 20507.
nine gene clusters had similar known clusters and three gene clusters had no similar known clusters (Table 2). The total length of the 12 secondary metabolite synthesis gene clusters was 735,348 bp (Table 2), accounting for 18.19% of the B. velezensis 20507 genome. B. velezensis 20507 was predicted to encode nine secondary metabolites, including difficidin and macrolactin H synthesized via the polyketide pathway; fengycin, surfactin, bacillibactin, and bacillothiazol A-N synthesized via the NRP pathway; butirosin A/butirosin B synthesized via the saccharide pathway; and bacillaene synthesized via the combined polyketide + NRP pathway. | 2024-03-31T15:53:51.856Z | 2024-03-26T00:00:00.000 | {
"year": 2024,
"sha1": "e2a39ff67120ca17bce769dcc912f1d12aefdc2b",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/journals/microbiology/articles/10.3389/fmicb.2024.1385067/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "241262f911bc48583f7acceb7aaeb4b000b3e6e4",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
103454049 | pes2o/s2orc | v3-fos-license | Flow around a confined cylinder: LES and PIV study
We study the flow over a cylinder placed between two parallel rigid walls using large-eddy simulation and particle image velocimetry. The Reynolds number based on the inflow velocity and the diameter of the cylinder is 3750, corresponding to the subcritical regime with laminar separation. Three-dimensional visualization shows the presence of a horseshoe vortex system upstream of the cylinder. The comparison of time-averaged velocity fields and fluctuations shows good agreement between simulations and experiments. Spectral analysis suggests the presence of low-frequency modulations of the recirculating bubble.
Introduction
Flows over obstacles in a duct are common in many engineering applications such as cooling systems, bridge piers, heat exchangers, building sections, and junctions in wing-body and turbine blade-rotor systems, among others. In such configurations a horseshoe vortex system appears upstream of the bluff body, increasing the local shear stress and heat transfer [1], while the flow is characterized by periodic shedding of large-scale vortices behind the body that form the Kármán vortex street. Low-frequency modulations of the recirculating zone have been detected for various configurations such as a cylinder [2,3], disk and sphere [4], prism [5], and bullet [6], among others. The corresponding frequency is typically 10-100 times lower than the main vortex-shedding frequency. In the present work we consider the flow over a confined cylinder in a narrow rectangular duct to investigate the effect of the walls on the dynamics of the recirculation bubble.
Computational and experimental details
We study a water flow over a circular cylinder fixed perpendicular to a pair of side walls at the Reynolds number Re = 3750 based on the bulk inflow velocity U_b and cylinder diameter D. The inflow velocity distribution is a steady laminar parabolic profile. The distance between the narrow parallel walls is H = 0.4 D. The case is studied using numerical simulations and experiments described below. We perform large-eddy simulations (LES) using the unstructured finite-volume computational code T-FlowS. The filtered Navier-Stokes and continuity equations for incompressible fluid are closed with the dynamic Smagorinsky subgrid-scale model. The spatial discretization uses a second-order central-difference scheme, whereas for time marching we use a fully implicit three-level scheme. Velocity and pressure are coupled with the SIMPLE algorithm. The computational domain shown in Fig. 1 is a box of size x × y × z = 29D × 20D × H, where x, y, z denote the streamwise, spanwise, and wall-normal directions. The computations were performed on two meshes with 8.7 × 10⁶ and 16.6 × 10⁶ cells, respectively, with no significant differences in the results. Both meshes satisfy wall-resolved LES criteria. In particular, even the 'coarse' mesh corresponds to high resolution, since the first cell near the cylinder did not exceed the following limits: Δr⁺ < 1, (RΔφ)⁺ < 8, and Δz⁺ < 4, where '+' denotes wall units and R = D/2. The total computational time was around 10³ D/U_b with a nondimensional timestep of 2.5 × 10⁻³.
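To make the wall-unit criterion concrete, the first-cell height in wall units can be estimated from the Reynolds number and an assumed skin-friction coefficient: with u_tau = U_b sqrt(Cf/2) and nu = U_b D / Re, the spacing reduces to y+ = (dy/D) Re sqrt(Cf/2). The following is a minimal sketch; Cf = 0.01 and the 0.004 D spacing are illustrative assumptions, not values reported in the paper.

def y_plus(dy_over_D, Re=3750.0, Cf=0.01):
    # Spacing in wall units: y+ = (dy/D) * Re * sqrt(Cf/2), which follows
    # from u_tau = U_b*sqrt(Cf/2) and nu = U_b*D/Re. Cf is an assumed
    # order-of-magnitude skin-friction coefficient, not a value from the study.
    return dy_over_D * Re * (Cf / 2.0) ** 0.5

# E.g., a first cell of 0.004 D gives y+ ~ 1, consistent with the dr+ < 1 limit.
print(round(y_plus(0.004), 2))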
The experiments were performed in a slot channel with a length and width of 38D and 20D, respectively, where D = 10 mm. To provide a steady velocity distribution close to parabolic at the inflow, the flow passed through a set of two honeycombs. Velocity fields were measured using the particle image velocimetry (PIV) technique. The system consists of a digital PCO camera (1024 × 1280 pix, 500 Hz maximum frame rate) and a dual-cavity Nd:YAG laser (1000 Hz maximum rate, 10 mJ maximum pulse energy). The camera was positioned perpendicular to the main channel. The thickness of the laser sheet was 0.7 mm. PIV measurements were performed in a 2D × 2D region behind the cylinder. The averaged characteristics were calculated using 1000 instantaneous velocity fields. The spatial resolution was estimated to be 0.3 mm.
Results
The flow regime at this relatively low Reynolds number corresponds to the subcritical one, with separation of the laminar boundary layer and subsequent turbulization of the shear layer. A highly three-dimensional flow appears in the near-wake region within the recirculating bubble due to the bounding narrow walls. The horseshoe vortices decay while interacting with the shear-layer turbulence (Fig. 1). Further downstream the wake becomes fully developed. Figures 2 and 3 compare the time-averaged velocity fields and fluctuations obtained with LES and PIV, which show good agreement.
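The kind of spectral analysis referred to here can be illustrated with a Welch periodogram of a probe signal. The series below is synthetic, with assumed Strouhal numbers St = 0.2 for the Kármán shedding and St = 0.005 for the slow modulation of the bubble, chosen only to mimic the reported scale separation; none of these numbers come from the study itself.

import numpy as np
from scipy.signal import welch

fs = 10.0                                # samples per convective time D/U_b
t = np.arange(0.0, 10000.0, 1.0 / fs)    # long record, needed to resolve St ~ 0.005
St_shed, St_mod = 0.2, 0.005             # assumed Strouhal numbers (illustrative)
u = (np.sin(2 * np.pi * St_shed * t)
     + 0.3 * np.sin(2 * np.pi * St_mod * t)
     + 0.1 * np.random.default_rng(0).standard_normal(t.size))

# Long segments give a frequency resolution of ~6e-4 in St, enough to separate
# the shedding peak from the low-frequency modulation peak.
f, Pxx = welch(u, fs=fs, nperseg=16384)
print("dominant peak at St =", round(f[np.argmax(Pxx)], 3))   # ~0.2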
Discussion
We performed LES and PIV of the flow over a confined cylinder in a narrow duct at Re = 3750. Spectral analysis suggests the presence of low-frequency modulations of the recirculating bubble; this will be the topic of a future study. Another open issue is the effect of the bounding walls on the developed wake. Our observations [7,8] in confined jets revealed the existence of streamwise meandering vortices that influence the heat transfer across the channel. It is expected that a similar phenomenon should be present in a confined wake flow. | 2019-04-09T13:02:55.865Z | 2017-07-10T00:00:00.000 | {
"year": 2017,
"sha1": "3470f26fe45eb077208f85d4331db6892afd505f",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2017/29/matecconf_sts2017_02010.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "af9fdaeebf455cb9c513c9791935285e69b0737f",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": []
} |
237875105 | pes2o/s2orc | v3-fos-license | A Designed Eco-Art and Place-Based Curriculum Encouraging Students’ Empathy for the Environment
Environmental art education is gaining importance in schools as arts education begins to acquire a more significant role in environmental education. This emerging field of study is an interdisciplinary endeavor centered on environmental education and visual art education, and it provides a means of making students aware of environmental issues. It has been suggested that students should form a relationship with nature before being asked to conserve the environment, so that they become connected to nature. This abstract focuses on teaching and learning through the arts, a pedagogical approach in which students discuss the challenging aspects of environmental issues. The aim of this study is to encourage students to act as protectors of their environment through an eco-art, place-based curriculum. The pedagogies in this paper provide educators with a framework for developing environmental art education lessons and curricula. This experimental study was planned to gather data from interviews and observation of students and by having the students participate in nature-related activities. The findings show that students were willing to let go of their preconceptions and formulate better ecological perspectives. On the other hand, a few students experienced some frustration during the program and its activities. Students gave feedback on the program in positive terms, such as "fun", "interesting", and "cool", to express their experience of the class activities.
Introduction
Everyone is affected by environmental issues. News reports describe different impacts on the environment, including water pollution, accelerating global warming, increasing amounts of trash in the sea, and rising sea levels that threaten some islands. Though these warnings are raised in a global context, we can still experience some of these problems in our own society. When individuals disregard their responsibility to nature, their cumulative effect may affect the entire world, and everyone suffers as a result. Researchers have stated that this problem emerged in the late 20th century [1,2]. They also mentioned that insufficient information and a lack of direct contact with the environment have become major issues. As a result, humans are causing deforestation, excess consumption, and pollution that harm the environment. The reasons may be the absence of environmental education and of empathy for the environment [3]. Empathy should be built into school education; researchers have argued that the absence of eco-art education in schools has contributed to this destruction by producing adults with no empathy towards the environment.
In the present context, it seems that people do not want to learn about the environment or the earth because they have no experience of it. However, Louise Chawla, an environmental psychologist, has argued that learning through one's own experience is better than learning from others when it comes to environmental education [4]. She also described how place attachments and personal relationships, both of which are forms of outdoor learning, can help to protect places and people. This opportunity offers a valuable platform for children to learn empathy for the environment and for all living things. Additionally, it helps to raise students' environmental awareness.
Empathy, which simply means "to feel within", originated from the German concept "Einfühlung", and denotes an individual's response to an object or living being [5,6]. The concept was later extended to describe aesthetic experience. This strong bond makes art education an essential element in developing empathy towards environmental concerns. As art educators, we should increase students' empathy by encouraging them to engage with their artwork. Forbes, for example, described her connection with a plant after a long period of observation and noted its positive effects, explaining that the observation had increased her empathy towards nature [7]. Additionally, artists raise concerns regarding nature, society, the earth, and the environment that reflect our relationship with the world. Their work helps make the ecological relationship more meaningful and increases awareness of nature. According to researchers, children should be exposed to nature on their own; Hull stated that students should be given the independence to experience it and respond to it [8]. Involvement and experience are central to building any relationship, and the same concept applies to the environment.
However, environmental education should be incorporated into school systems to enhance this relationship. Place-based education is considered ideal for reshaping education [9], because it enables students to learn through experience, and its student-centered approach makes it easy to communicate with students. The education system can create connections between the environment and students' real-life experiences, which are full of emotions. The feelings and attitudes in those experiences help students build a relationship and prepare them to protect the earth and the community. In addition, a place-based approach is an educational strategy with many advantages, such as the exploration of nature and aesthetic experiences [10]. It serves as a learning strategy through which youngsters study the major elements of eco-art concepts, which comprise notions such as interdependence, preservation, and sustainability. Educational researchers promote the integration of place-based education and eco-art education to enable empathetic understanding and address ecological issues [11].
Leaders should formulate educational proposals that promote student engagement in order to build a community of sustainability. This requires schools to enable students to maintain sustainable lifestyles. In Mauritius, the government is promoting a green island and has already developed a variety of strategies to increase environmental awareness among Mauritians, particularly from an early age [12,13]. Stanisstreet argued that environmental campaigns in schools and educational institutions are important for the effective integration of environmental education [14]. Research has clearly shown that, by teaching students about environmental concerns and issues, teachers give them many opportunities to build the skills and knowledge needed to engage enthusiastically with the community. This makes it possible for students to become confident environmental protectors. This paper aims to demonstrate a successfully designed curriculum implemented to develop students' empathy so that they adopt a more environmentally sustainable approach. The paper should also motivate other teachers to use the designed curriculum in their own teaching.
Experimental Study
This experimental study collects data through in-depth observation of students, examining their empathy for the environment as a result of a designed eco-art and place-based curriculum, including what students believe, think, feel, and say about their direct experiences with the natural environment. The sample consisted of 25 male students aged 14 years, whose level of empathy for the environment was assessed. Pre-test and post-test drawing exercises were included in the designed curriculum to observe students' empathy for the environment: students were instructed to draw an incident or time when they got closer to the environment, appreciated natural phenomena, and articulated their thought processes in words on the back page of their sketch pads. These descriptions allowed an in-depth evaluation of the before- and after-test scenarios. A "typical" case is valuable because no such studies had been conducted in this field, although this case study can also be identified as "unique" [15]. Qualitative methods play a great role in this complicated single case. The teacher-researcher role was the main approach in this study; the teacher played the role of a researcher. Rather than using other teachers' classrooms, the students in my classroom were chosen for this study because they were familiar with me as a teacher-researcher. This familiarity and friendship aided in providing additional insights into the research. Another factor was that my class was smaller than the other three classes and was more representative of a typical art class.
Characteristics of the Case
The public school is located in the north of Mauritius. It has approximately 1200 students ranging in age from 12 to 20 years. The school compound consists of several landscaped courtyards, including a playground, with stands of trees and natural growth at the back of the school. Additionally, the school offers many clubs for the activity period, such as the art club program, the DUKE program, a lecture program, and health and safety programs, in which students are required to participate as part of the school requirements. Each subject period is about 70 min, and students can select elective subjects from a minimum age of 15. The art club program was founded by an art teacher in 2016. The art club room contains two storage cabinets and shelves, a whiteboard, critique walls to display students' works, and tables for students and the teacher; one wall of the room consists of windows. The achievement level of the students varies, though there is a high percentage of strong achievers or academically talented students. This is a typical class with no students with learning disabilities, although the socio-economic level of the students' families varies widely. This class was used for a place-based art study. The goal of the research was to find ways to get closer to the environment, to imagine ecological changes, and to engage in responsible activities as a responsible citizen. The curriculum followed recommendations appropriate for place-based art pedagogy, namely nature studies, history and culture, and transformative education [16]. Apart from that, place-based education involved listening to senior student speakers from DUKE, taking a nature walk, and gathering information about environmental artists.
Observational Data Collected
According to one researcher, students' observations are based on "first-hand experience of naturally occurring events" (p. 49) [17]. The involvement of the researcher can vary from non-participating to full participation. The teacher-researcher played a significant role here by actively participating in class activities and gathering data from the behavior of students regarding empathy for the environment. The teacher observed the activities of the students to collect data. The majority of qualitative research questions were answered through observation and detailed explanation. These observations were conducted throughout the data collection period from the beginning. As a teacher-researcher, observations were recorded regularly to know how the activities in the curriculum and methods were applied, to analyze the empathetic behavior of students, and for documentation purposes. Empathy can be related to any other person, animal, or plant, not just the environment. Planting a tree, watering a plant, going for a nature walk, appreciating nature, sharing art materials among students, and participating by showing interest in senior students from DUKE who care about the environment are examples of the behaviors.
Student Sketch Pad
The use of image-based research has increased in recent years, and it has been accepted as a valid research methodology in the social science community [18]. This research method can use found, researcher-generated, or participant-generated images. To expose hidden emotional perspectives and to provide data triangulation, this study used participant-generated images created in the students' sketch pads, which were produced specifically for this study. See Figure 1.
Relationship with Their Friends
This study is an observational research method used to gather data from the same sample of students repeatedly. This class was formed into friendship groups at the beginning of the art club. From the first day of class, once they entered the room, these groups were formed. Even though the seats were not assigned, the seating arrangement was not changed throughout the year. The best friend's group in the class consisted of four boys. This group consisted of both high-and middle-class families who were high achievers. They obviously knew each other well and had maintained a close friendship. Two were cousins and the rest were close friends from primary school. This group was usually seated together in class. Even when the seating places were changed, they would sit within the group. There were four friends in this group who interacted with one another. The students showed a connection with all the students. Another group of four boys had varying socioeconomic backgrounds and levels of achievement. The students were friendly with each other. Some students went around socializing before class began. The class was a heterogeneous group consisting of students with diversified backgrounds when it came to their education, interests, etc. However, the vast majority of them were high-achieving students. All of these students had different academic, social, and emotional backgrounds, but they were all the same age. Heterogeneous grouping favorably enabled students to learn from others about their differences. Furthermore, this method encouraged students to actively integrate with other students while sharing all of their abilities and interests with others. These students seemed to be very close and knew each other very well. Many of them had known each other since primary school. As the term progressed, the students were familiar with each other. Teachers had to wait longer for the students to stop talking so that they could start a lesson. The amount of socialization became more evident after students completed their eco-art projects. The group project motivated students to work together, and students felt very comfortable engaging with each other. When the assignments were finished, the students were not used to being quiet and listening. However, this conversation was not a big behavioral problem. The students treated their classmates with respect and offered artistic encouragement to each other. During criticism, students gave positive and constructive feedback to each other. In building their eco-artwork, the students shared the resources they took home with the other classes. These circumstances reflected constructive, respectful behavior from the students. Overall, student conduct was exemplary and did not require disciplinary actions, such as detention or remarks in their school journals. The overall atmosphere was one of mutual respect amongst the students and between the students and the teacher.
Relationship with the Teacher
Teachers must ensure that students engage in one-to-one exercises for successful learning. Today, educators say that students are in the position of consumers of information and that students should play an active role in the environmental education process [19,20]. I ensured that the teacher's role was approached in a caring, supportive, friendly, firm, and equitable manner. In order to establish a collaborative environment with respect, kindness, and caring, a conscious effort was given. As a way of building a healthier relationship within the class, I took the opportunity to interact with students when I was not teaching. Regular feedback was shared with the students on their artwork, and I reviewed the progression of their assignments while observing their manners to get to know the students well. It was clear that these students valued our relationship as the school year came to a close. As a teacher, the interaction I had with these students was the most interactive I have ever had. I preferred to avoid making definitive comments about the environment and instead opted to provide students with knowledge, opportunities for alternative experiences, and a chance to learn from the experiences of others. The reason I chose this method was to prevent students from copying my own comments. However, I recognize that my choices of curriculum and questions for the students may have made my opinions clear. Overall, I hoped to be able to encourage rather than promote dialogue. I tried to encourage discussion rather than openly offering my thoughts to students.
Curriculum Design
Numerous research articles have suggested that students should develop a living relationship with nature before being asked to conserve the environment; place-based art education is accordingly based on guiding students toward a natural connection with nature. According to researchers, teaching environmental issues to students can otherwise be overwhelming and monotonous for them [21]. The step in which the students participated in various nature-related activities represented the climax of the process. In this study, three units of the curriculum were designed, providing justification for the inclusion of different tasks and artists and for debating relevant problems. This curriculum was created to be delivered prior to the post-test.
Harmony
Harmony is a key component of the place-based art curriculum: it is both a significant principle of design and a basic ecological concept [22]. Throughout this unit of the place-based eco-art education curriculum, the students learned different perspectives on art education, such as harmony, empathy, and care. The students first recapped the elements of design to prepare them to engage with these understandings, and they were encouraged to use a variety of media. The students were mostly interested in learning how harmony can be used in the arts and in other fields, including from an ecological standpoint. For example, the students were given the task of identifying the connections between an artwork and the elements of design and, at the same time, among fauna and flora, living species, and environmental systems. Graham (2007) describes three critical components of environmental art education: transformative education, cultural journalism, and natural illustration. Nature illustration and cultural journalism were covered in this unit, which included two projects: sketches of natural objects and garden planters. The main aim was to stimulate students' interest in the natural environment and natural history illustration, and the projects were designed to cover those aspects along with students' observation skills. This enhances the rapport between the students and the environment, and the students also learn how to respond to environmental stimuli.
Natural Illustrations
Graham (2007), a well-known art education theorist, observed the historical background of the criticism of natural illustration to praise its benefits. Art educators were invited by Graham to revisit sketches and illustrations as they can promote love and care for the earth and the environment. Additionally, it is mentioned that students who have completed an environmental education program become conscious of the environment, empathize with nature, and draw strongly aesthetically valued pictures [23]. In this curriculum, the students started the lesson by reviewing the art of Vaco Baissac. Vaco was born in Mauritius in 1940. He studied art in Paris from 1964 to 1970. After he completed his studies, he went to Africa for 20 years. He returned to Mauritius in 1990 to continue his painting. Vaco has represented Mauritius at different exhibitions with his paintings of the island and its natural resources. His main motive in life is to show his "Creole" heritage through his paintings. His paintings feature natural landscapes, Mauritian culture, and flora and fauna. His inspiration is mostly drawn from his garden, the streets, the beach, and the people he meets, and he also shows the "dodo" through his paintings. The aim of introducing Vaco Baissac's painting to the students was to make them feel the beauty and nature of Mauritius and to be inspired by the local artists. The students were also introduced to the artist William Bartram, a natural illustrator who traveled through many countries to study and record the flora and fauna. Bartram is well known as an artist who works through his vigilant observations of nature; his paintings were unique in comparison to other art works at the time, and they clearly emphasized the capability factor; he wanted to capture the liveliness of his field of studies and his constant interactions with nature. As an explorer, Bartram discovered the value of learning the diversity of trees and plants for naturalists [24]. In order to enhance students' skills and abilities, including observation skills and drawing skills, thus enabling direct experience with the environment and forming a strong connection with nature, these illustrations were included in the curriculum. Bartram and Vaco represent the artists who demonstrated these behaviors through their cautious observations of the environment and their capability to detect the liveliness of their elements. However, the painting of Vaco Baissac is more abstract compared to Bartram, who included more details in his painting.
The class first watched an educational video on nature illustration to prepare them for the natural illustrations. These videos demonstrated a method for drawing plants creatively from live observation. The students then went over a PowerPoint presentation about Vaco Baissac and William Bartram, including their biographical information and, most importantly, their paintings. The presentation addressed the students' desire to depict the life of a natural plant through their own artwork, and the students also attempted to paint in the manner of a Vaco Baissac drawing. These methods and strategies exemplified "harmony" with the environment. The next step was to consider the relationship between science and art, the two streams of the curriculum. As a first step, students read books on nature studies to see how scientists and artists each study nature. The students teamed up and explained their views using a Venn diagram reflecting the differences between studying nature through science and through art, the purpose being to identify where the two overlap. The importance of observation for both streams was highlighted by the students, who were then instructed to carefully observe the natural objects they had chosen for drawing. See Figure 2. The students also talked about how art can be utilized to express an idea and embrace subjectivity, whereas science aims at objectivity; art, in turn, was described as having the power to connect feelings and thoughts.

The key topic of the lesson was "harmony". At the initial stage, the students were asked to define their associations with the term "harmony"; they explained it as music and peace, among other associations. Students were then given standard definitions, synonyms, and a few meaningful sentences using the term. A discussion followed on the role of harmony in different fields, including art, nature, the earth, racial relationships, politics, and music. Going through Vaco's and Bartram's illustrations, the class analyzed how these illustrations demonstrated harmony, and students recorded their definitions of harmony and of different harmonious relationships in their sketch pads. Later, the class participated in outdoor sketching. First, they visited the school's garden-filled courtyard; then they spread out with their sketch pads and began to sketch various natural objects around them. They also collected natural objects such as leaves, bird feathers, and flowers and brought them to class to finish their work. During this period the students had the chance to directly experience nature (see Figure 3). Furthermore, they conducted observations and studied real-life natural phenomena in order to gain firsthand experience for their drawings. The goal of this interaction was to make education meaningful by linking it to their lives in practice. In class, students had a few discussions and practiced drawing with various media. As a first step, the students were exposed to various techniques and media; because these were similar to techniques they already used, they came naturally. In addition, an art tutor, a member of the art club, visited the students, taught them how to draw plants, and assisted them in learning to observe plants carefully as they drew.

The art tutor was invited because her experience would provide an additional perspective on drawing. She was also expected to inspire students with her enthusiasm for drawing plants and her views on how fine art can become a lifelong endeavor. The guest speaker session is a communal and important element of place-based education, bringing a sense of community, relationship building, and life connections.

When creating their drawings, the students were encouraged to use their paintings to share their ideas and feelings, and they drew their ideas on their sketch pads to represent their own thoughts. They could choose among different media, such as chalk, pastels, charcoal, colored pencils, and markers, as tools to represent the ecological objects they had found in the school yard, and they were free to decide on any method of drawing; art media and line techniques were used to convey ideas and feelings. Finally, the class reviewed the students' drawings through verbal analysis. Each student displayed their sketch on the critique wall, and the rest of the class opened a discussion to describe, analyze, interpret, and evaluate the displayed art; the same opportunity was given to the next student after each discussion. The students explained the elements of design and the techniques they had selected and how these related to the thoughts they were attempting to convey. The lesson concluded by considering how their drawings displayed harmony. Through these nature-drawing activities, the students' development was accelerated, particularly in their drawing and observation skills for examining and communicating ideas about nature. This lesson was the initial step in supporting students to develop awareness of the natural world, to have additional experiences with nature, to protect it, and to step into the natural environment with an understanding of how harmony is possible within nature.
Place
Soon after learning how harmony works in art and the natural world, students began to discover the importance of places. Connection to place is a key positive outcome in environmental education programs, outdoor education programs, and even early childhood education as a whole [25]. Place investigation is considered a basic step in a place-based education program, as it offers a physical environment for examining the associations and diversity of social, ecological, and aesthetic notions. It was therefore necessary to demonstrate how art can respond to the social and ecological problems of a place. The intended outcome of this unit was to deepen the students' understanding of the ability of art to share views about a place and, additionally, of how art is capable of envisioning different realities. At the beginning of the unit, students were encouraged to examine their connection with a place and to express it through a drawing. Once they had grasped the concept, they were asked to consider the future of their place in society or the community through their drawings. These two projects were essential for students to understand the important role a place plays in their lives and to consider the actions they can take to affect that place.
A Special Place
The lesson began by studying the art of the contemporary artist Róisín Curé. Róisín grew up in the west of Ireland and has visited Mauritius several times. Róisín Curé draws whenever she can, to recall or capture memories, thoughts, and feelings that make her realize who she is and the world she lives in. Even though she had been drawing for some time, she only discovered urban sketching in 2012. While on a sabbatical in Mauritius, she took to the streets, beaches, and countryside and discovered the joy and peace of quiet sketching and the fun of meeting the public. She drew or painted nearly every day and saw the country in terms of capturing it in line and color. Most of her drawings illustrate local and recognizable locations within our country, including the Botanical Garden of Pamplemousses, Triolet Temple, giant lilies, fishing boats at Trou aux Biches, and the market stall at Triolet. Of one of her Trou aux Biches beach paintings, Róisín Curé remarked: "Yesterday I saw some beautiful beach leaves that I wanted to paint. I did try to paint and capture the liveliness; I had been stuck in front of the beach because of its magical feel." Although these paintings are nostalgic, she considers them appropriate for students. Three outstanding examples were her paintings of the market stall of Triolet, Trou aux Biches, and the giant lilies in the Botanical Garden of Pamplemousses, which were very familiar to the students; half of the students lived nearby in the north, and they were stunned by the paintings. Róisín manages to discover the beauty of our country's landscape. Her art is highly relevant to the place unit of this program because she is recognized as an artist who has been actively involved in the community, and her drawings directly address our society and reveal a pure visualization of place.
Referring to Róisín Curé's art, the class reviewed, through discussion, different aspects of her work: its subject matter, painting style, and communicative vision. The themes of these paintings were extremely familiar to the students, and they responded actively as soon as they recognized the locations. The majority were ecstatic to communicate their connection to the places she had depicted. The quote below is from Róisín, and it illustrates how her work helped students incorporate their relationship with place into their own paintings: "Stuff happens when I'm out sketching. It can be something that words can't describe, like the serenity I feel surrounded by the sound of birds, some beautiful leaves, the places and people I visit in Mauritius that I have always wanted to paint. There was something so magical about them. The colours, the hues: I try to bring my world to life in my sketches and the stories that go with them."
In the next class, the students started to study their relationships with places. Roisin's paintings of the Botanical Garden of Pamplemousses, the market stalls of Triolet, and Trou aux Biches were shown using the projector. The aim of showing these paintings to the students was to make them connect with the place. The students were so amazed and excited by the painting. The students expressed their thoughts on the painting and described their experiences with it.
Some of Róisín Curé's quotes in her paintings helped students understand their relationship with the place, as well as the possibility of how the relationship with the place might be illustrated in her drawings. One of these quotes was: "I really love when I hear some really meaningful stories connected with the places I paint". After studying the works of Róisín Curé and exploring how art can reveal ideas about the place, the students concluded the lesson by updating their definitions of the place. The word "place" is a common and recognizable term for students and, as a result, students will come up with their own visions of the place. The opportunity was given to students to share their views with other students in the class. This method has lots of advantages, which include openness, the ability to lead discussions on issues and limitations, emotional connections, and community relationships. The students sketched their favorite place to visit and how they feel when they are alone in that place. It focused on the connection between the students' lives and specific locations through such an exercise. Students were encouraged to choose a specific location that they had visited once or more in order to reveal a close and ongoing connection. It was slightly difficult if they chose a place that they visited once on a vacation.
The ultimate expectation was for students to depict a meaningful experience of place; this could also serve as a foundation to be extended toward a connection with the natural world. All the students were then asked to design a painting illustrating their relationship with their chosen place. Reflecting on these places, students considered the artistic concepts and techniques needed to accurately represent a place and their relationship with it. The students brought up examples of paintings by Vincent Van Gogh, Róisín Curé, and others, and explained the various techniques and styles used in them. They were then free to develop a painting style for their project while experimenting with the techniques and styles seen in those examples. These activities encouraged them either to adopt one of these established styles or to use their own approach. As a result, students determined which style suited them and began sketching on their sketch pads.
In the next lesson, the students became involved in developing their paintings. As a first step, they created thumbnail sketches on their sketch pads of a favorite place in nature that they often visited, working in the style they had preferred or adapted in the previous lesson. They sketched the locations from memory and from their relationships with them; because the paintings were drawn from memory, they tended to be less realistic. Students first visualized their relationship with the place and then added more expression to the paintings. The unit also included a portion of class in which the paintings were discussed in the same way as the earlier nature drawings: using the simple method of explaining, examining, interpreting, and reviewing the work, classmates discussed each piece and gave feedback. The students explained how their relationship with the chosen place was demonstrated in the painting, and they explored their preferences for topics, painting techniques, styles, and colors in particular. Through this project, the students began to attend to place, to the relationship between individuals and place, and to their rapport with specific places. The purpose was to develop a much deeper and more enriched relationship between the students and the places where they live.
Alternatives to Places
The second section of the unit began with a task in which the students were asked to map their communities. Students were divided into three or four groups to map the important places within their own community, and the groups were told to focus on specific locations. These maps were representations of important places, not exact maps but the approximate layout of the community. A discussion followed about the specific locations on the maps and the reasons for their inclusion. Some mentioned, "I have a garden area in my locality and there are many trees. People always sit there and enjoy chatting with their friends." Others mentioned, "We have a river and around five monkeys live there. I saw them when they were crossing the road to go to the riverside." The mapping activity guided students in understanding their most vivid impressions of their society and reflecting on the community environment. Each student was then told to select a particular area from the cultural map that evoked intense feelings and that they would later address in a drawing; at the end of the lesson, the students reflected on their relationship with the chosen area on their sketch pads. The class then moved to studying the art of Markus Vesper, an environmental artist known for paintings describing environmental issues and their effects. Most of his paintings represent the consequences of human actions that have led to global warming; these detailed works offer apocalyptic visions of a future in which humanity is in decline. Imagination can not only envision positive alternatives but also confront us with the severity of our actual conditions, and such imaginings can inspire prominent change. The majority of Markus Vesper's paintings are works of imagination depicting negative alternatives or conditions, often a wacky, out-of-the-box state of affairs. As previously stated, these artworks forewarn of the negative consequences of modern society's bad habits and prompt new plans of action. In the previous lesson on place, students had been asked to choose a precise place they considered significant to their community, concluding with the task of creating an artwork related to that place. Keeping this in mind, a PowerPoint presentation on Markus Vesper's art was shown, including his paintings, their symbols of place, and the philosophies they communicate.
To enhance students' understanding of potential ways of creating a sustainable future for the community, a discussion was conducted with four senior students from the Duke of Edinburgh program. They explained green initiatives, including environmentally friendly construction methods, renewable energy sources, organic food production, reducing greenhouse gas emissions, and protecting the land. The discussion helped students anticipate a better future that could come about if such initiatives were widely implemented, and the presenters also helped students visualize other ecological changes. Finally, students completed an assignment comprising a picture of how their preferred place had been treated, guided by the questions: "How do you see the future of your own place? Why? How do you want to see the future of your place?" Based on their answers to each question, the students were encouraged to set out their ideas on their sketch pads under the theme "The future of their place." When the students had finished their journal entries, they volunteered their sketch pads, and a brief discussion about hopes for the community was held in front of the class. Such classroom activities allowed students to exercise their imaginations while also rehearsing a better future for their community. Some boys implemented ecologically responsible measures in their communities as responsible citizens, and some students followed Markus Vesper's example of depicting devastating realities to sound an alarm about the results of our actions. See Figure 4. Finally, the unit concluded with an assessment of the areas covered and a written response describing the students' artwork. The students explained the place they had chosen, the alternative future they had imagined, and why they chose to represent that alternative. During the assessment, the written notes, including additional information, were shared with the entire class, and other students were asked to explain, analyze, understand, or assess their classmates' paintings. Later, the complete artworks were displayed in the school corridor with the students' writing designed to accompany their works.
After the sessions described above, the students demonstrated their relationship with place on their sketch pads as the final assessment activity of the unit. The unit was adapted to a place-based art pedagogy in which the boys engaged with local place and environmental issues. During the sessions, students were invited to choose places they had previously encountered, to explore them, and to project complex ecological realities onto the places that form the foundation of their communities. The opportunity to display their artworks and writing in the school corridor gave them a platform to bring their ideas outside the classroom. This was the initial step of the process; the next unit stimulated the students to take ecological action for transformation.
Transformation
Through observing the environment, the students progressed to a final unit on transformation, designed to help them move steadily and imaginatively into engagement with their environment. On the view of the ecological imagination, students should develop an affiliation with the natural world and with place before seeking to safeguard it. The previous units were built to promote connections with the natural world, places, and the community, and to spread awareness of ecological problems; students now focused on environmental change. Graham (2007) described the transformative component as situating transformative education within a meaningful educational setting. It can be identified as the most important factor of the place-based pedagogy in this unit, as it marks a move from developing connections to influencing change. The unit aimed at the following understandings: social and ecological matters can be countered by art; art can transform emotions, spaces, feelings, attitudes, and communities; and artists are inventive and, being inventive, can utilize any kind of material. Students created eco-artworks from debris and leftover materials, such as plastic bottles, to engage them in hands-on making.
Eco-Artworks
The lesson started with an introduction to a variety of eco-artists through a PowerPoint presentation, including Basia Irland, Lynne Hull, and Steven Siegel. A variety of approaches to addressing students' environmental concerns were considered, from discussing ecological issues to lovingly building ecological habitats through art. The eco-artists' efforts enabled students to form new relationships, and the students soon began to make connections with the efforts of others through their own artwork. The students then began a close examination of the eco-art of Steven Siegel, who uses consumer waste to create massive, site-specific sculptures; his body of work includes large outdoor pieces assembled from piled and decaying newspapers and other consumer debris.
Siegel's work raises awareness of the massive amount of consumer waste that exists, prompting us to consider the fate of such waste and how our lifestyle choices may contribute to its accumulation. The students considered the transformation of materials, the transformation of outdoor space, the transformation of art over time, and the transformation of ecological attitudes. Drawing on this experience, the students concluded in their verbal journals that art has transformative power.
To spur innovation, the students were asked to gather debris and waste items, such as plastic bottles, newspapers, and aluminum cans, for the artwork. They also had the opportunity to consider and borrow ideas from their classmates, sharing their ideas with each other and outlining them on their pads.
Working among their classmates, they thought about how their drawings could be transformative, drawing on ideas from prior learning and experience. First, they chose a design, and then submitted the sketched drawing for approval. Many students came up with the idea of creating an eco-artwork out of plastic bottles: the plan was a vertically hanging design in the yard, made by joining a string of plastic PET bottles together with a strong cord and fixing them to a wall.
Reviewing the sketched plans revealed a number of constraints that required moderation in the areas of safety, construction methods, and materials. After this work, the sketch plans were approved, and the students started to collect various debris and waste materials to build the eco-artworks. The students also received advice and help with tools and materials from the design and technology department.
After completing the assembly of their eco-artworks, the students replanted their plants into the eco-art project and received permission from the school to display it. The students considered where the project could be kept and displayed for all the students and staff, and many agreed to keep it in the assembly area so that it could be seen by as many people as possible. The students also prepared a statement for the audience in the assembly area, making the point that waste and debris can be repurposed. In concluding the unit, students drew on their experience and knowledge to ask the audience about the artwork, using their designed pads to record the responses. Through this, many students learned about local ecological issues, worked creatively with artistic tools, focused on how art can transform, practiced being unique and thinking differently, and responded to the ecological concerns of other students and teachers at their school.
Comments from teachers, administrators, and students indicated that the project had been informally presented and explained in detail to all of them. Furthermore, many students from other classes responded enthusiastically to the work, stated that they could design and create similar artwork, and signed up for the class, eager to join the art club.
Drawing in the Schoolyard
It is believed that the use of artistic expression for environmental consciousness cultivates enthusiasm, as students' love of beauty makes it essential to see and appreciate it [26]. The activity was set up in the schoolyard to encourage the students to care for and love their environment, and the students arrived with their pencils and sketch pads. First, all of them walked around the yard to explore the surroundings. The work seemed interesting to the students, who were advised to recreate the scenery on their sketch pads. Some chose a suitable place to sit, while others remained standing, and they began to draw leafless trees, grass, or rocks. After 35 minutes outside, most went back into the classroom, but nearly a third of the class wanted to stay outside. They were hesitant to go in, and one student asked, "Can we please stay for a few minutes and also take the other class?" The outdoor activity was well received. After a few minutes, the remaining students also came back to the class and spent the rest of the period drawing the indoor plants.
Walking in Nature
If we are to examine pedagogy and practice more closely, as Barrable suggests, rather than focusing solely on contact with nature, a set of specific objectives must be defined. The goal of her paper was to place nature connectedness at the core of practice in natural settings and to open the way for more comprehensive methodologies for determining what works in outdoor settings [27]. Following Barrable's approach, the students again walked to the back of the school to look for more natural objects to illustrate in their drawings. Outdoors, several students walked along the paths, studying their surroundings and gathering natural objects. When the boys were all together, they talked about how they could use their time outside to observe and collect interesting objects to draw from the nature around them. After this instruction session, the students walked together and found natural objects that interested them, and on returning to the classroom each of them carried at least one object to draw.
Drawing Nature
Students brought in a range of items, including small branches, rocks, flowers, and leaves, and were encouraged to observe their objects closely while drawing them on their sketch pads. One student, who was most interested in drawing cars and cartoonish characters, had difficulty drawing natural objects. Since the lesson focused on drawing from observation, this student was allowed to draw an observed person as part of nature to ease his frustration; it was suggested he sketch the friend seated next to him, and he drew his friend holding a leaf. Some students preferred bigger natural objects, or more than one object, to fill the composition, while a few drew the same object several times from different angles. The majority chose abstract backgrounds of solid colors and color gradations; the rest set their objects in a realistic context. Many students attended to texture in their drawings, and several struggled to depict the objects' proportions accurately. Overall, they were free to draw their objects as they wished.
Growing Plants
Students planted strawberry plants in the school garden, which they would later transfer to their eco-art project. All were very keen on planting their own plants, though few were certain how much water to use. Even after being told to check the soil, some students overwatered their plants, leaving the soil submerged under about an inch of water; meanwhile, some watered other students' plants without permission, adding to the overwatering. On the following days, students came in asking whether they could water their plants, and one student came in every morning asking me to look at his plant. Two weeks later, most plants had started to grow. From the beginning, the students were curious and excited about what their plants would look like as they grew. Unfortunately, some plants did not grow properly due to overwatering; those students asked to replant them, and the replanted ones gradually began to grow well too. When one student saw his plant growing, he shouted, "Look, look, it is growing! I won't let it die now!" and walked out of the planting area with a huge smile. Some plants grew about four inches higher and then stopped, and the students who had overwatered were disappointed that their plants were not flourishing. As the plants started to grow, the class moved quickly to the eco-art project; on two separate occasions, students requested a larger space for their plants. The students were excited to transfer their plants once they completed the eco-art project, and at the end of the program a few commented that their plants were doing well.
Painting a Special Place
This time, students chose a place where they often go to be alone. As an example, the students were shown a Róisín Curé painting and her quotes. "Is it okay if we choose an indoor location?" a student asked; for this activity, either indoors or outdoors was acceptable. The chosen areas included reservoirs, ponds, gardens, backyards, bedrooms, a living room, and a big rock, and the majority of the locations were outdoors. Most of the paintings lacked human figures, although a few students inserted themselves into the landscape. Most students chose watercolor to depict their places, while others selected acrylic, experimenting with various other techniques along the way. In short, the finished paintings used naturalistic colors and depicted natural settings. In the evaluation, students described the locations as places to rest, relax, or have fun.
Visualizing the Future of Their Place
When students engaged with Markus Vesper's art, some had minor difficulties understanding his works, so an explanation was given; afterward, the students responded positively. One student said, "I love his art!" Another said, "He is the first artist I really felt connected with," and a few others commented on how his art attracted them. We then started making thumbnail sketches of an alternative place. During the activity, students commented on Markus Vesper's apocalyptic approach; rather than drawing a better future for a place as the others did, some were permitted to use apocalyptic depictions of their city in his tradition. Following this approach, a few drew their city with flooded houses and towers of which only the rooftops were visible, and one student drew a portrait of the city with viruses floating in the air. Most of the students, however, focused on a better future rather than an apocalyptic outlook. In the students' imaginations, parking lots were replaced with gardens and parks in their chosen community; houses used new energy sources, such as solar power; cities were modernized with restaurants and grocery stores offering locally produced organic food; sidewalks, bike lanes, nature trails, and cycle stands minimized reliance on automobiles; urban structures carried roof gardens; streetlights used energy-efficient bulbs; small shops offered items made from recycled materials; and trees were planted. It was an eco-friendly setting with recycling bins readily available around the city, where wildlife and birds appeared to live happily. See Figure 5. This class and one of my grade 13 classes went on an educational tour to visit a nature park at "Bras d'Eau", Mauritius, and walk along the path. At the nature park, the students were greeted by one of the tour guides, who directed us to the path, provided information about the trees and their importance, talked about environmental issues, and told the students not to throw trash. The students were focused, listened attentively, and occasionally asked relevant questions for clarification. The visit lasted an hour and a half. Afterwards, the students had lunch at the beach. They played football with their friends, walked and played in the sand, or sat and admired the sea and the sunlight. One student saw a plastic bottle near where he was sitting and immediately picked it up and put it in the dustbin.
Even once the students came back to the school, they were reluctant to get off the bus, and all of them conveyed their gratitude to the bus driver, their friends, and the teachers. Before leaving, some students said we should go there more often, that it was a nice trip, and that it was so peaceful and beautiful.
Creating Eco-Artworks
The research of Fragkoulis and Koutsoukos describes a distinctive teaching approach using two works of art in environmental education, making the teaching of conservation and re-use more experiential, participatory, and original [28]. The purpose of their project was to enable students to discover and learn about environmental issues through art and to develop skills including collaborative learning, teamwork, dialogue, and the exchange of views when working in groups. In this study, the students likewise formed groups to produce eco-artwork and experimented with a range of recycled materials, such as plastic. They created a vertical hanging plant garden by stringing plastic PET bottles together with strong cord and securing them to a wood-framed wall. See Figure 6. As they worked, a few students expressed anxiety about storing the project, fearing that students from other classes might play with it. The students were slightly nervous about displaying the project to the school once it was done; all were keen to know the date and the options for display, and the majority selected the assembly area.
The Students' Pro-Environmental Behavior
The students' pro-environmental behavior was evaluated using question items 1, 2, 4, 5, 8, and 10, while items 3, 6, 7, and 9 were used to determine anti-environmental attitudes. Through the plant-growing activities, students had the opportunity to consider design connections between form and function, as the planters were inspired by the form of their plants and responded to the plants' needs. To determine students' ecological paradigms (pro-environmental orientations), the New Ecological Paradigm (NEP) scale for children was administered to students both pre- and post-test. It provides an overall score from 10 to 50, indicating a position on a continuum between an anthropocentric and an eco-centric orientation, as well as item mean scores from 1 to 5. Students completed the NEP questionnaire as pre- and post-surveys. The questionnaire used the revised NEP scale for children, administered here to students between the ages of 12 and 14; this version is more appropriate for students of that age than the adult scale, with suitably adapted language and a reduced number of items. The NEP scale is a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). The total score ranges from 10 to 50, with 10 representing support for the dominant social paradigm (DSP) and 50 representing support for the NEP; a score of 30 denotes a balanced response.
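Since the scoring rule above mixes directly scored and reverse-scored items, a short worked example may help. The following is a minimal sketch, assuming only what the text states: items 3, 6, 7, and 9 are reverse-scored, responses run from 1 to 5, and totals run from 10 to 50. The function name, variable names, and sample responses are illustrative assumptions, not taken from the study.

```python
# Illustrative scoring of the 10-item children's NEP scale described above.
# Items 3, 6, 7, and 9 are worded anti-environmentally and are reverse-scored
# (1 <-> 5, 2 <-> 4) so that higher totals always indicate a more eco-centric
# orientation. Names and sample data are hypothetical, not from the study.

ANTI_ITEMS = {3, 6, 7, 9}  # reverse-scored question items

def nep_total(responses):
    """Sum responses {item_number: likert_value} into a 10-50 NEP total."""
    total = 0
    for item, value in responses.items():
        if not 1 <= value <= 5:
            raise ValueError(f"item {item}: Likert value must be between 1 and 5")
        total += (6 - value) if item in ANTI_ITEMS else value
    return total  # 10 = dominant social paradigm, 30 = balanced, 50 = NEP

# Example: agreeing (4) with every pro item and disagreeing (2) with every
# anti item gives 6*4 + 4*(6-2) = 40, a leaning toward eco-centrism.
sample = {i: (2 if i in ANTI_ITEMS else 4) for i in range(1, 11)}
print(nep_total(sample))  # -> 40
```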
The children's version of the questionnaire was validated with 672 participants in a two-year study and has been determined to be most suitable for students aged 10 to 14. The three facets of this analysis are closely related aspects of students' pro-environmental attitudes: the rights of nature, the eco-crisis, and human exceptionalism, the last reflecting the belief that humans are exempt from nature's laws. The developers found that an overall score can be assigned to reflect a student's viewpoint on a scale ranging from an anthropocentric to an eco-centric orientation.
The aim of administering the questionnaire was to learn how the eco-art and place-based curriculum influenced the students' empathy toward the environment through these activities. The findings present the students' average NEP scores, as well as information on their attitudes toward the rights of nature, the eco-crisis, and human exceptionalism. The survey data were tabulated and analyzed using descriptive statistics, including the frequency distribution of each item; a comparison of means was then conducted to ascertain whether the change between pre- and post-test scores was statistically significant.
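The text does not name the significance test used for the pre/post comparison; for a one-group pre/post design like this, a paired t-test on each student's total score is one conventional choice. The sketch below is a hedged illustration under that assumption, using SciPy and placeholder data rather than the study's scores.

```python
# Hedged sketch of the pre/post comparison: descriptive statistics plus a
# paired t-test on per-student NEP totals. The test choice is an assumption
# (the paper does not name one), and the arrays are placeholders.
import numpy as np
from scipy import stats

pre_totals = np.array([28, 31, 30, 27, 33, 29])   # placeholder pre-survey totals
post_totals = np.array([31, 33, 34, 29, 36, 32])  # placeholder post-survey totals

# Means and standard deviations, as reported for each survey in Table 1.
print(f"pre:  mean={pre_totals.mean():.2f}, sd={pre_totals.std(ddof=1):.2f}")
print(f"post: mean={post_totals.mean():.2f}, sd={post_totals.std(ddof=1):.2f}")

# Paired t-test: are the same students' post scores higher than their pre scores?
t_stat, p_value = stats.ttest_rel(post_totals, pre_totals)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```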
The means and standard deviations for both the pre- and post-test are found in Table 1. Comparing post-test with pre-test values, the researchers identified a decrease in the mean for the statement "Humans on the earth are destroying the earth," from 2.52 in the pre-test to 2.40 in the post-test. The mean for the first statement, "Animals and trees have a similar right to humans to live on this earth," increased slightly in the post-test, from 1.48 in the pre-survey to 1.68 in the post-survey, a mean difference of 0.2. An increase in positive attitudes was also related to the statement "A larger number of people live on the earth," whose mean rose from 2.72 in the pre-survey to 3.80 in the post-survey. Students' positive attitudes on items 4 and 5, "Humans should conform to nature and its rules" and "People must suffer the consequences of nature," likewise increased between surveys. Strong improvements appeared in "Humans try to handle things in nature" (pre-test mean 3.16; post-test mean 3.88) and "Human behavior hardly impacts nature" (pre-test mean 1.72; post-test mean 2.32). Responses also improved on the tenth item, "If humans have not changed, they have to face the negative impact on the environment," with a mean difference of 0.36 between surveys. The pre-survey means already highlighted the students' pro-ecological perspective within one semester, and the post-test scores indicate a further positive improvement in the students' pro-ecological views.
Discussion and Conclusions
Students shared their enthusiasm for the art club program; in their second year, they admitted that it was their favorite class and said they enjoyed it, comments echoed by staff. The students frequently expressed a desire to attend the classes, often took their sketch pads home on weekends, and were willing to clean up after each session. They also had a strong desire to participate in more experiential activities. Students gave positive feedback on projects and activities, using words such as "fun," "interesting," "cool," "nice," and "neat" to describe their experience of the class activities, and even during class they freely commented on how much they enjoyed specific projects. A few students did experience some frustration during the course and its activities, but they gradually managed to catch up, and the overall response to the class and the project was positive; one student expressed his delight, saying, "It's fun." The information gathered through interviews and drawing activities aided in understanding and comparing the program's impact, including its effect on empathy for the environment. The curriculum allowed the students to demonstrate considerable improvement in ecological awareness over the year: they increased their awareness of their relationship with the environment, broadened their perception of nature, and gained insights into the ecological crisis that allowed them to accept and make change. These improvements in awareness were identified through the activities. Terms like "realized" and "awakened my eyes," and variations such as "eye-catcher" and "opened my eyes," were used to describe how the experience changed their awareness over the course of the year; such phrases clearly indicate that the students saw the world and the environment in a new way and connected this with the experience. Students' feedback indicated that, through the program, they gained a greater awareness of the benefits of nature, an awareness shaped most strongly by activities such as the nature walks and the successive nature drawings. One student explained that, when drawing a plant, close observation helped him notice things he had not noticed before, and he was surprised by his new findings; another described an amazing moment when he saw the plants in his grandparents' garden. Students improved their observation skills and developed curiosity about other wonders in nature. Some reported that the class helped them become more aware of the environment and develop positive attitudes toward it. "What we learned about nature [the environment] opened my eyes, and I realized I could be optimistic about things," said one student, explaining that before the course he had paid little attention to the environment and that most of his thoughts about it were negative, centered on pollution and waste. Similarly, another student said, "I believe that wherever we go, we are inspired to interact with nature, connect with life, the things we do," and added, "Even though we know that the earth is highly polluted, I have realized that every place is not like that."
His response makes sense: not every place on the planet is polluted, and he realized that there are still beautiful places on it. In addition, the painting activity was dedicated to depicting a specific location, which reinforced this realization.
According to these data, students demonstrated empathy for the environment through to the end of the program. Because empathy occurs internally, evidence of it must be found in students' behaviors and reflections; their care for nature, their awareness, and their acceptance of responsibility toward the environment all point to empathy. In short, the students appear to have advanced in their level of empathy for the environment, gained extensive ecological awareness, and demonstrated pro-environmental behavior. Children's changed perceptions of "the greater number of people living on the earth" may likewise reflect changing environmental views. However, the researchers set aside the responses to "Human behavior hardly impacts nature" and "If humans do not change, they have to face a negative impact on the environment"; although these items bear less directly on the environmental crisis, they concern matters that students learn about in the classroom curriculum. The survey results improved the overall picture for curriculum development and implementation, and it is easier to implement the necessary changes to the overall subject area than to change specific concepts. The sample size of this study was relatively small at 25, yet the test results were significant. Wilkerson and Olsen have addressed common misunderstandings about significance test results and their interpretation, and the literature offers opposing viewpoints on the significance test and its use in social science. These studies explain that a larger sample size enhances a researcher's confidence in an investigation and its results, and that the same level of confidence cannot be achieved with smaller samples, whose behavior is more difficult to characterize. Mathematically, a smaller sample requires a larger observed effect to reach an equivalent level of statistical significance. On the basis of this literature, the statistical significance of the present investigation was not undermined by its smaller sample size. The qualitative data highlighted that the students' ecological improvement relates to their view of the eco-crisis, which is important for the conclusion, and the pre- and post-survey results showed an increase in the students' test scores.
The place-based art curriculum, grounded in the community's distinct history, community life, and culture, was designed to allow students' ecological creativity to be thought about and acted upon. It builds a caring relationship with place by helping students find the meaningful essence of Mother Nature, by permitting and supporting investigation through the visual arts, and by encouraging distinctive thinking techniques that develop varied representations of negative environmental impacts through art [29]. Prior experience with artwork was used to help the students identify with the natural world by expressing it, encompassing the plants, animals, and other living organisms in nature; these tasks proved capable of motivating and inspiring a connection with the earth. Since the students had prior experience and knowledge of the roles these places play in their lives, they had the chance to increase their fondness for those places and reinforce the values in their relationships with them. As students envisioned a better future for their communities and better understood the ecological initiatives already begun there, they moved beyond mere fascination toward formulating better ecological perspectives. Because they had the opportunity to engage in art-making, students gained experience placing their interests in the service of a commitment to ecological change. This is supported by the research of Yeşilyurt, who found that many students empathized with nature and placed themselves in the role of nature, indicating that students fully internalize the behavior. In addition, engaging with nature and learning by doing and living made it easier for students to engage with nature and allowed them to build an ecological outlook [30]. | 2021-09-01T15:07:22.430Z | 2021-06-28T00:00:00.000 | {
"year": 2021,
"sha1": "b6f080f857a0cd3fd8d18a9f88e628340e271c59",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2673-4133/2/3/14/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7121085ba2dad05b22973164c0fcf868902aff6d",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
229461684 | pes2o/s2orc | v3-fos-license | City, history, and memory: from a destroyed environment to a constructed one. A case study of Natividade da Serra, state of São Paulo, Brazil
Between 1973 and 1974, the city of Natividade da Serra, state of São Paulo, was relocated in order to provide space for the construction of a hydroelectric dam for power generation. Based on the idea that the relationship between society and physical space acts to shape and influence the culture of a society, we conducted this study with the principal objective of analyzing the social impacts on the local community, highlighting the cultural transformations provoked by the restructuring of a space that had been constructed over the course of a century. Through a qualitative analysis of data from the document repositories of the City Hall and the Parish of Our Lady of the Nativity (Nossa Senhora da Natividade), and from oral sources (accounts collected through oral history), we elaborated a representative image of daily life in the city as it existed before the restructuring. This reconstruction started with the process of destruction and then proceeded to describe the reorganization of the community in the new space, as well as detailing perceptions of the city as a place that has passed through multiple historical experiences. By relating the data obtained from the written and oral sources to the historical context, we attempted to reveal the political, cultural, and ideological dimensions involved in this process of manipulation of inhabited space. We conclude that the numerous implications of the abrupt transformation of this space have altered the customs and forms of sociability of the population. The analysis of the reasons and motivations for the disappearance of the city suggests that, even if only implicitly, this cultural transformation was desired by those who were in power in Brazil at that moment in time.
INTRODUCTION
The municipality of Natividade da Serra, situated in the region of Vale do Paraíba Paulista, was founded in the middle of the 19th century. In the 1970s, slightly more than ten percent of its territory was inundated to construct a reservoir for a hydroelectric dam. Two of its villages, a district, and the urban area of the city were affected by the waters. Between 1973 and 1974, the old city was destroyed and a new municipal seat was installed about one kilometer from the old one. The new municipal seat was modeled on the architectural and urbanistic patterns of the period, within the limits imposed by the availability of time and financial resources.
The case of Natividade da Serra became emblematic. The modification of the space did not follow new objective necessities emerging in that period, but rather obeyed factors external to the local community. One of the questions this research raised was: would it be possible to preserve a pre-existing urban scenario in a new, recently created form? In order to understand the many aspects of the relocation of this city, research was conducted on the political, cultural, and ideological situation of Natividade da Serra from 1950 to the end of the 1980s. The objective was to examine how the urban space was constituted before the inundation and how the relocation was perceived by the indigenous population.
This research is linked to discussions of the social impacts caused by immense infrastructure projects executed under a developmentalist vision. According to Fausto (1996), projects of this nature have been part of Brazilian history for decades, their undertaking intensified by the historical emergence of the industrial model, which has been the paradigm of progress since the end of the 1950s.
The relationship between history and space is described in the pertinent literature, where events and processes are analyzed from a chronological and geographical approach. Human action in the course of time happens in a specific space, but the inhabited space is not just the environment in which people construct their historical reality, because that space is itself a social construction (Lefebvre, 1969; Fenelon, 2000; Rykwert, 2004).
According to Tuan (1983), through this experience, subjects are able to confer emotional value on a specific space, transforming it into "place": a type of refuge where one can seek comfort and security. The term "place" is used by geographers in opposition to the word "space," which is understood as something limitless and which, by allowing liberty of movement, implies a constant necessity of choices and decisions.
According to Bosi (1994), subjects can also establish a relationship with space that is directly associated with their experience as members of a social group. This experience transforms a city into the territory of one group or another, and different groups often confer different meanings on the same space, based on the experience of each one. In this context, the experience of the individual and the group confers a symbolic dimension on the forms of a city and creates an emotional map that reflects the social relations established in the constructed environment.
MATERIALS AND METHODS
The complete body of documents used in this investigation is held in three institutions: the City Hall, the local Catholic church (local parish), and the Energy Company of São Paulo S/A (CESP, the former state company responsible for the construction of the reservoir, since divided into several private companies that still exist).
The collection of documents maintained by the City Hall is in a deplorable state of conservation. The goal of the methodological procedure was to select only the documents relevant to this research, but in order to do so, a survey of all the available documents was first conducted. This procedure resulted in the elaboration of a catalog of the entire collection.
A catalog of a collection of documents is an essential item for historical research (Glénisson, 1961); in the present research, the catalog was elaborated using Microsoft Excel 2007, registering each document's origin, type, period of production, and place of storage.
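As a concrete illustration of that catalog structure, the sketch below models one record per document with the four fields named in the text (origin, type, period of production, and place of storage) and writes them to a CSV file. The original catalog was an Excel 2007 spreadsheet; the field names, file name, and sample entry here are assumptions for illustration only.

```python
# A minimal, hypothetical model of the document catalog described above:
# one row per document, with the four registered fields. The actual catalog
# was an Excel 2007 spreadsheet; CSV is used here as an equivalent format.
import csv
from dataclasses import dataclass, asdict

@dataclass
class CatalogEntry:
    origin: str         # issuing institution, e.g., the City Hall
    doc_type: str       # e.g., "official expedited letter"
    period: str         # period of production
    storage_place: str  # where the document is physically stored

entries = [
    CatalogEntry("City Hall", "official expedited letter",
                 "1950-1980", "City Hall archive"),
]

with open("catalog.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(
        f, fieldnames=["origin", "doc_type", "period", "storage_place"])
    writer.writeheader()
    for entry in entries:
        writer.writerow(asdict(entry))
```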
A decision was made to begin the cataloging with the official correspondence of the City Hall alone. Two reasons motivated this decision: first, the poor physical state in which these documents were found; and second, the nature of the information registered in them. Given the conservation problems explained above, the series of Official Expedited Letters from 1950 to 1980 is one of the few resources that is relatively continuous and that also permits a qualitative analysis.
Oral history was the second methodological resource used as a method of historical research (François, 2002; Amado and Ferreira, 2002). This procedure allowed for partial glimpses of the reality of the historical period: a complex plot that partially reveals itself in the meanings conferred by the subjects on their social practices (Khoury, 2001).
Narratives symbolic of the diversity of perspectives on reality were collected and organized in an attempt to understand them, not as deviations from a standard pattern, but as part of a constituted reality. Seven people who had lived through both stages of the modification of the urban environment, before and after the inundation of the city, were interviewed. To organize the responses, a set of questions was used for the interviews, structured along the themes: Vila Velha (daily life and space), its disappearance, and the new city.
RESULTS AND DISCUSSION
At the beginning of the second half of the 20th century, the municipality of Natividade da Serra had about 16,000 inhabitants. The rural population made up about 90% of this total and was responsible for the largest portion of the city's economic wealth: its agricultural and cattle-ranching production, based on the cultivation of corn, beans, cassava, rice, tobacco, and sugar cane, all consumed within the municipality itself. Later, dairy farming became important, and this production supplied the dairy product factories of the Middle Paraíba Paulista region.
The municipality comprised two urban nuclei: the municipal seat and the district of Bairro Alto. There were also many rural neighborhoods and nuclei of varying demographic concentration spread across its vast territory (about 800 km²). The municipality was traversed by numerous rivers and streams, and for this reason depended not only on many municipal roads and bridges but also on barges to connect the rural population with the municipal seat. The documents consulted, especially the official expedited letters from the City Hall (1964-1971), revealed a poor urban population lacking basic necessities such as public sanitation, health care, and education. In 1970, of a total of 483 buildings in the city, just 172 had running water, and only five were connected to sewer lines. Table 1 shows the evolution of the population between 1950 and 1970. Source: Natividade da Serra (1964-1971, p. 174; 1971-1975, fl. 56).
These data reveal a significant reduction in the rural population, not matched in magnitude by the increase in the native urban population, precisely during the period in which industrialization was advancing in the cities of the Middle Paraíba region of São Paulo and the populations of the large urban centers were increasing considerably.
To sustain the indices of industrial development of the Middle Paraíba region in the 1960s and 1970s, large investments were made in energy production to satisfy the demand of large factories. According to Ricci (1996), energy production in the Paraíba Valley of São Paulo was carried out by many small companies that took advantage of the region's hydropower potential but had limited capacity for energy generation and distribution. Throughout Brazil in this period, there were hydroelectric plants created by industries to meet their own energy demands. Such was the case of the Taubaté Industrial Company (C.T.I.), whose machinery was driven by energy produced by the company's hydroelectric plant, installed in the city of Redenção da Serra, very near Natividade da Serra.
In an attempt to contain the successive episodes of river flooding in this region during the rainy season, specialized engineering interventions were needed to streamline control of the waters. The construction of a hydroelectric dam would solve two problems with one solution: besides producing electrical energy, damming the inconstant rivers (Paraibuna, Paraitinga, and Lourenço Velho) with dikes, polders, and dams would permit efficient regulation of water levels, thus saving large areas of fertile soil for planting. Ricci (1996) affirms that this situation motivated the construction of the Paraibuna hydroelectric plant, a project led by the state company CESP. For the plant to exist, however, the city had to be submerged. Natividade was not the only city affected, as Paraibuna and Redenção da Serra also suffered loss of area, but Natividade da Serra was the only one that lost its entire urban area.
Since it was a small, rural city with little potential for industrial development due to its location, Natividade did not fit the ideas of the developmentalists in power in Brazil. For a government dominated by an elite class that based its actions on developmentalist beliefs, a city such as Natividade da Serra, contributing little to the regional economic scenario, could be sacrificed for the greater good: the necessary and inevitable development of industrial areas with more dynamic economies.
Although the planning for this project stipulated that the damming of these rivers would affect only a portion of the territory of Natividade da Serra, the elimination of the entire municipality was considered, most likely for the sake of this greater good, according to official documents published by the City Hall and the Ministry of the Interior in 1968. In that case, the areas not submerged would have been assimilated by neighboring municipalities.
The analysis of the official documents showed that the necessity of guaranteeing the construction of a new city was understood by the City Hall and the state government. Official letters expedited by the City Hall, starting in 1973, demonstrate the insistence of local authorities in demanding just compensation from the state government to guarantee the functioning of state public offices in the city.
Rivers and streams that ran through the region passed near the city, in some cases through the backyards of houses. These rivers were often sites of leisure, and at the edge of the city the small Paraibuna beach was used for fishing and recreation. Former inhabitants related that they regularly interacted with the natural environment, especially the rivers and streams, which provided a means of subsistence and direct contact with the local natural ecosystem.
For many decades, subsistence agriculture and the sale of any surplus production predominated in this region. Around 1950, families from the state of Minas Gerais began to buy large tracts of land and invest in dairy farming. The milk was sold to a dairy product company in Taubaté, which had a refrigerated tank at the municipal seat of Natividade. The cattle ranchers did not habitually contract permanent workers, but rather hired sporadic help for jobs such as pasture maintenance, an occupation of many residents of the old city.
The Paraibuna Dam inundated 206 km² of the territory of Natividade da Serra (14% of its total). The settlements of Pouso Alto and Remédios and the district of Bairro Alto were affected and relocated to higher ground, nearer to Caraguatatuba than to Natividade. The municipal seat was completely submerged; all that remained was a wooden cross, constructed in 1954 and installed on the highest hill in the city.
According to the Master Plan for the Paraibuna Reservoir (CESP, 1978), the construction of the dams that form the reservoir began in 1964 and ended in 1977. The Paraibuna hydroelectric plant began operating in 1978, but we were unable to determine exactly when the municipal authorities were notified of the necessity of destroying the city.
The first allusions to the possibility of inundation of the municipal seat were found in official letters from the 1960s, one from 1966 discussing the building of the dam without mentioning the disappearance of the city. Apparently, the executive powers were not officially informed of these plans, as demonstrated by an official letter from 1968, four years after the start of dam construction. In this correspondence, addressed to the institution responsible for the maintenance of state roads, it was requested that the stretch of road linking Natividade to Taubaté not be abandoned, on the grounds that the city was condemned to disappear, since the inundation was planned to occur during 1970 and 1971, according to information reported by the media (Natividade da Serra, 1950-1980). The process of transformation of this space also brought changes in the configuration of the population that inhabited the city. All the interviewees affirmed that many residents had serious doubts about the success of the new city and for this reason abandoned the municipality (Table 2). Table 2 shows a decrease in the total number of inhabitants in the municipality in the decade ending in 1980, after the change in location of the municipal seat. In comparison with the previous decade, the rural population was reduced by a little more than 40%, while the number of urban residents increased considerably. The increase in the urban population is partially explained by the migration of residents of the rural zone whose properties were inundated or isolated by the dam and who then took up residence in the municipal seat. Furthermore, as previously discussed, there was a large demand for plots of land in the new city, baptized Nova Natividade, by people from various regions of the state of São Paulo. Due to the scarcity of sources, it was not possible to state precisely the number of residents who left the city, or the number of new inhabitants attracted by the incentives provided by the municipal government. This situation created a paradox: abandon the municipality or be inserted into the developmentalist rationale of intervention in the natural and social milieus of autochthonous populations that was heavily preached by the government during this period. Table 3 below shows evidence of this situation, quantifying the decrease in workers occupied in different activities related to agriculture and cattle ranching, permanent or temporary, and the increase in the number of residents dedicated to activities for which permanent salaries were paid.
In Natividade, whether in the urban or rural zones, fishing was an activity linked to leisure and the sustenance of families. The damming of the rivers provoked a change in the aquatic habitat, making it uninhabitable for some endogenous species, and a few of these, typical of rapids, disappeared.
Species better adapted to the new aquatic environment were introduced through a conservation program called Ictiofauna, implemented by CESP in 1988. However, because the dam affected natural reproduction areas, a program of continuous artificial reproduction became necessary to maintain the supply of fish at constant levels (Noffs and Salgado, 1992). The changes in species and in the supply of fish in Natividade da Serra modified the fishing habits of the municipality. CESP itself, in the Master Plan of the Paraibuna Reservoir, recognized that the dam would make fishing unviable as an economic activity.
Subsistence agriculture, with the sale of excess production in the local market, was one of the most important economic activities in the municipality. The importance of this activity was registered in documents extracted from the state and Catholic document collections and was also emphasized by the interviewees. The construction of the dam compromised the continuation of this activity in the new space because the waters inundated the várzea (floodplain) areas of Natividade da Serra, which had traditionally been cultivated with species adapted to them. Currently, according to information obtained through the interviews, the local population needs to purchase agricultural products from neighboring cities to satisfy its demand.
CONCLUSIONS
The transformation of space and its impact on the collective resulted in changes in the daily actions and interactions of the social groups involved. The disappearance of fertile cultivation areas and the modification of the aquatic habitat reduced the traditional modes of subsistence of the majority of the local population, which resulted in increased dependence on official jobs regulated by labor laws, with salaries provided by the municipality. The adoption of this type of work altered the perception and use of time, which was reflected in several spheres of human activity and transformed ways of life.
The spatial and environmental modifications, imposed abruptly and in the absence of dialogue, coexisted with the arrival in the new city of new members from diverse segments of the regional population. This changed the previous sociological profile of the local population, which had been historically constructed in an autonomous and empirical manner. This research was not able to detect any record of explicit, organized resistance from autochthonous groups against the paradigm of progress promoted by the governmental authorities during the period of the civil-military developmentalist dictatorship.
The implications of this radical modification of space were perceived in the new configurations of collective identity. The population was forced to initiate a new process of creating a collective history, and it developed forms of sociability and intervention in space, which in some cases resulted in its abandonment, through its own perception of developmentalist progress. | 2020-12-03T09:02:23.214Z | 2020-11-30T00:00:00.000 | {
"year": 2020,
"sha1": "5b2c03ccecf3ac9ce5e9ac54a7596dd9f686757a",
"oa_license": "CCBY",
"oa_url": "http://www.ambi-agua.net/seer/index.php/ambi-agua/article/download/2283/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6b18eed85dc0741af7495c9c43781cbba1eb9626",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"Geography"
]
} |
17356909 | pes2o/s2orc | v3-fos-license | Researchers’ Individual Publication Rate Has Not Increased in a Century
Debates over the pros and cons of a “publish or perish” philosophy have inflamed academia for at least half a century. Growing concerns, in particular, are expressed for policies that reward “quantity” at the expense of “quality,” because these might prompt scientists to unduly multiply their publications by fractioning (“salami slicing”), duplicating, rushing, simplifying, or even fabricating their results. To assess the reasonableness of these concerns, we analyzed publication patterns of over 40,000 researchers that, between the years 1900 and 2013, have published two or more papers within 15 years, in any of the disciplines covered by the Web of Science. The total number of papers published by researchers during their early career period (first fifteen years) has increased in recent decades, but so has their average number of co-authors. If we take the latter factor into account, by measuring productivity fractionally or by only counting papers published as first author, we observe no increase in productivity throughout the century. Even after the 1980s, adjusted productivity has not increased for most disciplines and countries. These results are robust to methodological choices and are actually conservative with respect to the hypothesis that publication rates are growing. Therefore, the widespread belief that pressures to publish are causing the scientific literature to be flooded with salami-sliced, trivial, incomplete, duplicated, plagiarized and false results is likely to be incorrect or at least exaggerated.
Introduction
Ever since the early 20th century, academic lives and careers have been guided, first in the United States and later in other countries, by a "publish or perish" philosophy whose effects are increasingly controversial [1]. Already in the 1950s, academics were publicly arguing that, whilst promptly publishing one's results is a duty for all researchers, setting explicit productivity expectations was a recipe for disaster [2]. The debate escalated after the 1980s, with the increasing adoption of formal performance evaluation practices and the growing use in such contexts of quantitative metrics of productivity and citation impact, such as the Impact Factor and the h-index [3][4][5].
Today, whenever problems of contemporary science are discussed, it is commonplace to suggest that scientists, being pressured to pad their CVs with publications, might be increasingly fractioning their results (i.e., "salami slicing" data sets to the smallest publishable unit), surreptitiously re-using data in multiple publications, duplicating their papers, publishing results that are preliminary or incomplete, underemphasizing limitations, making exaggerated claims and even resorting to data fabrication, falsification and plagiarism e.g. [6][7][8][9][10]. Research evaluation policies in scientifically prominent countries have reacted to these concerns by reducing the rewards for productivity. The German Research Foundation (DFG), for example, has imposed a limit on the number of papers that researchers can include in their CVs in support of a grant application [11]. In The Netherlands, the national research assessment exercise has revised its policies and dropped the "productivity" category, which counted the total number of publications, from its ranking system [12].
Evidence that modern science suffers from over-productivity, however, is mostly anecdotal or indirect. Scientific journals are said to be increasingly flooded with submissions, but such estimates are not adjusted for various confounding factors, and in particular the fact that the population of scientists is growing [13,14]. Several surveys have probed scientists' perceptions of pressures to publish, and found them to be high in all disciplines and particularly high in the United Kingdom and North America, but it is unclear how such perceptions reflect actual behaviour, or even if they may represent a self-fulfilling prophecy [15][16][17]. Meta-analyses suggest that positive outcome bias in the literature has increased in recent decades and might be higher in academically productive areas, especially in the United States [18][19][20][21], but the causes underlying these patterns remain highly speculative.
To the best of our knowledge, no study has conclusively assessed whether individual publication rates of scientists have actually increased as commonly speculated. In particular, whilst a rise in collaborations has been amply documented across the sciences e.g. [22] and whilst some evidence suggests that individual scientists are publishing more studies overall, at least in the physical sciences [23], no study has verified whether researchers are actually publishing more on an individual basis and independent of their specific collaboration patterns. This gap in the literature is not surprising, because assessing individual publication rates in the literature is technically very challenging.
By sampling researchers whose names included three initials (e.g., Vleminckx-SGE), we were able to analyse publication patterns of individual scientists who operated during the 20th century in all disciplines covered by the Web of Science database. Since individual careers vary widely in length, and since pressures to publish are supposed to be highest at the beginning of scientific careers, we limited all measurements to the first 15 years of publication activity. We will refer to this category as "early-career researchers". Since our literature database included studies up to the year 2013, we could retrieve the publication patterns of 98 cohorts of early-career researchers, whose first year of publication ranged between 1900 and 1998. Since publications are assumed to be crucial to survival in academia, we excluded all authors who had ceased publishing before the end of their early-career period, because these might not be representative of successful and/or active scientists. Moreover, to minimize name disambiguation errors (see Materials and Methods), analyses were limited to researchers working in the United States, Canada, Europe-15 countries, Australia and New Zealand. Our final sample thus included 41,427 individuals.
Materials and Methods
To identify individual authors unambiguously, we retrieved from the Web of Science database (henceforth, WOS) all authors whose names included three or more initials (surname plus initials for first name and at least two middle names; for example, Vleminckx-SGE), a combination that greatly reduces author identification errors and makes our results conservative (see section Disambiguation error risk).
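As a rough illustration of this sampling rule, a filter of this kind could be written as follows (a minimal sketch, not the study's code; the WOS-style "Surname-ABC" name format and the regular expression are assumptions):

```python
import re

# Keep names of the form "Surname-ABC": a surname followed by a hyphen
# and three or more uppercase initials, e.g. "Vleminckx-SGE".
THREE_PLUS_INITIALS = re.compile(r"^[A-Za-z'\-]+-[A-Z]{3,}$")

def has_three_initials(wos_name: str) -> bool:
    """Return True for surname-plus-three-or-more-initials names."""
    return bool(THREE_PLUS_INITIALS.match(wos_name))

names = ["Vleminckx-SGE", "Smith-J", "Garcia-Lopez-AB", "OBrien-JKL"]
print([n for n in names if has_three_initials(n)])
# ['Vleminckx-SGE', 'OBrien-JKL']
```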
We retrieved from the WOS all records that had been co-authored by any one of these names. In order to identify early-career researchers, we selected researchers who had co-authored at least two papers and whose papers spanned a period of at least 15 years, starting from the year of the first publication. We will refer to members of this subset as "early-career" authors.
The publication lists of each early-career author were analysed in order to extract the following information:
• year when first paper was published;
• total number of papers co-authored during the 14 years following the year of first paper;
• average number of co-authors in these papers;
• total number of citations accrued by these papers, counted at the time of data retrieval (i.e., December 2014);
• average 5-year impact factor for these papers, normalized by discipline;
• number of papers in which the name of the author is first in the co-author list;
• most likely country of activity of the author.
Records in the WOS only started linking each author in a paper to his/her individual address in recent years. Earlier records only include a list of all addresses, as they appear in the paper. In order to match addresses to authors with certainty, therefore, we recorded the first country of affiliation listed in papers in which the researcher was first author. If more than one country was associated with this name over the publication period, the country was attributed by majority rule, as sketched below. Authors for whom no country could be inferred based on this method (in particular, because they had never published an article as first author) were placed in an "unknown" category, which in the full-text figures is aggregated with the "other country" category.
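A minimal sketch of this majority-rule attribution (the function name and input format are hypothetical; ties are resolved arbitrarily here, which the paper does not specify):

```python
from collections import Counter

def infer_country(first_author_countries: list[str]) -> str:
    """Attribute a country by majority rule over first-author affiliations."""
    if not first_author_countries:
        return "unknown"  # never published as first author
    return Counter(first_author_countries).most_common(1)[0][0]

print(infer_country(["USA", "USA", "UK"]))  # 'USA'
print(infer_country([]))                    # 'unknown'
```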
Disambiguation error risk
The likelihood that two authors share the same surname as well as the initials of first and two middle names is extremely low. In theory, the number of possible combinations could be as high as 26^8, assuming an average surname length of five letters and 26 letters in the alphabet. In practice, however, surnames tend to be country-specific, and some countries are more likely to have two middle names as well as shorter surnames, leading to a higher theoretical homonymy rate, i.e., multiple researchers sharing the same surname and initials. Previous evidence suggests that the error rate tends to be higher for authors from Latin-American and South-East Asian countries, and particularly from China, whose surnames are transliterated into an alphabet that renders them highly similar. Therefore all our main analyses were limited to authors that our algorithm (see above) attributed to countries from North America, Europe-15, Australia and New Zealand.
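Spelling out the arithmetic behind this upper bound (under the stated assumptions of a five-letter surname plus three initials, each character drawn from a 26-letter alphabet):

$$26^{5} \times 26^{3} = 26^{8} \approx 2.1 \times 10^{11}$$

distinct surname-initial combinations, which is why exact homonymy is expected to be rare in theory.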
It is important to emphasize that disambiguation errors in our study are virtually unidirectional, and thus render our analysis very conservative. The main type of error that our sampling strategy might encounter is the merging of two distinct authors into one, which would inflate the apparent productivity associated with that name. Because the population of scientists has grown steadily over the century, the likelihood that two or more scientists share the same name has also grown. Therefore, non-disambiguation errors are likely to have increased across our time series, leading to, if anything, a spurious increase in the apparent productivity of authors. The opposite error, in which the bibliography of a single individual is incorrectly split in two, may only occur when authors change their names or surnames over time, which is a relatively rare event. Therefore, the non-increasing trends we report for productivity are, if anything, likely to be over-estimating the true changes in individual productivity over time.
To estimate the rate of disambiguation error in our sample, we retrieved 50 names at random, and examined the coherence of their publication lists. This analysis showed that 47 researchers had no homonyms, and that the retrieved list of papers did not include any misattributed papers. Two researchers had one homonym to which one paper could be attributed, and one researcher name included papers from three distinct researchers.
Representativeness of the sample
We probed the representativeness of our sampling strategy by conducting two tests. First, we assessed the prevalence of three-initialled names in the Web of Science. Since addresses in the Web of Science were not linked to author names until recently, we had to limit the analysis to first-author names, which could then be unambiguously associated with the first affiliation listed. Thus we looked at how the proportion of three-initialled individuals among first-author names associated with each country varied from one year to the next. This proportion varied significantly across the countries included, both in magnitude and in temporal change, showing either increases or decreases over the years depending on the country (S1 Fig, numerical analyses in S1 File). These temporal trends are uncorrelated with the publication patterns found by our study, which are unidirectional instead. For example, the strongest declines in this proportion over the years were observed in Portugal and the United Kingdom, i.e., in countries that exhibit widely different trends in fractional and first-author publication rate (S1 File, and see Results).
Second, we assessed whether the names of the authors in our sample exhibited patterns that might suggest a biased sampling. In the supplementary text file we report the names of the 20 most productive and least productive authors in our sample, for United States and for the United Kingdom, for the years 1978, 1988, 1998. No overabundance of foreign names, or any other clear difference between any categories or years was evident. For example, South East Asian names were rare in all samples, independent of productivity level or year of sampling (S1 File). This suggests that our sampling strategy produced a representative sample of researchers operating in any given country or discipline.
Analyses
Temporal trends for all parameters examined were analysed in a generalized linear model, in which the year of the researcher's first publication was the independent variable, and quantities measured over the researcher's publications in the subsequent 15 years constituted the dependent variables. The distribution of errors was modelled as quasi-Poisson for count data (i.e., number of papers, average number of co-authors, citations) and Gaussian for the remaining variables. These choices were made based on theoretical considerations dictated by the nature of the data, corroborated by an examination of the shape of the distribution of data after the necessary transformations. There was no overlap in authorship amongst the publication profiles of the sampled authors, fully justifying an assumption of independence of errors. All analyses involving cross-century patterns fitted a cubic polynomial, whereas those limited to the post-1980 period fitted a first-degree polynomial (i.e., a univariate, linear regression). Cubic polynomials were preferred over more complex models because the scope of the analysis is primarily to illustrate long-term trends. The distribution of residuals was assessed for all models and was deemed sufficiently close to a normal distribution for the purposes of the analyses, which are merely descriptive and based on a very large sample, making our results not crucially dependent on null-hypothesis testing.
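A minimal sketch of this kind of model fit on synthetic data, using the statsmodels package (column names and data are hypothetical, not the study's; quasi-Poisson is obtained here by estimating the dispersion from the Pearson chi-square, scale="X2"):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical cohort: year of first publication and 15-year paper count.
rng = np.random.default_rng(0)
df = pd.DataFrame({"first_pub_year": rng.integers(1900, 1999, size=500)})
df["n_papers_15yr"] = rng.poisson(4 + 0.03 * (df["first_pub_year"] - 1900))

# Cubic polynomial in (centered) year, quasi-Poisson error structure.
year = df["first_pub_year"] - df["first_pub_year"].mean()
X = sm.add_constant(np.column_stack([year, year**2, year**3]))
model = sm.GLM(df["n_papers_15yr"], X, family=sm.families.Poisson())
result = model.fit(scale="X2")  # Pearson-based dispersion = quasi-Poisson
print(result.params, result.scale)
```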
To assess whether analytical choices might have concealed a different trend from the one we report, we plotted the corresponding "raw" average values of fractional and first-author publication rate for each year (S2 Fig). Trends in means closely match those suggested by the regression models and confirm that average fractional and first-author publication rates in all disciplines cannot be said to have increased over the century.
Database coverage of the literature
The WOS database only captures a portion of the literature, so we examined the possibility that the trends observed could be an artefact caused by decreasing coverage of key journals. On the contrary, we found that the WOS coverage has risen steadily for all disciplines, at rates that do not mirror the patterns we report (S3 Fig). If scientists were actually publishing more papers, they would have to be doing so in a decreasing number of journals, and precisely in those not included in the database, which is very unlikely.
The lower coverage by the WOS of the older literature also implies that, if anything, our study might be underestimating the actual publication rate of older authors, making our test once again conservative with respect to the hypothesis of growing publication rates.
Results
Our sampling procedure yielded an initial pool of 1,219,067 records of articles authored by 543,789 three-initialled names, of which 70,310 had authored at least two papers during fifteen years and 41,427 could be ascertained to have worked in North America, Europe-15, Australia or New Zealand. The average number of papers published by early-career researchers has been stable or increasing for all disciplines during the 20th century, and has increased for most disciplines after the year 1980. The number of co-authors appearing on these papers has also increased, and at a visibly faster rate than the number of publications. Scientists in all disciplines went from having almost no co-authors at the start of the century to having, by the end of it, on average between 2 and 7 in all disciplines except the Arts & Humanities (Fig 1a and 1b; numerical results for this and all subsequent figures and analyses mentioned in the text are reported in S1 File).
Scientists' number of collaborators significantly affected their productivity and impact. The relationship is non-linear, but the "optimal" average number of collaborators is non-zero in all disciplines, and grows along a gradient of "hardness" of subject matter [24], i.e. from the arts and humanities to the physical sciences (S4 Fig). Once publication rates were adjusted for co-authorship, they were no longer increasing. Fractional research productivity, calculated by dividing the total number of papers published by the average number of co-authors, was highest in the first half of the 20 th century and declined overall. After the year 1980, fractional productivity has been stable or decreasing for most disciplines, the few exceptions showing modest growth rates (Fig 2a). A multivariable model regressing total productivity on year of first paper and adjusting for co-authorship yielded a substantially similar picture, by suggesting that disciplines underwent, at best, an extremely small increase of fractional productivity (i.e. less than 1% per year) since the 1980s (S1 File).
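Written out under the definition above, the fractional measure is simply

$$\text{fractional productivity} = \frac{N_{\text{papers}}}{\bar{c}},$$

where $N_{\text{papers}}$ is the total number of papers published in the early-career window and $\bar{c}$ is the average number of co-authors on those papers.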
The number of papers signed as first author, a position that in most disciplines is bestowed on the team member who contributed most to the research [25], also declined. Researchers who started their careers in 1998 signed as first authors, on average, two papers fewer than their colleagues in 1950. Between 1980 and 1998, the number of first-authored papers was non-increasing for all disciplines except Psychology, in which the increase was very modest (less than one extra paper over fifteen years), and Chemistry and Earth and Space Sciences, which started from the lowest rates and have grown rapidly, i.e., from around three to five and four to six first-authored papers, respectively, over 15 years (Fig 2b). If the analyses were repeated for the number of (multi-authored) papers published as last author, very similar trends were observed (S1 File).
Countries, as captured by our sampling procedure, differed significantly in their average publication and co-authorship rates, but all underwent some increase in both parameters over the years (S1 File). Across geographic regions, fractional and first-author publication rates have followed similar changes over time (Fig 3a and 3b). However, we observed significant variability between countries. At one extreme were countries that exhibited the highest levels of fractional and first-author publication rates throughout the century, in particular the United States, United Kingdom and Germany. These showed a decrease or no increase (e.g., yearly linear slope ± standard error for first-authored papers, from 1980 onwards, respectively: -0.004 ± 0.001, -0.000 ± 0.002, -0.002 ± 0.005). At the other extreme are countries that recorded lower publication rates during most of the century and underwent a rapid increase in recent decades. These include Belgium, Portugal, Spain and Italy, for which the rate of first-author publication has tripled since the 1980s (e.g., yearly linear slope for first-authored papers, from 1980 onwards, respectively: 0.016 ± 0.007, 0.008 ± 0.004, 0.01 ± 0.003, 0.032 ± 0.008; for numerical results of all countries see S1 File).
Multiple secondary analyses suggested that the patterns observed in this study are genuine, and not artefacts generated by our sampling or analytical strategy. The use of three-initialled names greatly minimized disambiguation errors and, to the extent that such errors affected our sample, they made our analysis conservative with respect to the hypothesis (see Methods). The relative frequency of three-initial names recorded in the Web of Science varied by year and country, but these trends were uncorrelated with the trends observed in our study (S1 Fig, see Methods). Furthermore, names of the 20 most and least productive authors from the United States and United Kingdom in our sample did not exhibit any obvious pattern (e.g., an overrepresentation of Asian names) that might suggest the presence of bias in our sample (see Methods, S1 File). The Web of Science database is, like all similar databases, an incomplete representation of the scientific literature, but temporal changes in its coverage are uncorrelated with our findings, and conservative with respect to the hypothesis (S3 Fig, see Methods). We used linear and cubic-polynomial models to illustrate temporal trends, sacrificing detail for the sake of simplicity, but the same fundamental trend (i.e., no increase over time) was observed if a simple mean is calculated year by year (S2 Fig). Moreover, we repeated all analyses using alternative "early career" time windows. When the early-career window was set at 25 years (total N = 21,431 authors), results were very similar to those obtained with a 15-year time window. When the early-career window was limited to the first 8 years of publication activity (N = 51,484), results showed a pronounced decline in individual publication rates (numerical results are reported in S1 File), suggesting that our main results are actually conservative with respect to the hypothesis that early-career researchers are publishing more papers.
Discussion
We analysed individual publication profiles of over 40,000 scientists whose first recorded paper appeared in the Web of Science database between the years 1900 and 1998, and who published two or more papers within the first fifteen years of activity, an "early-career" phase in which pressures to publish are believed to be high. As expected, the total number of papers published by scientists has increased, particularly in recent decades. However, the average number of collaborators has also increased, and this factor should be taken into account when estimating publication rates. Adjusted for co-authorship, the publication rate of scientists in all disciplines has not increased overall, and has actually mostly declined. Co-authorship might not fairly reflect actual contribution, because authorship attribution practices might have changed over time, and roles that previously were not rewarded with authorship (e.g., senior scientist, mentor, lab director, technician, statistician) now might be. However, even if we ignored co-authorship and measured the number of papers published as first author, a position that in most disciplines indicates who contributed most to the work [25], we observed no significant increase overall. Early-career scientists today publish, as first authors, roughly one paper fewer than their colleagues in the 1950s.
These results are robust to methodological choices and are conservative with respect to the hypothesis of growing publication rates. Therefore, the widespread belief that pressures to publish are causing the scientific literature to be flooded with salami-sliced, trivial, incomplete, duplicated, plagiarized and false results is likely to be incorrect or at least exaggerated.
If researchers across all disciplines have responded to pressure to publish at all, they might seem to have done so primarily by expanding their network of collaborations within and outside their institution (Fig 1b), thus obtaining (co-)authorship on a higher number of papers with the same amount of research effort. Not all collaborations are alike, and it is possible that specific types of collaboration (e.g. long-distance collaboration versus within-lab collaboration) might have different effects on publication rates. However, it is intuitively clear (and supported by our own data, see S4 Fig) that researchers with multiple collaborators are able to share multiple papers and thereby increase their overall list of publications. Since neither productivity nor impact are typically calculated fractionally by current bibliometric tools, expanding one's range of collaborations is a virtually cost-free strategy against pressures to publish, and was openly recommended as such in the literature e.g. [10].
There is no denying that co-authorship has grown primarily out of genuine scientific necessities linked to the growing complexity of the phenomena studied. The fact that co-authorship started growing earlier in the physical sciences [26], and our finding that, moving from the humanities to the physical sciences (i.e., from low- to high-consensus disciplines [24]), the optimal number of co-authors increases (S4 Fig), support this hypothesis. Co-authorship might also have increased thanks to improvements in long-distance communication technology, as well as growing support for interdisciplinary research. However, the extremely rapid rise in co-authorship observed in biomedical research and other areas suggests that factors other than the growing complexity of science are also at play [27,28]. In particular, we hypothesize that performance evaluation policies might represent one of the drivers of increased co-authorship, and therefore that questionable co-authorship practices may be a consequence of pressures to publish that is significantly overlooked by researchers and policymakers.
The pressures that scientists have reported feeling in numerous surveys and interviews e.g. [16,17] are likely to be genuine. Between-country comparisons made in this study offer preliminary support for this view. Countries that in our study have higher fractional and first-author productivity, in particular the United States and United Kingdom (see S1 File), are also those reporting higher perceived pressures to publish [16,27]. However, we found no evidence that the output of scientists in these "high-pressure" countries has increased over time, and therefore no indication that scientists in these countries are increasingly fragmenting their output or responding in any other negative way to pressures to publish. This null finding is in agreement with a previous analysis of corrections and retractions, which found no evidence that research integrity might be lower in scientists who publish at higher rates, in high-impact journals, or who work in countries where research performance is evaluated quantitatively [29]. The significant increase in fractional and first-author productivity that we observed in South-European countries (i.e., Italy, Spain, Portugal) is likely to reflect not a net increase in productivity, but rather a shift from non-English national journals to English-language international journals indexed in the Web of Science (WOS), a trend largely driven by policies aimed at measuring and maximizing research impact [30]. This trend might be especially pronounced in the social sciences, which traditionally had a national and local focus. Therefore, reports suggesting that journals are flooded with growing submissions e.g. [13,14] may not be false, but might reflect the growing numbers of researchers in the social sciences and in developing countries who choose to publish in international journals. Between-country comparisons made in this study must, however, be interpreted with caution, because our sampling strategy is probably not random with respect to particular demographics or cultural groups (see Methods). Therefore, future research should confirm our observations about national differences and further examine the link between publication patterns, nationality and research policies.
We found no evidence that our sampling strategy introduced bias into the results, but we cannot exclude that three-initialled names might offer an unbalanced representation of the scientific community. For example, our sampling method could under-represent women, who are more likely than men to change their surname, or it could over-represent Catholics, who in some countries are more likely to bear three-initialled names. Assessing whether specific cultural groups within a country have different levels of productivity is a fascinating hypothesis to test in future work, but it is unlikely to significantly limit this study's conclusions, because these are not based on between-country (or between-discipline) comparisons but on long-term trends that are similar across countries and disciplines. For our conclusions to be flawed, in other words, one would have to assume that all three-initialled names around the world represent a similar group of people, who are less likely to respond to pressures to publish (or to engage in questionable research practices) than those with two-initialled names, a hypothesis that appears rather unrealistic.
Even if, as our data suggest, scientists are not publishing papers at higher individual rates, they might still be experiencing genuine and growing pressures. For example, scientists are likely to experience increasing pressures to write grant applications, reports, syllabi and other material. This would imply that, over time, scientists have been compelled to dedicate a smaller proportion of their time to research and publication activities. It is also possible that the average time and effort required by each paper has increased over time, putting successive generations of scientists under growing pressure to maintain a high publication rate. These "increasing workload" and "increasing research effort" hypotheses should be tested in future studies. Nonetheless, these hypotheses would directly contradict the notion, tested in this study, that scientists are increasingly publishing fragmented and inconclusive results, unless one supposed that scientists are publishing papers that are both fewer in number and poorer in content.
If, as our data suggest, contemporary science is not suffering from a salami-slicing of papers, then current policies aimed at countering this problem are likely to be ineffective. Indeed, such policies could have negative consequences, because curtailing the list of publications submitted in support of grant applications (e.g., [11]), or ignoring any consideration of productivity when evaluating institutions' research performance (e.g., [12]), might put scientists under even greater pressure to boost their citation scores, journal impact factor profiles and visibility in the mass media, and to "salami-slice" their collaborations [31], all at the possible expense of scientific quality and rigour. | 2018-04-03T01:49:28.506Z | 2016-03-09T00:00:00.000 | {
"year": 2016,
"sha1": "48505ca446b4036b6c4c7e305135686f53be8b8b",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0149504&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "48505ca446b4036b6c4c7e305135686f53be8b8b",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
253455123 | pes2o/s2orc | v3-fos-license | Opinions of future biology teachers about their competencies in the field of education in the context of multilingual education
This research aimed to determine the opinions of future biology teachers about the importance of language in biology teaching and about effective techniques in biology teaching. As students with different mother tongues have begun attending public schools, difficulties with the language of instruction have emerged, making education more difficult; students with different mother tongues pose challenges for teachers, who may struggle to fulfil their duties. For this reason, this study was needed to determine the readiness of pre-service teachers studying in the field of biology for the problems they may encounter in their professional lives. The opinions of 85 senior students of the biology department, in their last year of university, were consulted. The research questions were prepared by the researchers and finalised by taking the opinions of experts. The findings obtained in this study, which was carried out using a qualitative research design, were thematised and explained in detail using the content analysis method. The results show that the senior students of the biology teaching department do not have sufficient knowledge of multilingualism and that they need training on education and teaching strategies.
Introduction
It is very difficult to help students become scientifically literate in areas where scientific development is rapid, facts are complex and the amount of accumulated knowledge is very high (Duncan, Rogat, & Yarden, 2009). Although the idea of biological inheritance is as old as humanity's domestication of animals and grains, the treatment of genetics as a discipline started in the 1900s. The pathways of classical genetics come from fields of study with divergent purposes related to evolution, cytology, embryology, reproduction and hybridisation, but these have been considered aspects of genetics since the 20th century (Carlson, 2004). There are many historical models in genetics, which Carlson (1966) calls 'bogeyman models' because no single, simple and clear idea of the function of genes existed in the scientific community at any given time. Gericke and Hagberg (2007) suggested that there are five different historical models of gene function that differ in their epistemological features: (a) the Mendelian model, (b) the classical model, (c) the biochemical-classical model, (d) the neoclassical model and (e) the modern model. These historical models are often superficial or imprecise, yet they can be useful tools for both the scientific community and school science education in developing explanations of the world. Models can be used explicitly or implicitly in science education in relation to the curriculum, textbook or classroom environment in a retrospective context (Gericke, 2008).
To progress in learning, students should encounter explanations that give an idea of the nature of a particular phenomenon and that can be applied to further phenomena. Teachers can use current socio-scientific topics to teach the related science content and the nature of science (Lederman, Antink, & Bartos, 2014; Thörne & Gericke, 2014). The use of technology in education is very important: the methods and techniques learned during training determine whether they will be used in future professional life (Uzunboylu et al., 2022). The task of teachers is to provide instruction, yet studies show that neither teachers nor students typically hold well-informed views on teaching (Driver, Leach, Millar, & Scott, 1996; Lederman & Lederman, 2004; Schwartz et al., 2002). However, the research base here is noticeably smaller than that for the nature of science (NOS), owing to the lack of an easily available or frequently used instrument similar to the Views of Nature of Science questionnaire (VNOS) (Lederman, Abd-El-Khalick, & Bell).

Tekkaya, Çapa, and Yılmaz (2000) determined that pre-service biology teachers have learning difficulties and misconceptions in many subjects. The same researchers reported that students held many misconceptions about plant biology, ecology, the digestive system, respiration, excretion, enzymes, diffusion and osmosis, cell division and classification. They concluded that teacher candidates should be informed about the identified misconceptions, as well as about the tasks, content, forms, resources, methods and technologies of education for students in this system and the functions of future biology teachers within the structure of their vocational education.

According to research, genetics has long been recognised as one of the most important components of the basic biology curriculum, and one that is difficult to learn and teach (Gericke & Smith, 2014; Hickey, Kindfield, Horwitz, & Christie, 2003; Lewis & Wood-Robinson, 2000; Marbach-Ad & Stavy, 2000). Three main reasons have been outlined for why studying modern genetics is difficult. First, reasoning in modern genetics requires an understanding of chemical and physical interactions at the molecular level, which adds an interdisciplinary dimension to the field; this dimension adds complexity for students who do not understand the chemical structure of biological molecules and basic knowledge of atoms and molecules. Second, the cellular and molecular processes and entities involved in genetic phenomena are invisible and experimentally inaccessible to students (Marbach-Ad & Stavy, 2000).

Roseman, Stern, and Koppal (2010) argued that understanding the molecular basis of inheritance first requires a consistent understanding of the two main functions of DNA: 1) to determine traits in organisms, and 2) to transfer this information from one generation to the next. Accordingly, students should know these ideas, relate them to one another, and be able to use them at different levels of biological organisation. Studies have also been conducted on the teaching of genetics in biology education. Duncan, Castro-Faix, and Choi (2014) reported on two frameworks developed as genetics learning progressions (Duncan & Hmelo-Silver, 2009; Roseman, Calwell, Gogos, & Kurth, 2006).
Based on these frameworks' main approaches, they first conducted a study on whether Mendelian genetics or the central dogma of molecular biology should be taught first.
Researchers have emphasised the importance of language in science education for over 40 years (Mortimer & Scott, 2003). Students especially experience problems with technical and non-technical vocabulary related to logical connections (Zhang & Lidbury, 2012). Here, language is not only a means of conveying different meanings but also part of creating the meaning itself (Thörne, Gericke, & Hagberg, 2013). In order to learn scientific content, it is necessary first to learn the particular language of science. Accordingly, it is necessary to raise teachers' awareness of the importance of language in science teaching. Frequent, everyday misuse of terminology in science lessons, where conceptual differences abound, causes students to have difficulties relating concepts such as chromosome, gene, genetic information and allele. Information about such concepts should be presented to students in a consistent manner, and it is important to consider the epistemological features of the various disciplinary contexts in which a concept is used in genetics teaching (Flodin, 2009).
Research purpose
This research aimed to determine the opinions of future biology teachers about the importance of language in biology teaching and about effective techniques in biology teaching. Addressing the problems experienced in education requires addressing gaps in teachers' qualifications, and biology is a branch in which experiments are carried out frequently. As students with different mother tongues have begun attending public schools, difficulties with the language of instruction have emerged, making education more difficult and challenging teachers in fulfilling their duties. For this reason, this study was needed to determine the readiness of pre-service teachers studying in the field of biology for the problems they may encounter in their professional lives. With the results of this research, the kinds of problems future biology teachers will experience in their professional lives can be anticipated, and necessary precautions can be taken accordingly.
Purpose of the study and research questions
According to the purpose, the following research questions are asked:
1. How do you make biology concepts culturally friendly and understandable to linguistically diverse students?
2. What are some of the effective teaching strategies and techniques?
3. What are your views on the academic language you use in teaching and language learning for students?
Method
With students from different language backgrounds entering public schools, difficulties with the language of instruction have emerged, which makes education more difficult, and teachers of students with different mother tongues may struggle to fulfil their duties. For this reason, qualitative interview and document analysis techniques were used in this study, conducted to determine the readiness of pre-service biology teachers for the problems they may encounter in their professional lives. A descriptive research approach was carried out with the interview method, one of the qualitative research methods. Tekindal and Uguz (2020) stated that qualitative research is a research model that helps us understand participants' own perspectives on, and interpretations of, a situation or topic (Mtemeri, 2022).
Data collection tools
A demographic information form prepared by the researchers and a semi-structured interview form consisting of open-ended questions were used to collect the research data. After the research questions were prepared, they were finalised by three experts in the field, and the three open-ended interview questions were administered to the students. The open-ended questions were prepared based on the literature and the researchers' own experiences.
Research group
This study, conducted with students of the biology teaching department at a university, is a case study with a methodological basis. It was conducted with students studying at a university with a biology teaching department in the spring semester of 2021-2022. Eighty-five senior students studying in the biology department participated in the study on a voluntary basis. The questions were prepared in the form of semi-structured interview questions, and the results were analysed in detail with the content analysis method. Through the survey, the students' opinions about their competence in teaching in the field of biology were obtained. Regarding the demographic information of the senior students, there were 53 female students and 32 male students. Regarding the age range of the participants, 24 students were in the 18-20 age range, 35 students in the 21-24 age range and 26 students aged 24 and above. Regarding the findings on teaching biology concepts in a linguistically accessible way, the majority of the students stated that the method and technique should be chosen well. Likewise, the majority of students said that teaching by establishing similarities makes teaching easier. Twenty-two students argued that experiment-based teaching should be emphasised.
Findings on teaching biology concepts to culturally and linguistically diverse students
Opinions of some of the students are as follows: 'Many of the students I observed while doing internships have misconceptions. It can be said that the reason for this is the language used and the textbooks. I can say that the methods chosen when teaching in a language that students find difficult are very important'.
'Teaching becomes easier by establishing similarity. Especially when explaining the concepts used in the content of biology, which shares concepts with fields such as physics and chemistry, it is very important that they are explained in the same way in other courses.' Regarding the methods and techniques favoured by the senior biology students, 42 students cited the experimental method, 20 technology-supported education, 12 cooperative learning and 1 cognitive learning.
Findings on effective teaching strategies and techniques
Opinions of some of the students are as follows: 'Teaching biology is a course in which experiments are predominant. The content in biology lessons is empirical. It includes information encountered in the everyday outside environment. For this reason, experiential-based teaching is very important'.
'When we look at teaching in our country, a teacher-centred approach is generally used in biology teaching. Teacher-centred teaching makes students passive and means that their learning is not permanent. For this reason, it is important to activate students by using new techniques. Cooperative learning is one of these methods. With student-centred teaching, students can both learn easily with each other and use what they have learned in their daily lives. The method must be chosen very well.' Regarding the academic language used in teaching and views on language learning, 32 of the future biology teachers stated that the academic language should be made easier to understand, the majority stated that it is simply difficult, and 23 stated that it contains too many theoretical terms.
Findings regarding the academic language used in teaching and views on language learning
Opinions of some of the students are as follows: 'Academic language is very important. While using these languages, there may be semantic confusion, that is, conceptual confusion. I can say that this situation is especially related to language problems in biology lessons. Learning becomes very difficult when the terms they learn outside and the terms in the academy are different'.
'I think it's awareness. With social responsibility projects, we can gain knowledge in many unknown areas'.
To prevent problems in their future professions, the shortcomings in the education pre-service teachers receive at university should be eliminated so that they are ready for the profession. The results of this study, which aimed to determine the opinions of future biology teachers on the importance of language in biology teaching and on effective techniques in biology teaching, indicate confusion about concepts and that the participants do not feel ready for the profession.
When the results concerning biology concepts among the students studying in the biology department are examined, the majority of the students state that the method and technique should be chosen well. This result is very important: methods and techniques are the most important element in providing teaching. Likewise, the majority of students say that establishing similarities facilitates teaching. Some students state that experiment-based teaching should be emphasised; concepts can become more understandable by experimental means. Ürek, Kayalı, and Tarhan (2002) stated that such learning among students is personal and can be transferred to teaching. Tekkaya et al. (2000) likewise established that pre-service biology teacher candidates hold misconceptions in many subjects in the context of the biology course.
Method and technique are the most important elements of education. When we look at the findings of the final-year biology students about the methods and techniques used, the answers were the experimental method, technology-supported education, cooperative learning and cognitive learning. This situation shows that the pre-service teachers do not have sufficient knowledge about method and technique. In the relevant literature, the terms 'method' and 'technique' are often confused with each other: a method is generally defined as the shortest path to the destination, while a technique is the way the teaching method is applied in practice, or a set of procedures (Aydin, Saribaş, Özalp, & Yilmaz, 2021). Kamalov, Saipov, and Kamalov (2022) stated in their study that future teachers have deficiencies in methods and techniques and that the education they receive is insufficient. The importance of technology-supported education is supported by other studies: there are studies in which university students' opinions of the technology-supported education they receive at university are positive (Urh, Jereb, Šprajc, Jerebic, & Rakovec, 2022).
Turning to the findings on the academic language used in teaching and on language learning among students who will be future biology teachers, it is concluded that they likewise have comprehension problems. Eliminating these problems in the education they receive at university will prevent problems in their future professional lives. Other studies also support the finding that future biology educators have problems with methodological training (Arbuzova, 2011; Bulavintseva, 2011; Moroz, 2008; Stepaniuk, 2011; Traitak, 2002; Tsurul, 2011). | 2022-11-11T16:16:58.920Z | 2022-10-31T00:00:00.000 | {
"year": 2022,
"sha1": "8468b03cc9544b9f8e637b388b1dd93e9c09afba",
"oa_license": null,
"oa_url": "https://un-pub.eu/ojs/index.php/cjes/article/download/8251/9227",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cb0c8b51525f00800a4e92ae6cc7dafc3862bf42",
"s2fieldsofstudy": [
"Education",
"Biology",
"Linguistics"
],
"extfieldsofstudy": []
} |
16873096 | pes2o/s2orc | v3-fos-license | The influence of mammogram acquisition on the mammographic density and breast cancer association in the mayo mammography health study cohort
Introduction Mammographic density is a strong risk factor for breast cancer. Image acquisition technique varies across mammograms to limit radiation and produce a clinically useful image. We examined whether acquisition technique parameters at the time of mammography were associated with mammographic density and whether the acquisition parameters confounded the density and breast cancer association. Methods We examined this question within the Mayo Mammography Health Study (MMHS) cohort, comprised of 19,924 women (51.2% of eligible) seen in the Mayo Clinic mammography screening practice from 2003 to 2006. A case-cohort design, comprising 318 incident breast cancers diagnosed through December 2009 and a random subcohort of 2,259, was used to examine potential confounding of mammogram acquisition technique parameters (x-ray tube voltage peak (kVp), milliampere-seconds (mAs), thickness and compression force) on the density and breast cancer association. The Breast Imaging Reporting and Data System four-category tissue composition measure (BI-RADS) and percent density (PD) (Cumulus program) were estimated from screen-film mammograms at time of enrollment. Spearman correlation coefficients (r) and means (standard deviations) were used to examine the relationship of density measures with acquisition parameters. Hazard ratios (HR) and C-statistics were estimated using Cox proportional hazards regression, adjusting for age, menopausal status, body mass index and postmenopausal hormones. A change in the HR of at least 15% indicated confounding. Results Adjusted PD and BI-RADS density were associated with breast cancer (p-trends < 0.001), with a 3 to 4-fold increased risk in the extremely dense vs. fatty BI-RADS categories (HR: 3.0, 95% CI, 1.7 - 5.1) and the ≥ 25% vs. ≤ 5% PD categories (HR: 3.8, 95% CI, 2.5 - 5.9). Of the acquisition parameters, kVp was not correlated with PD (r = 0.04, p = 0.07). Although thickness (r = -0.27, p < 0.001), compression force (r = -0.16, p < 0.001), and mAs (r = -0.06, p = 0.008) were inversely correlated with PD, they did not confound the PD or BI-RADS associations with breast cancer and their inclusion did not improve discriminatory accuracy. Results were similar for associations of dense and non-dense area with breast cancer. Conclusions We confirmed a strong association between mammographic density and breast cancer risk that was not confounded by mammogram acquisition technique.
Introduction
Percent mammographic density represents the proportion of stromal and epithelial tissue visible on a mammogram. Mammographic density varies between women and is influenced by age, body mass index (BMI), and some epidemiologic risk factors for breast cancer such as nulliparity and late age at first birth [1,2]. Women in the highest categories of percent density (PD) are at three to five times greater risk of breast cancer relative to those in the lowest category, making it one of the strongest known risk factors for breast cancer [3,4]. The associations with breast cancer are consistent whether density is measured as a qualitative trait (for example, American College of Radiology Breast Imaging Reporting and Data System (BI-RADS) tissue composition assessment) [5,6] or a quantitative trait (for example, computer-assisted thresholding-based methods such as Cumulus) [3,[7][8][9].
Although a substantial number of investigations of mammographic density and breast cancer have been reported [3], the majority were conducted across multiple institutions and consequently used different mammography units. This increases the potential for variation in the density estimates because of several parameters, including the influence of image acquisition. The image acquisition parameters, consisting of compressed breast thickness, compression force, x-ray tube voltage peak (kVp), milliampere-seconds (mAs), and, where applicable, target-filter combination, vary across mammograms to limit radiation and produce a clinically useful image. The kVp is set either automatically when using the automated exposure control (AEC) mode or by the technologist by using a reference or look-up table. The mAs value (or, equivalently, the x-ray production) is controlled by the AEC. The compression paddle setting, determined by the technologist, depends on the breast size as well as the patient's tolerance to compression force. Thus, we expect larger breasts to have larger compressed breast thicknesses. The kVp setting is a positive function of compressed breast thickness. It is well known that larger breasts are more apt to be composed of adipose tissue. Therefore, we hypothesize that both compressed breast thickness and kVp will be associated positively with non-dense area and inversely with PD. Dense breast tissue has greater x-ray attenuation properties than adipose tissue. Therefore, we expect the mAs value to have a positive correlation with dense area and also with PD. Firm or larger breasts (or both) require greater levels of compression force to both spread out and separate the breast tissue in order to maximize the image clarity. We speculate that compression force, though its relationship is harder to predict, is associated positively with non-dense area and inversely with PD.
Given the expected associations of these acquisition parameters with mammographic density measures, we further hypothesized that the acquisition measures confound the density and breast cancer associations. To our knowledge, however, no studies to date have directly evaluated the influence of the different acquisition parameters on the density and breast cancer association. Some studies have accounted for acquisition through calibration (that is, normalizing the inter-image pixel value scale), but the findings are mixed: some show that calibration results in stronger mammographic density and breast cancer associations [10,11], whereas others have shown that calibration does not improve these associations [12,13]. In this report, we examine the association of the acquisition parameters with mammographic density measures and their influence on the density and breast cancer association within a prospective cohort study from a single large breast practice, the Mayo Mammography Health Study (MMHS) Cohort.
Mayo Mammography Health Study eligibility
The MMHS prospectively enrolled patients scheduled for a screening mammogram from October 2003 through September 2006 at the Mayo Clinic in Rochester, MN. The MMHS was approved by the Mayo Institutional Review Board. Women were invited to take part if they were at least 35 years old, residents of Minnesota, Iowa, or Wisconsin (tri-state), and had no personal history of breast cancer. Women scheduled for a diagnostic mammogram (known or suspected breast cancer) were not eligible. Eligible women were mailed an invitation packet consisting of a study brochure, a consent form, a baseline questionnaire, and a permission request form to link to state tumor registries. Out of 49,032 women initially invited, 10,149 were excluded for residence outside of the tri-state area (1,698), mammogram not for screening purposes (that is, a diagnostic mammogram) (6,383), and a personal history of breast cancer (2,068). Of 38,883 eligible women, 19,924 provided written informed consent (51.2% adjusted response rate) (Figure 1). Compared with nonparticipants, participants were younger (11 months on average) and more likely to have ever used post-menopausal hormones (45% versus 33%), to have a first- or second-degree family history of breast cancer (19% versus 16%), to have had more frequent mammograms (47% versus 38% with seven or more mammograms since 1986), and to have a history of breast biopsy (23% versus 20%) (Additional file 1).
Mayo Mammography Health Study questionnaire
All women were asked to complete a written questionnaire that covered mammogram screening behaviors; menstrual and reproductive factors; surgeries of the breast, ovaries, and/or uterus; use of hormone therapies; medical history; family size and cancer history; use of non-steroidal anti-inflammatory medications; use of vitamins and complementary medicines; alcohol and cigarette use; physical activity; current weight and weight history; race; and education. Height and weight were also abstracted from the Mayo Clinic medical record at the medical visit closest in time to each mammogram collected for the study. To identify subjects with prevalent cancer, the medical history section of the questionnaire inquired about previous cancer diagnoses. A total of 2,283 women in the cohort reported having had at least one form of cancer (other than breast cancer) prior to enrollment. This group is excluded for analyses restricted to a 'Healthy cohort' (see Additional file 2 for a listing of prior cancer types self-reported among cohort members). Women with a prior diagnosis of breast cancer were ineligible and excluded earlier from the cohort.
Follow-up
Follow-up for cancer occurrence was performed annually by using a combination of cancer registry data (passive follow-up) and mailed follow-up (active follow-up). All women were linked to the Mayo Clinic Tumor Registry to identify cases of cancer that had been diagnosed or treated (or both) at the Mayo Clinic since enrollment. To identify cancers external to the Mayo system, women who lived in Minnesota, Iowa, or Wisconsin and had provided written consent for linkage to external tumor registries (99.7%) were linked to their respective state tumor registries.
Active follow-up to obtain cancer and vital status was conducted in 2009 and 2010 via mail and telephone with women who had not been back to the Mayo Clinic within 12 months (thus, the medical record would not have current cancer diagnoses) and who either had moved outside Minnesota, Iowa, or Wisconsin (1,755 women, 8.8%) or did not grant consent for registry linkage (62 women, 0.3%). Telephone follow-up was attempted for non-responders to the mailed contact. Thus, women who were eligible for active follow-up were contacted each year unless they were seen at the Mayo Clinic in the prior 12 months. Some women, then, could have been actively followed in one year but not the other. Active follow-up using all possible methods was successful for 83.1% in 2009 and 78.4% in 2010. By using both passive follow-up through the registries where possible and active contact by our staff, we have been able to collect cancer occurrence data on 98.8% of our cohort through 2010 (96,483 person-years).
Person-years of follow-up were computed as the amount of time since completion of the enrollment mammogram to subsequent events that differed depending upon whether the woman remained a resident of the tri-state area (and thus would be passively reported to us by the relevant state tumor registry) or moved outside. Women who resided in Minnesota, Iowa, or Wisconsin over the period were censored in the following order: (a) at the date of diagnosis with breast cancer, (b) at the date of death, or (c) on 17 December 2009. Women who moved out of these three states were censored in the following order: (a) at the date of diagnosis with breast cancer, (b) at the date of last response to cohort follow-up, (c) at the date last seen at the Mayo Clinic, or (d) at the date last known to reside in Minnesota, Iowa, or Wisconsin.
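The censoring priority above translates directly into code. Below is a hedged illustration only: the field names are invented for the sketch, and the original data processing was presumably done in SAS (see the Statistical analyses section).

```python
from datetime import date

STUDY_END = date(2009, 12, 17)

def censor_date(w):
    """Return the end-of-follow-up date for one woman, applying the
    censoring rules in the order stated above. `w` is a dict of
    (possibly None) dates; the field names are hypothetical."""
    if w["moved_out_of_tristate"]:
        # Movers: diagnosis, then last cohort response, then last Mayo
        # visit, then last known tri-state residence.
        for key in ("breast_cancer_dx", "last_cohort_response",
                    "last_seen_at_mayo", "last_known_tristate_residence"):
            if w.get(key) is not None:
                return w[key]
        return None  # no usable date on record
    # Stayers: diagnosis, then death, then the administrative end date.
    for key in ("breast_cancer_dx", "death"):
        if w.get(key) is not None:
            return w[key]
    return STUDY_END

# Example: a tri-state resident, alive and cancer-free through study end
print(censor_date({"moved_out_of_tristate": False,
                   "breast_cancer_dx": None, "death": None}))
```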
Case-cohort design
We used the case-cohort design, in which all incident breast cancer cases that occurred in the at-risk cohort during the follow-up and a random sample of approximately 10% of those in the cohort (n = 2,259, plus 39 who later became cases) were selected to conduct the main analyses. We chose this design to permit prospective collection and analysis of the mammograms and risk factor data beginning at the start of the project. This design reduced the costs and time associated with obtaining mammograms and PD estimates on every woman in the cohort.
Mammogram acquisition, retrieval, digitization, and density estimation
All mammograms at the Mayo Clinic over the time of the study were performed on one of 12 Hologic (LoRad) screen-film mammography systems (Hologic, Inc., Bedford, MA, USA) using either molybdenum (Mo)/Mo or Mo/rhodium (Rh) target-filter materials. Image acquisition parameters vary across mammograms to limit radiation and produce a clinically useful image. The compressed breast thickness (distance between the compression paddle and breast support surface) is set by the technologist and is dependent upon the breast size and the patient's tolerance. In tandem, the compression force is defined by the paddle adjustment. Breast compression is used to achieve uniform breast thickness and spread the breast tissue to improve image quality. Accuracy of the measurement of thickness was within ± 5 mm; furthermore, the paddle tilt, which depends on breast size, paddle size, and compression force, showed tilt deflections of less than 1 cm when a known standard for evaluation was used. The mAs value varies due to the AEC. The AEC limits the exposure while producing a useful image and is dependent upon the breast size, breast composition, and sensor location(s). The AEC mode used for the images acquired in this study was primarily AutoFilter mode. In AutoFilter mode, the x-ray unit uses a short prepulse exposure to determine the lowest kVp selection that delivers a total exposure time of below 2 seconds or 200 mAs. In this mode, the minimum kVp selection is 25 kVp used with a Mo filter. The kVp selection rises with increased tissue attenuation up to 30 kVp, where the Mo filter is exchanged with an Rh filter. When the maximum kVp is reached (31 or 32 kVp), the mAs value may exceed 200 as needed. For very thin breasts, the AEC mode used was AutoTime, where the kVp was manually set at 23 kVp with a Mo filter. kVp was tested annually with a control limit of 5.
All image acquisition parameters were manually abstracted by our staff from the printed screen-film mammogram: the compressed breast thickness (in millimeters), compression force (in pounds), x-ray tube voltage peak (kVp), milliampere-seconds (mAs), and filter. Note that when kVp is 30 or above, the filter is automatically Rh. In our data, very few individuals had a kVp of 30 or above requiring an Rh filter. Thus, the target-filter combination was limited and therefore was not considered in the analysis.
For all cases and women in the subcohort, we obtained and digitized one view from the enrollment screen-film mammogram (2003 to 2006). Screen-film mammograms were digitized on the Array 2905 laser digitizer (Array Corporation, Roden, The Netherlands), which has 50-μm (limiting) pixel spacing with 12-bit grayscale bit depth. PD was estimated by a single trained programmer (F-FW) from the craniocaudal mammogram view of the non-cancerous breast of cases and the left breast of controls. All images were scrubbed of identifying information and re-oriented so that all images were presented consistently despite the side evaluated. Thus, the reader was blinded to cancer status. Batch files were composed of both cases and controls, and a 5% repeat set of images was included within each batch file to assess reliability. Percent mammographic density (dense area divided by total area, times 100%) was estimated by the programmer by using a computer-assisted thresholding program, Cumulus [7]. Briefly, two thresholds are set by the programmer; one separates the breast from the background and the other separates dense from nondense tissue. In the batch files examined for this study, our reader consistently demonstrated high reliability (intraclass correlation of greater than 0.93).
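As an illustration of the thresholding arithmetic only (not the Cumulus software itself, which relies on interactive threshold selection by the trained reader), a minimal Python sketch with a hypothetical image array and arbitrary threshold values:

```python
import numpy as np

def percent_density(image, breast_threshold, dense_threshold):
    """Compute percent mammographic density from a digitized film.

    image            : 2D array of pixel gray levels (12-bit, 0-4095)
    breast_threshold : gray level separating breast from background
    dense_threshold  : gray level separating dense from non-dense tissue

    Returns PD = dense area / total breast area * 100 (percent).
    """
    breast = image > breast_threshold        # pixels belonging to the breast
    dense = image > dense_threshold          # pixels classified as dense
    breast_area = np.count_nonzero(breast)
    dense_area = np.count_nonzero(dense & breast)
    return 100.0 * dense_area / breast_area

# Example with a synthetic 12-bit image and arbitrary thresholds:
rng = np.random.default_rng(0)
image = rng.integers(0, 4096, size=(512, 512))
print(percent_density(image, breast_threshold=500, dense_threshold=2500))
```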
In addition to estimating the semi-quantitative estimation of density described above, we obtained the clinical BI-RADS four-category tissue composition assessment corresponding to the enrollment mammogram from the Mayo Clinic electronic medical record. The BI-RADS tissue composition has been routinely estimated on all screening mammograms at the Mayo Clinic since mid-1996. Mayo Clinic attending radiologists classified each mammogram into one of four categories as defined in the BI-RADS lexicon over this period (American College of Radiology, third edition): (a) the breast is almost entirely fat; (b) there are scattered fibroglandular densities; (c) the breast tissue is heterogeneously dense, which may lower the sensitivity of mammography; and (d) the breast is extremely dense, which could obscure a lesion on mammography. These ratings convey the relative possibility that a lesion may be obscured in mammography. All four mammogram views (craniocaudal and mediolateral oblique for ipsilateral and contralateral sides) contribute to the assessment of BI-RADS composition. In our study, we used the estimates that experienced radiologists assessed in the clinical setting. These radiologists did not systematically assess BI-RADS composition for this study, but this rating has shown adequate interobserver reliability [14].
Statistical analyses
We first verified that the randomly sampled subcohort represented the full cohort by comparing basic demographic and clinical factors between the subcohort and all other cohort members (Table 1). Next, we compared these factors between breast cancer cases and the members of the subcohort by using t tests for continuous variables and chi-square tests for categorical variables. We estimated hazard ratios (HRs) and their 95% confidence intervals by using Cox proportional hazards regression to describe the association between the two mammographic density measures and breast cancer. Age was used as the time scale in the Cox model; age at enrollment was used as the starting point, and follow-up age was defined as age at breast cancer diagnosis for cases and age at last known follow-up for members of the subcohort. The case-cohort design was accounted for by applying sampling weights to subjects selected for the subcohort [15]. In addition to estimating the HRs, we performed tests for trend and computed the C-statistic from the Cox proportional hazards model to measure the degree to which mammographic density could discriminate risk between breast cancer cases and the other members of the subcohort. We compared the relative risk of breast cancer between groups of women classified into quartiles of PD based on values observed in the subcohort. Women in the lowest density category served as the reference group. Analyses of dense and nondense area were conducted similarly.
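As a hedged illustration (the original analyses were run in SAS, as noted at the end of this section), the sketch below shows one common way to fit a weighted Cox model with age as the time scale in Python using the lifelines package. The file name and column names are invented, and the simple inverse-sampling-fraction weighting shown is only one of several case-cohort estimators, not necessarily the exact one used in the paper.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical columns: entry_age, exit_age, event (1 = breast cancer),
# percent_density, bmi, menopausal, hormone_use, subcohort (1 if sampled)
df = pd.read_csv("mmhs_casecohort.csv")

# Inverse-sampling-fraction weights: the subcohort is ~10% of the full
# cohort; cases contribute with weight 1 at their failure time.
sampling_fraction = 0.10
df["w"] = 1.0
df.loc[(df["subcohort"] == 1) & (df["event"] == 0), "w"] = 1.0 / sampling_fraction

cph = CoxPHFitter()
cph.fit(
    df,
    duration_col="exit_age",   # age at diagnosis or last follow-up
    entry_col="entry_age",     # age at enrollment (left truncation)
    event_col="event",
    weights_col="w",
    robust=True,               # robust variance for the weighted likelihood
    formula="percent_density + bmi + menopausal + hormone_use",
)
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```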
Two additional analyses were performed to ensure the integrity of our findings. First, to ensure that prevalent cancers did not influence our results, we performed the above analyses excluding 2,283 members of the cohort who had a cancer diagnosis other than breast cancer prior to baseline enrollment. Second, we performed analyses of BI-RADS density and breast cancer within the entire cohort of 19,924 women to compare with results from the case cohort. Because the PD measure was not available on the entire cohort, we were unable to compare results for this density measure.
Data for mAs, thickness, and compression force were divided into quartiles. kVp data were not normally distributed, and 55% of values were at a standard value of 25. Thus, this variable was categorized into a three-level ordinal variable. To examine the association of acquisition parameters with mammographic density, we estimated the mean and standard deviations (SDs) of PD, dense area, and non-dense area by categories of the four acquisition parameters. We also calculated Spearman correlation coefficients between acquisition parameters and density measures. Next, we examined the degree to which inclusion of each of the acquisition parameters contributed to the breast cancer association in the presence of the measure of mammographic density, and the amount that each of these acquisition parameters individually, and all measures combined, changed the HR estimates from the original models. A change in the HR estimates of 15% or greater would provide evidence of confounding. To determine the degree to which these parameters influenced the prediction of breast cancer risk, we also computed C-statistics for the proportional hazards regression models that included the acquisition parameters as covariates. Similar analyses were conducted by using dense area and nondense area as the endpoints.
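Although the original analyses were performed in SAS (see below), the categorization, correlation, and confounding-screen logic translates readily; a Python sketch with hypothetical column names and an illustrative cut-point choice for the kVp categories:

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("mmhs_subcohort.csv")  # hypothetical file

# Quartiles for mAs, thickness, and compression force; a three-level
# ordinal variable for kVp because 55% of values sit at kVp = 25.
for col in ["mas", "thickness", "compression_force"]:
    df[col + "_q"] = pd.qcut(df[col], q=4, labels=False, duplicates="drop")
df["kvp_cat"] = pd.cut(df["kvp"], bins=[0, 24, 25, np.inf], labels=False)

# Spearman correlations of each acquisition parameter with percent density
for col in ["kvp", "mas", "thickness", "compression_force"]:
    rho, p = spearmanr(df[col], df["percent_density"], nan_policy="omit")
    print(f"{col}: r = {rho:.2f}, p = {p:.3g}")

def confounds(hr_base, hr_adjusted, threshold=0.15):
    """Flag confounding if the HR changes by 15% or more after adjustment."""
    return abs(hr_adjusted - hr_base) / hr_base >= threshold

print(confounds(hr_base=3.8, hr_adjusted=3.7))  # False: no confounding
```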
All analyses were carried out with the SAS software system (SAS Institute Inc., Cary, NC, USA). All reported P values are two-sided, and comparisons were adjusted for variables found to be significantly associated with density, including age, menopausal status, post-menopausal hormone use, and BMI.
Results
Of the 19,924 women in the MMHS cohort, 59 had a prevalent breast cancer (defined as within 60 days post-enrollment), leaving 19,865 eligible women for these analyses. Incident cases included 318 breast cancers diagnosed before 31 December 2009, and the subcohort consisted of 2,298 women randomly sampled from the entire cohort. Of the subcohort, 39 became cases and 2,259 were unaffected (Table 1 and Figure 1).
The average follow-up times from enrollment mammogram to diagnosis or last follow-up were 2.4 years (SD = 1.7) for cases and 5.0 years (SD = 0.9) for the subcohort. As shown in Table 1, the mean age of cases at enrollment was 61.8 years, which was slightly higher than among members of the cohort (58.0 years) or subcohort (58.0 years). BMI was similar in all three groups. Cases were more likely than women in either the cohort or subcohort to be post-menopausal. Cases were less likely to report having never used post-menopausal hormone therapy (46.5%) than women in either the subcohort (49.7%) or cohort (50.3%). Table 1 also displays the means and SDs of the density estimates and the four acquisition parameters. Mean (SD) PD and dense area were higher among cases (19.1% (13.7%) and 2,622 mm² (1,939), respectively) than controls (17.6% (14.1%) and 2,333 mm² (1,841), respectively). Of the acquisition parameters, only mAs values showed a greater mean (SD) among cases (161.0 (49.4)) compared with controls (157.9 (49.5)).
We first confirmed the mammographic density and breast cancer association in our study. As expected, both mammographic density measures, PD and BI-RADS, were associated with future risk of breast cancer (P trend < 0.001) (Table 2). Women with a PD of greater than 25.1% were 3.8 (95% CI 2.5 to 5.9) times more likely to develop breast cancer during the follow-up period than women with a PD of 0% to 5.0%. Similarly, women in the highest BI-RADS category (Extremely dense) compared with the lowest BI-RADS category (Almost entirely fat) were 3.0 (95% CI 1.7 to 5.1) times as likely to develop breast cancer during the follow-up period. When only invasive breast cancers were considered (n = 199, data not shown), risk estimates were somewhat strengthened for the PD and breast cancer association (HR 5.1, 95% CI 3.0 to 8.4 for the highest versus lowest category of PD) but not for the BI-RADS association (HR 2.7, 95% CI 1.5 to 5.0 for the highest versus the lowest level of BI-RADS density). The two additional analyses to evaluate the possible influence of prevalent cancers and the case-cohort (versus full cohort) design on our results showed no marked difference from the main analyses (Additional file 3).
[Table 1 fragment: compression force, pounds: 24.7 (6.1) for cases, 24.9 (5.6) for the subcohort. Footnotes: values reported at enrollment; 39 women overlap between cases and subcohort; 59 women with breast cancers occurring within the first two months after enrollment are included in the full cohort but excluded from analyses in this paper; SD, standard deviation.]
Next, we examined the association and correlation between PD, dense area, and non-dense area and the four acquisition parameters. Table 3 shows the mean (SD) density by categories of acquisition technique. mAs, thickness, and compression force were inversely associated with PD, as reflected in the mean differences and correlations (r = -0.03 (P = 0.60), -0.25 (P < 0.001), and -0.14 (P = 0.02), respectively, for the cases, and r = -0.06 (P = 0.008), -0.27 (P < 0.001), and -0.16 (P < 0.001), respectively, for the subcohort members). The strongest association was seen across quartiles of thickness (mean PD of 23.3% in the lowest quartile of thickness versus 13.0% in the highest thickness quartile). kVp did not show strong evidence of an association with PD across categories. Mean dense area, however, increased across levels of kVp and mAs and decreased across levels of thickness and compression force. Non-dense area increased across levels of all four acquisition parameters, and the largest correlation was seen for thickness and non-dense area (r = 0.41, P < 0.001 among cases; r = 0.35, P < 0.001 among non-cases).
The strongest correlations among the acquisition parameters themselves were observed between thickness and mAs (r = 0.70 for cases and subcohort) and between thickness and kVp (r = 0.72 for cases and r = 0.70 for subcohort). The smallest correlations were seen between compression and the other acquisition parameters (mAs, r = 0.10 for cases and r = 0.13 for controls; kVp, r = 0.11 for cases and r = 0.10 for controls; thickness, r = 0.001 for cases and r = 0.03 for controls). This was not entirely surprising since kVp is a function of breast thickness and mAs values are influenced by breast size (correlated with thickness) and composition.
Table 4 presents the evaluation of acquisition technique on the association between age- and BMI-adjusted density and risk of breast cancer. Four parameters were evaluated singly and in combination: x-ray tube kVp, mAs, compressed breast thickness, and compression force. Inclusion of these parameters did not alter the strength of the association between age- and BMI-adjusted PD and breast cancer or alter the association of BI-RADS density with breast cancer. For example, each millimeter increase in thickness was associated with only a 1.08-fold increased risk of breast cancer (95% CI 0.92 to 1.26) in the analyses of PD and breast cancer risk. The other three parameters showed similar non-statistically significant associations with breast cancer. The discriminatory capacity of the model after inclusion of any of these four parameters, as estimated by the C-statistic, was not improved for the PD and breast cancer association (0.63 or 0.64 for all models, including the model with all four parameters included) or for the BI-RADS and breast cancer association (C-statistic 0.62 for all models, including the model with all four parameters included) (Table 4). Similar analyses were conducted by examining the association of adjusted dense area and non-dense area with breast cancer. Like the results with PD and BI-RADS, inclusion of these four acquisition parameters, singly or in combination, did not alter the association between dense area and breast cancer or between non-dense area and breast cancer (Table 5). Finally, there was no evidence of interactions between PD and acquisition parameters on breast cancer risk (mAs P = 0.67, kVp P = 0.77, thickness P = 0.95, and compression force P = 0.93).
Discussion
Within a prospective screening cohort at a single institution, we confirmed the association between age- and BMI-adjusted mammographic density and breast cancer by using a subjective clinical measure or a semi-quantitative estimate. We showed that the acquisition technique was associated with percent and area density measures. However, the mammographic density and breast cancer associations were not materially influenced by adjustment for parameters of mammogram acquisition, suggesting that the density and breast cancer association is robust, at least in the screen-film setting. Our estimates of the association between PD and breast cancer are comparable to those of other cohort studies that used computer-assisted quantitative estimates of percent mammographic density [3,4]. Our estimate that women with greater than 25% density are at 3.8-fold increased risk of breast cancer is similar to estimates of 3.5 to 4.4 reported in nested case-control studies using similar density categories. Because these earlier studies used mammograms from multiple institutions with variations in type of mammography machine manufacturer and processing technique, whereas the MMHS used mammograms from a single institution with the same machines and protocols, this report also suggests that this variability in the PD measure has not markedly biased previous reports.
We evaluated the influence of mammogram acquisition parameters on percent and area density measures. Our findings of positive associations of compressed breast thickness, compression force, and kVp with non-dense area were expected, given the associations between these measures and breast size. Because larger breasts generally have greater adipose tissue content, they also tend to have a lower percent or proportion of density compared with smaller breasts with the same amount of dense area. As such, our findings of inverse associations of PD with thickness and compression were consistent. Along this line of reasoning, we also anticipated an inverse association of kVp with PD but found no evidence for this. We noted, instead, positive associations of kVp with both dense and non-dense breast area. Although somewhat difficult to interpret, this implies that larger breasts have relatively larger amounts of both adipose and dense tissue than smaller breasts, as observed in the projection image.
[Table 3 caption: Association of density estimates (percent density, dense area, and non-dense area) with categories of the four acquisition parameters. Means and SDs of density by categories are estimated using the subcohort only; correlation coefficients are estimated on cases and subcohort separately; quartiles were used where appropriate, but the distribution of voltage peak (kVp) was highly skewed and was categorized into three levels; owing to missing acquisition parameters, 18 cases and 47 controls that were included in the analyses depicted in Table 2 are not represented here.]
We also found a positive association between mAs values and dense area, as originally hypothesized, but also a positive association with non-dense area and a very small but inverse association with PD. kVp and mAs appear to influence the absolute density measures to a greater extent than the ratios, but this needs to be confirmed in other studies. We had hypothesized that these acquisition parameters might confound the association between mammographic density measures and breast cancer risk. However, inclusion of these parameters, alone or in combination, did not influence the association between density and risk of breast cancer. Also, their inclusion did not improve the discriminatory capacity of the statistical models. Therefore, in the context of screen-film mammography and the density measures considered in this report (that is, PD, BI-RADS, dense area, and non-dense area), these acquisition parameters appear not to introduce meaningful variation into the density and breast cancer associations.
The lack of confounding of the density and breast cancer association by thickness was not consistent with studies that suggest that volumetric density, which is dependent on compressed thickness, is more informative than PD and area measures [11]. Our focus on the covariate-adjusted (including BMI) density phenotypes likely explains the discrepancy. In fact, models examining PD and acquisition parameters with breast cancer that did not adjust for BMI found significant associations with breast cancer for all of the acquisition parameters related to thickness: thickness (HR 1.20, 95% CI 1.05 to 1.38), mAs (HR 1.17, 95% CI 1.03 to 1.34), and kVp (HR 1.19, 95% CI 1.01 to 1.41). However, similar to models adjusting for BMI, the parameter estimates for the associations of density with breast cancer and the discrimination capability (C-statistics) did not materially change for models with or without the acquisition technique.
Our findings need to be considered in the context of the study design of our cohort. The mammograms included in our study were from a single institution. Therefore, the mammograms in this study are less likely to include variation from x-ray unit manufacturer, film-screen combination, and film processing conditions than would mammograms in a study that included mammograms collected from multiple institutions. This may limit the generalizability of our findings. In studies with greater variation in mammogram manufacturers and acquisition techniques, it is possible that controlling for these parameters may have a greater impact. This hypothesis needs to be tested in studies that collect mammograms from multiple sources.
We know of no other studies that have evaluated the direct influence of acquisition technique parameters on the density and breast cancer association. However, some studies have been designed to account for these parameters by using calibrated approaches [10-13]. The calibrated approach seeks to nullify uncertainties (or variation) introduced by the acquisition technique differences by producing standardized data, often with the aim of making comparisons with PD. The evaluation is indirect because the calibrated measure in the comparison is not necessarily PD, so it is not a one-to-one comparison of the same metric derived from two data representations (that is, from calibrated and raw data). Similar to our findings, some of these studies show that the density and breast cancer association is not strengthened when accounting for the acquisition technique differences when using calibration approaches [12,13]. Because calibration is a newer approach for assessing density and the calibrated density measures are normally not the same metric as PD (or BI-RADS), it is not clear at this time whether the inclusion of the technique parameters in general is not important or whether the calibration techniques require further modifications. However, our findings reinforce and emphasize the robustness of the existing area-based percent breast density measures (that is, PD and BI-RADS), at least on digitized screen-film mammography.
Strengths of our study include the prospective nature (allowing evaluation of mammograms prior to cancer), the estimation of density by two separate methods (a semi-quantitative method and a subjective clinical measure), and the ability to systematically compare responders and non-responders in our study by using existing clinical databases. BI-RADS density did not differ substantially between participants and non-participants in the MMHS cohort, and establishing the cohort within one breast screening practice allowed us to reduce other sources of variation, including x-ray equipment (manufacturer) calibrated similarly over this period and the use of one digitizer. A limitation of our study was that BI-RADS density was estimated by numerous readers over time, but this reflects the true clinical experience and how this measure would be used in practice. Owing to a lack of variability within our population, our analyses of acquisition did not include the target-filter acquisition technique. Finally, our investigations reflect the influence of acquisition only on density estimates from screen-film mammograms and only on mammograms from one institution. Similar studies need to be conducted on images acquired from multiple institutions and on full-field digital mammography.
Conclusions
Results from the MMHS cohort confirm a strong association between mammographic density and risk of breast cancer that was not materially influenced by variability in image acquisition parameters. Based upon similar risk estimates for the mammographic density and breast cancer association, our data suggest that estimation of the association between breast density and breast cancer is not improved by including acquisition parameters. Mammographic density remains a robust breast cancer risk factor that merits consideration for integration into clinical practice to inform risk assessment and possible intervention. | 2017-06-25T22:52:42.783Z | 2012-11-15T00:00:00.000 | {
"year": 2012,
"sha1": "c76883f53358a786e56a42a2dc46b07c882067ea",
"oa_license": "CCBY",
"oa_url": "https://breast-cancer-research.biomedcentral.com/track/pdf/10.1186/bcr3357",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "be126f83bc7181a2df588d876687fd91c0f75540",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
241615573 | pes2o/s2orc | v3-fos-license | A peridynamic model for damage and fracture in porous materials
We introduce a peridynamic (PD) model for simulating damage and fracture in porous materials based on an Intermediate Homogenization (IH) approach. In this approach, instead of explicitly representing the detailed pore geometry, we use homogenization but maintain some information about the microstructure (porosity) in the model. Porosity is introduced in the model as initial peridynamic damage, implemented by stochastically pre-breaking peridynamic bonds to match the desired porosity value. We validate the model for elastodynamics with wave propagation in porous glasses, where we match observed wave propagation speeds and the apparent elastic moduli for various porosities. The model is then used to study the fracture behavior of Berea sandstone under three-point bending loading conditions. We validate the model for fracture problems using the case of failure in a sandstone sample with an off-center pre-notch under three-point bending. The IH-PD results agree very well with experiments: we obtain different failure modes depending on the length of the off-center pre-notch. When the pre-notch is short, most damage (and the subsequent
Introduction
Damage and fracture in natural or man-made porous materials (bone, wood, rock, sandstone, or metal foams, filters, concrete, composites, ceramics) present particular challenges: damage can be widespread or finely localized, as micro-cracks "jump" over pores and link together into a number of macro-cracks leading to final failure, or they stay separated, leading to micro-crushing and eventual large-scale fragmentation. The underlying physical principles governing their mechanical behavior, but not their failure behavior, are known as "poromechanics". [1-3] Poroelastic media are elastic materials with pores/voids of arbitrary shapes and sizes, usually randomly distributed and oriented through the material. Many porous materials can be considered as elastic quasi-brittle materials, at least locally. Their global failure response can, sometimes, appear as ductile failure, as locally brittle damage spreads over pores, flaws, and other kinds of discontinuities. The geometry and distribution of pores have a close relationship to damage initiation and crack propagation. Although many continuum-based and discontinuous (e.g., distinct element method) methods based on classically homogenized poromechanical approaches have been developed and applied to model fracture in porous materials, it remains an open question whether they can correctly capture their fracture behavior. 12 A compromise between having a low computational cost (on par with that of homogenized models) and being able to reproduce the effects the porous microstructure has on the initiation and propagation of fracture and damage is sought in the present contribution. For this purpose, we introduce an intermediately-homogenized peridynamic (IH-PD) model and validate it against experimental results for wave propagation in porous media of different porosities. We then test the new model on Berea sandstone fracture from a three-point bending setup with an off-center pre-notch. Experiments show that the sample fails from the pre-notch if the notch is sufficiently long, but if the notch is short, the porous stone fails from its center (the point of maximum tensile stress of an un-notched specimen). We compare the results from the IH-PD model with those from experiments, with other methods from the literature, and with a fully-homogenized peridynamic model. We comment on the results obtained and discuss the importance of preserving the effect of local heterogeneities on the damage initiation, distribution, and crack propagation. This can be accomplished with the new IH-PD model, at a cost similar to that of a fully homogenized peridynamic model. Fully homogenized models (classical or peridynamics-based), however, lose important heterogeneity information that controls crack initiation and propagation and, because of that, will miss, in a significant way, the correct failure behavior of porous materials. Therein lie the benefits of the proposed modeling approach we present in this paper.
Literature review
Peridynamics, a new nonlocal continuum theory proposed by Stewart Silling, 15,16 eliminates spatial derivatives from the classical formulation of continuum mechanics, which makes it a consistent mathematical model for problems with discontinuities in the displacement field. The model is particularly well suited for dealing with cracks and damage in solid mechanics, especially in situations where the crack path is not known in advance. These features have enabled peridynamics to be successfully applied to diverse engineering problems that involve fracture and damage evolution. [17-23] Several recent studies have applied peridynamics in porous-materials-related problems: Katiyar et al. 24 and later Jabakhanji and Mohtar 25 developed a peridynamic approach to model flow in porous media. Peridynamics was also used to address hydraulic fracturing and reproduce the deformation induced by the fluid flow in a porous medium. [26-28] In the present study, we focus on the initiation and propagation of cracks in dry porous rock. Recently, Zhou et al. investigated how complex fracturing patterns initiate and propagate from a single flaw embedded in rock-like materials under compression using bond-based peridynamics. 29 With this homogeneous PD model, Zhou and co-authors simulated crack propagation in rocks with pre-existing flaws (e.g., [30-32]). These PD models, when applied to failure of materials with relatively high porosity, would have the same issues as the fully-homogenized PD model discussed in Section 5. Lee et al. applied bond-based peridynamics to understand crack coalescing processes in rock materials. 33 In these studies of rock fracture, the actual rock materials studied are dense materials with small porosity, such as marble 29 (porosity: 2% 34) and Longtan sandstone 14 (porosity: 1.38% 35). When the porosity is not negligible, such as in Berea sandstone (porosity range: 10.2-22.2% [36-38]), damage initiation and crack propagation are strongly influenced by it. As we shall see in Section 5, this aspect limits the applicability of such homogenized models.
Peridynamic model for porous materials
In this section, we first briefly review the peridynamic theory for elastic brittle materials. Then, we describe the fully homogenized PD model (FH-PD) and introduce the new Intermediately Homogenized PeriDynamic (IH-PD) model for porous materials.
Brief review of peridynamic theory for elastic brittle materials
The peridynamic model is a framework for continuum mechanics based on the idea that pairs of material points exert forces on each other across a finite distance. This concept allows for the natural evolution (initiation, propagation, and interaction) of damage and cracks and can be viewed as an effective treatment of a material length-scale induced by, for example, the material microstructure. The peridynamic equations of motion for the bond-based model are given as: 15

ρ(x) ∂²u(x, t)/∂t² = ∫_{Hx} f(u(x′, t) − u(x, t), x′ − x) dV_{x′} + b(x, t)    (1)

where f is the pairwise force function in the peridynamic bond that connects point x′ to x, u is the displacement vector field, ρ is the density, and b(x, t) is the body force. The integral is defined over a region Hx called the "horizon region", or simply the "horizon". The horizon is the compactly supported domain of the pairwise force function around a point x (see Fig. 1). The horizon region is taken here to be a circular disk (sphere) of radius δ. We refer to δ also as the "horizon", and from the context there should be no confusion whether we refer to the region or its radius. A micro-elastic material is defined as one for which the pairwise force derives from a potential:

f(η, ξ) = ∂ω(η, ξ)/∂η    (2)

where ξ = x′ − x is the relative position in the reference configuration and η = u(x′, t) − u(x, t) is the relative displacement between x and x′. A micropotential that leads to a linear microelastic material is given by:

ω(η, ξ) = c(ξ) s² ξ / 2    (3)

where ξ = ‖ξ‖, and s = (‖ξ + η‖ − ‖ξ‖)/‖ξ‖ is the relative elongation of a bond, or bond strain. The function c(ξ) is called the micro-modulus and has the meaning of the bond's elastic stiffness. The pairwise force corresponding to the micropotential given above has the following form:

f(η, ξ) = c(ξ) s (ξ + η)/‖ξ + η‖    (4)

Following the same procedure performed to calculate the micro-modulus functions in 1D (see 40), one obtains the conical micro-modulus function in 2D, plane stress conditions: 41

c(ξ) = (36E/(πδ³)) (1 − ξ/δ)    (5)

or, assuming a constant micro-modulus function over the horizon region, in 2D, plane stress conditions:

c = 9E/(πδ³)    (6)

where E is Young's modulus of the material. In the IH-PD model of porous material, E is the elastic modulus of the constituent material of the porous medium, as discussed in Section 3.3. The material model in Eq. (4) is equivalent to the kernel with n = 1 in the family of peridynamic kernels of the form c(x′, x)/|x′ − x|ⁿ studied by Chen et al. 42 They constructed a peridynamic kernel (n = 2) based on physical principles for dynamic elasticity and showed that, when the one-point Gauss quadrature is used for discretization, the n = 2 model is the only one whose convergence to the classical solution does not depend on the fineness of the discretization grid. Models with n = 0 or n = 1 also converge, in the limit of the horizon going to zero and the ratio of horizon to grid spacing going to infinity, to the classical solution for problems with sufficient smoothness. No significant differences in crack patterns were observed between models with n = 1 and n = 2. 43 In this work we use the model with n = 1.
Failure is introduced in peridynamics by considering that peridynamic bonds break when they are deformed beyond a critical value, called the critical relative elongation or critical bond strain, s0, computed based on the material's fracture energy. In 2D, the energy per unit fracture length for complete separation of the two halves of the body is the fracture energy G0. Equating it to the work done in a PD material to accomplish the separation of the body into two halves gives:

G0 = 2 ∫₀^δ ∫_z^δ ∫₀^{cos⁻¹(z/ξ)} [c(ξ) s0² ξ/2] ξ dφ dξ dz    (7)

Substituting the micro-modulus functions from Eqs. (5) and (6) into Eq. (7), respectively, s0 is obtained for the conical micromodulus function (under plane stress conditions) as:

s0 = √(5πG0/(9Eδ))    (8)

and for the constant micromodulus function as:

s0 = √(4πG0/(9Eδ))    (9)

For the IH-PD model of poroelastic materials, G0 is the fracture energy of the constituent material of the porous medium (see details in Section 5).
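To give a sense of the magnitudes these expressions produce, the short sketch below evaluates Eqs. (5), (6), (8), and (9) in Python; the material values are illustrative placeholders, not the calibrated inputs used in the simulations.

```python
import numpy as np

# Illustrative inputs (not the paper's calibrated values):
E = 14.0e9     # Young's modulus of the constituent material, Pa
G0 = 20.0      # fracture energy of the constituent material, J/m^2
delta = 2e-3   # horizon, m

def c_conical(xi):
    """Conical micro-modulus, 2D plane stress, Eq. (5)."""
    return 36.0 * E / (np.pi * delta**3) * (1.0 - xi / delta)

def c_constant():
    """Constant micro-modulus, 2D plane stress, Eq. (6)."""
    return 9.0 * E / (np.pi * delta**3)

# Critical bond strains, Eqs. (8) and (9):
s0_conical = np.sqrt(5.0 * np.pi * G0 / (9.0 * E * delta))
s0_constant = np.sqrt(4.0 * np.pi * G0 / (9.0 * E * delta))
print(f"s0 (conical) = {s0_conical:.2e}, s0 (constant) = {s0_constant:.2e}")
```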
Numerical discretization
In this section we describe the numerical implementation details, including both dynamic and static solutions. In principle, the peridynamic equations can be discretized using the finite element method, or any other method appropriate for computing solutions to integro-differential equations (or integral equations for the static case). These approaches, however, soon hit well-known obstacles and difficulties for problems with evolving topologies, like those in which cracks initiate, grow, and interact with each other leading to damage, full fracture, and/or fragmentation.
Instead, meshfree-type discretizations are preferred for peridynamic simulations of material failure and damage. The discretization proposed by Silling and Askari 44 uses the mid-point integration scheme (equivalent to one-point Gauss quadrature) for the domain integral. Numerical simulations are performed using the following discretized equation:

ρ ü_i = Σ_{j ∈ Fam(i)} c(ξ_ij) s_ij (x_j + u_j − x_i − u_i)/‖x_j + u_j − x_i − u_i‖ A_ij + b_i    (10)

where Fam(i) is the family of nodes j with their area (volume in 3D) covered, fully or partially, by the horizon region of node i, ξ_ij = ‖x_j − x_i‖ is the bond length between nodes i and j, s_ij is the relative elongation for the bond connecting nodes i and j, and A_ij is the area of node j estimated to be covered by the horizon of node i.
Note that node j may not be fully contained within the horizon of node i, so a "partial volume" integration, first introduced by Bobaru et al. 45 and also shown in their following work, 46 is used here to improve the accuracy of the mid-point quadrature scheme. The main advantage of this algorithm compared with one that simply checks whether a node is inside or outside the horizon region is that, as the grid density increases (for a fixed horizon value), the numerical convergence (in terms of strain energy density, for example) is monotonic. 45 For a fixed horizon, the ratio m = δ/Δx describes how accurate the numerical quadrature for the integral in Eq. (1) will be. We call this ratio "the horizon factor". In the convergence study shown in Section 4, we study both m-convergence and δ-convergence 40 for an elastic wave propagation problem. We recall that in m-convergence we consider the horizon δ fixed and take m → ∞. The numerical PD approximation will converge in this case to the exact nonlocal PD solution for the given δ. In the case of δ-convergence, the horizon δ → 0 while m is fixed or increases with decreasing δ. For δ-convergence and in problems with no singularities, the numerical PD solutions are expected to converge to the classical local solution (as m increases).
Both dynamic (see Section 4) and static (see Section 5) simulations are performed in this work. In the dynamic simulations for elastic wave propagation in a porous glass (see Section 4), we apply the Velocity-Verlet method with a time interval of 0.1 µs. For the quasi-static fracture tests in Section 5, the energy minimization method 47,48 is used, and the nonlinear conjugate gradient (CG) method with secant line search is adopted to minimize the strain energy of the system. For all static simulations in this paper, the nonlinear CG method is used with a convergence tolerance defined by |(W_k − W_{k−1})/W_k| < 10⁻⁶, in which W_k and W_{k−1} are the total strain energies at the current (k-th) and previous ((k−1)-th) CG iterations.
Dropping the inertia term in Eq. (1), we obtain the nonlinear system of discretized equations for quasi-static conditions:

Σ_{j ∈ Fam(i)} c(ξ_ij) s_ij (x_j + u_j − x_i − u_i)/‖x_j + u_j − x_i − u_i‖ A_ij + b_i = 0    (11)

where b_i is the body force at node i.
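To make the discretization concrete, here is a bare-bones Python sketch of the explicit dynamic solver: it assembles the bond-force sum of Eq. (10) on a small square grid and advances it with Velocity-Verlet. It is a minimal illustration only, assuming a constant micromodulus, full nodal areas instead of the partial-area quadrature, no bond failure, and no applied loading; all numerical values are placeholders.

```python
import numpy as np

# --- setup (illustrative values, not the calibrated inputs) ---
dx = 0.01                          # grid spacing, m
delta = 4 * dx                     # horizon (horizon factor m = 4)
rho, E = 2200.0, 14.0e9            # density and constituent Young's modulus
c = 9.0 * E / (np.pi * delta**3)   # constant micro-modulus, Eq. (6)
area = dx * dx                     # nodal area (partial areas omitted here)
dt = 1e-7                          # time step: 0.1 microseconds

xs, ys = np.meshgrid(np.arange(0.0, 0.2, dx), np.arange(0.0, 0.2, dx))
X = np.column_stack([xs.ravel(), ys.ravel()])   # reference positions
n = len(X)
u = np.zeros_like(X)               # displacements
v = np.zeros_like(X)               # velocities
b = np.zeros_like(X)               # body force density (set by the loading)

# bond (family) lists: j is in Fam(i) iff 0 < ||x_j - x_i|| <= delta
fam = [np.flatnonzero((np.linalg.norm(X - X[i], axis=1) <= delta)
                      & (np.arange(n) != i)) for i in range(n)]

def internal_force(u):
    """Mid-point quadrature of the PD integral: the sum in Eq. (10)."""
    F = np.zeros_like(u)
    for i in range(n):
        j = fam[i]
        xi = X[j] - X[i]                 # reference bond vectors
        cur = xi + u[j] - u[i]           # deformed bond vectors
        L0 = np.linalg.norm(xi, axis=1)  # reference bond lengths
        L = np.linalg.norm(cur, axis=1)  # deformed bond lengths
        s = (L - L0) / L0                # bond strains
        F[i] = ((c * s * area / L)[:, None] * cur).sum(axis=0)
    return F

# --- Velocity-Verlet time stepping ---
F = internal_force(u)
for step in range(10):
    v += 0.5 * dt * (F + b) / rho
    u += dt * v
    F = internal_force(u)
    v += 0.5 * dt * (F + b) / rho
```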
An intermediately-homogenized peridynamic model for poroelastic materials
In the fully-homogenized peridynamic (FH-PD) models, heterogeneous materials are understood as material models locally homogenized in terms of their elastic properties, density, and fracture energy. 19 The properties used in the FH-PD models can be calculated from the constitutive models of porous materials, such as the Voigt or Reuss models with known porosity and properties of the constituent material, or taken as the apparent parameters from direct measurements.
Porous materials, when the pore sizes are much smaller than the sample sizes, can also be viewed as locally homogeneous. Excellent simulation results are produced by the FH-PD models for rock with low porosity. When the rock porosity is not negligible, such as in Berea sandstone, local heterogeneities can strongly influence damage initiation and crack propagation, which limits the applicability of fully-homogenized models for these cases. In this section, we introduce the intermediately-homogenized peridynamic (IH-PD) model for porous materials. The reader is referred to 49 for more details on using the IH-PD model for multi-phase materials.
Here we introduce a peridynamic model for porous materials based on an Intermediate Homogenization (IH) approach to simulate fracture and damage in such materials. The porosity is represented by peridynamic pre-damage, with mechanical bonds connected to a peridynamic node being pre-broken, stochastically, to achieve the desired porosity. The "intermediate homogenization" approach refers to the fact that we do not represent the explicit geometry of the actual pores in the material, and neither do we fully homogenize the porous medium. With the IH method we aim to maintain sufficient information about the porosity to allow us to compute the failure behavior of porous materials more accurately than with the FH-PD model. At the same time, the IH-PD model is significantly more efficient computationally than a model that uses an explicit geometric representation of the individual microstructural pores, while, hopefully, still being able to compute the macro-scale failure behavior with accuracy.
In the IH-PD model of porous materials, pores are treated as pre-existing material damage. In peridynamics, the damage index is computed as the ratio between the number of broken (or failed) bonds (Nf) and the total number of bonds (N) originally associated with a node: d(x, t) = Nf/N. To mimic the presence of pores, we randomly break a number of bonds at each node (see Fig. 2). We calibrate the number of broken bonds to the material's porosity. This procedure creates a "pre-damage index", related to the material's porosity and computed like the damage index above but with Np, the number of pre-broken bonds at a node, replacing Nf. When the porosity reaches the critical porosity (the porosity beyond which the rock can exist only as a suspension 6), all bonds associated with that point should be broken (Np = N), meaning that the pre-damage index is unity.
For zero porosity, no pre-broken bonds are introduced (pre-damage index is zero at all points).
To perform the calibration for the number of pre-broken bonds, we implement the initial damage representative of the pores by adopting the ideas used in the "concentration-dependent damage" (CDD) model originally proposed by Chen and Bobaru 18 and used for modeling of damage induced by corrosion processes. [50-53] Here, we assume that the pre-damage index at a point depends linearly on its porosity when zero-porosity material surrounds the point. For each node x, every bond in its family is visited in turn: i. Generate a uniformly distributed random number r in [0, 1]. ii. If r < P(x)/P_C, where P(x) is the porosity at x and P_C is the critical porosity, break the bond. iii. Go to next bond in the family of bonds of node x.
This procedure is applied for all nodes in the material. Here we assume that the pore shapes and distribution are "isotropic"; therefore, the peridynamic pre-damage representation of this case uses the same uniform distribution for bond-breaking independent of the node location or the bond direction. Special location- and/or orientation-dependent pre-damage can easily be introduced to mimic anisotropy in porous materials.
With the above algorithm, each PD bond (connecting material points x and x′) goes through the procedure twice. If the porosities at x and x′ are P(x) and P(x′), respectively, the chance for the bond to remain intact is (1 − P(x)/P_C)(1 − P(x′)/P_C). For materials with uniform porosity P, the chance for any bond to stay intact is (1 − P/P_C)². The pre-damage index, the ratio between the number of pre-broken bonds and the total number of bonds at a node, in this case converges to:

d_pre = 1 − (1 − P/P_C)²    (12)

in the limit of the horizon factor (the ratio between the horizon size and grid spacing) going to infinity (also known as m-convergence, see Section 3.2). Notice the nonlinear dependency, for a material with uniform porosity, between the pre-damage index and the given porosity.
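A toy implementation of the stochastic pre-breaking procedure, together with a check against the limit in Eq. (12), might look as follows. For simplicity, this sketch treats each node's bonds as independent (in an actual grid a bond is shared by its two endpoint nodes, each applying one of the two tests), and the sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
P, Pc = 0.2, 1.0               # uniform porosity and critical porosity
n_nodes, n_bonds = 2000, 28    # nodes and bonds per node (toy sizes)

# intact[i, k] == True if the k-th bond of node i is unbroken
intact = np.ones((n_nodes, n_bonds), dtype=bool)

# Each bond is tested twice, once per endpoint; for uniform porosity both
# tests use break probability P/Pc, so a bond survives with
# probability (1 - P/Pc)**2.
for endpoint in range(2):
    intact &= rng.random((n_nodes, n_bonds)) >= P / Pc

pre_damage = 1.0 - intact.mean()
print(pre_damage, 1.0 - (1.0 - P / Pc) ** 2)   # both ~0.36 for P = 0.2
```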
In the next section, we use the IH-PD model to simulate elastic wave propagation in porous glasses. By computationally measuring the wave speeds corresponding to different porosities, we calculate the apparent elastic moduli and compare them with those measured in experiments.
Model verification and validation for elasto-dynamic problems
We verify (by performing convergence studies) and validate the IH-PD model for an elastodynamic problem: elastic wave propagation in porous glasses. We generate elastic waves by suddenly applying a force pulse over a small region in the sample and monitor their propagation through the sample. Experimental data regarding the apparent elastic modulus of these porous glasses for varying degrees of porosity exist. 54 We test if the model can predict the experimentally measured relationship between elastic wave speed and material porosity. We arbitrarily choose a sample size of 1 m × 1 m. A suddenly applied load of 1 MPa in the vertical-up direction over a region (100 mm in length) at the bottom side is kept constant for 5 µs (see Fig. 3). After that, the load is suddenly removed. The loading generates elastic waves propagating through the sample.
Although the horizon-size-dependence of the loading region leads to wave patterns that depend on the horizon size, the wave speed does not change much with the horizon size and should converge to the measured value when the horizon size goes to zero. 40 Since our goal is to calculate the wave speeds for different porosities, the specific initial loading conditions (loading range, loading magnitude) are not important.
To compute the wave speed from the PD model results, we track the location of the pressure wave's peak over a time period that starts shortly after the pulse is applied and stops before waves return from the sample's boundaries. More specifically, we compute an average velocity by tracking the displacement of the crossing point between the line x = 0 (shown in Fig. 4) and the crest of the front wave, over the time period 100 μs - 200 μs from the application of the pulse. For this verification phase we are interested only in the elastic wave propagation; therefore, we apply a "no-fail" condition for all bonds in the model. Obviously, the pre-damage corresponding to different porosities is present in these calculations, but no "new damage" is allowed. [In Fig. 4, the horizon size is 40 mm and the horizon factor m = 8.]
One notable difference between the nonporous (P = 0) and the porous case (e.g., P = 0.5) that can be observed from these results is that in the porous sample stress waves become less coherent, being locally dispersed ("noisier" velocity maps) by the more detailed representation of the microstructure in the IH-PD model.
We now study m-convergence and δ-convergence to see the influence of the horizon factor m and the nonlocal horizon size (δ) on the wave speed at different porosities. Fig. 6 shows the crest locations versus time, for different porosities. We use a linear fit for the vertical velocity data points within the time range mentioned above to extract the wave speed shown in Fig. 7, for each porosity. As mentioned before, at high porosities (0.7, 0.9), there is a relatively larger probability for peridynamic nodes to lose all of their connections with the surrounding nodes if m is relatively small, since the number of bonds is relatively small. This is the reason for which only the higher values of m are used in these cases (see Fig. 7b).
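The wave-speed extraction reduces to a linear fit of crest position against time. A minimal sketch with hypothetical sampled values follows; in an actual run these arrays would be read off the computed velocity maps on the line x = 0.

```python
import numpy as np

# times: sample instants (s) within 100-200 microseconds after the pulse;
# crest_y: crest location (m) of the pressure wave on the line x = 0.
times = np.array([100e-6, 120e-6, 140e-6, 160e-6, 180e-6, 200e-6])
crest_y = np.array([0.52, 0.63, 0.74, 0.85, 0.96, 1.07])  # illustrative

slope, intercept = np.polyfit(times, crest_y, deg=1)
print(f"estimated wave speed: {slope:.0f} m/s")   # ~5500 m/s here
```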
The results in Fig. 7 show that m = 4 is a safe choice for simulating elasto-dynamic problems in porous materials with the IH-PD model when the porosity is smaller than 0.5 (relative to a critical porosity of 1).
The apparent elastic modulus in glass samples with different porosities was experimentally measured. 54 To compare the PD results with experimental measurements, we compute the apparent modulus (EP) from the wave speeds numerically obtained in the IH-PD simulations above. Based on the theory of 2D (plane stress) wave propagation, 55 the relationship between the longitudinal or pressure wave speed and the apparent elastic modulus (EP) is:

C_L = √(E_P/(ρ(1 − ν²)))    (13)

where C_L is the longitudinal wave speed. In the computation, we use 1/3 as the Poisson ratio, because the Poisson's ratio value for the bond-based peridynamic model in 2D under plane stress conditions is 1/3. 56 The apparent modulus calculated from Eq. (13) by using the wave speed computed with the PD model matches well the experimental measurements (see Fig. 8). With this model validation for elasto-dynamic problems, we next validate the IH-PD model for fracture using quasi-static fracture of a porous rock. We consider fracture induced by three-point bending of Berea sandstone.
Peridynamic modeling of damage evolution in porous materials
We apply both the FH-PD and IH-PD models to quasi-static fracture of a porous rock sample. We find that only the IH-PD model delivers damage patterns and crack profiles similar to those seen in the experiments of Lin et al. 9
Description of experimental setup and results, and of the numerical model
The experimental setup is shown in Fig. 10. For the long pre-notch, the experiments show the crack propagating from the pre-notch tip (see Fig. 8 in Lin et al. 9) in mixed mode. Fig. 7 in Lin et al. 9 also shows that for the long notch, some damage, measured via acoustic emission (AE) and electronic speckle pattern interferometry (ESPI), is recorded near the beam's center. In what follows, we investigate whether the observed damage sensitivity to notch length is reproduced by either of the two PD models described in this paper. We will compare our PD simulation results with the experimental results in terms of fracture paths, locations of microcracks, and peak loads. These comparisons are not possible through full sample failure simply because the experimental results shown in 9 stop before that. In our simulations, the imposed displacement is 2 μm between increments. We run 2,000 increments to ensure full splitting of the samples.
Berea sandstone is chosen for our study because of its fairly uniform grain size,9 which makes it a good candidate for the isotropic porosity model, and because of its brittle failure behavior.
Although Berea sandstone from different locations may have different porosities, and thus different corresponding apparent moduli, we assume they all obey the relationship (see Section 4) E_P = E (1 - P/P_C)^2, with critical porosity P_C and zero-porosity modulus E. The apparent fracture energy (G_{0,P}) is assumed to have a similar relationship with porosity as the apparent modulus. Simulations of materials whose Poisson ratio does not happen to match 1/3 can be performed using the state-based peridynamic formulation.47 The state-based PD formulation eliminates the Poisson ratio restriction.
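A minimal sketch of this scaling, assuming (as the "similar relationship" above suggests) the same quadratic dependence on porosity for the apparent fracture energy:

    def apparent_properties(E0, G0, porosity, p_critical=1.0):
        # Quadratic degradation of the zero-porosity modulus E0 and fracture energy G0.
        scale = (1.0 - porosity / p_critical) ** 2
        return E0 * scale, G0 * scale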
Because the Poisson ratio of the material considered here is close to 1/3, and because our primary interest is to observe the capabilities of a PD model in capturing the evolution of fracture and failure in poroelastic materials and to determine the other factors that control crack growth in such materials, we use the bond-based peridynamic formulation. In our simulations, we use the conical micromodulus (see Eqs. (5) and (9)), δ = 2 mm, and m = 4.
Results and discussion
We use the FH-PD and IH-PD models (with the quasi-static loading conditions specified in the previous section) to simulate the cases shown in Fig. 10 and to examine the effect the pre-notch length has on the failure mode and crack patterns. In the IH-PD model, the effect of porosity is represented by inserting "pre-damage" in the material, while in the FH-PD model, porosity is indirectly represented by the apparent elastic and fracture properties. In the IH-PD simulations, the long pre-notch leads to a crack initiating from the off-center pre-notch tip, while in the short pre-notch case, the crack initiates from the bottom-center of the beam.
Movies 6 and 7 show the damage evolution with and without pre-damage due to porosity, respectively, for the short pre-notched sample using the IH-PD model. Movies 8 and 9 show the damage evolution with and without pre-damage due to porosity, respectively, for the long pre-notched sample using the IH-PD model. In Movies 6-9, the imposed displacement of the top-center of the beam progresses from 0 to 2.6 mm. In these movies, the damage quantity shown is scaled to a range between 0 and 1, and the same is done for the rest of such movies.
As displayed in Movies 6-9, our PD simulations track the crack growth through full failure. Because of the simplifying assumptions implicit in a full homogenization of a porous material (absence of microscale heterogeneities and defects), the high stress concentration at the tip of the pre-existing notch, whether long or short, causes fracture to initiate there (Fig. 12). In reality, small-scale heterogeneities (the presence of pores) result in local stress concentrations at places other than the off-center notch tip, most likely in the regions near the bottom center of the beam where the tensile bending stress is highest. This is what leads cracks to initiate from the beam's middle bottom rather than the pre-notch tip in the short-notch case (Fig. 11c and 11e).
Movies 10 and 11 show the damage evolution for the short and long pre-notched samples, respectively, using the FH-PD model with the conical micromodulus. In these simulations the imposed displacement at the top-center of the beam progresses from 0 to 1.9 mm. To make it easier to analyze the computed damage distribution in the material, we blank the material nodes with a damage index of zero in Fig. 13. The snapshots shown in Figs. 13a and 13b match the experiments very well (see Fig. 3 in Lin et al.9). Note that the initiation of fracture in the specimen with the short notch is the same as that in the specimen without a notch (similar distribution of damage around the final crack path). Figs. 7 and 8 in Lin et al.9 show that, experimentally, while for the long notch case the specimen fails from the notch tip, some damage is also recorded around the bottom center of the beam. As seen in Fig. 13b, the IH-PD model captures this micro-damage at the beam center for the long notch case, as observed in experiments, while the FH-PD model (Fig. 13d) does not. In Fig. 14 we compare the acoustic emission data from Lin et al.9 with the data produced by the IH-PD model for the long notch case. The available experimental results (red diamonds in Fig. 14) only show the early stages of the fracture process.
The peridynamic results (from the IH-PD model) match well with the acoustic emission data,9 showing damage accumulating at the right corner of the notch tip, while the numerical results from Lin et al.9 show crack initiation from the left corner of the notch. The damage evolution for the long notch case solved with the IH-PD model (see Movie 13) shows distributed damage around the beam's bottom-center occurring earlier than the damage that eventually leads to full failure, which starts at the pre-notch tip.
We note that the location of the long notch shown in Figs. 7 and 8 in Lin et al.9 is incorrect. Based on Table 2 in Lin et al.,9 the notch is located at two-thirds of the half-span length (about 49 mm from the center; see Fig. 10); however, the notch is shown located close to 40 mm in Figs. 7 and 8 of Lin et al.9 We use the parameters in Table 2 in our simulations. Fig. 14 compares the experimental AE data (color squares, using the legend in Fig. 13) with the numerical model in Lin et al.9 (black dots); the off-center black rectangle shows the notch geometry and location.
In Table 1 we compare the peak loads obtained in our simulations with the experimental values. Compared with the numerical model presented in Lin et al.,9 the IH-PD model introduced here has the following advantages:
i. The IH-PD model presented here uses only an elastic-and-brittle-failure model, matching the sandstone's micro-scale behavior. The actual sandstone heterogeneities/porosity (represented in our model via pre-damage) lead to an effective macroscale behavior that displays softening. In contrast, Lin et al.9 use a softening contact bond model, in which the normal bond strength reduces linearly after its peak, to mimic the observed macroscale behavior.
ii. No trial-and-error calibration of material constants is necessary for the IH-PD model. In the IH-PD model, macroscopically measurable properties (e.g., porosity) are the input parameters. In contrast, the constitutive law in the contact bond model9 is defined by many microscopic material constants; macroscopic properties emerge through the interaction of many discrete-element particles. Hence, trial-and-error steps (calibration processes) are required to achieve macroscopic properties similar to those of a specific rock.
Conclusions
In this paper we presented an Intermediately-Homogenized Peridynamic (IH-PD) model for simulating the elastic and failure behavior of porous materials. In this model, porosity was represented by an initial peridynamic damage ("pre-damage"), introduced by breaking bonds, stochastically, to achieve a given porosity. The peridynamic micromodulus was computed using the elastic modulus of the porous medium constituent material. We validated the model for elastic wave propagation using experimental data for wave propagation speeds and the apparent elastic moduli in porous glass with different porosities.
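The stochastic pre-damage insertion can be sketched as follows. This assumes a hypothetical flat bond-list layout; the actual IH-PD assignment rule (e.g., honoring local volume fractions) may differ.

    import numpy as np

    def insert_predamage(n_bonds, porosity, seed=None):
        # Break bonds at random so that a fraction ~porosity of bonds starts broken.
        rng = np.random.default_rng(seed)
        intact = rng.random(n_bonds) >= porosity  # True = intact bond
        return intact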
While simple homogenization methods work well for linear problems (e.g., the elastic response), for nonlinear problems (e.g., damage and failure), where dissipation and weakest links play a role in determining the material behavior, such methods may fail to reproduce experimental observations. To answer the question "how much homogenization is too much?" when modeling fracture processes in porous materials, we compared two peridynamic models: one that homogenizes the material to a greater extent (the Fully-Homogenized Peridynamic model, FH-PD) and the newly introduced IH-PD model. We studied a quasi-static crack growth problem in a brittle porous rock to understand the difference between the models' responses. We found that the fully-homogenized model (FH-PD) failed to capture the experimentally observed fracture behavior in Berea sandstone samples under three-point bending. For this problem, experiments show that the fracture patterns are controlled by the size of the off-center pre-notch. The IH-PD model, in contrast, reproduced the observed crack growth behavior and its dependency on the length of the pre-notch very well.
We conclude that for problems in which the microstructure has a large effect on the failure behavior, a fully homogenized strategy will not work to correctly capture the fracture and damage evolution. For a predictive model, some of the details of the microstructure and its role in controlling crack growth are needed. The new IH-PD model can predict the correct failure evolution in a porous material without requiring the explicit description of the microstructure geometry. This happens because some essential features (porosity represented via peridynamic pre-damage) of the material microstructure are incorporated in the model. It is this extra microstructure information that allows the computed damage to initiate and evolve in a way similar to that in the real porous medium. | 2019-08-17T15:53:18.156Z | 2019-04-03T00:00:00.000 | {
"year": 2019,
"sha1": "2341f4073c040dc7ccbb162b86a743263abdf83e",
"oa_license": "CCBY",
"oa_url": "https://engrxiv.org/preprint/download/458/1086",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "bfd0916ebb6c67395d79764383108eb568ba6a38",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
245704308 | pes2o/s2orc | v3-fos-license | Maximizing the Psycho-Acoustic Sweet Spot (Preprint)
Introduction
The field of spatial sound addresses the question: how do we create a desired auditory scene over a spatial region of interest from a sound scene generated with only a few loudspeakers? In this context, the sound scene represents the objective nature of a sound wave propagating in the physical world, whereas the auditory scene represents the imprint of the sound scene in our subjectivity, that is, the result of the auditory system perceiving and organizing sound into meaning [1,2]. Over the last century, several methods have been proposed to answer this question. Their performance can be compared in terms of the size of the region where the sound scene creates an auditory scene that most closely resembles the desired one. In this work, we call this region the sweet spot.
A popular strategy to recreate an auditory scene is to directly approximate the sound wave that created it. In the literature, this strategy is called sound field reconstruction and, in this context, the sweet spot is assumed to be the same as the region where the generated sound wave closely resembles the target sound wave. Following Huygens' principle, any sound scene can be approximated accurately with a sufficiently dense arrangement of loudspeakers. However, selecting the audio signals for the loudspeakers is an ill-conditioned problem [3] for which there might be multiple solutions, rendering the problem ill-posed [4]. Three classes of commonly used methods for sound field reconstruction are mode matching methods, pressure matching methods and wave field synthesis.
Mode Matching Methods (MMM) find an approximation by matching the coefficients in the expansion of the target and generated sound waves in spatial spherical harmonics [5]. Some well-known MMMs are Ambisonics [6], Higher-Order Ambisonics (HOA), and Near-Field Compensated Ambisonics (NFC-HOA) [7]. All of them minimize the ℓ2-norm of the difference between leading coefficients. Ambisonics assumes the loudspeakers emit plane waves and uses only the leading coefficient, whereas HOA uses a larger but fixed number of coefficients. In contrast, NFC-HOA assumes the loudspeakers are monopoles. Ambisonics, HOA and NFC-HOA are designed for circular or spherical regions of interest. When approximating a plane wave, they create a spherical sweet spot with a radius that is inversely proportional to the frequency of the source [8].
Instead of using expansions in spatial spherical harmonics, Pressure Matching Methods (PMM) minimize the spatiotemporal L2-error between the target and generated sound waves [9]. The magnitudes of the audio signals are often penalized by their Lp-norm to mitigate the effects of ill-conditioning [10]. Typically the loudspeakers are modeled as monopoles. In most cases, the solution can only be found numerically, and the discretization of the region of interest plays an important role.
Finally, Wave Field Synthesis (WFS) leverages the single-layer boundary integral representation of a sound wave over a region of interest [11]. Traditional WFS [12] uses a Rayleigh integral representation to derive a solution when the speakers are modeled as dipoles lying on a line. This was later extended to monopoles [13]. Its reformulation, Revisited WFS [14], uses a Kirchhoff-Helmholtz integral representation along with a Neumann boundary condition to obtain a solution for an arbitrary distribution of monopoles. It has been shown that the spatial properties of the auditory scene are correctly simulated by WFS and do not depend on the position of the listener over the region of interest [15]. However, it suffers from coloration effects due to spatial aliasing artifacts [16].
There is extensive literature analyzing these methods and comparing their performance [7,17,18]. In fact, they become equivalent in the limit of a continuum of loudspeakers, differing only when a finite number of loudspeakers is used [19]. Although they are amenable to mathematical analysis and have computationally efficient implementations, their construction has no natural psycho-acoustic justification to produce a large sweet spot as we have defined it. As a consequence, the approximation errors introduced by these methods may produce noticeable, and possibly avoidable, psycho-acoustic artifacts.
An alternative way to better reproduce the auditory scene is to explicitly account for psycho-acoustic principles [20,2]. The first steps in this direction were taken in [21], which proposed a simple model that aims to preserve the spatial properties of the desired auditory scene. A method to reproduce an active intensity field, itself a proxy for the spatial properties, that is largely uniform in space was then proposed in [22]. It is based on an optimization problem yielding audio signals where at most two loudspeakers are active simultaneously. However, it makes the restrictive assumptions that the target sound wave is a plane wave and that the loudspeakers emit plane waves. In [23] the radiation method and the precedence fade are proposed. The former is equivalent to applying a PMM over a selection of frequencies that are most relevant psycho-acoustically, whereas the latter is a method to overcome the localization problems associated with the precedence effect [2]. Finally, in [24] a PMM is extended to account for psycho-acoustic effects by considering the L2-norm of the differences in pressure convolved in time with a suitable filter.
We believe that there is a gap between methods that aim to directly approximate a sound wave to reproduce a desired auditory scene, and methods that leverage psycho-acoustic models to reproduce the same auditory scene. In this work, we develop a method that incorporates monaural psycho-acoustic models to generate a sound wave that directly maximizes the sweet spot. This method is amenable to mathematical analysis, has an efficient computational implementation, and incorporates psycho-acoustic principles from the onset. Our numerical results show our method outperforms some state-of-the-art methods for sound field reconstruction. The paper is organized as follows. In Section 2 we introduce the main physical and psycho-acoustic models that we use. In Section 3 we formulate the problem of maximizing the sweet spot, propose an accurate approximation, and analyze its properties. In Section 4 we show this approximation can be recast as a Difference-of-Convex (DC) program, and we introduce the SWEET algorithm as an efficient method to solve it approximately. In Section 5 we show a concrete implementation of our method based on van de Par's spectral psycho-acoustic model [25]. Finally, in Section 6 we perform several numerical experiments analyzing its performance, comparing its results with WFS, NFC-HOA and PMM, and showing some concrete applications.
Consider an array of n_s speakers located at x_1, . . . , x_{n_s} ∈ R^3. When the medium is assumed homogeneous and isotropic, and each loudspeaker is modeled as an isotropic point source, the sound wave they generate is [26, Section 2.5.2]
u(t, x) = \sum_{k=1}^{n_s} \frac{c_k(t - \|x - x_k\| / c_s)}{4\pi \|x - x_k\|},   (1)
where c_s is the speed of sound in the medium, and c_1, . . . , c_{n_s} are the audio signals of the loudspeakers. In the frequency domain, this is represented as
\hat{u}(f, x) = \sum_{k=1}^{n_s} \hat{c}_k(f) \frac{e^{-2\pi i f \|x - x_k\| / c_s}}{4\pi \|x - x_k\|},   (2)
where \hat{c}_k is the Fourier transform of c_k in time. To model the spatial radiation pattern of each loudspeaker, along with time-invariant effects such as reverb [27,28], the representation (1) can be replaced by
\hat{u}(f, x) = \sum_{k=1}^{n_s} \hat{c}_k(f)\, G_k(f, x),
where the G_k are the corresponding Green's functions. In addition to this array, consider a bounded domain Ω ⊂ R^3 containing no loudspeakers, i.e., x_k ∉ Ω, allowing us to avoid the singularities in (1) at each x_k. On this domain, we could attempt to approximate as best as possible a sound wave u_0 with the array of loudspeakers.
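As a sketch of Eq. (2), the frequency-domain field of the monopole array can be evaluated as follows. The sign of the complex exponential depends on the Fourier convention; all names are illustrative.

    import numpy as np

    def field_monopoles(x, speaker_pos, c_hat, f, c_s=343.0):
        # x: (m, 3) evaluation points; speaker_pos: (n_s, 3); c_hat: (n_s,) complex spectra at f.
        r = np.linalg.norm(x[:, None, :] - speaker_pos[None, :, :], axis=-1)  # (m, n_s)
        g = np.exp(-2j * np.pi * f * r / c_s) / (4.0 * np.pi * r)  # free-space Green's functions
        return g @ c_hat  # (m,) complex pressure at the evaluation points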
If we had a continuum of isotropic point sources on ∂Ω then, under suitable conditions, the simple source formulation [29, Section 8.7] shows we can reproduce u_0 exactly. However, when only a finite number of physical loudspeakers is available, we must find c_1, . . . , c_{n_s} such that
u(t, x) ≈ u_0(t, x),   (3)
in a suitable sense, for x ∈ Ω. In many cases u_0 is real-analytic in its second argument over Ω. As a consequence, when the speakers are isotropic point sources, or G_k is real-analytic in its second argument, the approximation cannot be exact on any open set unless u_0 was actually generated by the speaker array [30, Corollary 1.2.5]. This suggests (3) can hold only on average.
From now on we let W S be the set of acoustic waves that can be generated by the array, represented in the frequency domain as in (2). We formalize this set in Section 3 and we first turn our attention to the psycho-acoustic criteria that determine a suitable sense to interpret (3).
Psycho-acoustic preliminaries
To interpret (3) adequately, we consider two basic aspects of the human auditory system: the hearing threshold and the damage/discomfort risk level threshold. The former allows us to determine when the differences between u 0 and the approximating wave are negligible, whereas the latter ensures we do not harm listeners.
The hearing threshold
An important psycho-acoustic problem is to determine when the difference between two audio signals v 0 = v 0 (t) and v = v(t) is audible. A key concept to address it is the absolute threshold of hearing [20, Section 2.1] (see Figure 1): when v 0 ≡ 0, a pure tone v is imperceptible if its intensity falls below it.
In complex audio signals other mechanisms come into play, and the criteria for perception depend on the signal v_0 being approximated. It has been proposed that the human auditory system first computes an internal representation of the audio signal, v → Φ(v), and then applies an internal detector: the difference is perceptible if the detector's value exceeds a given threshold [31,32]. These studies do not provide a tractable form for this representation nor for the internal detector. A simplification yielding a tractable model is given in [33]. There, the model reduces to a non-symmetric distortion measure of the form
d(v_0, v) = \| L (v - v_0) \|^2,   (4)
where L is a transform modeling locally time-invariant filters that may depend on v_0. Another simplification in the literature is to consider a sum of convolved-weighted-squared errors [34],
d(v_0, v) = \sum_k \int \big( g_k(t)\, (h_k * (v - v_0))(t) \big)^2 \, dt,   (5)
where h_k and g_k represent a spectral and a time weighting, respectively. Together they model the difference over the k-th auditory filter; the filters may themselves depend on v_0. A further simplification, introduced in [25], consists in taking a constant g, i.e.,
d(v_0, v) = g^2 \sum_k \| h_k * (v - v_0) \|^2.   (6)
This proposal works only with spectral information and thus may not capture temporal masking effects accurately [34].
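A minimal sketch of the simplified distortion (6) for sampled signals follows; the auditory filters h_k are placeholders here, not the actual filterbank of [25].

    import numpy as np

    def distortion(v, v0, filters):
        # Sum over auditory filters of the energy of the filtered error, cf. Eq. (6).
        e = np.asarray(v) - np.asarray(v0)
        return sum(np.sum(np.convolve(h, e) ** 2) for h in filters)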
The main reason to make these models dependent on v 0 is to account for the psycho-acoustic equivalence of v and v 0 when the approximation error is masked by v 0 . This is a principle already used in audio coding [35,36].
The damage and discomfort risk threshold
Exposure to loud sound waves may be uncomfortable. Hence, unrestricted spatial sound systems may reproduce undesirable sound scenes where some features prevail at the expense of the discomfort of some listeners. Empirical thresholds for loudness discomfort levels for sinusoidal signals over a finite set of frequencies have been defined in the literature, e.g., in [37,38]. Naturally, these can be expressed as
\rho(f)\, |a(f)| \le 1,   (7)
where a(f) is the amplitude of the sinusoid at frequency f and ρ(f) is the multiplicative inverse of the threshold.
Psycho-acoustic framework
Although there is no definitive model for the hearing threshold, the literature supports the idea that the effects that must be taken into account depend on the sound wave u_0 itself. In this work we consider a general form for these models that includes several proposals in the literature. Inspired by (4), if u is an acoustic wave on Ω, a map of the form
B u(x) = \int \big( K_B (u - u_0)(t, x) \big)^2 \, dt, \quad K_B w(t, x) := \int K_B(t, t', x)\, w(t', x)\, dt',   (8)
where K_B is a suitable kernel, not necessarily time-invariant, quantifies the differences in perception between u and u_0 at a given x. A map of this form can account for time-variant effects, such as temporal masking, and also for time-invariant effects, such as spectral masking. Therefore, by choosing suitable kernels we can represent the differences in perception over several auditory filters as a collection of functionals B_1, . . . , B_{n_b} of the form (8). Consequently, we define the threshold map as
T u(x) := Ψ(B_1 u(x), . . . , B_{n_b} u(x)),   (9)
where Ψ : R_+^{n_b} → R is a continuous convex function that is non-decreasing in each of its components. Without loss, we consider that the difference between u and u_0 is not audible at x if T u(x) ≤ 0. Remark that by choosing a suitable function Ψ we may incorporate interactions between different auditory filters. Note that the form (9) encompasses (4), (5) and (6). Therefore, given an approximating wave u ∈ W_S, we define its sweet spot as the set where u is psycho-acoustically equivalent to u_0, i.e.,
S(u) = {x ∈ Ω : T u(x) ≤ 0}.   (10)
Note the psycho-acoustic equivalence that defines the sweet spot is monaural. Although at each point the audio signal is in this sense equivalent to the original, this does not account a priori for binaural effects, e.g., whether the position of an audio source is perceived correctly. Analogously, to model the discomfort level threshold we consider a collection of functionals Q_1, . . . , Q_{n_p} of the form (8) with u_0 ≡ 0. Note that these generalize (7), as they can account for time-variant effects. To enhance flexibility, we do not assume the same selection of auditory filters for the functionals B and Q, nor that n_b = n_p. Hence, we define the discomfort map P as in (9), P u(x) := Π(Q_1 u(x), . . . , Q_{n_p} u(x)), where Π is a function with the same properties as Ψ. Then,
P = {u : P u(x) ≤ 0 for almost every x ∈ Ω}
is the collection of sound waves below the discomfort threshold at every x. The domain of T and P consists of sound waves, and is thus part of the sound scene. In contrast, their image is part of the auditory scene. Hence, T and P link the objective and subjective aspects of the problem.
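On a spatial grid, the sweet spot (10) can be evaluated as in the following sketch. It assumes a summing choice of Ψ with unit offset, in the spirit of the Π used later for van de Par's model; this particular Ψ is an assumption of the sketch.

    import numpy as np

    def sweet_spot_mask(B):
        # B: (n_b, n_x) per-filter detector outputs B_j u(x); here T u = sum_j B_j u - 1.
        T = B.sum(axis=0) - 1.0
        return T <= 0.0  # True where u is psycho-acoustically equivalent to u_0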
Our goal is to find an acoustic wave u ∈ W_S that maximizes the weighted area of the sweet spot, µ(S(u)), while remaining comfortable, i.e., u ∈ P. From now on, we assume u_0 is known and fixed. In particular, all the parameters that we have introduced to define the threshold map (9) may depend on u_0.
Maximizing the sweet-spot
To formalize the problem of maximizing the sweet spot we make some critical assumptions. We consider the space W of sound waves that have finite energy at every x ∈ Ω. Spaces of this form are called mixed L^p-spaces and were introduced in [39]. The space W is complete under the norm
\|u\|_W := \sup_{x ∈ Ω} \Big( \int |u(t, x)|^2 \, dt \Big)^{1/2}.
An important feature of this norm is that the energy is preserved between time and frequency, i.e., \|u\|_W = \|\hat{u}\|_W. From now on, we assume u_0 ∈ W. The following proposition summarizes the technical results that ensure that the methods we propose are well-posed. We defer its proof to Appendix A. Proposition 1. Suppose that (i) the audio signals c_k in (2) are all bandlimited to an interval I_c and their L^2-norms are uniformly bounded.
(ii) The functions G k in (2) are continuous and bounded on I c × Ω.
Then the following assertions are true.
(i) The map T : W → L ∞ (Ω) is continuous, and for almost every x ∈ Ω the map u → T u(x) is convex.
(ii) The set S(u) is Borel measurable for any u ∈ W .
(iii) The set W S is compact in W .
(iv) The set P is closed in W .
We assume the hypotheses of the proposition hold throughout. This does not impose strong constraints on the threshold map (9). However, this implies the sound waves in W S are continuous in space and time.
The weighted area of the sweet spot is measured with a Borel measure µ [40, Section 1.2]. The problem of maximizing the sweet spot becomes
\max_{u ∈ W_S} \; µ(S(u)) \quad \text{subject to } u ∈ P.   (P_0)
In the above problem the feasible set is closed and bounded and, in fact, compact. To prove there exists a solution, we need to characterize the regularity of the objective function. However, this implies characterizing the behavior of the set-valued function u ⇒ S(u). This could be very difficult in practice. For this reason, we propose an approximation to (P 0 ) that can be analyzed with standard methods, and for which approximate solutions can be found efficiently.
The layer-cake representation
The layer-cake representation allows us to approximate the area of S(u) in terms of an integral over a function of u.
Let ϕ be a bounded non-negative function of bounded variation such that ϕ(t) = 0 for t < 0 and ∫ ϕ(t) dt = 1, and define Φ(t) := ∫_{-∞}^{t} ϕ(s) ds. Suppose v ∈ L^∞(Ω) and α > 0, and let S_α := {x ∈ Ω : v(x) > α}. Since Ω is bounded, this implies v ∈ L^1(Ω). We claim (Proposition 2) that the area µ(S_α) can be approximated by ∫_Ω Φ((v(x) − α)/ε) dµ(x) for small ε. Proof of Proposition 2. Let {ε_n} be monotone decreasing to zero. For x ∈ S_α the values Φ((v(x) − α)/ε_n) increase monotonically to 1, while for v(x) ≤ α they vanish; hence the integrals converge to µ(S_α), where we used the monotone convergence theorem [40, Theorem 2.4.1]. As {ε_n} is arbitrary, the claim follows. Taking ϕ supported on [0, ε] and applying this with v = T u, we have for u ∈ W and ε small that µ(S(u)) ≈ µ(Ω) − A_ε(T u), where A_ε(v) := ∫_Ω Φ(v(x)) dµ(x). This allows us to use directly an integral functional of a function of T u, thereby removing the need to use the set S(u) as an optimization variable.
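Numerically, with the step-function ϕ used later (in the SWEET-ReLU section), Φ clamps its argument to [0, 1] and A_ε becomes a weighted sum, as in this sketch; quadrature weights stand in for the measure µ.

    import numpy as np

    def area_above_threshold(Tu, weights, eps):
        # Phi(t) = (t_+ - (t - eps)_+) / eps, i.e. t/eps clipped to [0, 1].
        phi = np.clip(np.asarray(Tu) / eps, 0.0, 1.0)
        return np.sum(weights * phi)  # approximates mu({x : T u(x) > 0})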
The variational problem
We propose to solve the ε-approximated problem
\min_{u ∈ W_S} \; A_ε(T u) \quad \text{subject to } u ∈ P.   (12)
We can characterize the regularity of the objective function for this problem. Proposition 3. The function A_ε : L^∞(Ω) → R is continuous. Since W_S is compact, there exists at least one solution to (12).
The existence of solutions follows from the compactness of W S ∩ P.
Unfortunately, we cannot assert that the solution to (12) is unique and, in fact, several solutions may exist as two distinct sound waves may be the best psycho-acoustic approximation to u 0 on Ω. Consider the case u 0 ≡ 0: any sound wave u ∈ W S of sufficiently small magnitude falls below the pain and hearing thresholds, and is thus optimal for (12). In addition, although the feasible set is convex, the objective function is not. Therefore, in principle, there may not be efficient algorithms to solve (12), and several local minima may exist.
DC Formulation
To introduce a suitable algorithm to solve (12) we first rewrite it as
\min_{u, v} \; A_ε(v) \quad \text{subject to } u ∈ W_S ∩ P, \; v ∈ L^∞(Ω), \; T u ≤ v.   (13)
We interpret the auxiliary variable v as an overestimate of the threshold map over Ω. The proof of the following proposition shows that for all practical purposes we can assume T u = v. Proposition 4. The following assertions are true. (i) The set {(u, v) : T u ≤ v} is convex and closed.
(ii) If u* is an optimal solution to (12) then (u*, T u*) is an optimal solution to (13). In particular, (13) has a solution.
(iii) If (u*, v*) is an optimal solution to (13) then (u*, T u*) is also an optimal solution, and u* is an optimal solution to (12).
Proof of Proposition 4. We omit details for brevity. (i) Convexity follows from (i) in Proposition 1; the set is closed by the continuity of T. (ii)-(iii) Let u* be an optimal solution to (12). By choosing v = T u*, it is clear that the optimal value of (13) is less than or equal to that of (12). Conversely, let (u*, v*) be an optimal solution to (13). Since A_ε is non-decreasing and T u* ≤ v*, we have A_ε(T u*) ≤ A_ε(v*), so without loss we can assume v* = T u*. Consequently, the optimal value of (13) is greater than or equal to that of (12). Hence the problems are equivalent and, by Proposition 3, they both have at least one solution.
From now on, we denote both (12) and (13) as (P_ε), and we omit the subscript ε when possible. Note that in (13) the objective is a difference of convex functions. Since ϕ is of bounded variation we can consider its Jordan decomposition [41, Chapter 6, Jordan's Theorem] ϕ = ϕ_+ − ϕ_−, where ϕ_+, ϕ_− : R → R are non-decreasing functions which we assume to be zero for t < 0. Define Φ_±(t) := ∫_0^t ϕ_±(s) ds and A_±(v) := ∫_Ω Φ_±(v(x)) dµ(x), so that A_ε = A_+ − A_− with A_± convex. Hence, the formulation (13) is a Difference-of-Convex (DC) program [43,44]. For this type of problem, there are efficient algorithms that attempt to find a solution.
SWEET algorithm
The Convex-Concave Procedure (CCCP) [45] is an efficient method, which can be thought of as a primal version of the DCA algorithm [43], to find a solution to (13). Although it can be shown that if it converges, its limit is a stationary point [43, Theorem 3], our results in Section 6 suggest that in practice we are able to find local minima of (13). The CCCP is an iterative method that uses an affine majorant for the concave part, e.g., using subgradients, to majorize the objective function in (13) with a convex function. The resulting convex problem can then be solved efficiently.
Since A_− is continuous and convex, it has a subdifferential at any v ∈ L^∞(Ω); any functional g in the subdifferential is called a subgradient. Therefore, we can use the convex majorizer
v ↦ A_+(v) − A_−(v_0) − ⟨g_{v_0}, v − v_0⟩.
Although it may be difficult to characterize the subdifferential of a convex function on a Banach space, in our case we can always find at least one subgradient at any v_0. Proposition 6. For any v_0 ∈ L^∞(Ω), the function g_{v_0} := ϕ_−(v_0(·)) defines a subgradient of A_− at v_0, i.e.,
A_−(v) ≥ A_−(v_0) + ∫_Ω ϕ_−(v_0(x)) (v(x) − v_0(x)) \, dµ(x) for all v ∈ L^∞(Ω).   (14)
Proof of Proposition 6. Let v ∈ L^∞(Ω) and t, t_0 ∈ R. By the monotonicity of ϕ_− we have Φ_−(t) − Φ_−(t_0) ≥ ϕ_−(t_0)(t − t_0). Since t ∈ R is arbitrary, ϕ_−(t_0) is a subgradient of Φ_− at t_0. Moreover, since t_0 ∈ R is arbitrary, we have Φ_−(v(x)) − Φ_−(v_0(x)) ≥ ϕ_−(v_0(x))(v(x) − v_0(x)) for almost every x ∈ Ω. Whence, by the monotonicity of the integral, integrating over Ω yields the claim. The CCCP solves at each iteration the convex problem (P_{ε,v_0}):
\min_{u, v} \; A_+(v) − ⟨g_{v_0}, v⟩ \quad \text{subject to } u ∈ W_S ∩ P, \; T u ≤ v.   (15)
The proof of Proposition 3 can be adapted to show (P_{ε,v_0}) has at least one solution. Proposition 7. There exists at least one solution to (15).
Proof of Proposition 7. We first construct a candidate for an unconstrained minimizer of the objective. Let ṽ_0 be any representative of v_0 ∈ L^∞(Ω), and let Ω_+ := {x : ϕ_−(ṽ_0(x)) > 0}. Define a set-valued map F on Ω_+ collecting, at each x, the minimizers of the pointwise objective; for x ∉ Ω_+ we have ϕ_−(ṽ_0(x)) = 0. Furthermore, since Φ_+ takes non-negative values and vanishes on the non-positive reals, it is clear that F takes values in [−|ṽ_0(x)|, |ṽ_0(x)|] for x ∈ Ω_+. In particular, F takes non-empty, closed and convex values on a complete metric space. Therefore, it admits a measurable selection ṽ* [47, Theorem 8.2.2 and Theorem 8.1.13]. Note that |ṽ*(x)| ≤ |ṽ_0(x)| for x ∈ Ω_+. With a slight abuse of notation, we denote by ṽ* its extension by zero to all of Ω; this is still a measurable selection for F. If we let v* denote its equivalence class, we deduce that ‖v*‖_{L^∞} ≤ ‖v_0‖_{L^∞}, whence v* ∈ L^∞(Ω).
By construction, for any v ∈ L^∞(Ω) and representative ṽ of v, the pointwise objective at ṽ*(x) is no larger than at ṽ(x). Consequently, v* is indeed an unconstrained minimizer of the objective.
We now prove a minimizer exists. Let {(u_k, v_k)} be a minimizing sequence. Since W_S is compact by Proposition 1, we can assume without loss that {u_k} converges to a limit u_∞ ∈ W_S. Define w_k := max{v*, T u_k}; since T u_k ≤ v_k, we have w_k ≤ max{v*, v_k} by construction. We will show {(u_k, w_k)} is also minimizing. Applying the same arguments as in the proof of Proposition 6, and using that the pointwise objective is minimized at v*, we obtain that (u_k, w_k) attains an objective value no larger than that of (u_k, v_k). Since {(u_k, v_k)} is minimizing, we conclude that {(u_k, w_k)} is also minimizing and attains the optimal value p* in the limit. Since T is continuous, w_k → max{v*, T u_∞}. Hence, (u_∞, max{v*, T u_∞}) is a minimizer.
By solving a sequence of problems of the form (P ε,v k+1 ), where (u k+1 , v k+1 ) is an optimal solution to (P ε,v k ), we can attempt to find a solution to (P ε ).
Assuming the CCCP converges to a local minimizer of (P_ε), we can then solve a sequence of problems of the form (P_{ε_k}) for a decreasing sequence {ε_k} to approximate a solution to (P_0). In this case, we initialize the CCCP for (P_{ε_{k+1}}) with the solution found for (P_{ε_k}). We call this the SWEET algorithm; it is shown in Algorithm 1.
Finally, we remark that we could have applied the DC decomposition directly in (12). Although the term A_− ∘ T is convex when ϕ_+ ≥ 0, majorizing −A_− ∘ T would be more involved than the approach we have taken here.
SWEET-ReLU algorithm
When ϕ is a step function, the function Φ is the difference of two Rectified Linear Units (ReLUs), and the resulting instance of Algorithm 1 is simple and interpretable. Let ε > 0 and ϕ = ε^{-1} χ_{[0,ε]}. Choosing ϕ_+ = ε^{-1} χ_{[0,∞)} and ϕ_− = ε^{-1} χ_{(ε,∞)} yields Φ_+(t) = ε^{-1} t_+ and Φ_−(t) = ε^{-1}(t − ε)_+, whence Φ_+ and Φ_− are ReLUs. Moreover, the subgradient (14) becomes g_{v_0}(x) = ε^{-1} χ_{(ε,∞)}(v_0(x)). Let Ω_{ε,v_0} := {x : v_0(x) ≤ ε}. Since the terms A_−(v_0) and ⟨g_{v_0}, v_0⟩ in (P_{ε,v_0}) are constant, it suffices to compute
\min_{u, v} \; ε^{-1} \Big( \int_{Ω_{ε,v_0}} v_+ \, dµ + \int_{Ω^c_{ε,v_0}} (−v)_+ \, dµ \Big),   (16)
where we used the fact that t_+ − t = (−t)_+. The second term is non-negative, and it is positive only when v takes negative values. The restriction T u ≤ v in (P_{ε,v_0}) allows us to choose v arbitrarily large over Ω^c_{ε,v_0}, decreasing the objective value and allowing us to neglect the second integral. Therefore, only the first term contributes to the objective in (P_{ε,v_0}). Hence, for this choice of ϕ, ϕ_+ and ϕ_−, and because of the monotonicity of the positive-part function, we can eliminate the auxiliary variable v to obtain the problem
\min_{u ∈ W_S ∩ P} \; \int_{Ω_{ε,v_0}} (T u(x))_+ \, dµ(x).   (17)
Note that it depends on v_0 only through the set Ω_{ε,v_0}. With this in mind, notice that at each iteration of Algorithm 1 we need an optimal solution (u_{k+1}, v_{k+1}) to (P_{ε,v_k}). However, solving (17) only yields an optimal solution u_{k+1}. Fortunately, from a given solution u_{k+1} to (17) we can choose v_{k+1} such that (u_{k+1}, v_{k+1}) is an optimal solution to (16), e.g., v_{k+1} = T u_{k+1} on Ω_{ε,v_k} and v_{k+1} = max{T u_{k+1}, ε} elsewhere. Using this choice, Ω_{ε,v_{k+1}} = {x : T u_{k+1}(x) ≤ ε}. We call this simplification the SWEET-ReLU algorithm; it is shown in Algorithm 2. Due to compactness, the iterates {u_k} have at least one accumulation point, which must be a stationary point of (12) [43, Theorem 3]. SWEET-ReLU can be interpreted as a greedy algorithm that improves at each step the approximation over the set Ω_k while neglecting the approximation outside Ω_k. Intuitively, a point in Ω is neglected by the algorithm as soon as it determines that it cannot belong to the sweet spot. Furthermore, the sequence of sets generated by the algorithm is precisely an approximation of the sweet spot as, in fact, S(u_N) ≈ Ω_N. Additionally, initializing the algorithm with ε_0 sufficiently large gives Ω_1 = Ω, making the choice of u_0 irrelevant. Finally, the choice of {ε_i} can be adaptive; for instance, ε_i can be selected as the p-th percentile of the values of T u at the previous iterate.
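The iteration can be sketched with CVXPY for a simplified quadratic threshold map T u(x) = ||A[x] c - b[x]||^2 - tau[x] at a single frequency, with a norm bound standing in for the comfort set P. The model reduction and all names are assumptions of this sketch, not the paper's implementation.

    import numpy as np
    import cvxpy as cp

    def sweet_relu(A, b, tau, eps_schedule, c_max=1.0):
        # A: (n_x, n_b, n_s) complex, b: (n_x, n_b) complex, tau: (n_x,) positive.
        n_x, _, n_s = A.shape
        c = cp.Variable(n_s, complex=True)
        active = np.ones(n_x, dtype=bool)  # Omega_1 = Omega for eps_0 large enough
        for eps in eps_schedule:
            # Solve (17) restricted to the current active set Omega_k.
            hinge = [cp.pos(cp.sum_squares(A[i] @ c - b[i]) - tau[i])
                     for i in range(n_x) if active[i]]
            cp.Problem(cp.Minimize(sum(hinge)), [cp.norm(c, 2) <= c_max]).solve()
            T = np.array([np.linalg.norm(A[i] @ c.value - b[i]) ** 2 - tau[i]
                          for i in range(n_x)])
            active = T <= eps  # Omega_{k+1} = {x : T u_{k+1}(x) <= eps}
        return c.value, active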
Implementation
We provide an implementation of SWEET-ReLU for approximating a sound wave generated by a (pseudo) sinusoidal isotropic point source emitting at frequencies f_1, . . . , f_{n_f}. The loudspeakers are modeled as equivalent (pseudo) sinusoidal point sources, i.e., we use (1) with coefficients a_{k,ℓ} ∈ C and a fixed spectral localization parameter σ ≫ 1. Since the signals are almost stationary, temporal masking is almost non-existent. This allows us to define the threshold map T using van de Par's spectral psycho-acoustic model [25]; in this case, the filters in (8) are time-invariant. The constant C_A > 0 limits the perception of very weak signals in silence. The weight w_{B_j} is defined as w_{B_j} := |η γ_j|^2, where η, with constants C_{η,0} = 4.69, C_{η,1} = 18.2 × 10^{1.4}, C_{η,2} = 32.5 × 10^{−7} and C_{η,3} = 5 × 10^{−16}, models the outer and middle ear as proposed by Terhardt [48], and γ_j is the response of the j-th auditory filter, for which we use the approximation valid for (pseudo) sinusoidal signals when σ ≫ 1, with C̃_Ψ = 2^{1/4} π^{1/2} σ C_Ψ. The constants C_Ψ and C_A are defined as suggested in [25]: accounting for the absolute threshold of hearing and the just-noticeable difference in level for sinusoidal signals gives C_Ψ ≈ 1.555 and C_A ≈ 4.481 when considering n_b = 100 center frequencies, with f_1 = 20 Hz and f_{n_b} = 10^3 Hz as the first and last center frequencies.
To model the pain threshold we consider the experiments in [37] on the discomfort caused by sinusoidal signals. We interpolated the data in that study using cubic splines with natural boundary conditions [50, Section 8.6] to obtain a function η_P, as shown in Fig. 2, and define the functional Q_j accordingly for the auditory filter associated with the j-th frequency. To our knowledge, there is no standard reference for the spectral integration that determines the levels of discomfort or pain. For simplicity, we consider, as in the van de Par model, a summing integrating function, but now with the center frequencies of the discomfort auditory filters equal to the sound frequencies f_1, . . . , f_{n_f}. Then, Π(q_1, . . . , q_{n_f}) = −1 + C_Π q_1 + . . . + C_Π q_{n_f}. This is actually a conservative choice of Π, as it controls the sum of the contributions of every frequency instead of each one separately. Consequently, we obtain the discomfort map P, where the same approximation holds by the same arguments as before. Naturally, C_Π = 1.
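The spline interpolation step above can be sketched as follows; the frequency/level pairs are placeholders, not the data of [37].

    import numpy as np
    from scipy.interpolate import CubicSpline

    freqs_hz = np.array([125.0, 250.0, 500.0, 1000.0, 2000.0, 4000.0])  # placeholder grid
    ldl_db = np.array([110.0, 107.0, 105.0, 104.0, 103.0, 105.0])       # placeholder levels
    eta_p = CubicSpline(freqs_hz, ldl_db, bc_type='natural')            # natural boundary
    print(eta_p(750.0))  # interpolated discomfort level at 750 Hz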
To solve (P^{ReLU}_{ε,v_0}) we discretize the integrals over Ω. The following proposition ensures that this approximation of the integral converges to the desired one under mild assumptions; we defer its proof to Appendix B. Proposition. If the kernel satisfies f_K ∈ L^2(R^2) and u ∈ W_S, then T u ∈ C(Ω). Furthermore, if Ω is compact, T u is uniformly continuous over Ω.
Specifically, we discretize Ω using n_d disjoint squares or cubes of side (|Ω|/n_d)^{1/d} for d ∈ {2, 3}. To avoid spatial aliasing, we need at least 2 points per spatial wavelength λ_f = c_s/f for each frequency f of the source. This implies (|Ω|/n_d)^{1/d} < λ_f/2, whence n_d > (2/λ_f)^d |Ω|. To ensure the method performs well, we typically consider a denser discretization with at least 5 points per spatial wavelength.
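A sketch of this count, generalized to q points per wavelength:

    def n_points(area, f_max, d=2, q=5, c_s=343.0):
        # n_d > (q / lambda)**d * |Omega|, with lambda = c_s / f_max.
        lam = c_s / f_max
        return int((q / lam) ** d * area) + 1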
Experiments
We perform two types of numerical experiments. First, we compare the performance of our method with the state-of-the-art methods WFS, NFC-HOA and L2-PMM in terms of the size of the sweet spot they produce. Second, we explore other applications of our method related to sound field reconstruction. The setup for the numerical experiments consists of an equispaced arrangement of 20 loudspeakers lying on a circle of radius 2.5 m, at π/4 ≈ 0.785 m from each other. The region of interest Ω is a concentric circle of radius 2.4975 m (Fig. 4). The speed of sound is c_s = 343 m/s.
The SWEET-ReLU algorithm and the L2-PMM method were implemented in Python 3.8 using the CVXPY package [51,52] and MOSEK [53]. The simulations of 2.5D NFC-HOA and 2.5D WFS were done with the SFS Toolbox [54]. To compare the results of these methods, we report the size of the sweet spot as a fraction of the area |Ω| of Ω. To compare the values of the threshold map T u we use log(1 + T u); hence, the sweet spot is the region where log(1 + T u) ≤ 0. Finally, we compare the Intensity Direction Error (IDE), defined as the angle between the time-averaged acoustic intensity I of the generated wave and that of the target wave. For sinusoidal signals of frequency f the time-averaged intensity is given by [29, Section 2.3] I = (1/2) Re(\hat{u} \, \overline{\hat{v}}), where v is the velocity vector field of u.
Comparison with state-of-the-art methods
To compare our method with state-of-the-art methods, we perform two types of numerical experiments. The first type consists of a sequence of instances where the source moves progressively away from the center of the loudspeaker array, starting at 0 m and ending at 15 m. Following the model in Section 5, the source is isotropic and (pseudo) sinusoidal with f_1 = 343 Hz; hence, its wavelength is 1 m. When the source is inside Ω, its intensity is selected so that the wave has an amplitude of 60 dB at 1 m from the source. When the source is outside Ω we adjust the intensity so that the amplitude is 60 dB at the point where the segment joining the center of the arrangement and the source intersects the arrangement. This mitigates the effect of attenuation as the source moves away from the arrangement. A uniform discretization of 901 points was used for Ω, with contiguous points at a distance of at most 0.145 m, achieving more than 6 points per wavelength.
The second type considers the same source at a distance of 5 m from the center of the array, emitting a (pseudo) sinusoidal wave at different frequencies ranging from 50 Hz to 2000 Hz. To mitigate the issues due to non-convexity, we initialize SWEET-ReLU with the optimal solution obtained for the previous frequency value. A uniform discretization of 20848 points was used for Ω, with contiguous points at a distance of at most 0.03 m, achieving more than 5 points per wavelength in the worst case. For both types of experiments we have chosen ε_i adaptively with percentile p = 90. The results are shown in Fig. 3. We see our method generates a larger sweet spot than every other method over the entire range of source locations and frequencies (Fig. 3a and Fig. 3b). When the source is at 2.5 m, lying on the arrangement, the sweet spot equals Ω, as expected (Fig. 3a). Furthermore, our method successfully attains the lowest average threshold value in most of the instances. Although the performance degrades at very low frequencies compared to other methods, it remains below the audible threshold (Fig. 3c and Fig. 3d). This shows that on average the SWEET-ReLU algorithm does not produce large values of the threshold map outside the sweet spot.
To perform a finer analysis, we consider two additional instances: the near-field instance, where the source is outside the arrangement at 5 m from its center, and the focused-source instance, where the source is inside the arrangement at 0.82 m from its center (Fig. 4). For these experiments we have chosen ε_i adaptively with percentile p = 99. The sweet spots generated by each method for each instance are shown in Figs. 6 and 8, and their sizes are shown in Table 1. For the near-field instance, the sweet spot generated by our method is almost twice as large as those of the other methods. The sweet spot generated by NFC-HOA (Fig. 6f) is centered, whereas that generated by WFS (Fig. 6g) is localized farther away from the source. This is consistent with the analysis in [7]. In contrast, the sweet spot generated by our method (Fig. 6e) behaves like that generated by WFS, but almost encompasses the one generated by NFC-HOA. In all cases the aliasing artifacts appear roughly near the boundary of the sweet spot. This suggests that the principle behind sound field reconstruction, i.e., to avoid physically noticeable artifacts, does ensure a good monaural auditory scene. Our method exhibits fewer aliasing artifacts than the others. This may explain the low average IDE values and small psycho-acoustic errors in Fig. 3.
For the focused-source instance we strengthen the intensity of the source so that the wave has an amplitude of 72 dB at 1 m from the source. (Figs. 5 and 7 show u_0 for the near-field and focused-source setups, respectively.) The sweet spot generated by our method (Fig. 8e) is almost 10 times larger than those generated by the other methods. The sweet spot generated by NFC-HOA (Fig. 8f) is contained in a circle with a radius equal to the distance of the source to the center of the room. This is also consistent with [7]. The sweet spot generated by WFS (Fig. 8g) is almost empty, as the resulting u has large amplitude. This suggests that the focused-source formulation of WFS needs an amplitude normalization factor. In contrast, the sweet spot generated by our method almost comprises the half of Ω that faces the source. Furthermore, the artifacts are noticeable only behind the source. This shows the advantages of the greedy strategy of the SWEET-ReLU algorithm: during its first iterations it is capable of detecting the direction of u_0 over Ω and then prioritizing the part of Ω where a good fit to u_0 can be obtained. This also offers a possible explanation for the almost empty sweet spot generated by L2-PMM in both the near-field (Fig. 6h) and focused-source (Fig. 8h) instances: the proximity of the speakers completely degrades its performance, since the method attempts to minimize the L2-error where it is largest, i.e., near the speakers. As a consequence, the resulting u is small over Ω. Finally, our method is efficient in the usage of the loudspeakers: the acoustic wave u resulting from WFS is uncomfortably loud around the source and near the active loudspeakers in the array, whereas that obtained with NFC-HOA is uncomfortably loud in a large region outside a circumference concentric to Ω. Our method, in contrast, produces a negligible discomfort region by construction.
The effect of multiple frequencies
We now study the effect of a source generating a superposition of (pseudo) sinusoidal waves at n_f = 4 frequencies: f_1 = 400 Hz, f_2 = 300 Hz, f_3 = 200 Hz, and f_4 = 100 Hz. Our goal is to study non-linear effects and their consequences by comparing the sweet spot found for each frequency separately with that found by solving the problem for the multi-frequency source. A uniform discretization of 9660 points was used for Ω, with contiguous points at a distance of 0.04 m, achieving more than 19 points per wavelength in the worst case. The results are shown in Fig. 10. The sweet spots generated over Ω cover 54.3% of Ω for 400 Hz, 73.3% for 300 Hz, 85.5% for 200 Hz and 91% for 100 Hz. The sweet spot for the multi-frequency source covered 52% of Ω. In our standard setup it is easier to generate large sweet spots at low frequencies, and they shrink as the frequency of the source increases. Furthermore, the sweet spots seem to be roughly nested as the frequency increases. Interestingly, the sweet spot generated for the multi-frequency source is comparable to that obtained at the highest frequency. This suggests that, in general, the sweet spot generated by our method for a multi-frequency source will be dominated by the frequency that is hardest to approximate. This also yields insight into the setups for which a large sweet spot may be generated for a multi-frequency source.
Multiple zone control
The problem of creating a sound scene in one zone while keeping another silent has been extensively studied in the spatial sound literature, e.g., [55,56]. Here we show our method provides a solution to this problem. We consider the instance shown in Fig. 4c, where u_0 is equal to 0 over the silent zone, as shown in Fig. 11. In the silent zone we fix a psycho-acoustic tolerance of 20 dB above the absolute threshold of hearing, whereas in the zone for the sound scene, i.e., the sound zone, we keep the van de Par model as before.
Since the silent zone is 24 times smaller than the sound zone, we balance the problem by choosing a non-uniform measure µ that takes the value 24 over the silent zone and 1 over the sound zone. A uniform discretization of 3274 points was used for the sound zone and 332 for the silent zone, with contiguous points at a maximum distance of 0.075 m, achieving more than 13 points per wavelength. The results are shown in Fig. 12. Our method generates a sweet spot covering 32% of the sound zone and 97.5% of the silent zone. Also, Fig. 12c shows that the direction of the source is correctly reproduced inside the sweet spot in the sound zone. Inside the silent zone the IDE is not incorrect but undefined. This indicates that the localization properties of the auditory scene may be correctly reproduced as well.
In contrast, weighted L2-PMM performs poorly in this global multi-zone instance for the same reasons explained in Section 6.1 (Figs. 12d-f). It generates a sweet spot covering 1.7% of the sound zone and 27% of the silent zone. This shows our method is flexible and can be used for global multi-zone instances.
Discussion
Our results show the SWEET-ReLU algorithm yields state-of-the-art results in standard numerical experiments. We believe the performance in these experiments is representative of what we would observe when using more complex psycho-acoustic models for the hearing threshold and the loudness discomfort level. A key component of our method is the monaural threshold map T; extending the form of T to account for binaural effects is the subject of future research. However, as shown in [57], the overall quality of a spatial sound system can be explained to 70% by coloration or timbral fidelity, which can be characterized by monaural effects, and to 30% by spatial fidelity, which needs to be characterized by binaural effects. Furthermore, our experiments show that in some settings our method achieves a lower intensity direction error, which is a proxy for the localization error, than state-of-the-art methods. Hence, it correctly simulates the spatial properties of the auditory scene, even though we are not explicitly enforcing it.
Although we have presented numerical results modeling the loudspeakers and the virtual sources as isotropic pseudo-sinusoidal monopoles, we believe our method can be readily implemented in real settings with non-trivial sound sources. For instance, reverberation, different radiation patterns for the loudspeakers, and other time-invariant effects can be incorporated by modifying the Green's functions G_k. For the representation of the sound scene, due to the fine discretization of the region of interest required, it may also be convenient to use an object-based approach [1]. In this case, the target sound wave u_0 is not measured with microphones, but instead is simulated when the locations of the sources and their audio signals are known. Our method may be computationally expensive, as we need to solve a sequence of convex problems, precluding its use in real-time applications. Nevertheless, our multi-frequency experiments show the sweet spots nest as the frequency of a sinusoidal source increases. This suggests that a heuristic could be developed to improve the performance for multi-frequency sources. Furthermore, for a fixed instance, i.e., fixed room and loudspeaker arrangement, we may be able to approximate the map u_0 → u from several simulated pairs (u_0, u). Once approximated, the computational cost becomes negligible.
Finally, although we have not fully developed a theory for the convergence of SWEET-ReLU, our experiments show that it converges in practice. Further analysis will be the subject of future work.
Conclusion
In this work, we considered the sweet spot as the region where a sound scene is psycho-acoustically close to a desired auditory scene. Furthermore, we developed a method that generates a sound scene that maximizes this sweet spot while guaranteeing no discomfort over a spatial region of interest. In this method, the sweet spot and the discomfort tolerance can be modeled within a flexible monaural psycho-acoustic framework. We provided a theoretical analysis of the method and an efficient algorithm, the SWEET-ReLU algorithm, for its numerical implementation. On isotropic pseudo-sinusoidal monopole instances our method successfully generates a larger sweet spot than the most common state-of-the-art sound field reconstruction methods. We believe our method is a step towards a new paradigm for spatial sound reproduction, bridging the gap between methods based on psycho-acoustic principles and sound field reconstruction methods.
A Proof of Proposition 1
We prove some auxiliary results. First, with a slight abuse of notation, we claim the map K_B u(t, x) = ∫ K_B(t, t', x) u(t', x) dt', where K_B satisfies the hypotheses, is continuous from W into W. To prove this, fix x ∈ Ω and apply Young's inequality for integral operators [58, Theorem 0.3.1] to obtain a bound on the L^2-norm of K_B u(·, x) in terms of ‖u‖_W. Third, for any θ ∈ [0, 1] it is apparent that B(θ u_1 + (1 − θ) u_2)(x) ≤ θ B(u_1)(x) + (1 − θ) B(u_2)(x),
whence for almost every x the map u → B u(x) is convex. Fourth, B u is a measurable function by Fubini's theorem [40, Theorem 5.2.2]. Fifth, B is continuous in u. To prove this, let v = |K_B u_2| + |K_B u_1| + 2|K_B u_0| and w = K_B u_2 − K_B u_1, and bound |B u_2(x) − B u_1(x)| by an integral of |v w|, where we used the triangle inequality, the identity |a^2 − b^2| = |a + b||a − b| and the Cauchy-Schwarz inequality. The first term is bounded, where we used the inequality (a + b + c)^2 ≤ 3(a^2 + b^2 + c^2). For the second term, ‖w(·, x)‖_{L^2} is controlled by ‖u_2 − u_1‖_W by the continuity of K_B. It follows that u_1 → u_2 in W implies B u_1 → B u_2 in L^∞(Ω), whence B : W → L^∞(Ω) is continuous.
By the proof of (iii) in Proposition 1 we know that u ∈ W_S implies x → u(·, x) is a continuous map Ω → L^2(R). Hence, the above tends to zero as ‖x − y‖ → 0. For the second term, the integrand is dominated, and by Lebesgue's dominated convergence theorem we deduce that ∫∫ |K_B(t, t', x) − K_B(t, t', y)|^2 dt dt' → 0 as ‖x − y‖ → 0. Hence, ∫ (h(t, x) − h(t, y))^2 dt → 0 as ‖x − y‖ → 0, and x → h(·, x) is a continuous map Ω → L^2(R). | 2022-01-06T02:16:27.052Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "9ffe4ffa2441b43ea96465973c75ec9fb50f56df",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9ffe4ffa2441b43ea96465973c75ec9fb50f56df",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
213679042 | pes2o/s2orc | v3-fos-license | Histopathological spectrum of gall bladder lesions
Background: Gall bladder diseases are a very common health problem affecting millions of people throughout the world. Cholelithiasis is commonly associated with carcinoma of the gallbladder. Cholecystectomy is the most commonly performed surgical procedure for gall bladder disease. Methods: A total of 161 gall bladder specimens sent to the department of pathology from January 2017 to December 2018 were evaluated. Specimens were fixed in 10% formalin. Appropriate areas were selected from each specimen, grossed, processed, sectioned, stained with haematoxylin and eosin, and examined under the microscope. Results: Of 161 cases, 105 were female (65.22%) and 56 were male (34.78%). Histopathologically, the most common diagnosis was chronic calculous cholecystitis (57.76%), followed by chronic acalculous cholecystitis (22.36%). The remaining cases were acute on chronic cholecystitis (6.21%), acute on chronic cholecystitis with cholelithiasis (4.96%), acute on chronic cholecystitis with perforation peritonitis (1.24%), acute suppurative cholecystitis with perforation peritonitis (0.62%), biliary atresia (1.24%), chronic cholecystitis with choledochal cyst (1.24%), follicular cholecystitis (1.24%), adenocarcinoma (0.62%) and adenosquamous carcinoma (0.62%); one case was inconclusive (0.62%). Conclusions: The incidence of chronic calculous cholecystitis was 57.76%, with female preponderance and a peak in the third decade. Malignancy of the gall bladder is rare. Routine histopathological examination of all cholecystectomy specimens is strongly recommended for the detection of the various variants of chronic cholecystitis and of incidental carcinoma of the gall bladder, which helps in treatment and prognosis.
INTRODUCTION
The gallbladder is an organ with a wide spectrum of diseases, ranging from congenital anomalies, calculi and their complications, and non-inflammatory and inflammatory conditions to neoplastic lesions. Among gall bladder diseases, gall stones are a very common health problem affecting millions of people throughout the world.1 Gall stones produce inflammation of the gall bladder, which can be acute, chronic or acute on chronic. Chronic cholecystitis produces diverse histopathological changes in the gallbladder mucosa, such as acute and chronic inflammation, xanthogranulomatous cholecystitis, glandular hyperplasia, cholesterosis, and metaplasias.2 The incidence of gall bladder carcinoma (GBC) is 0.8-1%.3 Cholelithiasis is found in approximately 85% of people with gallbladder cancer.4 Other risk factors that increase the risk of GBC include porcelain gallbladder, adenomatous polyps of the gallbladder, chronic infection with Salmonella typhi, carcinogen exposure (e.g., miners exposed to radon), and an abnormal pancreaticobiliary duct junction.5 Gallbladder cancer (GBC) can be clinically obvious, an unexpected finding at laparotomy, detected incidentally on histologic examination, or missed only to present with recurrence during follow-up.6 Moreover, the prognosis of gall bladder cancer is very poor. It is therefore pertinent to analyze the histopathological changes associated with gallbladder disorders. The present study was done to study the different histopathological patterns of gall bladder diseases and their incidences.
METHODS
This is a retrospective observational study conducted in the department of pathology at a tertiary health care academic institute from January 2017 to December 2018. The study was approved by the ethical committee of the institute. A retrospective analysis of 161 cases who had undergone cholecystectomy at the institute was done.
Inclusion criteria
• All patients who underwent cholecystectomy in the hospital during the study period were included in the study.
Exclusion criteria
• HIV patients were excluded.
Clinical details like age and sex and relevant investigations like LFT and USG were considered. All specimens were fixed in 10% formalin. Gross features of the cholecystectomy specimens were recorded. Three sections were taken, one each from the neck, body and fundus. In cases with any growth, irregularity in the wall, calcification, necrosis, etc., more sections were taken. Standard grossing techniques were followed. Appropriate areas were selected, grossed, processed, sectioned, and stained with haematoxylin and eosin. Histopathological examination was done on formalin-fixed and paraffin-processed tissues.
Statistical analysis
The results were analysed using descriptive statistics.
RESULTS
A total of 161 patients who had undergone cholecystectomy over a period of two years were studied. Among these patients, 105 were female (65.22%) and 56 were male (34.78%). The age of the patients ranged from 3 months to 76 years, with the maximum number of patients being 31 to 40 years old. The mean age was 46.51 years. Table 1 shows the age distribution of gall bladder diseases. Gall stones were present in 106 cases (65.83%). Gall stones and associated diseases were found to be more common in women in the fourth decade as compared to men. Figure 1 shows the sex distribution of gall bladder disease. Pigment stones were found to be the most common, followed by cholesterol stones. Table 2 shows the morphological spectrum of gall bladder lesions. Gallbladder malignancy was relatively uncommon and was seen in only two cases, which were diagnosed as adenocarcinoma and adenosquamous carcinoma. 19 Only one case was found to be inconclusive. Table 4 shows a comparison of histopathological lesions. A major limitation of our study was the small number of cases.
CONCLUSION
The incidence of chronic calculus cholecystitis was found to be 57.76%, with a female preponderance and mostly in the third decade. Our study strongly recommends routine histopathological examination of all cholecystectomy specimens for the detection of the various variants of chronic cholecystitis and of incidental carcinoma of the gall bladder, which helps in their treatment and prognosis. | 2020-03-05T11:10:15.999Z | 2020-02-26T00:00:00.000 | {
"year": 2020,
"sha1": "6e0a12dfa538c889ed4620410302bacaf88d0a26",
"oa_license": null,
"oa_url": "https://www.msjonline.org/index.php/ijrms/article/download/7758/5518",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4a2dda3b8473e5cd936ad16251df52d8aa5b1328",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
260419445 | pes2o/s2orc | v3-fos-license | Observational relationships between ammonia, carbon dioxide and water vapor under a wide range of meteorological and turbulent conditions: RITA-2021 campaign
We present a comprehensive observational approach, aiming to establish relations between the surface-atmosphere exchange of ammonia (NH3) and the CO2 uptake and transpiration by vegetation. In doing so, we study relationships useful for the improvement and development of NH3 flux representations and their dependences. The NH3 concentration and flux are measured using a novel open-path miniDOAS measurement setup during the five-week RITA-2021 campaign (25 August until 12 October 2021) at the Ruisdael Observatory at Cabauw, the Netherlands. After filtering for unobstructed flow, sufficient turbulent mixing and CO2 uptake, we find the diurnal variability of the NH3 flux to be characterized by daytime emissions (0.05 µg m−2 s−1 on average) and deposition at sunrise and sunset (−0.05 µg m−2 s−1 on average). We first compare the NH3 flux to the observed gross primary production (GPP), representing CO2 uptake, and the latent heat flux (LvE), representing evapotranspiration. Next we study the observations following the main drivers of the dynamic vegetation response, which are photosynthetically active radiation (PAR), temperature (T) and the water vapor pressure deficit (VPD). Our findings show indication of the dominance of stomatal emission of NH3, with high correlation between the observed emissions and both net LvE (0.70) and PAR (0.72), as well as close similarities in the diurnal variability of the NH3 flux and GPP. However, the efforts to establish relationships are hampered by the diversity of NH3 sources in the active agricultural region and low data availability after filtering. Our findings show the need to collocate meteorological, carbon and nitrogen studies to advance our understanding of NH3 deposition and its representation.
Introduction
While nitrogen is an essential nutrient for the growth of plants, acting as a fertilizer, excess nitrogen deposition causes environmental damage and leads to an increased public health risk via the formation of particulate matter (Bobbink et al., 2003; Behera et al., 2013; Erisman and Schaap, 2004; Erisman et al., 2013; Smit and Heederik, 2017). When nitrogen critical loads are exceeded, excess nitrogen deposition threatens biodiversity through acidification and eutrophication of soils. When mitigation of the harmful effects of nitrogen fails, there can be serious political, economic and societal consequences, as demonstrated by the current Dutch nitrogen crisis (Stokstad, 2019). Atmospheric ammonia (NH3), mainly originating from agricultural activity, plays a key role in the deposition of nitrogen. This is especially true in the Netherlands, where NH3 deposition accounts for about three-quarters of all nitrogen deposition (Wichink Kruit and van Pul, 2018; RIVM et al., 2019).
Efforts to mitigate the harmful effects of nitrogen deposition heavily rely on models representing the concentration and deposition of nitrogen compounds, supported by a network of concentration and surface-atmosphere exchange measurements. The surface-atmosphere exchange in such models is represented by parameterizations, which are developed, validated and improved based on advanced high-resolution observations. In the case of atmospheric ammonia, taking accurate high-resolution measurements is notoriously difficult, due to the reactive nature of gaseous NH3 causing the gas to "stick" to the inlet walls of conventional instruments (Parrish and Fehsenfeld, 2000; von Bobrutzki et al., 2010). These challenges are amplified when measuring the NH3 surface-atmosphere exchange flux (deposition or emission), where high precision is particularly important (Nemitz et al., 2004; Whitehead et al., 2008).
Recent developments in advanced instrumental techniques resolve these inlet issues by using optical open-path analyzers. Swart et al. (2023) presents an intercomparison of two novel open-path measurement setups aimed at measuring the NH3 flux at half-hourly resolution: the RIVM-miniDOAS 2.2D (where DOAS denotes differential optical absorption spectroscopy) and the commercial Healthy Photon HT8700E. The two setups showed very similar results, despite being widely different in their measurement principle and approach to deriving the flux from concentrations: the Healthy Photon uses the eddy-covariance technique, whereas the miniDOAS applies the flux-gradient method to line-average concentration measurements over a 22 m open path at two heights. In this study, we continue the analysis of the observations of the miniDOAS system presented by Swart et al. (2023), as the system provides reliable measurements of both the concentration and flux with a high operational uptime.
In a previous study, based on measurements from the predecessor of the miniDOAS system at the Veenkampen meteorological site in the Netherlands, we identified that the mechanisms behind the stomatal exchange of NH3 are not yet fully understood (Schulte et al., 2021). Here, we continue to study this stomatal exchange pathway by linking the observed NH3 flux (FNH3) to photosynthesis, i.e., the stomatal exchange of CO2 and water vapor (plant transpiration). The similarities between the stomatal exchange of NH3 and CO2 have long been recognized (San José et al., 1991; Schrader et al., 2020). However, there are very few parallel measurements of NH3 and CO2 fluxes, and research into the two gases is generally conducted by separate scientific communities (Milford et al., 2001). Milford et al. (2001) performed one of the few attempts to develop a simple parameterization for both the CO2 and NH3 flux, but they were unsuccessful with respect to finding such relationships for NH3, as the observed NH3 flux over Scottish heathland was dominated by non-stomatal exchange. Further, Zöll et al. (2019) performed an analysis to study whether the biosphere-atmosphere exchange of total reactive nitrogen was driven by the same variables as CO2.
Our aim is to relate NH3 and CO2 fluxes in order to advance our understanding of NH3 stomatal exchange. These surface exchanges need to be related to the sensible and latent heat fluxes and to the diurnal boundary layer dynamics (Vilà-Guerau de Arellano et al., 2023). Utilizing recent developments in NH3 measurement techniques, we combine high-quality miniDOAS FNH3 observations with measurements of both CO2 and water vapor fluxes as well as with other meteorological variables. As our dataset is limited due to diverse weather conditions and the complexity associated with multiple nearby sources of ammonia, our analysis acts as a proof of concept, serving as an example of the need to combine high-quality NH3 flux measurements with auxiliary measurements of CO2, water vapor fluxes and other meteorological variables. As such, we decided to guide our analysis solely using observations and to keep the use of process representations to interpret our data to a minimum. We first describe the observations, after which we link the observed FNH3 to stomatal exchange, with the intention of establishing relationships between the stomatal exchange of ammonia and the processes of CO2 uptake and transpiration by vegetation. As these processes of photosynthesis are well understood, we explore how this understanding can lead to further improvement of the parameterization of the NH3 stomatal exchange.
Site description and measurement strategy
In September 2021, the Ruisdael Land-Atmosphere Interactions Intensive Trace-gas and Aerosol measurement campaign, known as RITA-2021, took place at the Cabauw Observatory (https://ruisdael-observatory.nl/cabauw/, last access: 15 January 2024). The Cabauw Observatory, one of the six sites within the Ruisdael Observatory, is located on flat grassland in the Netherlands (51.971° N, 4.927° E), with an average grass height of 0.1 m. The site provides a unique set of surface and upper-air observations, matched by very few stations worldwide. This includes measurements of thermodynamic variables along the 213 m mast, radiation, surface fluxes, clouds and trace gases. Surface elevation changes are, at most, a few meters over 20 km, and the nearby region is agricultural. An overview of the Cabauw site, the instruments stationed at the site and its 50 years of observations is given in Bosveld et al. (2020).
During the campaign, 48 d (from 25 August to 12 October) of ammonia measurements are taken using the miniDOAS flux measurement setup (Berkhout et al., 2017). The measurement setup and more details on the measurement campaign are described in Swart et al. (2023). In short, the miniDOAS is an optical instrument, measuring the line-average concentration (mass density) over a 22 m open path from the instrument to its retroreflector. The 30 min average NH3 concentrations have an accuracy of 3 % (e.g., 0.15 µg m−3 at the median NH3 concentration of 5 µg m−3 during the campaign; for further details, see Swart et al., 2023). The flux measurement setup uses two miniDOAS instruments that measure the concentration over parallel paths at different heights, i.e., 0.76 and 2.29 m, respectively. Regular intercalibration between the miniDOAS instruments allowed quantification and correction of any potential bias between the two instruments. The remaining random uncertainty in the NH3 gradient was 0.088 µg m−3 (1σ; for further details, see Swart et al., 2023). FNH3 is then inferred using the flux-gradient method, based on Monin-Obukhov similarity theory (Moene and Van Dam, 2014). The flux-gradient method combines the observed vertical NH3 gradient with turbulence measurements from a sonic anemometer (model Gill WindMaster Pro™, Gill Instruments, Lymington, UK) (Wyers et al., 1993; Nemitz et al., 2004; Wichink Kruit et al., 2007; Schulte et al., 2021). The sonic anemometer was mounted at 2.8 m above the ground alongside the miniDOAS measurement path. Temperature data are based on the corrected air temperature as calculated by the EddyPro software from the sonic data. The 10 Hz open-path H2O and CO2 analyzer (LI-7500DS, LI-COR Biosciences, Lincoln, USA) was placed at a similar height, 15 cm away from the sonic (for more details, see information on sonic no. 1 in Swart et al., 2023). The CO2 and water vapor fluxes and other micrometeorological parameters were calculated using the EddyPro software (LI-COR Biosciences, 2024) at 30 min intervals from the 10 Hz raw data. The flux calculation procedure followed the general best practices as applied across the FluxNet network (e.g., Mauder et al., 2022), including coordinate rotation (Wilczak et al., 2001), spectral corrections for both high-pass (Moncrieff et al., 2004) and low-pass filtering (Moncrieff et al., 1997), and addition of the Webb-Pearman-Leuning density term (Webb et al., 1980).
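To make the flux-gradient step concrete, the sketch below converts a two-level concentration difference into a flux using Monin-Obukhov similarity with textbook Businger-Dyer stability corrections (cf. Moene and Van Dam, 2014). This is a minimal illustration, not the campaign processing code, and the values in the example call are hypothetical.

```python
# Minimal aerodynamic flux-gradient sketch: two-level NH3 concentrations
# plus friction velocity and Obukhov length give a surface flux.
import numpy as np

KAPPA = 0.4  # von Karman constant

def psi_h(zeta):
    """Integrated Businger-Dyer stability correction for scalars."""
    zeta = np.asarray(zeta, dtype=float)
    return np.where(
        zeta < 0.0,
        2.0 * np.log((1.0 + np.sqrt(1.0 - 16.0 * zeta)) / 2.0),  # unstable
        -5.0 * zeta,                                             # stable
    )

def flux_gradient(c_low, c_high, z_low, z_high, u_star, L):
    """Flux in (units of c) * m s-1; positive = emission, negative = deposition."""
    denom = np.log(z_high / z_low) - psi_h(z_high / L) + psi_h(z_low / L)
    return -KAPPA * u_star * (c_high - c_low) / denom

# Hypothetical half hour: concentrations in ug m-3 at the two miniDOAS heights
f = flux_gradient(c_low=5.5, c_high=5.0, z_low=0.76, z_high=2.29,
                  u_star=0.25, L=-50.0)
print(f"F_NH3 ~ {f:.3f} ug m-2 s-1")  # ~0.05, i.e. emission
```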
The measurement field and its surroundings are shown in Fig. 1. The miniDOAS light paths are aimed north-northwest (right panel) to ensure unobstructed flow for wind coming from the west, which is the dominant wind direction in the Netherlands. North of the light path, shown in yellow in Fig. 1, the flow of air is obstructed by several instruments, including the aforementioned sonic anemometer. To the east and south, the airflow is obstructed by a trailer, the 213 m high meteorological tower and the container which houses the miniDOAS instruments. The unobstructed region west of the measurement field is mainly characterized by actively managed agricultural grassland and the small town of Cabauw (about 750 inhabitants), as shown in the left panel of Fig. 1. Several farms can be seen northwest and west of the measurement field, with varying emission strengths that reach over 1200 kg NH3 yr−1. Sheep and cattle graze on these agricultural fields, which are actively maintained and fertilized. These activities were not documented; sporadic fertilization events do affect the NH3 measurements, as will be discussed later.
Data filtering
We apply several filter criteria to the RITA-2021 observations; these are shown in Table 1 along with the acceptance rates for each individual filter criterion. The miniDOAS flux setup requires several days of intercalibration measurements, as described in Swart et al. (2023). No ammonia flux can be inferred from these intercalibration measurements, leaving 65 % of the campaign observations suitable for flux measurements. Furthermore, we discard observations from 11 to 12 September, as these NH3 emission fluxes are outliers with respect to the average observed NH3 flux, indicating a fertilization event in close proximity to the measurement site.
The remaining measurements are processed by applying five filters in total. The use of the flux-gradient method requires unobstructed upwind airflow with sufficient turbulent mixing. Figure 1 shows that the instruments were positioned anticipating winds from the southwest (green), with the obstacles located east (red) and north (yellow) of the miniDOAS optical path. Therefore, we apply a criterion filtering for wind directions between 201° and 331°. This filter leads to a large reduction in the data available for analysis, decreasing the available data from 61 % to 16 %, as the prevalent wind direction during the campaign was from the northeast. As a secondary effect of this filter, the available observations are taken under synoptic weather conditions characterized by frontal passages with some rain events. The second filter excludes rain events lasting more than 5 min, as rain droplets can obstruct the light path of the miniDOAS. Finally, sufficient turbulent mixing is one of the main requirements for flux measurement using the flux-gradient method. Therefore, the third filter requires the friction velocity to have a value of at least 0.1 m s−1 (u* ≥ 0.1 m s−1). With these three filters, we ensure the quality of the ammonia measurements, observing the NH3 flux with an average precision of 0.015 µg m−2 s−1 (1σ; for further details, see Swart et al., 2023).
The fourth and fifth filter criteria focus on the ammonia surface-atmosphere exchange pathways. The NH3 flux follows three pathways: the stomatal pathway, the external leaf surface pathway and the soil pathway (Nemitz et al., 2001; Massad et al., 2010; van Zanten et al., 2010). The latter is generally assumed to be negligible for FNH3 over grass, as the dense vegetation completely covers the soil. The external leaf pathway represents the exchange of ammonia with a thin film of water and leaf surface waxes on the leaf surface and depends on the relative humidity (RH) (Van Hove et al., 1989). Finally, the stomatal pathway represents the exchange of NH3 through the plant stomata with ammonium dissolved in the apoplast fluids of the plant (Farquhar et al., 1980; Wichink Kruit et al., 2010). These processes occur at the leaf scale (micrometer or millimeter level) and, as such, require a representation of photosynthesis and stomatal aperture that needs to be evaluated with observations (Vilà-Guerau de Arellano et al., 2020). Upscaling to the canopy level allows it to be compared with observations inferred from eddy covariance, such as the GPP (Filter 4).
The NH3 exchange through the stomatal pathway is governed by the dynamic response of vegetation to meteorological conditions and is closely related to photosynthesis. The stomata open during the day in response to solar radiation, as the vegetation uses energy for photosynthesis, particularly the photosynthetically active radiation (PAR) (Hsiao, 1973; Cowan and Farquhar, 1977; Papaioannou et al., 1996; Ronda et al., 2001). Plants ingest CO2 through the stomata, but water from inside the plant can evaporate as the stomata are opened. The plant can reduce this loss of water by (partly) closing the stomata in the case of a high water vapor pressure deficit (VPD), or it can increase the evaporation rate by actively opening the stomata. Increasing the evaporation rate provides cooling, lowering the leaf temperature in order to reach the optimal conditions to perform photosynthesis (Jacobs and de Bruin, 1997; Takagi et al., 1998; de Groot et al., 2019; Vilà-Guerau de Arellano et al., 2020). As the temperature and VPD are often highest in the afternoon, the stomata often partly close to manage the loss of water. During the night, there is no PAR for photosynthesis, so the stomata are closed. As a result, the characteristics of ammonia surface-atmosphere exchange differ between day and night, with the stomatal pathway being dominant during the day and the external leaf pathway being the dominant pathway during the night and in the early morning.
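The VPD itself is not defined explicitly in the text; a standard way to compute it from air temperature and relative humidity is the Tetens (FAO-56) saturation vapor pressure curve. The sketch below assumes that conventional form.

```python
import numpy as np

def vpd_kpa(t_celsius, rh_percent):
    """Water vapor pressure deficit (kPa) via the Tetens/FAO-56 curve."""
    e_sat = 0.6108 * np.exp(17.27 * t_celsius / (t_celsius + 237.3))
    return e_sat * (1.0 - rh_percent / 100.0)

print(round(vpd_kpa(20.0, 80.0), 2))  # ~0.47 kPa at 20 degC and 80 % RH
```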
The uptake of CO2 is represented by the gross primary production (GPP, in mg C m−2 s−1). The GPP and the ecosystem respiration (ER) combined define the net ecosystem exchange (NEE) of CO2. Using the sign convention that the flux towards the surface is positive, we define the net ecosystem exchange as follows: NEE = GPP + ER, where (under normal daytime grass field conditions) our observations are NEE > 0, the inferred GPP values are positive and the inferred ER values are negative. The ER is estimated by taking the average nighttime CO2 flux over the campaign (with nighttime defined as the periods when the net available radiation is zero). The GPP is then estimated by combining the observed CO2 flux with the estimated respiration. The approach described above fits with our aim of guiding the analysis using measurements alone. However, well-established methods exist to partition the NEE into the GPP and ER. In Appendix A, we show that using the Arrhenius-type relationship between temperature and nighttime CO2 flux to describe ER, as proposed by Lloyd and Taylor (1994), and then subtracting that from the NEE to arrive at the GPP only changes the GPP estimates slightly. Because of its limited impact on the results, we continue with the observation-based estimate of the GPP in the main text.
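The observation-based partitioning described above amounts to two lines of arithmetic; a sketch with assumed column names follows, using the sign convention stated in the text (flux towards the surface positive).

```python
import pandas as pd

def partition_nee(df: pd.DataFrame) -> pd.DataFrame:
    """Estimate ER as the campaign-average nighttime CO2 flux; GPP = NEE - ER."""
    night = df["net_radiation"] <= 0.0          # nighttime definition
    er = df.loc[night, "co2_flux"].mean()       # negative under this convention
    out = df.copy()
    out["ER"] = er
    out["GPP"] = out["co2_flux"] - er
    return out
```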
To capture observations with active stomatal exchange, Filter 4 is set to only accept GPP > 0 mg CO2 m−2 s−1. Due to the uncertainty in our GPP estimate, there are still some nighttime observations which pass the filter. Therefore, we add an additional fifth filter using incoming shortwave radiation (SWin). Only measurements with SWin > 10 W m−2 will pass, in order to filter out these last remaining nighttime observations.
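Taken together, the five filters reduce to simple boolean masks on the half-hourly records. The sketch below assumes a pandas DataFrame with self-explanatory column names; the thresholds are the ones given above.

```python
import pandas as pd

def apply_filters(df: pd.DataFrame) -> pd.DataFrame:
    keep = (
        df["wind_dir_deg"].between(201.0, 331.0)  # 1: unobstructed flow
        & (df["rain_minutes"] <= 5.0)             # 2: no prolonged rain
        & (df["u_star"] >= 0.1)                   # 3: sufficient mixing, m s-1
        & (df["GPP"] > 0.0)                       # 4: active CO2 uptake
        & (df["sw_in"] > 10.0)                    # 5: daytime, W m-2
    )
    return df[keep]
```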
After filtering, 102 h (9 %) of all RITA-2021 observations, or 18 % of all daytime RITA-2021 observations, are available for analysis.These observations are taken over 17 unique days, spanning from 29 August to 30 September, with an average of 6 h and a maximum of 12 h of accepted measurement per day.
Characterization of the campaign meteorology
The summer months (June, July and August) leading up to the RITA-2021 campaign are characterized as an average Dutch summer, with average temperatures (17.7 °C), above-average precipitation (244 mm accumulated) and below-average hours of sunshine (618 h). Additionally, the ground and surface water levels are actively managed in order to sustain optimal conditions for the agricultural activity in the area (Brauer et al., 2014). Thus, it is expected that the role of long-term vegetation stress on stomatal exchange is negligible during the RITA-2021 campaign.
As discussed in Sect. 2.2, high temperatures or a high VPD can induce vegetation stress during the campaign. Therefore, we characterize the meteorological conditions of the 17 unique days on which the 102 h of filtered measurements were taken. The meteorological conditions of these days are summarized in Table 2, which shows the 17 d average and the observed range of the diurnal minimum/maximum of several variables. The 17 d average values provide a characterization of mild meteorological conditions with no indication that the vegetation is under stress. Additionally, Table 2 includes an estimate of the maximum daytime footprint determined using the sonic anemometer fluxes at a height of 2.8 m, following the method of Kljun et al. (2015). This footprint refers to the maximum upwind distance (in meters) encompassing the source area that contributed 70 % of the measured flux and serves as a first-order approximation of the footprint of the NH3 flux measurements.
As the filtered campaign measurements are characterized by frontal passages, the weather conditions range from clear-sky summer conditions with moderately high temperatures to colder cloudy days with short precipitation events (not shown). Furthermore, the atmospheric stability for the 102 h of filtered measurements is classified using the measured Obukhov length (L) and the height of the sonic anemometer (z = 2.8 m). In total, 4.5 h (4 %) can be classified as stable (z/L > 0.05), 61 h (60 %) as neutral (−0.05 ≤ z/L ≤ 0.05) and 36.5 h (36 %) as unstable (z/L < −0.05) conditions. This variation leads to a large spread in all variables shown in Table 2, as indicated by the column showing the 17 d range.
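The stability classification quoted above uses only the stability parameter z/L; a direct transcription of those thresholds:

```python
def classify_stability(obukhov_length_m: float, z: float = 2.8) -> str:
    """Classify atmospheric stability from z/L (z = sonic anemometer height)."""
    zeta = z / obukhov_length_m
    if zeta > 0.05:
        return "stable"
    if zeta < -0.05:
        return "unstable"
    return "neutral"

print(classify_stability(-50.0))  # z/L = -0.056 -> 'unstable'
```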
General characterization of the NH 3 observations
The variety of meteorological conditions could be an explanation of the large day-to-day difference in the observed NH3 concentrations (shown in Fig. 2b). The histogram is highly skewed and shows that most observed NH3 concentrations are below 7 µg m−3, although higher concentrations with a maximum value of 24.7 µg m−3 are also present. Nevertheless, the mean (solid line) and median (dotted line) concentrations do indicate that the concentration decreases during the day, until the late afternoon. This would be in line with observations at several other sites, both in the Netherlands (Wichink Kruit et al., 2007; Schulte et al., 2021) and in other countries, e.g., Scotland (von Bobrutzki et al., 2010) or Italy (Ferrara et al., 2021). The large day-to-day differences in the NH3 measurements could be a result of the changing meteorological conditions, the nearby agricultural activity or a combination of both. Despite the high variability in the NH3 concentration measurements, a consistent diurnal variability is observed in the NH3 gradient (∆NH3) and the corresponding flux in Fig. 2c and d, respectively. Both Fig. 2b and c indicate that the observed ∆NH3 is independent of the absolute NH3 concentration, i.e., high absolute concentrations do not lead to a large concentration difference between the two miniDOAS instruments. The average diurnal variability is characterized by negative ∆NH3 (deposition) in the early morning and late afternoon and positive ∆NH3 (emission) during the afternoon, with a typical range of about 0.5 µg m−3 in both directions. In total, 79 % of the filtered observations have a positive ∆NH3, corresponding to NH3 emissions.
As FNH3 is directly inferred from ∆NH3, the diurnal variability in Fig. 2c and d is very similar. The NH3 flux typically reaches its maximum around noon at a little over 0.05 µg m−2 s−1 on average, with individual noon observations ranging from −0.01 to 0.14 µg m−2 s−1. Note that the measurements taken on 11-12 September, the aforementioned fertilization event, are approximately a factor of 4 larger than the mean campaign values. Despite the large observed FNH3 on these days, the observed concentrations are only slightly larger than the campaign averages. These 2 d will not be included in the analysis presented in this study, but they are shown as an illustration of how fertilization events can impact our analysis.
Characterization of the ammonia flux
In Fig. 3a, we show the observed ammonia flux against the air temperature, with the colors indicating the atmospheric NH3 concentration at 2.29 m. Despite our efforts to filter for observations where the stomatal pathway is dominant, it cannot be ruled out that the external leaf pathway still plays an important role in the morning, via deposition onto morning dew at the canopy level (van Zanten et al., 2010; Wentworth et al., 2016). Therefore, we use black circles to mark observations taken before 12:00 UTC with a RH > 80 % in Fig. 3. These highlighted observations indeed generally correspond to measurements of deposition or weak emission, indicating that NH3 exchange through the external leaf pathway is still significant for these observations. While their involvement complicates our analysis of stomatal NH3 exchange, they are still included in the analysis, as this also offers an opportunity to test if the relationships found in the filtered dataset differ for the marked and unmarked observations. If that is the case, it shows that we are indeed able to attribute the unmarked observations to stomatal exchange.
Figure 3a shows that FNH3 increases with temperature for a low atmospheric concentration (2 µg m−3 ≤ NH3,2.29m ≤ 7 µg m−3). We attribute this increase in NH3 emissions to the change in ∆NH3 for increasing temperature, i.e., the difference between the approximately constant atmospheric NH3 concentration and the stomatal compensation point. Following parameterizations of this compensation point, we find it to be related to the (leaf) temperature and some form of nitrogen availability parameter (e.g., actual or long-term NH3 concentration), increasing nonlinearly with increasing temperature or nitrogen input (Nemitz et al., 2001; Massad et al., 2010; van Zanten et al., 2010). In Fig. 3b, a theoretical stomatal compensation point (dotted line) is added, which is calculated following the DEPosition of Acidifying Compounds (DEPAC) parameterization (van Zanten et al., 2010), using air temperature and the campaign median NH3,2.29m (7.7 µg m−3). FNH3 shows more scatter for measurements taken at high temperatures (> 21 °C). While Fig. 3b shows only small variations in the NH3 concentration for temperatures below 21 °C, the NH3 concentrations for these warmer temperatures are higher than the campaign average (> 7 µg m−3) and highly variable. As the NH3 flux is directly related to the difference between the atmospheric NH3 and the stomatal compensation point, the variability in the atmospheric concentration leads to the scatter shown in Fig. 3a, where higher NH3 concentrations correspond to weaker emission fluxes.
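The text does not reproduce the DEPAC equations themselves; a commonly used thermodynamic form of the stomatal compensation point (cf. Nemitz et al., 2001) is sketched below. The apoplastic [NH4+]/[H+] ratio Γs is left as a free input here (DEPAC in turn derives it from the long-term NH3 concentration), so both the constants and the example Γs value should be read as an approximation rather than the exact campaign parameterization.

```python
import numpy as np

def stomatal_compensation_point(t_kelvin, gamma_s):
    """chi_s in ug m-3 from (leaf) temperature in K and the apoplastic
    [NH4+]/[H+] ratio gamma_s (dimensionless emission potential)."""
    return 2.75e15 / t_kelvin * np.exp(-1.04e4 / t_kelvin) * gamma_s

# A hypothetical gamma_s of 2000 at 20 degC gives chi_s of roughly 7 ug m-3,
# i.e. near the campaign median concentration, so modest temperature shifts
# can flip the gradient between deposition and emission.
print(round(stomatal_compensation_point(293.15, 2000.0), 1))
```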
Ammonia flux relationships to dynamic vegetation responses
The diurnal pattern of FNH3 in Fig. 2d shows similarities to the diurnal variability in the GPP in Fig. 2e. To further study the role of stomatal exchange during the campaign, we link the observed FNH3 to the dynamic vegetation responses. First, we relate the ammonia flux to the GPP, the latent heat flux (LvE) and the sensible heat flux (H). The GPP and (the transpirational part of) LvE are directly governed by the opening and closing of the stomata and represent stomatal exchange. Given the low data availability (9 %), we are aware that the analysis could be dominated by variations resulting from the diurnal variability in the fluxes. Therefore, we also include H in our analysis. The sensible heat flux is only indirectly related to the dynamic vegetation response through the surface energy balance, as the available energy from (solar) radiation and the soil heat flux is split between LvE and H. If the observed fluxes are indeed regulated through the opening and closing of stomata, the analysis of FNH3 with respect to LvE and GPP should differ from the comparison with H. Next, we organize the observations following current dynamic vegetation models, based on temperature, radiation and moisture (Jarvis et al., 1976; Stewart, 1988; Ronda et al., 2001). Here, we compare the responses of the four individual fluxes to temperature (T), PAR and VPD. As these three variables control the stomatal response at the canopy level in the models, we will use the responses of the fluxes to these variables as a guide to better understand the diurnal variability in the ammonia flux. Note that measurements taken on 11-12 September are not used to calculate correlation coefficients, but they are shown in the figures and included in the visual analysis.
Relating the ammonia flux to photosynthesis
Plotting FNH3 against the GPP in Fig. 4a shows a low positive correlation between the two fluxes, with a correlation coefficient of 0.48. There is a large spread in the data, particularly for GPP values larger than 0.125 mg C m−2 s−1. Part of this spread is attributed to the observations with high relative humidity (black circles), where FNH3 is not yet dominated by stomatal exchange and the external leaf pathway is still expected to be significant. Note that the atmospheric stability (color coded) plays an important role in the GPP, as unstable conditions are typically characterized by clear skies and high PAR values, which favor photosynthesis (as discussed in Sect. 3.2). This relationship is not found in the observed FNH3, as there is a large spread in FNH3 for both neutral and unstable conditions.
In Fig. 4b, a moderate positive correlation is found between FNH3 and LvE. Our interpretation of this moderate correlation is that both transpiration and stomatal NH3 emissions follow a similar process. The opening of the stomata for photosynthesis allows for the exchange of several gases, including water vapor and ammonia, depending on the VPD or the difference between the atmospheric NH3 and the stomatal compensation point (Cowan and Farquhar, 1977; Hsiao, 1973; Farquhar et al., 1980; Wichink Kruit et al., 2010). Note that LvE represents the net evaporation (Miralles et al., 2020), as evaporation from the soil plays a role as well. Assuming a vegetation cover of 90 % for grass, soil evaporation contributes with estimates that range from 10 % to 30 %. Despite this, the use of net LvE is acceptable as an indicator of the transpiration process. Note further that the observations with high relative humidity generally correspond to low LvE and that unstable conditions again correspond to high LvE values, related to the VPD between the leaf and stomata and the atmosphere.
When plotting FNH3 against H, two branches are found in the spread of the data, with a third branch being formed by the filtered-out fertilization event on 11-12 September (black crosses). The smaller branch, with FNH3 > 0.1 µg m−2 s−1, could point towards another (weaker) fertilization event. Still, the second highest positive correlation is found at 0.65, indicating that the natural diurnal variability indeed plays an important role. Note that most of the measurements with high relative humidity are clustered around H = 0 W m−2, i.e., there is little transfer of heat between the surface and atmosphere.
Based on the three scatterplots, we find the highest correlation between FNH3 and LvE. Together with the diurnal variability in FNH3, transitioning from nighttime deposition to daytime emission from 08:30 to 16:30 UTC, this is the second indication of stomatal emission of NH3, as opposed to emission from fertilization or animal droppings. However, the moderate correlation between FNH3 and H indicates that the diurnal variability in the fluxes influences the correlation coefficient. Finally, we want to mention the observations on 11-12 September, which support the interpretation of the scatterplots with respect to showing how fertilization events affect our analysis.
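The correlation coefficients quoted in this section are plain Pearson correlations between the half-hourly FNH3 series and each candidate driver; with the filtered data in a DataFrame (column names assumed), they reduce to:

```python
import pandas as pd

def flux_correlations(df: pd.DataFrame) -> pd.Series:
    """Pearson r between F_NH3 and the stomatal / energy-balance drivers."""
    drivers = ["GPP", "LvE", "H", "T", "VPD", "PAR"]
    return df[drivers].corrwith(df["F_NH3"]).round(2)
```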
The dynamic response to temperature
We further investigate the stomatal exchange of NH3 by analyzing the response of FNH3 to varying meteorological conditions. The optimal conditions (PAR, T, VPD) for photosynthesis are different for different vegetation types (Gates, 1980; Jacobs, 1994; Vilà-Guerau de Arellano et al., 2015).
Starting with the 2.8 m temperature (T) in Fig. 5, we find a large spread for all four surface fluxes, resulting in low positive correlations (0.33-0.51). The lowest correlation coefficients are found for the GPP and LvE, indicating that temperature has little impact on the opening and closing of the stomata. A slightly higher correlation is found for FNH3, which we attribute to the relationship between the stomatal compensation point and the NH3 flux (discussed in Sect. 2.5). Note that the NH3 emissions on 11-12 September stand out as outliers in Fig. 5a, whereas they are average for the other three subplots.
The dynamic response to the VPD
Moving on to analyzing the response of the four fluxes to the VPD, we find moderate correlation coefficients (0.42-0.53) in Fig. 6. In Fig. 6c, LvE shows a nonlinear relationship with the VPD, called the "evaporation hysteresis" (Zhang et al., 2014; de Groot et al., 2019). This hysteresis is driven by both the vegetation regulating the loss of water through evaporation, described in Sect. 2.2, and the time difference between when the maximum values of LvE (12:00 UTC) and VPD (15:00 UTC) are reached. The same holds true for the other three fluxes (FNH3, GPP and H), as all three reach their maximum around noon. Note that the observations of 11-12 September are again clear outliers in Fig. 6a, forming two branches in the scatterplot. Also standing out are several observations with FNH3 > 0.1 µg m−2 s−1. These are the observations that appear as the small upper branch in the H-FNH3 scatterplot in Fig. 4c and, again, form their separate branch in Fig. 6a. This further indicates that there is a second (weak) fertilization event in the filtered dataset of the RITA-2021 campaign.
The dynamic response to PAR
When relating the fluxes to PAR, we find high positive correlation coefficients for all four surface fluxes (0.72-0.93) (Fig. 7), indicating that PAR is the main driver of the dynamic vegetation response.
The GPP has a strongly nonlinear response to PAR, as the GPP appears to reach a plateau for PAR > 150 W m−2. There are several reasons for this GPP maximum. At constant temperature and PAR, the stomatal uptake of CO2 will increase the concentration within the plant to the point that the CO2 supply is no longer the limiting factor. The GPP then reaches a plateau of maximum photosynthesis rate (see Fig. 6.13a in Moene and Van Dam, 2014), similar to the observations in Fig. 7b. Additionally, the photosynthesis system can become light saturated for high PAR values, at constant temperature. Following this latter process, the GPP is expected to level off more gradually, compared with the plateau that is reached by CO2 saturation (see Fig. 6.13b in Moene and Van Dam, 2014). Finally, the (partial) closing of the stomata in response to high VPD could also reduce the GPP. However, as the VPD typically reaches its maximum at around 15:00 UTC (not shown), it is unlikely that this is a limiting factor for the GPP at high PAR values, which peak around noon. All of these processes depend on the temperature, VPD and PAR and can explain the vertical spread in Fig. 7b.
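The saturating behavior described above is often summarized with a rectangular-hyperbola light-response curve; the sketch below is purely illustrative (it is not a model used in this study, and the parameter values are invented), but it reproduces the qualitative plateau seen in Fig. 7b.

```python
import numpy as np

def gpp_light_response(par_w_m2, gpp_max=0.15, alpha=2e-3):
    """Rectangular hyperbola: linear in PAR at low light, plateau near
    gpp_max (mg C m-2 s-1) at high light; alpha is the initial slope."""
    par = np.asarray(par_w_m2, dtype=float)
    return gpp_max * alpha * par / (gpp_max + alpha * par)

print(gpp_light_response([50, 150, 400]).round(3))  # approaches the plateau
```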
When taking a close look at the response of LvE to PAR, it is possible to distinguish two phases in Fig. 7c. First, for PAR values up to about 100 W m−2, LvE increases linearly to roughly 75 W m−2, related to the opening/closing of the stomata around sunrise/sunset. The second phase shows a more gradual linear increase in LvE with respect to PAR. From the linear response and the small spread in Fig. 7c, we conclude that the opening/closing of the stomata during the RITA-2021 campaign is governed by PAR and that the role of the VPD or temperature is small.
Similar to LvE, the NH3 flux generally shows a linear response: transitioning from weak deposition to emission as the stomata open in response to increasing PAR. The spread in the FNH3 response is larger compared with the LvE response, which results in the lowest correlation coefficient, at 0.72. We attribute this spread to three factors: the relationship between temperature and the stomatal compensation point, the variations in the NH3 concentration, and the measurements where RH > 80 % (black circles). Furthermore, observations where FNH3 > 0.1 µg m−2 s−1, i.e., the possible (weak) fertilization event, again appear to form a second branch in the scatterplot. Based on the strong similarities between FNH3 and LvE with respect to their response to PAR, we interpret the observed NH3 emission as stomatal (re)emission from vegetation.
Discussion
Observations of the NH3 flux after filtering, taken over 17 individual days during the RITA-2021 campaign, are characterized by daytime emissions. The measurement site in Cabauw is located on flat grassland in an agricultural area, with the nearby fields being actively managed and/or grazed upon. It is therefore possible that the observed NH3 emissions originate from sources like fertilization events (e.g., manure application) or animal droppings. Clearly distinguishing between stomata-driven emission and the volatilization of ammonia due to fertilization events is complex due to the contributions of different paths (soil versus plant) and nonlinear effects (water vapor deficit dependence on temperature) that often offset each other. However, we identified the FNH3 data which were most likely due to a fertilization event and labeled these data as outliers, whereas we retained other doubtful points in the analysis. Next, we also marked FNH3 data which could have been due to exchange via the external pathway, once more trying to single out FNH3 due to stomatal exchange.
Indications of stomatal emission are found in the diurnal variability in FNH3. The flux transitions from deposition to emission in the early morning (at around 08:00 UTC), reaches maximum emission around 12:00 UTC and transitions to deposition again just before sunset (at around 16:30 UTC), as shown in Fig. 2c. Our interpretation of this diurnal cycle is that the flux transitions from (nighttime) NH3 deposition, through the external leaf path, towards emission through the stomatal path during the day. This diurnal variability in FNH3 shares similarities with the diurnal variability in the CO2 flux. As the stomata open for photosynthesis in response to PAR, the CO2 flux transitions from CO2 respiration to stomatal uptake of CO2. High correlations between FNH3 and LvE (0.70) and between FNH3 and PAR (0.72) further point towards stomatal NH3 emission and a possible relationship between FNH3 and the photosynthesis fluxes.
Critical analysis of RITA-2021 dataset
The conditions during the RITA-2021 campaign present a challenge for the analysis conducted in this study. The site is located in an active agricultural region, with several potential emission sources within only a few hundred meters to a couple of kilometers distance upwind of the measurement site. The fields next to the site are actively managed, and the nitrogen contents of the soil and vegetation can differ on a field-to-field basis. This high level of surface heterogeneity within the estimated footprint of the flux measurements (up to about 250 m; Table 2) adds an additional level of complexity to the analysis (Swart et al., 2023). Furthermore, there are several farms located within 2 km of the site, some of which have yearly NH3 emissions of up to 1200 kg yr−1. Studies on the blending distance (i.e., the distance at which a plume can be considered well mixed with respect to the background) indicate that emission plumes from such strong local NH3 sources can affect flux measurements over distances of a couple of kilometers (Schulte et al., 2022). In this study, at least one instance of strong local emissions has been identified: the fertilization event on 11-12 September. Other potential weaker events have also been shown and discussed in Figs. 4a and c and 6a. The analysis is further complicated by the complex meteorological conditions, characterized by frontal passages. As the miniDOAS setup was positioned anticipating winds from the southwest, the meteorology of the filtered data is characterized by frontal passages. As a result, most observations are taken under neutral-stability conditions (60 %), with clouds and some rain showers. While rain events are filtered out, wet deposition by rain does lead to a sudden change in the NH3 concentration and can lead to the re-emission of NH3 as the rainwater evaporates.
Finally, the southwestern orientation of the instruments leads to a significant loss in the availability of data suitable for analysis. Historically, southwesterly winds tend to be most common in September, but the wind direction during the campaign was highly variable. Filtering for unobstructed wind directions reduces the availability of viable data by 510 h, i.e., 44 % of all measurement data. As a result, the observed range in the measurements presented in the figures is strongly influenced by the natural diurnal variability in the variables. While we do address the role of the natural diurnal variability by including the sensible heat flux in our analysis, it does make the observed relationships between FNH3 and the other variables somewhat speculative.
The high level of heterogeneity due to complex emission sources, the low data availability after filtering and the complex weather conditions make the RITA-2021 dataset unfavorable for establishing relationships between FNH3 and the CO2 or water vapor flux. It also makes the dataset unsuitable for aiding annual inventories. However, it highlights the importance of the homogeneity of the NH3 surface characteristics and shows that the proximity of NH3 emission sources should also be considered when selecting a measurement site, in addition to the availability of high-quality meteorological observations. Despite the challenges, the NH3 measurements are of unprecedentedly high quality (Swart et al., 2023), and analyzing this unique dataset following our approach is still worthwhile because we can establish relationships that significantly correlate with the main drivers of the stomatal aperture following current dynamic vegetation models.
Recommendations
Following the results presented in this study, we recommend a comprehensive approach to future NH3 flux measurements, including observations of the CO2 and water vapor flux as auxiliary measurements. The opening of the stomata for CO2 uptake through photosynthesis allows for the exchange of several other gases, including water vapor and ammonia. The process representations of photosynthesis have been widely researched, and their parameterizations have been tested against sub-diurnal observations at different scales (Vilà-Guerau de Arellano et al., 2020). Combined observations of the NH3, CO2 and water vapor fluxes can be used to further our understanding of NH3 exchange through the individual exchange pathways, as was done for ozone deposition by Visser et al. (2021).
Furthermore, we recommend analyzing and comparing observations of the NH3 flux at different (grassland) measurement sites, similar to the intercomparison of CO2 exchange measurements by Jacobs et al. (2007). For example, the FNH3 diurnal variability presented in this study significantly differs from measurements in 2013 at the Veenkampen meteorological site near the city of Wageningen (https://www.wur.nl/en/show/Weather-Station-De-Veenkampen.htm, last access: 15 January 2024). Located only 50 km east, the diurnal variability in FNH3 at Veenkampen is characterized by weak morning deposition and strong afternoon deposition, up to about −0.3 µg m−2 s−1, under clear-sky conditions over unfertilized grassland (Schulte et al., 2021). At the Haarweg meteorological site, the predecessor to Veenkampen, chemical wet denuder measurements of FNH3 in 2004 were characterized by strong deposition in the early morning, attributed to morning dew, and weak stomatal emissions in the afternoon (Wichink Kruit et al., 2007). The differences between the observed diurnal variability in these three studies stress the high variability at the local and regional scales and highlight the need for long-term, high-resolution FNH3 observations at multiple locations. Efforts to further our understanding of the NH3 exchange and its diurnal variability are already being made. The miniDOAS setup used in RITA-2021 will be taking long-term (> 1 year) observations of the NH3 flux at the Veenkampen meteorological site, starting in the spring of 2023. This yearlong record of high-resolution FNH3 observations will be analyzed, alongside a wide range of meteorological and turbulent measurements, including the CO2 and water vapor flux, aiming to improve the parameterization of the NH3 surface-atmosphere exchange. The collocation of surface and upper-atmospheric observations (Vilà-Guerau de Arellano et al., 2023) is key with respect to obtaining a comprehensive and complete understanding of the NH3 flux. The analysis can be taken one step further in the context of the Ruisdael Observatory project via a process analysis combining the observations with both conceptual (Schulte et al., 2021) and high-resolution turbulence-resolved models (Schulte et al., 2022).
Conclusions
We analyzed over a month of ammonia flux measurements (FNH3), taken during the RITA-2021 campaign at the Ruisdael Observatory in Cabauw. The analysis is centered around observations from the miniDOAS flux measurement setup, which applies the flux-gradient method to line-average concentration measurements over a 22 m open path at two heights. Our objective was to find relationships between the observed NH3 flux and the main drivers of the dynamic vegetation response, linking ammonia exchange through the three main variables that control the stomatal pathway to processes due to photosynthesis. The process of photosynthesis has been more widely studied; therefore, establishing robust relationships between photosynthesis drivers closely linked to stomatal aperture and NH3 surface exchange enables us to determine and quantify the role of this path in emitting or depositing ammonia.
After filtering, the observed FNH3 is characterized by daytime emissions of about 0.05 µg m−2 s−1 and nighttime deposition of about −0.05 µg m−2 s−1. We compare the NH3 flux to the observations inferred from CO2 uptake by vegetation and the net observed exchange of water vapor, represented by the gross primary production (GPP) and the net latent heat flux (LvE), as well as to the sensible heat flux (H), which is only indirectly related to the dynamic vegetation response. Here, we find a high and significant correlation between the observed daytime NH3 emissions and LvE (0.70) and the photosynthetically active radiation (PAR, 0.72). These results provide a first-order quantification of how NH3 exchange could follow similar paths to the exchange of CO2 and H2O through plant processes regulated by the stomatal aperture. It shows that auxiliary and co-located flux measurements of CO2 and water vapor are appropriate variables to distinguish stomatal NH3 exchange from non-stomatal exchange.
The analysis presented in this study is hampered by the challenging conditions during the RITA-2021 campaign. However, despite these conditions, the comprehensive approach presented here demonstrates the potential of combining high-quality NH3 observations with auxiliary flux measurements of CO2, water vapor and other meteorological variables. By organizing and analyzing the observations guided and constrained by the main meteorological drivers controlling the assimilation and transpiration in grass fields, we managed to attribute the observed NH3 emission to processes and variables associated with stomatal exchange and to identify outliers. In order to establish more robust relationships between NH3 and the photosynthesis fluxes, the framework proposed in this study should be applied to measurements that are still representative of the nearby sources and sinks while also ensuring a blending distance that guarantees that these singular source and sink contributions are properly mixed with the NH3 background concentration. These distances range from 1000 to 3000 m (Schulte et al., 2022). Further, longer time series are needed in order to make a more robust distinction between days with and without the influence of nearby sources. Our findings and framework over grasslands are a first step to confirm patterns and relationships between meteorological drivers and NH3 exchange, but this work should be extended to longer and more dedicated field campaigns, including other ecosystems. The results presented in this study already indicate that there is room to find such patterns.
Appendix A: An alternative way of calculating ecosystem respiration
In Sect. 2.2, we describe our approach to arrive at an estimate of the GPP using observations only. Here, we examine the potential impact on the results of using a regression model to describe the ecosystem respiration. We calculated the GPP by describing the ecosystem respiration as a function of air temperature using the exponential regression model of Lloyd and Taylor (1994), hereafter LT94: ER(T) = R10 exp[E0 (1/(Tref − T0) − 1/(T − T0))], where R10 is the reference respiration at the reference temperature Tref (set to 10 °C). To avoid over-parameterization, T0 is set to −46.02 °C, as in LT94. E0 is an empirical parameter related to the activation energy. Using the nighttime data collected during the campaign, filtered for u* ≥ 0.1 m s−1 and a quality flag of 0 (Mauder and Foken, 2006), we obtained values of 5.3 for R10 and 124 for E0. In doing so, the correlation coefficients in Fig. 4a, b and c (see Sect. 3.2.3 of the main text) slightly improved. Figures A1-A4 show the scatterplots using this alternative formulation of the GPP.
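Fitting the LT94 model is a one-line nonlinear regression; a sketch with synthetic nighttime data (chosen to be consistent with the fitted R10 ≈ 5.3 and E0 ≈ 124 reported above) follows. The column handling and the treatment of respiration as a positive magnitude are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

T_REF, T_0 = 10.0, -46.02  # deg C, as in LT94 and the text

def lt94(t_celsius, r10, e0):
    """Lloyd-Taylor (1994) respiration as a function of air temperature."""
    return r10 * np.exp(e0 * (1.0 / (T_REF - T_0) - 1.0 / (t_celsius - T_0)))

# Synthetic nighttime temperatures (deg C) and respiration magnitudes
t_night = np.array([5.0, 8.0, 10.0, 12.0, 15.0])
resp = np.array([4.3, 4.9, 5.3, 5.7, 6.4])
(r10, e0), _ = curve_fit(lt94, t_night, resp, p0=(5.0, 100.0))
print(f"R10 = {r10:.1f}, E0 = {e0:.0f}")  # ~5.3 and ~124 for these data
```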
Figure 1 .
Figure 1. The area surrounding the Cabauw Observatory (left) and the setup of the instruments at the measurement site (right). The transparent white circle represents a distance of 500 m from the NH3 measurements, and the color-coded dots represent the locations of nearby farms; the emission strength at the latter locations is specified in kg NH3 yr−1 (source: Emissieregistratie, https://www.emissieregistratie.nl, last access: 21 January 2022). The colored circle in both panels indicates the wind directions in which the airflow towards the miniDOAS light path is obstructed by either other instruments (yellow) or larger structures such as the tower and containers (red). The information in the left panel was sourced from Cabauw Observatory (51.971° N, 4.927° E) (© Google Earth, 27 January 2022, image by Landsat/Copernicus). The right panel presents modified information from Swart et al. (2023).
Figure 2 .
Figure 2. The diurnal variability, from sunrise (06:00 UTC) to sunset (18:00 UTC), in the filtered NH3 concentration (b), NH3 gradient ∆NH3 (c), FNH3 (d) and the GPP (e), with the corresponding histogram to the right in each instance. At each moment in time, the multiday mean (solid line) and median (dotted line) are calculated. Highlighted are observations from the fertilization event on 11-12 September (open circles). The numbers (N) of observations over which these averages are calculated are displayed in panel (a). ∆NH3 is defined so that its sign matches that of FNH3, i.e., negative numbers indicate deposition and positive numbers indicate emission.
Figure 3 .
Figure 3. The 2.8 m temperature plotted against FNH3 (a) and the observed NH3,2.29m concentration (b). The color coding in panel (a) represents the NH3 concentration observed at 2.29 m. In panel (b), the dotted line represents the theoretical stomatal compensation point (χs) for a long-term NH3 concentration of 7.7 µg m−3. Highlighted with black circles are observations with a RH > 80 %, taken before noon, where NH3 exchange through the external leaf pathway can play a significant role.
Figure 4 .
Figure 4. Scatterplots of FNH3 against the GPP (a), LvE (b) and H (c), with the colors indicating the atmospheric boundary layer (ABL) stability. Highlighted by black circles are observations with a RH > 80 %, where deposition through the external leaf path can still play an important role. The black crosses are observations from the fertilization event observed on 11-12 September.
Figure 5 .
Figure 5. Scatterplots of the temperature against FNH3 (a), GPP (b), LvE (c) and H (d), with the colors indicating the ABL stability (see Fig. 4 for the legend). Highlighted by black circles are observations with a RH > 80 %. The black crosses are observations from the fertilization event on 11-12 September.
Figure 6 .
Figure 6. Scatterplots of the VPD against FNH3 (a), GPP (b), LvE (c) and H (d), with the colors indicating the ABL stability (see Fig. 4 for the legend). Highlighted by black circles are observations with a RH > 80 %. The black crosses are observations from the fertilization event on 11-12 September.
Figure 7 .
Figure 7. Scatterplots of PAR against FNH3 (a), GPP (b), LvE (c) and H (d), with the colors indicating the ABL stability (see Fig. 4 for the legend). Highlighted by black circles are observations with a RH > 80 %. The black crosses are observations from the fertilization event on 11-12 September.
Figure A1 .
Figure A1. Scatterplots of FNH3 against the GPP, with the colors indicating the atmospheric boundary layer (ABL) stability (see Fig. 4 for the legend). Highlighted by black circles are observations with a RH > 80 %, where deposition through the external leaf path can still play an important role. The black crosses are observations from the fertilization event observed on 11-12 September.
Figure A2 .
Figure A2. Scatterplots of the temperature against the GPP, with the colors indicating the ABL stability (see Fig. 4 for the legend). Highlighted by black circles are observations with a RH > 80 %. The black crosses are observations from the fertilization event on 11-12 September.
Figure A3 .
Figure A3.Scatterplots of the VPD against the GPP, with the colors indicating the ABL stability (see Fig. 4 for legend).Highlighted by black circles are observations with a RH > 80 %.The black crosses are observations from the fertilization event on 11-12 September.
Figure A4 .
Figure A4.Scatterplots of PAR against the GPP, with the colors indicating the ABL stability (see Fig. 4 for legend).Highlighted by black circles are observations with a RH > 80 %.The black crosses are observations from the fertilization event on 11-12 September.
Table 1 .
Filter criteria, being applied in sequence, with filter acceptance rates (in percentages and hours).
Table 2 .
A characterization of the meteorology of the 17 unique days for which observational data pass the filters, showing the 17 d average and the range of the diurnal minimum/maximum of several (meteorological) variables.Daily maximum flux footprint length (70 %) refers to the maximum upwind distance (in meters) encompassing the source area that contributed 70 % of the measured flux.For GPP and flux footprint length, nighttime is excluded. | 2023-08-02T21:13:10.721Z | 2024-01-26T00:00:00.000 | {
"year": 2024,
"sha1": "d9556d50ca9fc1a4b734c2fa5db06494b4c21d11",
"oa_license": "CCBY",
"oa_url": "https://bg.copernicus.org/articles/21/557/2024/bg-21-557-2024.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8491b6a28ead3ee7238479573bd920e6eff92078",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
257183597 | pes2o/s2orc | v3-fos-license | The Planemo toolkit for developing, deploying, and executing scientific data analyses in Galaxy and beyond
There are thousands of well-maintained high-quality open-source software utilities for all aspects of scientific data analysis. For more than a decade, the Galaxy Project has been providing computational infrastructure and a unified user interface for these tools to make them accessible to a wide range of researchers. To streamline the process of integrating tools and constructing workflows as much as possible, we have developed Planemo, a software development kit for tool and workflow developers and Galaxy power users. Here we outline Planemo's implementation and describe its broad range of functionality for designing, testing, and executing Galaxy tools, workflows, and training material. In addition, we discuss the philosophy underlying Galaxy tool and workflow development, and how Planemo encourages the use of development best practices, such as test-driven development, by its users, including those who are not professional software developers.
The Galaxy project provides web browser access to command-line scientific software, together with the necessary compute resources, in a convenient, shareable, and reproducible way to tens of thousands of researchers around the world (Afgan et al. 2018). More than 8000 tools are available for installation onto any Galaxy server; users can run these individually, connect multiple tools together to form workflows, and finally perform complex analyses, without the need to access a command line. Although Galaxy itself does not require any significant computational skills to use, the development and maintenance of new tools and workflows benefit from sophisticated infrastructure with both human and automated components. The process of integrating software into Galaxy requires knowledge of both the command-line interface of the underlying software and the schema used by Galaxy to define tools in order to be able to write a "Galaxy tool wrapper," mapping data set inputs, parameter inputs, and outputs between them. Once written, wrappers, as well as other Galaxy artifacts such as workflows or training material, are amenable to routine processes such as testing, deployment, and regular updates, all of which can be automated using continuous integration (CI) systems. Here we present Planemo, a versatile library and command-line application that is used extensively as a software development kit by Galaxy or Common Workflow Language (CWL) (Crusoe et al. 2022) tool, workflow, and training material developers, as well as a toolkit for Galaxy "power users." Planemo provides a simple but powerful command-line interface for tool and workflow development and deployment, which encourages and enforces good practices for software development. In addition, it enables automated deployment of developed tools and automatic updates of the software dependencies used internally by each Galaxy tool. The testing functionality included in Planemo has been successfully integrated into CI workflows of the major tool and workflow repositories, which helps to ensure the creation of high-quality tool wrappers and workflows.
Planemo is structured into numerous subcommands, which provide a broad range of functionality. Here we discuss a selection of the most important functionalities, grouped around the following themes: (1) development of Galaxy tools, workflows, tutorials, and CWL tools; (2) deployment of the developed tools and workflows; (3) automated tool and workflow dependency updates; and (4) tool and workflow execution. Table 1 summarizes this functionality, and Figure 1 provides a graphical overview. In addition to its use as a command-line application, Planemo can also be used as a library by other projects. An example is the Planemo Training Development Kit project (PTDK; https://github.com/galaxyproject/ptdk), which provides Planemo's functionality for creating training material for Galaxy workflows via a web server.
Although most of the tasks described above can already be performed individually without Planemo, it provides a convenient single tool that encourages the best practices agreed on by the Galaxy community. As a result, Planemo is an essential part of the Galaxy ecosystem and, in fact, is already extensively used, having been downloaded more than 70,000 times from both Anaconda and PyPI.
Galaxy tool development
A Galaxy tool is defined by a wrapper for an underlying software (or code) that maps its data set inputs, parameter inputs, and outputs to a command-line script executed by Galaxy. When running a tool in the Galaxy interface, a user selects his or her preferred choices for the exposed data set and parameter inputs. The Galaxy server then constructs the command, schedules it as a job onto appropriate compute resources, collects the results once the job has completed, and returns them to the user.
Writing Galaxy tool wrappers requires a thorough knowledge of the underlying software and also an understanding of the Galaxy tool schema that defines how Galaxy wrappers are written. The tool schema is defined in a simple manner in order to make the process of wrapping software as accessible as possible (https://docs.galaxyproject.org/en/latest/dev/schema.html). Planemo provides several helpful features that assist tool developers in creating high-quality wrappers that meet community-defined standards, such as those developed by the Intergalactic Utilities Commission (IUC; https://galaxy-iuc-standards.readthedocs.io/). These features are implemented as subcommands, for example, "planemo test." Planemo also helps to enforce software development best practices such as writing tests for all tools and linting the wrapper definitions to avoid bugs and ensure a coherent and readable style. Further support for tool development standards is provided by the Galaxy Language Server (https://github.com/galaxyproject/galaxy-language-server), an implementation of the Language Server Protocol and a Visual Studio Code extension for Galaxy tools, which can be used side by side with Planemo.
A common starting point for tool development is the "tool_init" subcommand. To use this, the developer provides a variety of options, including an example command line, tool name, inputs, outputs, and software requirements, from which Planemo generates a skeleton tool wrapper. Most of the "tool_init" parameters are optional, but the more that are provided, the more detailed the initial skeleton will be.
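For illustration, a "tool_init" invocation might look like the following minimal sketch (seqtk is used purely as an example; the flag spellings follow the Planemo documentation but should be verified against "planemo tool_init --help" for the installed version):

    # Generate a skeleton Galaxy wrapper around an example command line
    planemo tool_init --id 'seqtk_seq' \
        --name 'Convert FASTQ to FASTA (seqtk)' \
        --requirement 'seqtk@1.2' \
        --example_command 'seqtk seq -A input.fastq > output.fasta' \
        --example_input input.fastq \
        --example_output output.fasta \
        --test_case

This writes a seqtk_seq.xml skeleton into the current directory, ready for manual refinement.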
Developers can then inspect and edit the generated file, adding more parameters and increasing the complexity of the wrapper logic by incorporating conditionals and repeat elements if necessary. As they continue to edit, they can use the "lint" subcommand to validate the wrapper under development. Planemo's linting forces wrappers to match Galaxy's tool schema, ensuring stylistic consistency and preventing some errors such as mismatched file formats; it also insists that the developer write a "help" section documenting the tool being wrapped. Crucially, Planemo recommends that wrappers define at least one test case to ensure the development of high-quality, portable, reliable, and functional tools, and this recommendation is strictly enforced by the IUC and other tool repositories. Once tests are defined, together with an initial tool definition, the developer can start to run the tests using the "test" subcommand. This launches a transient Galaxy server on the developer's computer; installs the Galaxy tool under development, together with all software dependencies; and executes the tests specified within the tool wrapper. The results of the tests are then returned to the developer, by default as JSON and HTML reports, although other report formats are also supported (xUnit, jUnit, Markdown, and Allure).
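In practice, the edit-lint-test loop reduces to two commands; the following is a sketch, with the report-output flag being our assumption:

    # Validate the wrapper against the tool schema and style checks
    planemo lint seqtk_seq.xml

    # Launch a throwaway Galaxy, install the tool and its dependencies,
    # and run the embedded test cases, writing an HTML report
    planemo test --test_output tool_test_output.html seqtk_seq.xml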
Planemo encourages the use of test-driven development (Siddiqui 2021), a software development principle that states that test cases should be written before a new feature is developed. Test-driven development is an industry-wide best practice. Defining extensive test cases covering the required features at the start of the process provides a focus for development and results in more robust and better documented code that contains fewer bugs. The tool developer is forced to adopt the perspective of the Galaxy user from the start and to consider possible use cases of the software for which tests need to be written. Initial test failures lead to iterative refinement of the wrapper, until a fully functional Galaxy tool, which passes all tests, is produced.
Once tests are passing, the developer should optimize the tool interface that is presented to the user of the tool. To facilitate this, Planemo provides the "serve" subcommand, which launches a Galaxy server with the new tool installed, allowing the developer to inspect the rendering of the wrapper in the graphical interface and to perform manual testing. The developer should also improve the documentation of the tool by annotating each of the tool parameters, as well as writing a help section to explain the tool's aim and usage, which appears beneath the tool parameters in the graphical interface.
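A corresponding serve invocation might be (a sketch; the port flag is an assumption to check against "planemo serve --help"):

    # Launch a local Galaxy with the new tool installed for manual inspection
    planemo serve --port 9090 seqtk_seq.xml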
CWL tool development
In addition to Galaxy tools, Planemo also acts as a software development kit for CWL tools. CWL is a tool and workflow specification that is independent of a particular workflow manager; it aims to increase the portability of scientific workflows by allowing workflows written in CWL to be executed by any CWL-supporting workflow manager. Thus, the tasks of workflow composition and workflow execution can be decoupled from one another. The same subcommands described already for Galaxy tool development can also be used to develop CWL tools, including "tool_init" and "test." By appending the "--cwl" argument to the "tool_init" subcommand, Planemo generates a template for a CWL tool definition, rather than a Galaxy wrapper. The test and lint commands then detect that the input file is a CWL wrapper and process it accordingly. Tools are tested by executing with the CWL engine cwltool and comparing the result with test data or specified assertions in the same way as for Galaxy tools. The completed wrapper can be run using any CWL engine, such as cwltool, Toil (Vivian et al. 2017), Arvados (https://arvados.org), or Galaxy.
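The equivalent loop for a CWL tool could look as follows (a sketch; "--engine cwltool" is our assumption for selecting the CWL engine on the command line):

    # Generate a CWL tool template instead of a Galaxy wrapper
    planemo tool_init --cwl --id 'seqtk_seq' \
        --example_command 'seqtk seq -A input.fastq > output.fasta' \
        --example_input input.fastq \
        --example_output output.fasta

    # Lint the CWL document and run its tests with cwltool
    planemo lint seqtk_seq.cwl
    planemo test --engine cwltool seqtk_seq.cwl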
Galaxy workflow development
Workflows are created in Galaxy by connecting together multiple tools (i.e., an output of one tool becomes an input for the following one) in order to automate complex analyses. Unlike tools, workflows can be defined and edited in Galaxy's graphical workflow editor; often the starting point is an interactive analysis (a Galaxy history) from which a workflow can be extracted automatically. It is also possible to manually author workflows in the gxformat2 workflow language (https://github.com/galaxyproject/gxformat2), and the user can switch between manually writing workflows and editing in the graphical interface using the "workflow_edit" subcommand, which spins up a Galaxy instance with the workflow under development preinstalled for editing. Planemo additionally facilitates the creation of test cases by providing the option of generating them automatically from a pre-existing workflow invocation.
Once a draft version of the workflow exists, it should be iteratively improved in the same way as for tools, using the same lint, test, and serve subcommands already introduced. The "workflow_lint" subcommand checks workflows for errors and conformance with best practices, providing a command-line interface that mirrors functionality also offered by the Galaxy graphical workflow editor. For example, workflows that are missing test cases, labeled outputs, or essential metadata fail linting. Running the "test" subcommand launches a local Galaxy instance, installs the tools used in the workflow, uploads the workflow, and executes it on the provided input test data. In the same way as for tool testing, the workflow outputs are downloaded and compared with the test data, resulting in either a pass or fail status. In some cases, it can be convenient to run testing on an existing public server, such as https://usegalaxy.org, https://usegalaxy.eu, or https://usegalaxy.org.au; this is also supported by Planemo. Running the "serve" subcommand provides a local Galaxy server with the workflow and the needed tools preinstalled, which can be used for workflow development and fine-tuning.
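A hedged sketch of this workflow loop, with placeholder file names (by convention, Planemo looks for workflow test cases in a sidecar file such as my-analysis-tests.yml):

    # Open the workflow in a local Galaxy's graphical editor
    planemo workflow_edit my-analysis.ga

    # Check for missing labels, metadata, and test cases
    planemo workflow_lint my-analysis.ga

    # Run the test cases defined alongside the workflow
    planemo test my-analysis.ga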
The philosophy of Galaxy tool and workflow development
Given the preceding discussion of the process of tool and workflow development, the question arises of how software complexity should be divided between the tool level and the workflow level. Should most of the effort go into developing workflows, keeping tools as simple as possible, and flexibly rewrapping the underlying software depending on the demands of a particular workflow, or should developers invest time creating complex and multifunctional tools that can be reused without modification in multiple workflows?
The way in which scientific workflow management systems resolve this dilemma differs. Nextflow (Di Tommaso et al. 2017) and Snakemake (Mölder et al. 2021), two other widely used scientific workflow managers, take the first approach, whereas Galaxy leans heavily toward the second of these two options, as does CWL, although the following discussion will focus on Galaxy. Both approaches are valid, and there have been several recent reviews comparing the features and advantages provided by different workflow managers (Wratten et al. 2021). Galaxy encourages the creation of modular tools that are usable in isolation, so they can be used interchangeably in multiple different workflows. Tools generally encapsulate most of the complexity of the underlying software, allowing workflows to be simply constructed in a graphical interface by connecting the component tools. Workflows can thus be thought of as complex structures built from the same fundamental building blocks, which can be constructed without knowledge of the internal functionality of the individual tools. This has several advantages with regard to the user experience: Building workflows becomes a far less daunting task, and tools can also be used individually in the graphical interface, which makes Galaxy accessible to new users and enables its use as a teaching environment for scientific analysis.
Another advantage of this approach is the "separation of concerns," a design principle in computer science. Different groups of scientists can develop and apply specialized and complementary areas of knowledge: The tool developer can concentrate on describing and developing the Galaxy tool, without considering any downstream workflows that will be created later. On the other hand, the workflow developer can construct complex, high-level pipelines, without the detailed understanding of the component tools and the command line possessed by the tool developer. This has the dual advantage that workflows can be treated on a more abstract level and that the workflow creation process is made accessible to a far greater number of users.
The separation of concerns between tools and workflows also benefits security. Executing untrusted software on a compute cluster is highly undesirable; thus, workflows need to be assessed for security risks before execution. For many workflow management systems, this assessment must be repeated for each workflow. In contrast, as the Galaxy tool review process involves checking tools for security issues before merging, a system administrator can deploy tools developed by the IUC or similar high-trust communities with confidence. The question of workflow security is thus made redundant: If the component tools are trusted, a workflow based on those tools can likewise be trusted.
These advantages must be balanced against the time investment required from community members to build up a diverse set of tools to allow the construction of scientifically interesting workflows. Nonetheless, the Galaxy community, facilitated by Planemo, has succeeded in developing such a toolset and making it available to the scientific community.
CI for community repositories
Galaxy has a large and vibrant community of tool and workflow developers, creating Galaxy tools in a wide range of scientific fields, ranging from genomics to proteomics, computational chemistry, and climate science. As a result, a large number of high-quality tools already exist and are actively maintained over several GitHub repositories, centered around the main IUC repository; the IWC (for definition, see Methods) performs the equivalent function of a repository for Galaxy workflows. Building these communities has required many years of work by multiple contributors; in order to streamline the process and ease the burden on the tool developers, developing infrastructure to facilitate human review and automate as much as possible is essential. Planemo forms the core of this infrastructure.
Once a developer has completed the tool wrapper or workflow, they can submit it to a community repository, usually hosted on GitHub, for review. Alternatively, they may also deploy it themselves (e.g., to the ToolShed or WorkflowHub), but submission to a community repository is encouraged to ensure the code is thoroughly reviewed and to publicize the new tool or workflow. Community repositories are configured to run the linting and testing checks already described after submission, via a CI workflow. Planemo provides a couple of simple subcommands, "ci_find_repos" and "ci_find_tools," to identify tools that have been added or modified. Both of these allow chunking of tools in order to parallelize the testing process over multiple CI jobs. As part of the CI testing, linting and testing of the tools are repeated, as well as linting of any Python and R (R Core Team 2021) scripts added together with the new tool wrappers. These steps ensure the submitted tools are of high quality, enforce consistent standards on the code, and reduce the maintenance burden for the entire community.
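Within a CI job, these subcommands are typically combined along the following lines (a sketch modeled on the planemo-ci-action; flag spellings and environment variables are assumptions):

    # List the tools touched in the commit range under test,
    # split into chunks for parallel CI jobs
    planemo ci_find_tools --changed_in_commit_range "$COMMIT_RANGE" \
        --chunk_count 4 --chunk "$CI_JOB_INDEX" \
        --output changed_tools.list

    # Lint and test only the changed tools
    planemo lint $(cat changed_tools.list)
    planemo test $(cat changed_tools.list)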
If all tests pass and the proposed new tool or workflow is accepted by the community, another CI job is initiated to deploy it to the ToolShed. This makes use of Planemo's "shed_update" command, which uses the ToolShed credentials associated with the repository to upload the newly created tool. Once it is available on the ToolShed, it can easily be installed onto any Galaxy server.
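The deployment step then reduces to something like the following (a sketch; the options mirror the documented ToolShed flags but should be verified, and the repository path is a placeholder):

    # Upload the accepted repository to the main ToolShed
    planemo shed_update --shed_target toolshed \
        --shed_key "$TOOL_SHED_API_KEY" tools/my_tool/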
The entire process, consisting of automated testing, human review, and automated deployment, ensures the creation of high-quality, trustworthy tools that can be safely installed and used. It requires several more specialized steps, which go beyond the simple Planemo subcommands that the developer runs on his or her local machine. To package these CI workflows into a single unit, a GitHub Action is provided (https://github.com/galaxyproject/planemo-ci-action), which can be reused in other tool repositories. New tool repositories with the same structure as the IUC repository can be conveniently created from a template repository created by the Galaxy community (https://github.com/galaxyproject/galaxy-tool-repository-template).
Automation of tool and workflow updates
Another feature offered by Planemo is automatic updates of Galaxy tool and workflow software dependencies, using the "autoupdate" subcommand. In combination with separate autoupdate features already developed by the Bioconda and conda-forge (https://doi.org/10.5281/zenodo.4774217) communities, this forms a sequence of semiautomated software update procedures, which are triggered by an official release of new source code. After this new release appears, this chain ensures that new Conda packages, new Docker and Singularity containers, updated Galaxy tools, and, finally, updated Galaxy workflows are generated (Fig. 2). At each step, a CI job detects the artifact published in the previous step and initiates the process of updating a dependent artifact, generally by means of a GitHub pull request (PR).
The CI pipelines developed by Bioconda and conda-forge monitor the Conda recipes they maintain, regularly checking the links provided in the recipes for new releases. When the developers of an upstream software package release a new version, the CI creates a PR to update the package recipe. Once the PR is reviewed and merged, newly built packages are uploaded to the Anaconda repository.
In parallel, a bot running the "autoupdate" subcommand monitors the Galaxy tool wrappers maintained by the IUC, as well as a few other smaller communities, checking the dependencies defined in the tool wrapper. Once an updated Bioconda or conda-forge package is published in the step above, the Planemo autoupdate bot detects this and updates the dependencies section of the Galaxy tool accordingly. A PR is then submitted to the GitHub repository, to be reviewed and manually updated if necessary, before it is merged and deployed as described in the CI for Community Repositories section.
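Run locally, the same update can be produced with the subcommand directly (a minimal sketch with a placeholder path):

    # Bump the tool's requirement versions to the newest available packages
    planemo autoupdate tools/my_tool/my_tool.xml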
Galaxy tools can specify multiple dependencies. If these dependencies are installed via Conda, the packages can simply be installed into a single environment, but if dependency installation is achieved using containers, a new container must be built for each required combination of dependencies. This is achieved by the "mulled build" infrastructure; a CI job triggers the building of a Docker container for each new combination of packages on the publication of new Galaxy tool versions. Another CI job is responsible for generating Singularity containers from the new Docker containers, which are made available by the BioContainers and Galaxy communities via a CernVM file system (CVMFS) (Blomer et al.). These steps do not require manual review.
The Planemo autoupdate bot also monitors the Galaxy workflows maintained by the IWC and checks whether new versions exist for each of the component tools. Once a new tool version is created (either by the upstream tool autoupdate step or by a tool developer), the workflow definition file hosted by the IWC is modified accordingly and a PR submitted for review (Fig. 3).
Execution
Apart from providing assistance with tool and workflow development and deployment, Planemo is also a useful resource for Galaxy power users who need to launch high-throughput data analyses. Galaxy is traditionally accessed via a graphical interface in the web browser, and features such as Galaxy collections already provide a high level of parallelization to users of the graphical interface. Nonetheless, there are important scenarios in which a user might need to run individual workflows hundreds or thousands of times; in which the data cannot be grouped into collections ahead of time, for example, for variant calling of SARS-CoV-2 genomic data; and in which a huge amount of new data is published continuously (Maier et al. 2021). As a convenient alternative to the graphical interface, Planemo allows workflow execution to be scheduled programmatically using the "run" subcommand, either on a local machine or on a larger Galaxy server. "planemo run" can be embedded in scripts of varying complexity, which can be scheduled and controlled via CI systems or message queues to run workflows on demand, such as on new data appearing or tool updates.
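A representative invocation might look like this (a sketch: file names are placeholders, the job file follows the CWL-style input format Planemo accepts, and the history-name flag spelling is an assumption):

    # job.yml maps workflow input labels to concrete datasets, e.g.:
    #   input_reads:
    #     class: File
    #     path: sample_001.fastq.gz

    planemo run variant-calling.ga job.yml \
        --profile usegalaxy-eu \
        --history_name 'sample_001 variant calling'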
Internally, Planemo executes workflows by submitting them to the chosen server via Galaxy's API. Requests to the API are made using BioBlend, a library that wraps many API endpoints as Python methods. It is also possible to execute workflows directly using BioBlend or simply by making API calls using a tool such as cURL. Although this approach does offer a high level of flexibility, it requires the user to possess a high level of knowledge of the API (e.g., the correct format to submit workflow parameters) and often requires the creation of custom scripts. In contrast, Planemo's "run" subcommand offers a high-level interface to execute workflows, monitor them during execution, and report on their status after completion, packaged as a single command.
For tool and workflow development, the artifacts under development are generally tested against an ephemeral local Galaxy instance, which is deleted after use. Although this is also supported by the "run" subcommand, with the workflow outputs saved to a specified location, this approach is not scalable for workflows that demand long compute times, with large data inputs, or with workflows that need to be executed multiple times. In many cases, users may prefer to make use of established, stable infrastructures, such as a public Galaxy instance or a private instance administered by their research group. Planemo allows external Galaxy instances to be specified for all "run" and "test" commands by providing the server URL and user API authentication key on the command line. As it is inconvenient and insecure to enter the API key with each command, Planemo also allows users to define profiles in which the URL and API key is configured for each server. The user can then define multiple profiles and run workflows on different servers simply by appending, for example, "--profile usegalaxy-org" or "--profile private-server," to the command.
Planemo provides numerous command-line options to configure the workflow execution process. The name of the history in which the new invocation is created, as well as a list of Galaxy tags to add, can be specified via the command line. In addition, Planemo and Galaxy allow both data sets and workflows to be specified via hexadecimal IDs that point toward a Galaxy object on an external server, rather than by referring to a local path. This has the advantage of avoiding multiple uploads of the same data set or workflow if the workflow has to be executed multiple times. Planemo can also be configured either to wait until the workflow has completed and download the output data sets created or to terminate once the workflow has been successfully scheduled. In the latter case, the "list_invocations" command can be used to monitor running workflows and to return the number of jobs that have succeeded, failed, or are incomplete. If jobs have failed, for example, owing to transient server issues, the user can also choose to restart them using the "rerun" subcommand.
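Monitoring and recovery might then look like the following (hypothetical IDs; the "--no_wait" spelling for the scheduling-only mode is an assumption):

    # Schedule the workflow and return once it has been accepted by the server
    planemo run variant-calling.ga job.yml --profile usegalaxy-org --no_wait

    # Report succeeded, failed, and incomplete jobs for a workflow's invocations
    planemo list_invocations <workflow-id> --profile usegalaxy-org

    # Restart the failed jobs of a given invocation
    planemo rerun --invocation <invocation-id> --profile usegalaxy-org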
Training material
Planemo provides utilities for developing tutorials for different types of data analysis with Galaxy. The Galaxy Training Network, accessible via https://training.galaxyproject.org, provides a range of training material, including slide decks, tutorials, and videos. In particular, the tutorials are written in Markdown and rendered using Jekyll and often feature "hands-on boxes" that describe the exact combination of parameters and input that users need to submit when running a Galaxy tool. Most tutorials instruct the trainees to run several Galaxy tools in sequence and thus correspond to a Galaxy workflow.
Planemo provides two subcommands, "training_init" and "training_generate_from_wf," that generate a directory structure for a new tutorial, containing skeleton Markdown files defining the tutorials. These files already contain sections and hands-on boxes for each tool, with the tool inputs and parameters predefined, ensuring a high level of consistency in the appearance and quality of the tutorials produced. The training developer can then take these templates and expand them with additional information, questions, diagrams, and citations to produce the completed training. They also need to provide input data sets, which are usually stored on Zenodo (https://zenodo.org). To populate a Galaxy server with these data sets, the training developer should also provide a data library file, which can be generated using the "training_fill_data_library" subcommand, including the Zenodo links and file formats of the data sets.
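A sketch of scaffolding a new tutorial, with placeholder names (the flags follow the Galaxy Training Network contributor documentation but should be verified):

    # Create the directory skeleton for a new tutorial in an existing topic
    planemo training_init --topic_name sequence-analysis \
        --tutorial_name my-new-tutorial \
        --workflow my-analysis.ga

    # Generate a data-library file listing the Zenodo-hosted input datasets
    planemo training_fill_data_library --topic_name sequence-analysis \
        --tutorial_name my-new-tutorial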
A major aim of the Galaxy Training Network project is improving accessibility for new contributors, including scientists who are not comfortable with command-line software. As a result, the Planemo functionality relating to training material development is provided in web server form as the Planemo Training Development Kit (PTDK). The application is written using Flask and deployed with Heroku; it can be accessed via https://ptdk.apps.galaxyproject.eu/. The interface allows the selection of the same options as the Planemo commands, with the additional option of specifying a workflow for generating the training using its ID from one of the major public Galaxy servers.
Discussion
We have presented Planemo, a library and application that has already achieved widespread usage among Galaxy tool, workflow, and training material developers; among Galaxy power users; and as part of numerous automated deployment solutions. Planemo provides the developers of command-line software with an easy way to create a graphical interface, taking advantage of the many features developed by the Galaxy community and the compute resources provided by public Galaxy instances. We have described the complex infrastructure the Galaxy community has developed for creating and interacting with artifacts such as tools, workflows, and training material. Planemo plays the crucial role of bridging the gaps between the human and automated components of this infrastructure, freeing members of the community to devote their time to developing, reviewing, and performing novel scientific analyses.
Software design
Planemo is implemented as a Python package and distributed via GitHub, PyPI, and Bioconda. As already described in the Introduction, Planemo is a highly flexible, multifunctional piece of software, which can be used for (1) different types of artifacts (e.g., tools, workflows), (2) different workflow/tool languages and management systems (e.g., Galaxy, CWL), and (3) different tasks (e.g., linting, testing, executing). To handle this variety, Planemo defines two central abstractions: Runnables and Engines. Runnables include tools and workflows written for either Galaxy or CWL; an Engine provides access to an external piece of software (such as Toil or Galaxy) capable of executing a particular Runnable. Each Engine has various methods (e.g., run(), test()) that define a particular interaction with a Runnable.
Engines are provided for both local and external Galaxy servers, as well as for cwltool and Toil. These interact with their respective workflow management systems via the cwltool and Toil Python modules (for CWL) and via the BioBlend library (Sloggett et al. 2013), which provides access to the Galaxy API through Python. Numerous lower-level functions and classes are provided to connect the Engines with the underlying functionality.
Some tasks cannot be easily described in the context of these abstractions; for example, linting of tool or workflow definitions requires only that the structured document containing the definition be compared with a schema. Other examples include the functionality for automatic updates of software dependencies and generation of training material. Planemo handles these cases using separate classes and functions.
Planemo is most frequently used as a command-line application, using a command-line interface written using the Click package to provide a straightforward way to access the components described above. Multiple subcommands expose some of the most important tasks a user might want to perform. For example, a user could run "planemo test tool.xml" to test a Galaxy tool wrapper. Planemo will detect the type of Runnable (Galaxy tool) represented by the filepath and start the appropriate Engine (temporary local Galaxy instance), execute the Runnable on it, collect the results, and compare them with predefined test data to determine a pass or fail status. All subcommands can be configured by appending flags and options.
Implementation of CI jobs
Although Planemo is designed primarily with developers and users in mind, commands often need to be executed as part of automated CI jobs, for example, testing of newly created Galaxy tools after submission to a GitHub repository. Galaxy tools and workflows are hosted over multiple repositories; to ensure a unified approach to testing, a GitHub CI action is provided. The CI workflow consists of the following components: | 2023-02-26T06:17:32.590Z | 2023-02-01T00:00:00.000 | {
"year": 2023,
"sha1": "008b634c5f4a2e17f79cb0b51eea1cd211a29a87",
"oa_license": "CCBY",
"oa_url": "https://genome.cshlp.org/content/early/2023/02/23/gr.276963.122.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "51d5191f30862cb318781321d5362da7011a1457",
"s2fieldsofstudy": [
"Computer Science",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
35084457 | pes2o/s2orc | v3-fos-license | Impaired Function of Antibodies to Pneumococcal Surface Protein A but Not to Capsular Polysaccharide in Mexican American Adults with Type 2 Diabetes Mellitus
ABSTRACT The goal of the study was to determine baseline protective titers of antibodies to Streptococcus pneumoniae surface protein A (PspA) and capsular polysaccharide in individuals with and individuals without type 2 diabetes mellitus. A total of 561 individuals (131 individuals with diabetes and 491 without) were screened for antibodies to PspA using a standard enzyme-linked immunosorbent assay (ELISA). A subset of participants with antibodies to PspA were retested using a WHO ELISA to determine titers of antibodies to capsular polysaccharide (CPS) (serotypes 4, 6B, 9V, 14, 18C, 19A, 19F, and 23F). Functional activity of antibodies was measured by assessing their ability to enhance complement (C3) deposition on pneumococci and promote killing of opsonized pneumococci. Titers of antibodies to protein antigens (PspA) were significantly lower in individuals with diabetes than controls without diabetes (P = 0.01), and antibodies showed a significantly reduced complement deposition ability (P = 0.02). Both antibody titers and complement deposition were negatively associated with hyperglycemia. Conversely, titers of antibodies to capsular polysaccharides were either comparable between the two groups or were significantly higher in individuals with diabetes, as was observed for CPS 14 (P = 0.05). The plasma specimens from individuals with diabetes also demonstrated a higher opsonophagocytic index against CPS serotype 14. Although we demonstrate comparable protective titers of antibodies to CPS in individuals with and individuals without diabetes, those with diabetes had lower PspA titers and poor opsonic activity strongly associated with hyperglycemia. These results suggest a link between diabetes and impairment of antibody response.
Streptococcus pneumoniae is an important human pathogen, causing each year an estimated 40,000 deaths in the United States alone and 1 million globally. Infections caused by S. pneumoniae can range from mild infections, such as otitis media and sinusitis, to severe and fatal infections, such as pneumonia and meningitis (19). S. pneumoniae mainly causes infections in children, the elderly, and immunocompromised individuals (6,11). However, elderly individuals with comorbidities such as cardiovascular disease and diabetes are at greater risk of developing severe invasive infections, which are often fatal (23).
The prevalence and incidence of type 2 diabetes mellitus have increased at an alarming rate, and currently, more than 20 million people in the United States have been diagnosed with diabetes. This number is expected to increase to 39 million by 2050. Diabetes has been implicated as the single greatest risk factor for pneumococcal bacteremia in individuals under the age of 40 (odds ratio [OR], 4.2; 95% confidence interval [CI], 1.1 to 16.7) (21,22) and among those with no other documented comorbidities (OR, 2.3; 95% CI, 1.3 to 3.9) (46). According to one study, the risk of community-acquired pneumococcal pneumonia was 1.5-fold higher in individuals with diabetes than in nondiabetic controls (45). Infections in individuals with diabetes occur with greater severity and are associated with an increased risk of complications (4,38). The increased susceptibility to infections in individuals with diabetes has been reported to be due in part to defects in both adaptive and cell-mediated immunity. Several immune defects have been noted in individuals with diabetes, particularly poorly controlled diabetes. Poor glucose control impairs a range of neutrophil and macrophage functions, such as chemotaxis, adherence, phagocytosis, and intracellular killing of microorganisms (8,37). Additionally, decreases in mitogen-stimulated lymphocyte proliferation and defects in T-cell, B-cell, and dendritic cell functions have also been described in individuals with diabetes (12).
Protection against pneumococcal carriage and invasive infections is complex and multifactorial and has been shown to involve both antibody-dependent and independent mechanisms. Antibody-mediated protection is mainly dependent on capsular type-specific and anti-S. pneumoniae surface protein A (anti-PspA) antibodies, which develop as a result of either asymptomatic carriage or infection, resulting in protection against future infections (17,30). The mechanism of protection is characterized by antibody-mediated enhancement of complement deposition followed by clearance of pneumococci via opsonophagocytosis by neutrophils (41). A critical role of CD4+ T cells in antibody-independent immunity to carriage has recently been described, where colonization of the lungs and nasopharyngeal cavity resulted in activation and infiltration of CD4+ T cells, in particular, Th17 cells. Activation of Th17 resulted in synthesis and release of the effector cytokine interleukin-17, recruitment of neutrophils, and phagocytic killing of pneumococci (2,29,31).
Several studies have evaluated immune responses to natural exposure and vaccination in individuals with and individuals without diabetes (3,27). Results of studies evaluating immune responses to vaccines and vaccine efficacy are highly variable. Measurement of antibody titers after pneumococcal vaccination indicated that antibody titers in individuals with and individuals without diabetes were comparable in magnitude (27); however, patients with diabetes showed a delayed response to immunization (44) and impairments in production of circulating B cells and specific IgM in response to vaccination. These impairments were attributed to abnormalities in T-cell function. Contrary to these immune impairments, the overall efficacy of vaccine was reported to be between 56 and 70% (5,15,44), which is similar to what has been reported (65 to 81%) for those without diabetes (20,33,34,43). These discrepancies can be explained by the fact that studies determining vaccine efficacy were largely performed using a heterogeneous group of at-risk participants, including those with diabetes. As a result, these trials provide limited information on vaccine efficacy specific for individuals with diabetes. Furthermore, owing to the short-term efficacy of unconjugated polysaccharide (PS) vaccine, revaccination is recommended in both healthy controls and high-risk groups. Measurement of efficacy within the first 5 years of administration suggested that unconjugated polysaccharide vaccine was 75% effective in preventing invasive pneumococcal disease in immunocompetent individuals aged 65 years and above. The efficacy was reduced to 37% the first year, 18% every 5 years for 10 years, and 0% thereafter. However, revaccination in immunocompetent adults resulted in a significant increase in serotype-specific anticapsular IgG antibody and subsequent long-lasting protection against pneumonia and invasive pneumococcal disease (18). Given these findings, it is reasonable to hypothesize that antibodies generated in individuals with diabetes, even though they are similar in titer to those in nondiabetic individuals, are functionally impaired. We therefore designed this study to (i) determine the baseline titer of antibodies to pneumococcal surface protein A and capsular polysaccharide in individuals with and individuals without diabetes and (ii) determine if antibodies in these individuals are functionally comparable to those in individuals without diabetes.
Subjects.
The study was approved by the University of Texas Health Science Center at Houston Institutional Review Board (IRB) and the Committee for Protection of Human Subjects (CPHS). This study was conducted using stored plasma from 561 recently recruited participants from the Cameron County Hispanic Cohort (CCHC). The CCHC is a community-based cohort in Cameron County, TX, comprised of 2,500 randomly selected Hispanic participants from the U.S.-Mexico border region (20). The rate of diabetes among CCHC participants is 17.9%, using the 2006 American Diabetes Association (ADA) diagnostic criteria for diabetes (1a). However, if we use the 2010 ADA revised diagnostic criteria, which adds a level of glycated hemoglobin (A1c) of over 6.5% to the original criteria, the prevalence rises to 30.7%, twice the reported national rates of diabetes among all Americans and nearly twice as high as that previously established rates among Mexican Americans (1,14). Plasma was obtained from consenting participants on enrollment into the CCHC and were kept frozen as described previously (14). Variables of interest for this study that were collected during enrollment in the CCHC include age, body mass index (BMI), fasting blood glucose (FBG) levels, and A1c levels. Clinical chemistries were performed in a Clinical Laboratory Improvement Amendments-approved laboratory as described previously (3).
For the purposes of this study, individuals with diabetes were defined on the basis of the original 2006 American Diabetes Association criteria (1a). This includes participants with a diagnosis of diabetes who were also on medication for diabetes or those with fasting blood glucose levels of >126 mg/dl. Those with fasting blood glucose values of ≤126 mg/dl and no history of diabetes or receipt of diabetes medication were classified as not having diabetes. From our study samples, 132 individuals were identified to have diabetes and 429 were identified to not have diabetes. Among the 132 participants with diabetes, only 89 were on medication, whereas 43 were not on any hypoglycemic medication. To determine the overall health status of participants, we collected data on antibiotic usage and hospitalization at the time of enrollment and blood draw. Of the 561 participants, only 5 were on antibiotics and none reported hospitalization 3 months prior to the enrollment. We also compared the values of albumin between those with and those without diabetes and found no significant difference between the groups. Only one participant in the diabetes group (n = 132) reported being on dialysis, whereas the rest did not report any renal impairment.
Measurement of serum concentration of antibodies to PspA. Antibodies to PspA were measured using standard enzyme-linked immunosorbent assay (ELISA) methods (39,47). We used stored plasma samples instead of serum samples. Serum is the most desirable specimen for measuring antibody titer; however, both plasma and serum specimens have been used interchangeably for measurement of antibodies in pneumococcal infections, and reported titers were observed to be comparable in both specimens (2, 15a) (Elizen kits; ZenTech). Ninety-six-well ELISA plates were coated with 1 µg/ml of either a recombinant 30-kDa N-terminal fragment of family 1 PspA, 10 µg/ml of a smaller (13-kDa) internal fragment of PspA containing proline repeats (proline-rich region [PRR]) (9), or the recombinant Staphylococcus aureus Efb (28) protein in bicarbonate buffer (50 ml 0.06 M Na₂HCO₃, 40 ml 0.06 M Na₂CO₃, 10 ml deionized H₂O, pH 9.6). The proline-rich region was used since it is highly conserved in all PspA isoforms and is known to be immunogenic (9). Plates were coated at 4°C overnight, followed by blocking for 1 h at room temperature using phosphate-buffered saline (PBS; pH 7.4; Gibco Invitrogen, MO) containing 1% bovine serum albumin (BSA; Sigma-Aldrich, CA). Plates coated with 1 µg/ml of BSA only were used as a negative control to account for nonspecific binding of serum proteins to BSA. Pooled human serum with a known titer of total IgG to PspA (1.85 mg/ml; a kind gift from David Briles, University of Alabama at Birmingham) was used as the standard for determining the concentration of total IgG to PspA in unknown serum samples. Pooled serum was used at a starting dilution of 1:1,000 in PBS-1% BSA (which corresponds to 1.8 µg/ml of IgG to PspA) and titrated 1:3, in duplicate. Controls were included in each plate to account for nonspecific binding. Plasma samples were diluted 1:30 in PBS-1% BSA. Plates were incubated at 37°C for 1.5 h and washed 3 times with PBS containing 0.05% Tween 20 (PBST), followed by addition of 100 µl of goat anti-human alkaline phosphatase (AP)-conjugated secondary antibody (Southern Biotech). Plates were incubated for an additional hour at room temperature, before they were washed 3 times with PBST and developed using 100 µl of p-nitrophenylphosphate (pNpp; Sigma-Aldrich, CA). The absorbance was read at 450 nm with a preread setting (SpectraMax M5 microplate reader [Molecular Devices] with SoftMax Pro software [version 4.8]). Analysis was repeated to ensure accuracy. A positive IgG outcome was set at an optical density (OD) reading of ≥0.25, corresponding to an IgG titer of ≥150 ng/ml.
Antibody-mediated complement deposition on pneumococcal surface. To determine if antibodies measured by ELISA are functionally active, we measured their ability to deposit complement on the surface of pneumococci. Plasma specimens with an antibody titer of >150 ng/ml were selected from both diabetic and nondiabetic participants for use in a complement deposition assay. A capsule type 2 strain of S. pneumoniae (strain D39) was used for measurement of complement factor C3 deposition. An isogenic mutant of capsule type 2 strains (mutant Tre 121.13) was used as a positive control. This strain is deficient in surface expression of both PspA and PspC, the two proteins known to prevent complement activation and deposition on pneumococci (10,40).
This mutant is therefore susceptible to complement deposition by the classical and alternative pathways, and complement becomes deposited at significantly elevated levels even in the absence of antibodies. Both strains were cultured to an OD at 600 nm of 0.4 (corresponding to approximately 4 × 10⁸ CFU/ml) in Todd-Hewitt-5% yeast extract (THY) broth at 37°C. An aliquot of culture corresponding to 1 × 10⁶ CFU/ml was spun to pellet bacteria, followed by resuspension of the pellet into either 100 µl of Hanks balanced salt solution with 0.1% gelatin (HBSSG; negative controls) or 10% heat-inactivated human plasma diluted in HBSSG. Samples were incubated for 30 min at 37°C, then washed with HBSSG and centrifuged, and the supernatants were discarded. Pellets were resuspended in 100 µl of HBSSG containing 25 µg/ml of baby rabbit complement (Pel-Freez Biologicals, Rogers, AK). Specimens were incubated for 30 min at 37°C, followed by washing and resuspension into 100 µl of fluorescein-conjugated goat IgG fraction to rabbit complement C3 (MP Biomedical) diluted to a final concentration of 1:100 in PBS-1% BSA. Samples were incubated for 30 min at 37°C, washed once to remove unbound antibody, and fixed by resuspension in a 1:1 mixture of 2% paraformaldehyde and PBS-1% BSA. Samples were analyzed using a BD FACS CANTO II flow cytometer and FACSDiva software (version 6.1.1).
Measurement of serum concentration of anticapsular antibodies using WHO ELISA. Measurement of the anticapsular IgG was performed on a total of 64 plasma specimens that previously tested positive for anti-PspA. Titers were measured using a 3rd-generation sandwich ELISA as described previously (48). Briefly, plates were coated with purified capsular polysaccharides from the most common serotypes (serotypes 4, 6B, 9V, 14, 18C, and 23F) and less common serotypes (serotypes 19F and 19A). Plates were coated for 4 to 5 h at 37°C and then transferred to 4°C. Plasma specimens to be used in the study were absorbed with 5 µg/ml of cell wall PS (Statens Serum Institute, Copenhagen, Denmark) and 10 µg/ml of 22F PS in a final volume of 1 ml of PBST for 30 min at room temperature. Serum pool 89-SF was absorbed only with cell wall PS and used as the standard. The preabsorbed plasma specimen and the 89-SF serum pool were serially diluted and added to the wells of a PS-coated plate. Plates were incubated for 2 h at room temperature and washed, followed by addition of AP-conjugated goat anti-human IgG. Incubation was continued at room temperature for 2 h, followed by another wash and addition of substrate. The reaction was stopped using 3 N NaOH, and the optical density was measured at 405 nm and 690 nm using a microplate ELISA reader. The amount of antibody was calculated from the standard curve made from sample 89-SF.
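For intuition, the interpolation step can be written out; this is a sketch only, since the exact curve model of the protocol is not given here, and a four-parameter logistic fit is assumed as one common choice for such ELISAs:

    OD(c) = d + (a − d) / (1 + (c / c₀)^b)

Inverting the fitted curve at a sample's measured OD and multiplying by the sample's dilution factor gives its antibody concentration:

    c_sample = c₀ · [(a − d) / (OD − d) − 1]^(1/b) × dilution factor

where a and d are the upper and lower asymptotes, c₀ is the inflection concentration, and b is the slope parameter, all estimated from the 89-SF dilution series.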
Multiplexed OPA. A total of 52 serum specimens were selected on the basis of their anticapsular IgG titers and were tested against pneumococci of eight serotypes (9V, 23F, 4, 18C, 6B, 19F, 14, and 19A) using a multiplexed opsonophagocytic killing assay (OPA) as described previously (7). For selection purposes, each target bacterium was made resistant to one of the four antibiotics (optochin, streptomycin, spectinomycin, and trimethoprim) and was left sensitive to the other three. Target strains were thawed, washed in opsonization buffer B (Hanks balanced salt solution with Mg²⁺ and Ca²⁺ containing 0.1% gelatin and 5% fetal bovine serum), and reconstituted to a final concentration of 2 × 10⁵ CFU/ml. Two pools were made by mixing equal volumes of four serotypes in each pool. To inactivate human complement proteins, plasma specimens were heat inactivated at 56°C for 30 min, followed by serial dilution of each sample. Twenty microliters of diluted serum specimens was added to the wells of 96-well round-bottom plates (Corning Inc., Corning, NY) and mixed with 10 µl of the bacterial suspension, and the mixture was incubated at room temperature on a shaker (mini-orbital shaker; Bellco Biotechnology, Vineland, NJ) at 700 rpm for 30 min. Following incubation, 10 µl of baby rabbit complement (Pel-Freez Biologicals, Rogers, AK) and 40 µl of HL-60 cells (approximately 1 × 10⁷ cells/ml) which had been differentiated into granulocytes were added, and the incubation was continued in a tissue culture incubator at 37°C (5% CO₂) for 45 min with constant shaking at 700 rpm. On completion of the incubation, plates were cooled on ice for 15 min and a 10-µl aliquot was spotted onto four different THY agar plates (Todd-Hewitt broth with 0.5% yeast extract and 1.5% agar). An equal volume of overlay agar (Todd-Hewitt broth with 0.5% yeast extract and 0.75% agar) containing one of the four antibiotics was added to each agar plate. After an overnight incubation at 37°C, the number of bacterial colonies in the agar plates was enumerated. Opsonization titers were defined as the plasma dilution that killed 50% of bacteria.
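The underlying killing computation is the standard one (a sketch; the symbols are ours): for each serum dilution, percent killing relative to a no-serum complement control is

    K_i = 100 × (N_ctrl − N_i) / N_ctrl

where N_ctrl and N_i are the surviving CFU counts without and with serum, respectively; the opsonization titer is the dilution at which K crosses 50%, found by linear interpolation between the two bracketing dilutions.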
Statistical analysis. Univariate analyses of baseline variables found the distributions to be nonnormal and skewed. As a result, nonparametric alternatives were used for data analysis. Differences in median values of baseline characteristics between individuals with and individuals without diabetes and between those with positive IgG titers and those with negative IgG titers were compared using nonparametric Wilcoxon two-sample tests or chi-square tests. Stratified analyses and multivariable logistic regression were used to assess potential confounders or interactions. Unadjusted and adjusted odds ratios comparing outcomes of positive or negative IgG titers were calculated using multivariable logistic regression. Correlations were calculated using nonparametric alternatives. All analyses were considered significant when P values were <0.05. All analyses were run using SAS software (version 9.2; SAS Institute Inc., Cary, NC).
Participant characteristics.
The analysis was performed on a total of 561 participants (132 with diabetes and 429 without diabetes). Participants were not specifically asked if they had previously received a pneumococcal vaccine. This is a population in which only 11.9% has private insurance capable of financing the expensive pneumococcal vaccine (14). It is unlikely that individuals in this study had received the S. pneumoniae vaccine. Less than half of all participants self-reported that they had diabetes, and only half of the self-reported (89/132) individuals were on medication of any kind, further reducing the probability of pneumococcal vaccination (even among the self-reported diabetes participants) (14). Furthermore, only 5 participants out of the total of 561 in this study were on antibiotics when the specimens were obtained. The majority of the participants in this study were overweight or obese (80.9%), and nearly a third of the participants met the Adult Treatment Panel (ATP) III definition of metabolic syndrome (16). A total of 13.6% in this subset had diabetes (14).
Diabetes status is associated with lower titers of IgG to PspA independent of age. Carriage has been shown to be a prerequisite for invasive pneumococcal infections, and preexisting antibodies to PspA confer protection against carriage (39). We therefore measured anti-PspA titers in the plasma of participants with and participants without diabetes. Levels of antibody to the full-length 30-kDa N-terminal region of family 1 PspA were measured using a standard ELISA method. A significantly lower titer of antibodies was observed in individuals with diabetes than in nondiabetic controls (P < 0.001) (Fig. 1). Three hundred twenty-five (57.9%) of 561 participants had a positive baseline IgG titer. A total of 43.2% of plasma specimens from those with diabetes and 63.5% from those without diabetes had a positive IgG titer (P < 0.0001) (Table 2). The mean concentration of IgG in plasma samples of those with diabetes was 183 ng/ml, whereas it was 227 ng/ml in those without diabetes (P < 0.01). Specimens positive for the 30-kDa PspA antigen were screened for reactivity against the smaller internal fragment of PspA. Specimens positive for the full-length 30-kDa fragment were also positive for the smaller internal fragment of PspA; however, no significant difference between titers of individuals with diabetes and those of individuals without diabetes was observed. These results suggested that the observed responses to the full-length PspA protein were the result of specific anti-PspA antibodies in the plasma of participants rather than cross-reacting molecules.
To further explore the immune response to protein antigens, we also measured the concentration of antibody to the S. aureus virulence factor Efb. Our results indicated that the response to this antigen was also significantly lower in individuals with diabetes (mean IgG concentration, 41 ng/ml) than in controls without diabetes (mean concentration, 51.23 ng/ml) (P = 0.01) (Fig. 1B).
Age and BMI are known to affect antibody responses. We therefore determined the association of antibody titers with age and BMI (Table 3). When the comparison was stratified by age, a significant difference remained between individuals with and individuals without diabetes, indicating that the observed differences in titers were the result of diabetes rather than age. Additionally, the differences in IgG titers remained significant between individuals with and without diabetes even after controlling for BMI (P < 0.001 for BMI of >30, P = 0.01 for BMI of <30), suggesting that the differences observed were the result of diabetes status rather than an effect of obesity. When age was stratified by diabetes status, no differences in antibody responses were observed between diabetic patients older than 40 years of age and those younger than 40 years of age. However, in those without diabetes, we observed a significant association between older age (>40 years) and low antibody titers. These results suggested that the lower titers observed in participants with diabetes were not driven by older age. When BMI was stratified by diabetes status, there was no significant difference in IgG titer among diabetics with BMI values of greater than or less than 30 (P = 0.91 and 0.08, respectively).
The unadjusted odds ratio for diabetes status showed that participants without diabetes had 2.19 (95% CI, 1.47 to 3.25) times the odds of having a positive IgG response compared to participants with diabetes. To determine whether participants with diabetes on hypoglycemic medication have a better response to PspA than diabetics on no medication, we compared the median titers between those who were taking medication and those who were not. Our results indicated that participants on medication for diabetes had significantly lower antibody titers than individuals not on medication (P < 0.01) (Table 5). There was also a significant difference in the trend of the median anti-PspA IgG titers when comparing all three categories (individuals on one medication, individuals on more than one medication, and individuals taking no medication) (P = 0.01). No significant difference in median anti-PspA IgG titers was observed when individuals taking 1 diabetes medication and individuals taking 2+ medications were compared (Table 5).
Poorly controlled diabetes was negatively associated with antibody titers. It has been shown that individuals with poorly controlled diabetes are at a higher risk of invasive pneumococcal infections than individuals with well-controlled diabetes (42, 45). To determine whether this difference might be associated with any of the measures of diabetes control, we evaluated the association of FBG and A1c with antibody titers in our total sample. We observed a moderate negative association of both FBG (r = −0.13, P = 0.01; Fig. 1A and 2A) and HbA1c (r = −0.1, P = 0.1; Fig. 1B and 2B) with IgG titers, although the latter did not reach statistical significance. The optimum antibody response was observed at FBG values of 100 to 150 mg/dl and A1c values of 5 to 7%. The association of BMI with IgG titers was of borderline statistical significance (Fig. 2C).

Plasma specimens from participants with diabetes deposited less complement on pneumococci than samples from nondiabetic participants. Pneumococcal surface proteins such as PspA and PspC and the pneumococcal capsular polysaccharide prevent deposition and activation of complement on the surface of pneumococci. An important function of antibodies during pneumococcal infections is therefore the enhancement of complement deposition and subsequent phagocytosis of the bacterium. We therefore measured the ability of our plasma specimens to deposit complement. We incubated pneumococci in the presence and absence of heat-inactivated human plasma and measured the deposition of baby rabbit complement in a fluorescence-activated cell sorter (FACS) assay. A significant difference in antibody-mediated enhancement of complement deposition was observed between individuals with and individuals without diabetes (Fig. 3) (P = 0.04). Plasma specimens from nondiabetic individuals showed a measurable increase in fluorescence intensity (2-fold), indicating a strong opsonic activity of antibodies. However, only a marginal increase (1- to 1.5-fold) in complement deposition was observed when plasma specimens from diabetic participants were used.
To determine the avidity of antibodies, we measured deposition of complement using 1% and 10% human plasma. A significant increase in deposition of complement was observed at both concentrations when plasma samples from the control group without diabetes were used, indicating a high avidity of the antibodies. Comparable complement deposition occurred at 10% plasma from participants with diabetes, but deposition decreased 5-fold when their plasma was used at a final concentration of 1% (data not shown).

FIG 2 Association of antibody (Aby) titer with FBG concentration, A1c concentration, and BMI. The association between variables was calculated using the Spearman rank test. FBG and A1c values were found to be negatively associated with antibody titers. A P value of <0.05 was considered significant.

FIG 3 Complement deposition using plasma samples from participants with diabetes and no-diabetes controls. Capsule type 2 strain D39 was incubated in the presence or absence of plasma from participants. Enhancement in deposition of baby rabbit complement was measured as the fold increase in mean fluorescent intensity (MFI) of bacteria in the presence of plasma from participants. Each point represents a single plasma sample run in duplicate. Fold increase was calculated by dividing the MFI of pneumococci incubated with plasma by the MFI of pneumococci incubated without plasma. The two groups were compared using a nonparametric t test for comparison of means. A P value of <0.05 was considered significant.
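As a minimal sketch of the fold-increase calculation defined in the FIG 3 legend (the MFI values below are invented for illustration, not study data):

```python
import numpy as np

def fold_increase(mfi_with_plasma, mfi_without_plasma):
    """Fold increase in complement deposition, as defined in the FIG 3
    legend: MFI with plasma divided by MFI without plasma."""
    return np.asarray(mfi_with_plasma, float) / np.asarray(mfi_without_plasma, float)

# Duplicate wells for one hypothetical plasma sample
with_plasma = [1850.0, 1910.0]
without_plasma = [940.0, 960.0]
print(fold_increase(np.mean(with_plasma), np.mean(without_plasma)))  # ~2-fold
```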
We investigated the association of diabetes-related variables, particularly antibody titer, FBG concentration, and A1c concentration, with complement deposition. A moderately positive association was observed between complement deposition and anti-PspA titer (Fig. 4A). A moderate negative association between the fold increase in complement deposition and the FBG concentration was observed (r = −0.2, P = 0.09); however, this association was not statistically significant (Fig. 4B). A statistically significant negative association between the fold increase in complement deposition and the A1c concentration was observed (r = −0.3, P = 0.02) (Fig. 4C). These results suggested that the opsonic ability of antibodies is altered in individuals with poorly controlled diabetes.
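A correlation of this kind reduces to a single scipy call; the following toy example, with simulated A1c and fold-increase values, shows the form of the computation (the reported r = −0.3, P = 0.02 came from the study's real data):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
a1c = rng.uniform(5.0, 12.0, size=60)                 # hypothetical A1c values (%)
fold_mfi = 2.2 - 0.1 * a1c + rng.normal(0, 0.3, 60)   # toy negative trend

rho, p = spearmanr(fold_mfi, a1c)
print(f"Spearman rho = {rho:.2f}, P = {p:.3f}")
```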
Higher titers of antibodies, and correspondingly higher opsonic activity, against capsular type 14 were observed in participants with diabetes than in those without diabetes. Titers of antibodies to eight different capsular polysaccharides were measured in selected plasma samples as described in Materials and Methods (Fig. 5). Among the eight serotypes tested, titers to serotype 19F were the highest. When geometric mean titers were compared between those with and those without diabetes, we observed a significantly higher titer against serotype 14 among participants with diabetes (P < 0.01) than in controls without diabetes. Although not statistically significant, titers of antibodies to capsular types 9V and 18C were also higher in participants with diabetes. Additionally, titers against the most commonly occurring serotype (6B) were relatively higher in participants without diabetes than in the diabetes group.

FIG 4 Association of complement deposition with variables associated with diabetes. A strong positive association of antibody concentration with the increase in mean fluorescent intensity was observed, whereas moderate negative associations of the increase in mean fluorescent intensity with the A1c concentration and with the FBG concentration were observed. Each point represents a plasma sample run in duplicate. Correlations were calculated using a nonparametric Spearman correlation test. A P value of <0.05 was considered significant.
FIG 5 Titers of antibodies to capsular polysaccharide in individuals with diabetes and no-diabetes controls. Concentrations of antibodies to eight different capsular polysaccharides were measured using a WHO ELISA. Each bar represents the geometric mean concentration (in µg/ml). Comparison between individuals with diabetes and no-diabetes controls was performed using a nonparametric test. A P value of <0.05 was considered significant.
To further analyze the functionality of antibodies, we performed opsonophagocytosis assays using the neutrophil-like cell line HL-60 and baby rabbit complement. Heat-inactivated plasma specimens were incubated with target strains of pneumococci, followed by the addition of baby rabbit complement, and then incubated with HL-60 cells. The higher antibody titers against the capsule type 14 strain also corresponded to higher opsonic activity (Table 6). That is, the opsonic titer (the highest titer of plasma at which 50% killing is observed) of plasma from diabetic participants was 2,016, whereas it was 649 for participants without diabetes (P = 0.05). We observed a strong negative association of titers with A1c and FBG levels.
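The opsonic titer defined above can be estimated from a dilution series; one common convention, sketched here under the assumption of log-linear interpolation between dilutions and with invented killing percentages, is:

```python
import numpy as np

def opsonic_titer(dilutions, pct_killing):
    """Interpolated plasma titer at which killing crosses 50%, given reciprocal
    dilutions and observed % killing; assumes killing falls with dilution."""
    d = np.log10(np.asarray(dilutions, float))
    k = np.asarray(pct_killing, float)
    if k.max() < 50:
        return np.nan  # never reaches 50% killing
    order = np.argsort(k)  # np.interp needs increasing x-values
    return 10 ** np.interp(50.0, k[order], d[order])

# Hypothetical two-fold dilution series for one plasma sample
dilutions = [64, 128, 256, 512, 1024, 2048, 4096]
killing = [92, 88, 80, 68, 55, 47, 30]  # % killing of type 14 pneumococci
print(round(opsonic_titer(dilutions, killing)))  # titer at the 50% crossing
```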
DISCUSSION
It is well known that protection against carriage and invasive pneumococcal infection depends on the generation of protective titers of antibodies against both pneumococcal capsular polysaccharide and protein antigens (29, 32, 39). Protective antibodies are defined as antibodies that can efficiently opsonize pneumococci and induce phagocytic killing by resident macrophages or neutrophils (41). To determine whether the susceptibility of individuals with diabetes is the result of lower titers or a poor protective ability of the generated antibodies to capsular polysaccharide and/or protein antigens, we measured both titers and protective antibody functions. Our observation that diabetes was associated with poor antibody responses to PspA was consistent with observations made by Nam et al. indicating that baseline positive rates of cross-reactive antibodies against pandemic influenza virus in patients with diabetes were lower than those in age- and sex-matched nondiabetes controls (35). Additionally, Fabrizi et al. reported a lower seroprotection rate elicited by the hepatitis B vaccine in diabetics (both the influenza virus and hepatitis B virus vaccines are based on protein antigens) (13).
Several factors can contribute to the observed poor antibody responses to protein antigens in individuals with diabetes, such as (i) alterations in antigen presentation, (ii) impairment in antibody generation, and (iii) antibody modification as a result of hyperglycemia. There is strong evidence for the perturbation of functions of sentinel cells such as monocytes/macrophages, neutrophils, antigen-presenting B cells, dendritic cells, and natural killer cells. Any or all of these defects can compromise the antibody responses to protein antigens. Additionally, B-cell immunoglobulin production is also altered in diabetes. This impairment in antibody generation is attributed to diabetes-mediated hyperglycemia. It has been shown that even in the absence of other components of the metabolic syndrome, hyperglycemia can temporarily alter the B-cell response. Using a mouse model, it was demonstrated that stimulation of total spleen cells from Akita mice (a model for hyperglycemia due to insulin misfolding) resulted in delayed immunoglobulin production (36). Additionally, a variety of abnormalities in T-cell functions have been reported in individuals with diabetes (with or without poor metabolic control). These abnormalities have been shown to be associated with impairment in the ability to produce circulating B cells and specific IgM and IgG antibody in response to vaccines.
Consistent with these observations, we also observed a negative association of antibody titer with FBG and A1c, indicating a role for hyperglycemia in the B-cell-mediated antibody response. Moreover, the low opsonic titers observed in our studies could also be explained by hyperglycemia. Accumulation of glucose in individuals with poorly controlled diabetes results in nonenzymatic glycation and subsequent loss of antibody function. Mass spectrometric analysis of immunoglobulins isolated from the plasma of diabetics showed an increase in the molecular weight of immunoglobulins, and the weight increase corresponded to the number of glucose molecules deposited on the protein. The nonenzymatic glycation of immunoglobulins as a result of hyperglycemia could result in low antigen and receptor binding capacities of these antibodies. It is therefore likely that, despite comparable titers, the functionality of the antibody is compromised in diabetics compared to nondiabetics (25, 26). Contrary to our findings with PspA, we observed comparable titers of antibodies to all eight capsular polysaccharides tested in individuals with and individuals without diabetes. These findings were consistent with previous studies in which comparable pre- and postimmunization titers of antibodies to capsular polysaccharides were observed in individuals with and without diabetes. Additionally, diabetes patients also developed responses to a pneumococcal vaccine of a magnitude similar to those without diabetes. Similarly, patients with cystic fibrosis have similar titers of anticapsular antibodies (24), suggesting that responses to capsular polysaccharide are least affected by disorders that affect the immune system in general. Although titers to all serotypes were comparable between individuals with and individuals without diabetes, we observed a significantly higher titer to capsule type 14 in individuals with diabetes. Serotype 14 is mainly associated with bacteremia in elderly individuals. The older age of the diabetes group (59 versus 45 years; P = 0.01) and the higher susceptibility of individuals with diabetes to invasive bacteremic pneumonia might explain the higher titer to serotype 14 in diabetics. In a study conducted by Schenkein et al., higher vaccine-induced titers to capsular type 14 were observed in older individuals than in younger adults; despite the higher titers, a low opsonic activity was observed in these individuals (41), which is contrary to our observation that higher antibody titers corresponded to higher opsonic activity. A simple explanation for these results is that even though the antibodies used in these assays were from individuals with or without diabetes, the complement and granulocytes were not. Given that protection is the cumulative effect of three components, namely neutrophils, complement, and antibodies, working in synergy, it is difficult to interpret this response as a protective response. It is very likely that even though the antibodies had a high opsonic activity, the complement or the neutrophils of individuals with diabetes have impaired function, resulting in failure to protect against infections.
Results from our studies suggested that the response to protein antigens is compromised in individuals with diabetes. T cells play a significant role in the response to protein antigens by activating B cells and initiating antibody class switching. We have therefore focused our future studies on understanding the effect of diabetes on T-cell function and differentiation. Our working hypothesis is that diabetes creates an environment that modulates the differentiation and functions of T-cell subsets. Current studies are devoted to measuring the kinetics and strength of the T-cell response to heat-killed S. pneumoniae in whole blood of participants with and participants without diabetes. More specifically, we are measuring the differentiation of T cells into their subsets after antigenic stimuli. Future studies will involve vaccinating our participants and measuring the T-cell response to pneumococcal vaccines (polysaccharide and conjugate) in whole blood of individuals with and individuals without diabetes. These studies will provide an in-depth understanding of the mechanisms that lead to vaccine failure in individuals with diabetes.
A limitation of this study is that it was performed in a predominantly Hispanic population, making it difficult to generalize our results to individuals of other ethnicities. Elucidating the mechanism resulting in low titers of antibodies with opsonic potential will also be difficult, since these studies were carried out in vitro. A mouse model of diabetes or obesity will be important for replicating these studies and understanding the mechanisms associated with the failure to respond to protein antigens. | 2018-04-03T02:47:15.057Z | 2012-07-03T00:00:00.000 | {
"year": 2012,
"sha1": "5ffd1498d11af7d3f5d50e3b6a20ffb23fb19466",
"oa_license": null,
"oa_url": "https://cvi.asm.org/content/cdli/19/9/1360.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "5a3eb0347aa625f8cb92f1de28656ae30abc4ba7",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
265712535 | pes2o/s2orc | v3-fos-license | Measurement of the tt production cross section in the all-jets final state in pp collisions at √s = 8 TeV
The cross section for tt production in the all-jets final state is measured in pp collisions at a centre-of-mass energy of 8 TeV at the LHC with the CMS detector, in data corresponding to an integrated luminosity of 18.4 fb−1. The inclusive cross section is found to be 275.6 ± 6.1 (stat) ± 37.8 (syst) ± 7.2 (lumi) pb. The normalized differential cross sections are measured as a function of the top quark transverse momenta, pT, and compared to predictions from quantum chromodynamics. The results are reported at detector, parton, and particle levels. In all cases, the measured top quark pT spectra are significantly softer than theoretical predictions.
Introduction
The top quark is an important component of the standard model (SM), especially because of its large mass, and its properties are critical for the overall understanding of the theory. Measurements of the top quark-antiquark pair (tt) production cross section test the predictions of quantum chromodynamics (QCD), constrain QCD parameters, and are sensitive to physics beyond the SM. The tt process is also the dominant SM background to many searches for new physical phenomena, and its precise measurement is essential for claiming new discoveries.

The copious top quark data samples produced at the CERN LHC enable measurements of the tt production rate in extended parts of the phase space, and differentially as a function of the kinematic properties of the tt system. Inclusive and differential cross section measurements from proton-proton (pp) collisions at centre-of-mass energies of 7 and 8 TeV have been reported by the ATLAS [1][2][3][4][5][6][7][8][9][10][11] and CMS collaborations [12][13][14][15][16][17][18][19][20][21][22][23][24]. These are significantly more precise than the measurements of tt production in proton-antiproton collisions performed at the Tevatron [25]. In this paper, we report new results from pp collision data at √s = 8 TeV, collected with the CMS detector. Measurements of the tt inclusive cross section and the normalized differential cross sections are presented for the first time in the all-jets final state at this collision energy. The results are compared to QCD predictions, and are in agreement with other measurements in different decay channels.

Top quarks decay almost exclusively into a W boson and a b quark. Events in which both W bosons from the tt decay produce a pair of light quarks constitute the so-called all-jets channel. As a result, the final state consists of at least six partons (more are possible from initial- and final-state radiation), two of which are b quarks. Despite the large number of combinatorial possibilities, it is possible to fully reconstruct the kinematical properties of the tt decay products, unlike in the leptonic channels where the presence of one or two neutrinos makes the full event interpretation ambiguous. However, the presence of a large background from multijet production and the larger number of jets in the final state make the measurement of the tt cross section in the all-jets final state more uncertain compared to the leptonic channels. Nevertheless, a high-purity signal sample can be selected, which increases significantly the signal-over-background ratio compared to previous measurements in this decay channel [21,26,27].
The CMS detector
The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter, and a brass and scintillator hadron calorimeter. Extensive forward calorimetry (pseudorapidity |η| > 3.0) complements the coverage provided by the barrel (|η| < 1.3) and endcap (1.3 < |η| < 3.0) detectors. Muons are measured in gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid. The first level of the CMS trigger system, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select the most interesting events in a fixed time interval of less than 4 µs. The high-level trigger (HLT) processor farm further decreases the event rate from around 100 kHz to around 300 Hz, before data storage. A detailed description of the CMS apparatus, together with the definition of the coordinate system used and the relevant kinematic variables, can be found in Ref. [28].
Event simulation
The tt events are simulated using the leading-order (LO) MADGRAPH (v.5.1.5.11) event generator [29], which incorporates spin correlations through the MADSPIN package [30] and the simulation of up to three additional partons. The value of the top quark mass is set to mt = 172.5 GeV and the proton structure is described by the parton distribution functions (PDFs) from CTEQ6L1 [31]. The generated events are subsequently processed with PYTHIA (v.6.426) [32], which utilizes tune Z2* for parton showering and hadronization, and the MLM prescription [33] is used for matching matrix element jets to those from the parton shower. The PYTHIA Z2* tune is derived from the Z1* tune [34], which uses the CTEQ5L PDF [31], whereas Z2* adopts CTEQ6L [31]. The CMS detector response is simulated using GEANT4 (v.9.4) [35].

In addition to the MADGRAPH simulation, predictions obtained with the next-to-leading-order (NLO) generators MC@NLO (v.3.41) [36] and POWHEG (v.1.0 r1380) [37] are also compared to the measurements. While POWHEG and MC@NLO are formally equivalent up to NLO accuracy, they differ in the techniques used to avoid double counting of the radiative corrections when interfacing with the parton shower generators. Two different POWHEG samples are used: one uses PYTHIA and the other HERWIG (v.6.520) [38] for parton showering and hadronization. The events generated with MC@NLO are interfaced with HERWIG. The HERWIG AUET2 tune [39] is used to model the underlying event in the POWHEG+HERWIG sample, while the default tune is used in the MC@NLO+HERWIG sample. The proton structure is described by the PDF sets CT10 [40] and CTEQ6M [31] for POWHEG and MC@NLO, respectively. The QCD multijet events are simulated using MADGRAPH (v.5.1.3.2) interfaced with PYTHIA (v.6.424).
Jet reconstruction
Jets are reconstructed with the anti-kT clustering algorithm [41,42] with a distance parameter of 0.5. The input to the jet clustering algorithm is the collection of particle candidates reconstructed with the particle-flow (PF) algorithm [43,44]. In the PF event reconstruction, all stable particles in the event, i.e. electrons, muons, photons, and charged and neutral hadrons, are reconstructed as PF candidates using a combination of all of the subdetector information to obtain an optimal determination of their directions, energies, and types. All the reconstructed vertices in the event are ordered according to the sum of the squared transverse momenta (pT) of the tracks used to reconstruct them, and the vertex with the largest sum is considered the primary one, while all the rest are considered pileup vertices. In order to mitigate the effect of multiple interactions in the same bunch crossing (pileup), charged PF candidates that are unambiguously associated with pileup vertices are removed prior to the jet clustering. This procedure is called charged-hadron subtraction (CHS) [45]. An offset correction is applied for the additional energy inside the jet due to neutral hadrons or photons from pileup. The resulting jets require a small residual energy correction, mostly due to the thresholds for reconstructed tracks and clusters in the PF algorithm and reconstruction inefficiencies [45].

The identification of jets that likely originate from the hadronization of b quarks is done with the "combined secondary vertex" (CSV) b tagger [46]. The CSV algorithm combines the information from track impact parameters and identified secondary vertices within a given jet, and provides a continuous discriminator output.
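For intuition, the anti-kT metric underlying the clustering can be written in a few lines; this toy sketch computes the pairwise and beam distances only (a real analysis would use FastJet, and the candidate kinematics below are invented):

```python
import numpy as np

def antikt_distances(pt, eta, phi, R=0.5):
    """Pairwise anti-kT distances d_ij = min(1/pt_i^2, 1/pt_j^2) * dR_ij^2 / R^2
    and beam distances d_iB = 1/pt_i^2. Clustering proceeds by repeatedly
    merging (or promoting to a jet) whichever distance is smallest."""
    pt, eta, phi = (np.asarray(x, float) for x in (pt, eta, phi))
    deta = eta[:, None] - eta[None, :]
    dphi = np.angle(np.exp(1j * (phi[:, None] - phi[None, :])))  # wrap to (-pi, pi]
    dij = np.minimum.outer(pt**-2.0, pt**-2.0) * (deta**2 + dphi**2) / R**2
    np.fill_diagonal(dij, np.inf)  # no self-distance
    return dij, pt**-2.0

# Three invented particle-flow candidates: two hard, nearby ones and a soft one
dij, diB = antikt_distances(pt=[60.0, 55.0, 5.0],
                            eta=[0.0, 0.1, 0.05],
                            phi=[0.0, 0.05, 2.5])
```

Because the distance weights by the inverse squared pT, hard candidates cluster first, which is what gives anti-kT jets their regular, cone-like shape.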
Trigger
The data used for this measurement were collected with a multijet trigger event selection (path) which, at the HLT, required at least four jets reconstructed from calorimetric information with a pT threshold of 50 GeV and |η| < 3.0. The hardware trigger required the presence of two central (|η| < 3.0) jets above various pT thresholds (52–64 GeV), or the presence of four central jets with lower pT thresholds (32–40 GeV), or the scalar sum of all jet pT to be greater than 125 or 175 GeV. The various thresholds were adjusted within the quoted ranges according to the instantaneous luminosity. The trigger paths employed were unprescaled for the larger part of the run, yielding a data sample corresponding to an integrated luminosity of 18.4 fb−1.
Selection and kinematic top quark pair reconstruction
Selected events are required to contain at least six reconstructed jets with pT > 40 GeV and |η| < 2.4 (jets are required to be within the tracker acceptance in order to apply the CHS), with at least four of the jets having pT > 60 GeV (so that the trigger efficiency is greater than 80% and the data-to-simulation correction factor smaller than 10%). Among the six jets with the highest pT (leading jets), at least two must be identified as coming from b hadronization by the CSV algorithm at the medium working point (CSVM), with a typical b quark identification efficiency of 70% and a misidentification probability for light quarks of 1.4%; these are considered the most probable b jet candidates. If there are more than two such jets, which happens in approximately 2% of the events, the two with the highest pT are chosen. To select events compatible with the tt hypothesis, and to improve the resolution of the reconstructed quantities, a kinematic fit is performed that utilizes the constraints of the tt decay. A χ2 fit is performed, starting with the reconstructed jet four-momenta, which are varied within their experimental pT and angular resolutions, imposing a W boson mass constraint (80.4 GeV [47]) on the light-quark pairs, and requiring that the top quark and antiquark have equal mass. Out of all the possible combinations from the six input jets, the algorithm returns the one with the smallest χ2 and the resulting parton four-momenta, which are used to compute the reconstructed top quark mass (m_t^rec). The probability of the converged kinematic fit is required to be greater than 0.15. Overall, the kinematic fit requirements select approximately 5% (2%) of the tt (background) events. The distance in η-φ space between the two b quark candidates must satisfy ∆Rbb = √((∆ηbb)² + (∆φbb)²) > 2.0, which has an efficiency of roughly 75% (50%) on tt (background) events. The last two requirements are applied to select events with an unambiguous top quark pair interpretation and to suppress the QCD background that originates from gluon splitting into collinear b quarks [48].
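A toy version of such a χ2 fit, with illustrative resolutions, a single assumed jet-to-parton assignment, and invented jet kinematics (the actual analysis varies pT and angles within measured resolutions and tries all assignments), could look like this:

```python
import numpy as np
from scipy.optimize import minimize

MW = 80.4  # GeV, world-average W mass used as the constraint

def p4(pt, eta, phi, m):
    """Four-vector (E, px, py, pz) from pt, eta, phi, mass."""
    px, py, pz = pt * np.cos(phi), pt * np.sin(phi), pt * np.sinh(eta)
    return np.array([np.sqrt(px**2 + py**2 + pz**2 + m**2), px, py, pz])

def mass(v):
    return np.sqrt(max(v[0]**2 - v[1]**2 - v[2]**2 - v[3]**2, 0.0))

def chi2(scales, jets, sigma_pt=0.10, sigma_w=10.0, sigma_dm=15.0):
    # jets: (pt, eta, phi, m) for the assumed ordering (b1, q1, q2, b2, q3, q4);
    # the resolution values here are illustrative, not the measured ones.
    v = [p4(s * j[0], *j[1:]) for s, j in zip(scales, jets)]
    w1, w2 = v[1] + v[2], v[4] + v[5]
    t1, t2 = v[0] + w1, v[3] + w2
    c = np.sum(((scales - 1.0) / sigma_pt) ** 2)                      # pT pulls
    c += ((mass(w1) - MW) / sigma_w) ** 2 + ((mass(w2) - MW) / sigma_w) ** 2
    c += ((mass(t1) - mass(t2)) / sigma_dm) ** 2                      # equal-mass
    return c

jets = [(95, 0.3, 0.1, 5), (70, -0.5, 2.1, 1), (55, 0.8, 2.9, 1),
        (85, -1.0, -2.0, 5), (65, 1.2, -1.0, 1), (48, -0.2, -2.8, 1)]
res = minimize(chi2, np.ones(6), args=(jets,), method="BFGS")
# In the analysis, all jet-to-parton assignments are tried, the combination
# with the smallest chi2 is kept, and its fit probability must exceed 0.15.
```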
Signal extraction
The background to the tt signal is dominated by the QCD multijet production process, while the other backgrounds, such as the associated production of vector bosons with jets, are negligible. Due to the limited size of the Monte Carlo (MC) simulated samples, the background is determined directly from the data. A QCD-dominated event sample is selected with the trigger and offline requirements described in Section 4.3, requiring zero CSVM b tagged jets. In these events the most probable b quark candidates are determined by the kinematic fit. The resulting sample contains a negligible fraction of tt events (<1%) and is treated exactly like the signal sample. After applying the ∆Rbb > 2.0 and the fit probability requirements, the reconstructed top-like kinematic properties of events with no b jet are very similar to those with two b jets (confirmed using simulated QCD events). We use this QCD-dominated control sample to extract the shapes (templates) of the various kinematic observables. The number of tt events (signal yield) is extracted from a template fit of m_t^rec to the data using parametrized shapes for the signal and background distributions, where the signal shape is taken from the tt simulation and the QCD shape is taken from the control data sample described above. The background and signal yields are determined via a maximum likelihood fit to the m_t^rec distribution and are used to normalize the corresponding samples. Figures 1 and 2 show the fitted mass, the kinematic fit probability, and the ∆Rbb distributions. The pT distribution of the six leading jets is shown in Fig. 3. From the output of the kinematic fit one can reconstruct the two top quark candidates, whose pT are shown in Fig. 4, and the properties of the tt system (pT, rapidity y), which are shown in Fig. 5. Overall, the data sample is dominated by signal events, and the data are in agreement with the fit results. The jet pT spectra in data appear to be systematically softer than in the simulation, in agreement with the observations in Ref. [24], related to a softer measured top quark pT spectrum.
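The yield extraction amounts to a binned extended maximum likelihood fit with fixed template shapes and floating normalizations. A self-contained toy version, with a Gaussian signal and a falling background standing in for the simulation- and data-derived templates, is:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
m = np.linspace(100.0, 300.0, 40)                 # m_t^rec bin centres (GeV), toy
sig_shape = np.exp(-0.5 * ((m - 172.5) / 18.0) ** 2)
bkg_shape = np.exp(-m / 120.0)                    # falling multijet-like shape
sig_shape /= sig_shape.sum()                      # templates normalized to unit area
bkg_shape /= bkg_shape.sum()
data = rng.poisson(3400 * sig_shape + 6000 * bkg_shape)  # toy "observed" histogram

def nll(yields):
    """Extended binned negative log-likelihood for fixed-shape templates."""
    mu = np.clip(yields[0] * sig_shape + yields[1] * bkg_shape, 1e-12, None)
    return np.sum(mu - data * np.log(mu))

res = minimize(nll, x0=[1000.0, 1000.0], bounds=[(0.0, None), (0.0, None)])
n_sig, n_bkg = res.x   # fitted signal and background yields
```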
Systematic uncertainties
The measurement of the tt cross section is affected by several sources of systematic uncertainty, both experimental and theoretical, which are described below and summarized in Table 1. The quoted values refer to the inclusive measurement, with small variations observed in the bins of the differential measurement presented in Section 7.2.
• Background modeling: the QCD m_t^rec template shape derived from the data control sample is varied according to the uncertainty of the method evaluated with simulated events, which impacts the extracted signal yield moderately (4.9%).
• Trigger efficiency: the efficiency of the trigger path is taken from the simulation and corrected with an event-by-event scale factor (SFtrig), calculated from independent data samples, that depends on the fourth jet pT. In the phase space of the measurement, SFtrig is greater than 0.83 and on average 0.96. The associated uncertainty is conservatively defined as (1 − SFtrig)/2 and has a small impact (2.0%) on the cross section.

• Jet energy scale and resolution: the jet energy scale (JES) and jet energy resolution (JER) uncertainties have significant impacts on the measured cross section due to the relatively high pT requirements on the fourth and sixth of the leading jets. In the simulated events, jets are shifted (smeared) according to the pT- and η-dependent JES (JER) uncertainty, prior to the kinematic fit, and the full event interpretation is repeated. The JES (JER) has a dominant (small) effect on the cross section measurement of 7.0% (3.5%). In addition, the JES/JER uncertainties affect the signal template, with a negligible impact (≈1%) on the cross section measurement.
• b tagging: the performance of the b tagger has a dominant effect on the signal acceptance because the selected events are required to have at least two jets satisfying the CSVM requirement. An event-by-event scale factor (SFbtag) is applied to the simulation, which accounts for the discrepancies between data and simulation in the efficiency of tagging true b jets and in the misidentification rate [46]. The average value of SFbtag is 0.99. The uncertainty in SFbtag is taken into account by weighting each event with the shifted value of SFbtag, which results in a cross section uncertainty of 7.3%. This is the leading systematic uncertainty.
• Integrated luminosity: the uncertainty on the integrated luminosity is estimated to be 2.6% [49].
• Matching partons to showers: the impact of the choice of the scale that separates the description of jet production via matrix elements or parton shower in MADGRAPH is studied by changing its reference value of 20 GeV to 40 and 10 GeV, resulting in an asymmetric effect of −4.2, +2.4% on the cross section.

• Renormalization and factorization scales: in MADGRAPH, the scale Q is defined by Q² = m_t² + Σ p_T², where the sum is over all additional final-state partons in the matrix element calculations, and the measurement is repeated with this reference scale varied. The effect on the measured cross section is moderate and asymmetric (−0.5, +3.8%).
• Parton distribution functions: following the PDF4LHC prescription [50,51], the uncertainty on the cross section is estimated to be 1.5%, taking the largest deviation on the signal acceptance from all the considered PDF eigenvectors.
• Non-perturbative QCD: the impact of non-perturbative QCD effects is estimated by studying various tunes of the PYTHIA shower model that predict different underlying event (UE) activity and strength of the color reconnection (CR); the Perugia 2011, Perugia 2011 mpiHi, and Perugia 2011 Tevatron tunes described in Ref. [52] were used. The effect on the measured cross section is moderate: 4.4% for the UE and 1.4% for the CR.
• Hadronization model: the effect of the hadronization model on the signal efficiency is estimated by comparing the predictions from the MC@NLO+HERWIG and POWHEG+PYTHIA simulations, and it amounts to 2%.
Inclusive cross section
The signal yield (Ntt), extracted as described in Section 5, is used to compute the inclusive tt production cross section according to the formula σtt = Ntt/(Aε L), where Aε is the simulated signal acceptance times efficiency in the measurement phase space (≈7 × 10−4), corrected event-by-event with the trigger and b tagging efficiency scale factors, and L is the integrated luminosity. The fitted signal amounts to 3416 ± 79 events. Taking into account the systematic uncertainties discussed in Section 6, the measured cross section is σtt = 275.6 ± 6.1 (stat) ± 37.8 (syst) ± 7.2 (lumi) pb. The precision of the measured inclusive cross section is dominated by the systematic uncertainties, in particular by those related to JES and b tagging.
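As a quick numerical cross-check of the formula above, using the rounded values quoted in the text (note that Aε is only quoted approximately, so the result differs somewhat from the published 275.6 pb):

```python
# Worked example of sigma_tt = N_tt / (A*eps*L) with the rounded numbers in the text
n_tt = 3416            # fitted signal yield
acc_eff = 7.0e-4       # simulated acceptance x efficiency (approximate, ~7e-4)
lumi_pb = 18.4e3       # 18.4 fb^-1 expressed in pb^-1

sigma_tt = n_tt / (acc_eff * lumi_pb)
print(f"sigma_tt ~ {sigma_tt:.0f} pb")  # ~265 pb with these rounded inputs
```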
In order to parametrize the dependence of the result on the top quark mass assumption, the measurement was repeated using simulated signal samples with different generated top quark masses (167.5 and 175.5 GeV). The choice of the generated mass affects both the extracted signal yield and the signal efficiency. The dependence is obtained from a quadratic interpolation of the measurements at the three different top quark masses.
Differential cross sections
The size of the signal sample allows the differential measurement of the tt production cross section to be performed as a function of various observables. In order to confront the theoretical predictions, the differential cross sections are reported normalized to the inclusive cross section, resulting in a significant cancellation of systematic uncertainties.
The procedure for measuring the differential cross sections is identical to the inclusive case: in each bin of the observable used to divide the phase space, the signal is extracted from a template fit to the reconstructed top quark mass. Besides the physics interest, the choice of the observables is mainly motivated by their correlation with m_t^rec and by the ability to extract smooth signal and background templates. The variables chosen are the pT of the two reconstructed top quarks. Figure 6 shows the fitted m_t^rec distributions in bins of the pT of the leading top quark. The differential measurements are first reported for the visible fiducial volume, as a function of the reconstructed top quark pT (detector level), and then extrapolated to the parton and particle levels. The detector-level result is shown in Fig. 7 and is free of most of the systematic uncertainties affecting the inclusive measurement. The corresponding numerical values are reported in Table 2.
The parton-level results shown in Fig. 8 are obtained from the detector-level measurement, after correcting for bin-migration effects and extrapolating to the full phase space using a bin-by-bin acceptance correction. The unfolding of the bin-migration effect is performed with the D'Agostini method [53], implemented in the RooUnfold package [54], using the migration matrix derived from the simulation. The uncertainty due to the modeling of the migration matrix and the phase-space extrapolation is estimated by repeating the unfolding and acceptance-correction procedures while varying the systematic sources described in Section 6. The numerical values of the normalized differential cross sections at parton level are reported in Table 3. It should be noted that a large extrapolation factor is involved from the detector-level jets (≈7 × 10−4 of the signal) to the full parton level, which results in large theoretical uncertainties.
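The D'Agostini method is an iterative Bayesian unfolding; the following compact sketch shows one possible implementation on a toy 3-bin migration matrix (the analysis itself uses the RooUnfold implementation with a simulation-derived matrix):

```python
import numpy as np

def dagostini(migration, measured, prior, n_iter=4):
    """One D'Agostini-style iterative Bayesian unfolding.
    migration[i, j] = P(reco bin i | true bin j); column sums <= 1, so
    events missed by the detector are allowed. Returns the unfolded spectrum."""
    eff = migration.sum(axis=0)              # reconstruction efficiency per truth bin
    truth = prior.astype(float).copy()
    for _ in range(n_iter):
        folded = migration @ truth
        # Bayes: P(true j | reco i) proportional to migration[i, j] * truth[j]
        post = migration * truth / np.clip(folded[:, None], 1e-12, None)
        truth = (post.T @ measured) / np.clip(eff, 1e-12, None)
    return truth

M = np.array([[0.80, 0.15, 0.00],
              [0.15, 0.70, 0.15],
              [0.00, 0.10, 0.75]])           # toy 3-bin migration matrix
measured = np.array([520.0, 610.0, 240.0])   # toy detector-level counts
print(dagostini(M, measured, prior=np.full(3, measured.sum() / 3)))
```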
In addition to the parton level, results are reported at particle level, in Fig. 9, in a phase space similar by construction to the detector level. This is defined as follows: first, particle jets are built in simulation from all stable particles (including neutrinos) with the same jet clustering algorithm as the detector jets. Then, starting from the six leading jets, the jets associated with B hadrons via matching in η-φ (∆R < 0.25) are identified as the b jet candidates. Events are further selected if pT(4th jet) > 60 GeV and pT(6th jet) > 40 GeV and if there are at least two b jets with ∆Rbb > 2.0. For the selected events, a "pseudo top quark" is reconstructed from one b jet and the two closest non-b-tagged jets. The particle-level results are obtained in a similar way to the parton level, via unfolding and acceptance correction. The numerical values of the normalized differential cross sections at particle level are reported in Table 4.
The comparison of the measured and predicted differential top quark pT shapes reveals that the models predict a harder spectrum, both in the leading and in the subleading top quark pT, in the phase space of the measurement. This effect is also reflected in the jet pT distributions shown in Fig. 3. The POWHEG+HERWIG prediction is the closest to the data, but still shows a significant discrepancy. The parton-level results are accompanied by sizeable systematic uncertainties, dominated by the theoretical uncertainties due to the extrapolation to the full phase space. In contrast, the particle-level phase space is much closer to the visible one, and as a result the extrapolation uncertainties are smaller.
Summary
A measurement of the tt production cross section has been performed in the all-jets final state, using pp collision data at √s = 8 TeV corresponding to an integrated luminosity of 18.4 fb−1. The measured inclusive cross section is 275.6 ± 6.1 (stat) ± 37.8 (syst) ± 7.2 (lumi) pb for a top quark mass of 172.5 GeV, in agreement with the standard model prediction of 252.9 +6.4/−8.6 (scale) ± 11.7 (PDF + αS) pb, as calculated with the TOP++ (v.2.0) program [55] at next-to-next-to-leading order in perturbative QCD, including soft-gluon resummation at next-to-next-to-leading-log order [56], and assuming a top quark mass mt = 172.5 GeV. Also reported are the fiducial normalized differential cross sections as a function of the leading and subleading top quark pT. Compared to QCD predictions, the measurement shows a significantly softer top quark pT spectrum. The differential cross sections are also extrapolated to the full partonic phase space, as well as to particle level, and can be used to tune Monte Carlo models.

Table 3: Normalized differential tt cross section as a function of the pT of the leading and subleading top quarks or antiquarks. The results are presented at parton level in the full phase space.
pT bin range (GeV) | (1/σ) dσ/dpT (GeV−1) | stat (%) | exp. syst (%) | theo. syst (%)
[0, 150] | 6.72 × 10−3 | ±10.8 | −3.7, +4. |

[4] ATLAS Collaboration, "Measurement of the cross section for top-quark pair production in pp collisions at √s = 7 TeV with the ATLAS detector using final states with two high-pT leptons", JHEP 05 (2012) 059, doi:10.1007/JHEP05(2012)059, arXiv:1202.4892.
Figure 1: Distribution of the reconstructed top quark mass after the kinematic fit. The normalizations of the tt signal and the QCD multijet background are taken from the template fit to the data. The bottom panel shows the fractional difference between the data and the sum of signal and background predictions, with the shaded band representing the MC statistical uncertainty.
Figure 2: Distribution of the kinematic fit probability (left) and of the distance between the reconstructed b partons in the η-φ plane (right). The normalizations of the tt signal and the QCD multijet background are taken from the template fit to the data. The bottom panels show the fractional difference between the data and the sum of signal and background predictions, with the shaded band representing the MC statistical uncertainty.
Figure 3: Distribution of the pT of the six leading jets. The normalizations of the tt signal and the QCD multijet background are taken from the template fit to the data. The bottom panels show the fractional difference between the data and the sum of signal and background predictions, with the shaded band representing the MC statistical uncertainty.

Figure 4: Distribution of the pT of the two reconstructed top quark candidates.
Figure 5: Distribution of the pT (left) and the rapidity (right) of the reconstructed top quark pair. The normalizations of the tt signal and the QCD multijet background are taken from the template fit to the data. The bottom panels show the fractional difference between the data and the sum of signal and background predictions, with the shaded band representing the MC statistical uncertainty.
Figure 6: Distribution of the reconstructed top quark mass after the kinematic fit in bins of the leading reconstructed top quark pT. The normalizations of the tt signal and the QCD multijet background are taken from the template fit to the data. The bottom panels show the fractional difference between the data and the sum of signal and background predictions, with the shaded band representing the MC statistical uncertainty.
Figure 7: Normalized fiducial differential cross section of tt production as a function of the leading (left) and subleading (right) reconstructed top quark pT (detector level). The bottom panels show the fractional difference between various MC predictions and the data. Statistical uncertainties are shown with error bars, and systematic uncertainties with the shaded band.
Figure 8: Normalized differential cross section of tt production at parton level as a function of the leading (left) and subleading (right) top quark pT. The bottom panels show the fractional difference between various MC predictions and the data. Statistical uncertainties are shown with error bars, while theoretical (theo.) and experimental (exp.) systematic uncertainties are shown with the shaded bands.
Table 1: Fractional uncertainties in the inclusive tt production cross section.
Table 2: Normalized differential tt cross section as a function of the pT of the leading and subleading top quarks or antiquarks. The results are presented at detector level.
Table 4: Normalized differential tt cross section as a function of the pT of the leading and subleading top quarks or antiquarks. The results are presented at particle level.
Individuals have received support from the Marie-Curie programme and the European Research Council and EPLANET (European Union); the Leventis Foundation; the A. P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation à la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Council of Science and Industrial Research, India; the HOMING PLUS programme of the Foundation for Polish Science | 2018-12-29T02:12:30.481Z | 2016-03-08T00:00:00.000 | {
"year": 2016,
"sha1": "70d41b8545cd2b0b33648bc5e0e748815810ee1b",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-016-3956-5.pdf",
"oa_status": "GOLD",
"pdf_src": "ArXiv",
"pdf_hash": "7b040563b942c5bd195f4b1c0774cff398b530bd",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
250696915 | pes2o/s2orc | v3-fos-license | The Amphibian Short‐Term Assay: Evaluation of a New Ecotoxicological Method for Amphibians Using Two Organophosphate Pesticides Commonly Found in Nature—Assessment of Biochemical, Morphological, and Life‐History Traits
Abstract Amphibia is the most threatened class among vertebrates, with >40% of the species threatened with extinction. Pollution is thought to alter amphibian population dynamics. With the growing interest in behavioral ecotoxicology, the neurotoxic organophosphate pesticides are of special concern. Understanding how exposure to neurotoxics leads to behavioral alterations is of crucial importance, and mechanistic endpoints should be included in ecotoxicological methods. In the present study, we tested an 8‐day assay to evaluate the toxicity of two organophosphates, diazinon and chlorpyrifos, on Xenopus laevis, that is, on biochemical, morphological, and life‐history traits related to locomotion capacities. The method involves measuring biomarkers such as glutathione‐S‐transferase (GST) and ethoxyresorufin‐O‐deethylase (EROD; two indicators of the detoxifying system) in the 8‐day‐old larvae as well as acetylcholinesterase (AChE) activity (involved in the nervous system) in 4‐day‐old embryos and 8‐day‐old larvae. Snout‐to‐vent length and snout‐to‐tail length of 4‐day‐old embryos and 8‐day larvae were recorded as well as the corresponding growth rate. Fin and tail muscle widths were measured as well for testing changes in tail shape. Both tests showed effects of both organophosphates on AChE activity; however, no changes were observed in GST and EROD. Furthermore, exposure to chlorpyrifos demonstrated impacts on morphological and life‐history traits, presaging alteration of locomotor traits. In addition, the results suggest a lower sensitivity to chlorpyrifos of 4‐day‐old embryos compared to 8‐day‐old larvae. Tests on other organophosphates are needed to test the validity of this method for the whole organophosphate group. Environ Toxicol Chem 2022;41:2688–2699. © 2022 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals LLC on behalf of SETAC.
INTRODUCTION
For 30 years now, it has been widely accepted that amphibian populations worldwide have been facing a global decline (Barinaga, 1990; Collins, 2010; European Food Safety Authority [EFSA] Panel on Plant Protection Products and Their Residues et al., 2018). With >40% of species threatened with extinction, Amphibia is the most threatened class among the vertebrates (International Union for Conservation of Nature, 2022). Among other threats, such as habitat loss, disease, and invasive species, chemicals are thought to contribute to this global decline. In particular, amphibians are likely to be exposed to substances such as pesticides used in farming. Indeed, these chemicals, sprayed on crops, are carried by rainwater runoff directly into amphibian habitats (Chen et al., 2019). Thus, amphibians are exposed to pesticides at crucial periods of their life cycle: reproduction and metamorphosis (Sandin et al., 2018).
Pesticides have been demonstrated to have numerous effects on survival, reproduction, growth, development, and the immune and nervous systems of amphibians (EFSA Panel on Plant Protection Products and Their Residues et al., 2018; Peltzer et al., 2019). Among these compounds, those interacting with behavior are of crucial concern in amphibians. Indeed, individual behavioral traits are known to influence individual fitness and therefore population dynamics (Ballew et al., 2017; Ford et al., 2021). Recently, the necessity to develop approaches for evaluating the effects of pollutants on wildlife behavior was highlighted by an international workshop (Ford et al., 2021). More specifically, the workshop emphasized the mechanisms underlying behavioral alterations, which are barely understood but relevant to the adverse outcome pathway (AOP; Ankley et al., 2010); the AOP framework describes the effects of a pollutant from its binding to a biological molecule through to impacts at the population, community, or ecosystem level. The workshop recommended adapting current protocols to disentangle the mechanisms underlying behavioral changes.
Different protocols exist for testing amphibians. Among those, the frog embryo teratogenesis assay-Xenopus (FETAX) from ASTM International was originally developed "as an indicator of potential human developmental health hazards" (ASTM International, 1998). This protocol is a convenient test for screening molecules. Indeed, the 3Rs (Replacement, Reduction, and Refinement) do not apply to the embryonic stage in Europe (Sneddon et al., 2017), and its short duration (4 days) makes it suitable for a large substance screening assessment. Thus, it can be used as a preliminary test for screening the most toxic molecules and evaluating the relevant range of concentrations to test before a chronic behavioral assay. Nevertheless, the literature suggests that, in amphibians, 4-day-old embryos are less sensitive to contaminants than subsequent stages (Berrill et al., 1998; Edginton et al., 2004; EFSA Panel on Plant Protection Products and Their Residues et al., 2018; Ortiz-Santaliestra et al., 2017; Yu, Wages, Cai, et al., 2013). This results in an underestimation of chemical toxicity and could thus cause unnecessary pain to animals. Indeed, if the range of concentrations used for a chronic assay were determined based on FETAX protocols, excess mortality at subsequent stages could occur. One possibility to overcome these disadvantages is to develop a method that is still short but covers a period longer than 4 days.
In the present study, we used two organophosphate insecticides (OPIs) commonly found in nature to test an 8-day protocol that could eventually be extended to other OPIs. These compounds form one of the largest groups of chemicals used as insecticides and represent ecotoxicological risks in both developed and developing countries (Derbalah et al., 2019; Malhat et al., 2018). For this test, we selected the insecticides diazinon and chlorpyrifos. Both target acetylcholinesterase (AChE) activity in insects, an enzyme involved in the nervous system. Although both are prohibited within the European Union and Switzerland, their use is still authorized in several countries including Brazil (Agência Nacional de Vigilância Sanitária, 2022), an important amphibian hotspot. Moreover, numerous studies have illustrated their toxicity to amphibians and fish, representing a substantial source of information for developing this method (see Bonifacio et al., 2020; Colombo et al., 2005; EFSA Panel on Plant Protection Products and Their Residues et al., 2018). In this assay, we focused on examples of biochemical, morphological, and life-history traits because they are known to influence behavior and thus represent good mechanistic endpoints. The species used in the present study was Xenopus laevis, a common model organism for amphibians.
As biochemical traits, we selected three enzyme activities. We measured AChE activity as a potential mechanistic indicator of behavioral changes because it reveals alterations of the nervous system. As the target of OPIs, AChE activity is expected to be inhibited with increasing OPI concentrations. Glutathione-S-transferase (GST) and ethoxyresorufin-O-deethylase (EROD) activities were also measured as indicators of pesticide metabolism (Amiard-Triquet et al., 2012). Because these enzymes are involved in the detoxification process, we expect their activities to be induced with increasing OPI concentrations. In addition to biochemical biomarkers, we quantified several morphological traits during the chlorpyrifos test. Because locomotion is highly related to body shape (Van Buskirk & McCollum, 2000), morphological traits represent potential endpoints for understanding the mechanisms behind behavioral changes. Snout-to-vent length (SVL) was measured on 4-day-old embryos (Nieuwkoop et al., 2020; Nieuwkoop-Faber [NF] Stage 45; hereafter referred to as embryos) and 8-day-old larvae (NF Stage 48, hereafter referred to as larvae). Growth rate was quantified from these two metrics as a life-history trait. Because of the reallocation of energy from growth to the detoxification function, we expect lower SVL and growth rates with increasing OPI concentrations. We also measured the snout-to-tail length (STL) on embryos and larvae and the fin width and tail muscle width on embryos. These parameters were used to compute the SVL-to-STL ratio for embryos and larvae, as well as the fin width-to-muscle width ratio for embryos, to test for potential effects of exposure on body shape. During the chlorpyrifos test, AChE activity and SVL were measured on both embryos and larvae. Because the literature suggests that embryos are less sensitive to chemicals (Berrill et al., 1998; Edginton et al., 2004; EFSA Panel on Plant Protection Products and Their Residues et al., 2018; Ortiz-Santaliestra et al., 2017; Yu, Wages, Cai, et al., 2013), we expect the larval responses to be of higher magnitude than those of the embryos. The derived endpoints are illustrated in the sketch below.
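The derived endpoints listed above are simple ratios and differences; a small sketch, with invented measurement values and assuming growth rate is expressed per day between the two time points, is:

```python
# Derived morphometric endpoints from the measured traits; all values below
# are hypothetical single-individual measurements, not study data.
svl_d4, svl_d8 = 3.1, 4.9        # snout-to-vent length (mm) at days 4 and 8
stl_d8 = 11.2                    # snout-to-tail length (mm) at day 8
fin_w, muscle_w = 0.9, 0.4       # fin and tail muscle widths (mm), day-4 embryo

growth_rate = (svl_d8 - svl_d4) / 4.0    # mm/day between days 4 and 8 (assumed form)
svl_stl_ratio = svl_d8 / stl_d8          # body-shape index
fin_muscle_ratio = fin_w / muscle_w      # tail-shape index
print(growth_rate, svl_stl_ratio, fin_muscle_ratio)
```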
Test organisms and husbandry conditions
Egg acquisition. African clawed frog (X. laevis, Daudin, 1802) eggs were obtained from a wild-type breeding colony reared at the Centre Hospitalier Universitaire Vaudois, Switzerland (approval number A31113002), and the entire experimental procedure was approved by the veterinary and ethics committee (VD3521a). Adults were maintained under a 12:12-h light:dark cycle at a temperature of 21°C and fed 6 g of fish food (Neo Grower) twice a week. To stimulate egg deposition, the females were injected with human chorionic gonadotropin (15 IU 30 h prefertilization and 750 IU ∼6 h prefertilization). When the females began laying eggs, the males were euthanized by intracelomic injection of 0.1 ml of 300 mg/ml pentobarbital. Once individuals were unconscious, the spinal cord was severed. Gonads were extracted after dissection and crushed in 2 ml of F1 solution (4.56 g NaCl, 0.33 g KCl, 0.28 g CaCl2, 0.03 g MgCl2, 0.34 g NaHCO3, and 5.96 g N-2-hydroxyethylpiperazine-N′-2-ethanesulfonic acid in 1 L deionized water). Eggs were collected in a dry Petri dish by gently massaging the abdomen of the females. Once this step was completed, the eggs were sprayed with testis homogenate for fertilization. After 5 min of contact with sperm, eggs were covered with water to initiate egg membrane transformation. After the dorsoventral polarization phase, eggs were collected in a 50-ml vial and brought to the laboratory. The females and males used in each test were different individuals.
Egg dejellying. At stage NF 8, the egg mass was split into two equal batches. One of these batches was dejellied by bathing in a 2% L-cysteine solution buffered at pH 8.1, while the other batch was kept entire.
Testing
Test substances. Diazinon (Chemical Abstracts Service [CAS] no. 333-41-5, Pestanal; purity ≥98%) and chlorpyrifos (CAS no. 2921-88-2, Pestanal; purity ≥98%) were used as test compounds for the amphibian short-term assay and were supplied by Merck. Each pesticide concentration was quantified in 12-well plates (only for chlorpyrifos) and 125-ml plastic containers (diazinon and chlorpyrifos). The pesticide uptake by individuals is thought to influence the pesticide concentrations. Because we assumed that the maximum uptake occurs when the individuals reach their largest size, we quantified pesticides in samples collected during the last renewals of the embryonic (days 3-4) and larval (days 7-8) stages. The quantification method consisted of triplicate injections using liquid chromatography coupled with tandem mass spectrometry. The limits of detection were 5 and 2.3 ng/L for diazinon and chlorpyrifos, respectively. Because diazinon and chlorpyrifos concentrations decreased over 24 h, the measured arithmetic mean between t0 and t24h was used for the analyses and the figures. Concentration values are available in Supporting Information, Tables 1 and 2.
Stock solutions. The FETAX solution was used as the dilution water for the stock, test, and control solutions. The FETAX solution was composed of 625 mg NaCl, 96 mg NaHCO3, 30 mg KCl, 15 mg CaCl2, 60 mg CaSO4·2H2O, and 75 mg MgSO4 per liter of ultrapure water. As suggested by the larval amphibian growth and development assay (Organisation for Economic Co-operation and Development, 2015), the iodide concentration (I−) was 10 µg/L. The pH of the final solution ranged from 7.7 to 7.9. A 20-mg/L diazinon stock solution was prepared in the FETAX solution, while a 1-mg/L chlorpyrifos stock solution was prepared in dimethyl sulfoxide (DMSO) at a proportion of 0.002% v/v DMSO/FETAX, as suggested by Hutchinson et al. (2006). Stock solutions were stored in the dark at ambient temperature during the tests. The test solutions were prepared daily by diluting the stock solutions; in the chlorpyrifos tests, the stock solution was diluted with a 0.002% v/v DMSO/FETAX solution.
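Where useful, the daily dilution of stock into test solutions follows the standard C1·V1 = C2·V2 relation. Below is a minimal Python sketch of this computation; the 100-ml working volume and function name are illustrative assumptions, not part of the protocol above.

```python
# Minimal sketch of daily test-solution preparation by dilution
# (C1*V1 = C2*V2). The 100-ml working volume is an illustrative assumption.

def stock_volume_ml(stock_mg_per_l: float, target_mg_per_l: float,
                    final_volume_ml: float) -> float:
    """Volume of stock needed to reach the target concentration."""
    return target_mg_per_l * final_volume_ml / stock_mg_per_l

# Chlorpyrifos: 1 mg/L stock in DMSO, diluted in 0.002% v/v DMSO/FETAX.
for target in (0.0001, 0.001, 0.01, 0.1):  # nominal mg/L
    v = stock_volume_ml(stock_mg_per_l=1.0, target_mg_per_l=target,
                        final_volume_ml=100.0)
    print(f"{target} mg/L -> {v:.3f} ml stock + {100.0 - v:.3f} ml diluent")
```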
Physicochemical parameters. During the tests, dissolved oxygen, pH, and conductivity were measured each day during medium renewal on fresh medium and 24-h-old medium. The average values for these parameters were 8.58 ± 0.11 mg/L, 7.8 ± 0.07, and 1647.12 ± 17.90 µS/cm, respectively.
Test conditions. Embryos were continuously exposed to six nominal diazinon concentrations (0, 0.0001, 0.001, 0.01, 0.1, and 1 mg/L) and five nominal chlorpyrifos concentrations (0, 0.0001, 0.001, 0.01, and 0.1 mg/L) in addition to a solvent control (0.002% v/v DMSO/FETAX). Exposure lasted from 5 h postfertilization (hpf) to 8 days postfertilization. The method comprised two phases: the embryonic phase (from 5 hpf to day 4, stage NF 45) and the larval phase (from day 5 to day 8, stage NF 48). During the tests, individuals were reared in a climatic chamber at 21 ± 1°C with a 12:12-h light:dark cycle and an illumination of 680 lx. The test/control solutions were renewed daily. Locations in the chamber were assigned randomly. From day 5 to the end of the experiment, individuals were fed once per day with 60 μl of a 1:1 (m/m) mixture of spirulina:tetrafin (24 g:24 g/L; JBL Spirulina Premium and JBL Novo Bel). Both tests were performed by the same operator.
To limit adsorption of diazinon and chlorpyrifos, the 12-well plates and 125-ml plastic containers were preconditioned 24 h prior to their use, and the medium was renewed before the beginning of the exposure.
Diazinon test. During the first phase, 12 embryos from each dejellying condition were exposed in 12-well plates filled with 2 ml of test/control solution. Each concentration/dejellying condition (e.g., exposure of nondejellied eggs to 0.1 mg diazinon/L) was run in one replicate only. At day 5, 10 individuals from each concentration/dejellying condition (120 in total) were randomly selected and individually transferred to 125-ml plastic containers filled with 90 ml of test/control solution until the end of the experiment. At day 8, larvae were euthanized in a 2-mg/L tricaine mesylate solution buffered at pH 7, snap-frozen in liquid nitrogen, and stored at −80°C for later biochemical biomarker measurements.
Chlorpyrifos test. Some improvements were made to this method. First, during the embryonic phase, each concentration/dejellying condition was run in three replicates. This allowed us to retain 120 supernumerary embryos, which were euthanized and stored at −80°C for later biochemical biomarker measurements. When transferred to 125-ml plastic containers, individuals were selected pseudorandomly: four random individuals from Plate 1 and three random individuals from each of Plates 2 and 3. Finally, pictures of individuals were taken at day 4 and day 8 for morphological measurement of body length and growth rate.
Morphological traits
Pictures taken during the chlorpyrifos test were analyzed using ImageJ software. Individual SVLs were extracted from pictures of embryos and larvae. The SVL was measured as the distance between the middle of the mouth and the extremity of the intestines, as described in Figure 1. The growth rate was computed as the larval SVL divided by the embryo SVL. Fin width and muscle width were recorded on embryos only because the larval body shape prevented individuals from lying flat on their side during photography. These metrics were measured, respectively, as the distance between the vent and the opposite fin edge and as the overlapping distance between muscle edges, as described in Figure 1. Each measurement was performed three times per individual, and the individual means were used for the statistical tests. All morphological parameters were measured by the same operator.
Enzymatic activities
Levels of AChE, EROD, and GST were quantified in larvae. The low amount of biological tissue in embryos made it impossible to measure multiple biomarkers in a single individual; based on the results for 8-day-old larvae, we therefore decided to quantify only AChE in embryos. All biochemical biomarkers were measured by the same operator.
Homogenization and protein quantification. Euthanized individuals were frozen at −80°C in reinforced tubes with approximately 40 ceramic beads and one steel bead. On the day of measurement, individuals were thawed and homogenized at 7200 rpm for 60 s in phosphate-buffered saline (PBS; 100 mM, pH 7.8) supplemented with a cocktail of protease inhibitors (Thermo Scientific™ Halt™ Protease Inhibitor Cocktail).
Protein concentration. The proteins were quantified spectrophotometrically using a bicinchoninic acid (BCA) assay (BCA Assay Kit; QuantiPro™). The reaction medium consisted of 100 µl of BCA reagent and 100 µl of sample. Optical density was measured using a multiplate reader capable of measuring the absorbance at 562 nm.
AChE. Activity of AChE was measured spectrophotometrically according to the method described by Ellman et al. (1961) and modified by Xuereb et al. (2009). The reaction medium consisted of 330 µl of PBS (100 mM, pH 7.8), 20 µl of 0.425 mM 5,5′-dithiobis(2-nitrobenzoic acid) (DTNB), 10 µl of acetylthiocholine iodide (1 mM), and 20 µl of sample. Kinetics were measured using a multiplate reader capable of measuring absorbance at 405 nm. Readings were performed every 15 s for 6 min. Enzyme activity was expressed as moles per minute per milligram of protein, using a molar extinction coefficient of 1.36 × 10⁴ M⁻¹ cm⁻¹.

EROD. Activity of EROD was measured by fluorescence based on the method described by Burke and Mayer (1974). The reaction medium consisted of 150 µl of 0.162 mM 7-ethoxyresorufin with 2.5 mM nicotinamide adenine dinucleotide phosphate, and 30 µl of sample. Kinetics were measured using a multiplate reader with the following parameters: excitation wavelength, 535 nm; emission wavelength, 590 nm; and kinetic duration, 30 min.
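For reference, the conversion from kinetic readings to specific activity follows the Beer-Lambert relation: the slope of absorbance versus time is divided by the molar extinction coefficient (and path length), then scaled by the reaction volume and normalized to protein content. The following numpy sketch illustrates this for the Ellman assay above; the absorbance values, path length, and protein amount are hypothetical.

```python
import numpy as np

# Sketch: specific AChE activity from Ellman kinetics (illustrative values).
# Readings every 15 s for 6 min at 405 nm; epsilon_TNB = 1.36e4 /M/cm.
t_s = np.arange(0, 360, 15)                 # time points (s)
a405 = 0.05 + 2.0e-4 * t_s                  # absorbance (hypothetical)

slope_per_min, _ = np.polyfit(t_s / 60.0, a405, 1)  # dA/dt in AU/min
EPSILON = 1.36e4     # M^-1 cm^-1 (TNB)
PATH_CM = 1.0        # assumed path length; plate readers may differ

rate_M_per_min = slope_per_min / (EPSILON * PATH_CM)  # mol/L/min
well_volume_l = 380e-6   # 330 + 20 + 10 + 20 µl reaction volume
protein_mg = 0.05        # mg protein in the well (hypothetical)

# Specific activity in mol/min/mg protein; reported as nmol/min/mg.
activity = rate_M_per_min * well_volume_l / protein_mg
print(f"{activity * 1e9:.2f} nmol/min/mg protein")
```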
GST. Activity of GST was determined spectrophotometrically using the method described by Habig et al. (1974). The reaction medium consisted of 150 µl of PBS (100 mM, pH 6.5), 180 µl of a mixture of glutathione (200 mM) and 1-chloro-2,4-dinitrobenzene (40 mM), and 20 µl of sample. Kinetics were measured using a multiplate reader capable of measuring absorbance at 340 nm. Readings were performed every 15 s for 6 min, and the enzymatic activity was expressed as moles per minute per milligram of protein, applying a molar extinction coefficient of 9.6 mM⁻¹ cm⁻¹.
Statistical analysis
All statistical analyses were performed using R software (R Foundation for Statistical Computing, 2021).
Extreme outliers were removed from the data set using the boxplot method, an extreme outlier being defined as a data point lying more than three times the interquartile range beyond the first or third quartile.
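A minimal Python sketch of this 3 × IQR rule (the data values are illustrative):

```python
import numpy as np

def drop_extreme_outliers(x: np.ndarray) -> np.ndarray:
    """Remove points lying more than 3*IQR beyond the quartiles."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    keep = (x >= q1 - 3 * iqr) & (x <= q3 + 3 * iqr)
    return x[keep]

data = np.array([5.1, 5.3, 5.2, 5.4, 5.0, 9.9])  # 9.9 is an extreme outlier
print(drop_extreme_outliers(data))
```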
The diazinon and chlorpyrifos test methods differed in the nature of their replicates, and so did the statistical approaches. In the diazinon test, the replicates were the individual larvae within a plate; we therefore fitted simple linear models to assess the effects of the exposure and dejellying conditions, in addition to their interaction. In the chlorpyrifos test, individual larvae were considered pseudoreplicates because three replicate plates were used for each concentration/dejellying condition; we therefore fitted linear mixed-effects models to assess the effects of the exposure and dejellying conditions, in addition to their interaction. In these mixed-effects models, the plate number was used as a random effect.
For each approach (i.e., simple and mixed-effects models), we compared three models using analysis of variance: Model 1 had concentration as a fixed effect, Model 2 had concentration and dejellying condition as fixed effects, and Model 3 had concentration, dejellying condition, and their interaction as fixed effects. For every studied parameter in the diazinon and chlorpyrifos tests (e.g., AChE, growth rate), the comparisons showed no difference between Model 3 and Model 2 or between Model 2 and Model 1. This suggests neither an interaction between concentration and dejellying condition on the studied parameters nor a main effect of dejellying (a main effect being the effect of an explanatory variable on the response variable without taking another explanatory variable into account, as opposed to an interaction effect between two explanatory variables). Therefore, Model 1 was used throughout our study. Normality and homogeneity of the residuals were checked graphically. When the residuals were not reasonably normally distributed, a log transformation was applied to the data and normality was tested again; after transformation, the distributions were reasonably normal. The p values resulting from the models were adjusted to control the familywise error rate using the Bonferroni-Holm method (Holm, 1979), and the significance level of all tests was set at α = 0.05. In the pairwise comparisons, the reference group is composed of negative-control individuals only.
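For transparency, the Holm-Bonferroni step-down adjustment used here can be expressed in a few lines of Python (standard statistical packages provide equivalent routines):

```python
def holm_adjust(pvals):
    """Holm-Bonferroni step-down adjusted p values (monotone, capped at 1)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # Multiply the rank-th smallest p value by (m - rank) and
        # enforce monotonicity over the ordered sequence.
        running_max = max(running_max, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

print(holm_adjust([0.01, 0.04, 0.03, 0.20]))  # -> [0.04, 0.09, 0.09, 0.20]
```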
Negative and solvent control
Comparison between the diazinon and chlorpyrifos tests reveals different magnitudes of AChE (Figures 2A and 3B, respectively) and GST (Figures 2C and 3D, respectively) activities in the negative-control larvae. AChE activity in the diazinon test was approximately three times lower than in the chlorpyrifos test (5.88 vs. 17.95 nmol/min/mg protein). GST activity was more than two times lower in the diazinon test than in the chlorpyrifos test (0.188 vs. 0.448 nmol/min/mg protein). EROD activity was similar in both tests (2.48 and 2.06 pmol/min/mg protein, respectively; Figures 2B and 3C).
Regarding the chlorpyrifos negative controls at both stages, we observed a much lower AChE activity in embryos than in larvae (0.43 and 17.95 nmol/min/mg protein, respectively; Figure 3A,B). Such results were expected because embryos are far less active than larvae and AChE is involved in muscle contraction.
With respect to the solvent control, no impact of DMSO was observed, graphically or statistically, for any recorded parameter except the embryos' fin width-to-muscle width ratio, for which individuals in the solvent control showed a higher ratio than the negative control (Figure 4).
Diazinon test
Biochemical biomarkers. The results are reported in Table 1. The mean AChE activity under the control condition was 5.88 nmol/min/mg protein (Figure 2A). Although nonsignificant, a decrease seems to start at a concentration of 0.095 mg/L, with a loss of 23% of activity (mean activity = 4.52 nmol/min/mg protein). At a concentration of 0.896 mg/L, we observed a significant loss of 48% of AChE activity (mean activity = 3.07 nmol/min/mg protein). Regarding EROD and GST activities, no significant impact of concentration was observed in this experiment (Figure 2B,C, respectively).

FIGURE 4: Snout-to-tail to snout-to-vent length ratio measured on 4-day-old embryos (A) and 8-day-old larvae (B), and fin width-to-muscle width ratio (C) measured on 4-day-old embryos continuously exposed to six concentrations of chlorpyrifos (*p = 0.01-0.05; **p = 0.001-0.01; ***p < 0.001). p values are adjusted with the Bonferroni-Holm method. Plates of origin are set as a random effect in the statistical models. The central bar of the boxplot is the group median. Upper and lower hinges correspond to the 25th and 75th quantiles, respectively. Upper and lower whiskers extend from the closest hinge to the largest and smallest values at most 1.5 times the interquartile range, respectively. Violins represent the smoothed histogram of the data distribution. Extreme outliers are not displayed in the graphs. STL = snout-to-tail length; SVL = snout-to-vent length; DMSO = dimethyl sulfoxide; CPF = chlorpyrifos; FW = fin width; MW = muscle width.
Chlorpyrifos test
Biochemical biomarkers. The results are reported in Table 2. Although no changes were observed in AChE activity in embryos (Figure 3A), the results on larvae demonstrate significant decreases of 35.4% (mean activity = 11.59 nmol/min/mg protein) at a concentration of 0.0035 mg/L and of 79.5% (mean activity = 3.67 nmol/min/mg protein) at a concentration of 0.0365 mg/L (Figure 3B). The mean AChE activity in the control was 17.95 nmol/min/mg protein. Regarding EROD and GST activities, no statistically significant impact of concentration was observed in this experiment (Figure 3C,D, respectively).

Morphological traits. The results are reported in Tables 3 and 4. No significant changes were observed in embryos' SVL (Figure 5A), while in larvae, SVL significantly decreased from 5.53 mm in the control condition to 5 mm at the highest concentration (Figure 5B). The consequence is a significantly decreased 96-h growth rate at the highest concentration (Figure 5C). The embryos' STL-to-SVL ratio demonstrated no changes (Figure 4A), while the larval STL-to-SVL ratio showed a significant increase at the highest concentration (Figure 4B), with respective values of 1.08 and 2.01. Lastly, the embryos' fin width-to-muscle width ratio significantly increased at the highest concentration (Figure 4C), from 2.11 in the control condition to 2.22. Surprisingly, a significant increase of the fin width-to-muscle width ratio occurred in the solvent control, while no changes were detected at the lower concentrations.
DISCUSSION
In the present study, we tested an 8-day protocol with diazinon and chlorpyrifos to evaluate their toxicity to amphibians' biochemical, morphological, and life-history traits at early stages, traits whose alteration is known to impair behavioral endpoints.
Impact of dejellying
No impact of dejellying was observed on any of the recorded parameters. Because the amphibian jelly coat has been shown to be involved in embryonic protection against pollution (Bosisio et al., 2009), the dejellying conditions were expected to differ in exposure toxicity. Our results suggest that the jelly coat does not affect diazinon or chlorpyrifos toxicity.
Solvent concentration
The use of a 0.002% v/v DMSO/FETAX solution did not affect the recorded parameters except for the fin width-to-muscle width ratio, for which the solvent control had a significantly higher ratio than the negative control, even though no changes appeared at concentrations <0.0365 mg chlorpyrifos/L. Nevertheless, the marginal significance of this test (p = 0.047) leaves some uncertainty, and the toxicity of solvents used in ecotoxicity testing should be investigated further.
Biochemical traits
As expected, the present study demonstrates inhibition of larval AChE both by diazinon at a concentration of 1 mg/L and by chlorpyrifos at concentrations of 0.01 and 0.1 mg/L. Inhibition of AChE activity has been documented for diazinon and chlorpyrifos in X. laevis as well as in other amphibian species (Colombo et al., 2005; Tongo et al., 2012). Although GST and EROD activities were expected to increase with the level of exposure to diazinon and chlorpyrifos, no effect was observed in larvae. To our knowledge, few studies have examined the impact of diazinon and chlorpyrifos on larval GST and EROD activities, most research on organophosphates having focused on AChE inhibition. However, Güngördü et al. (2013) suggested different patterns of correlation between AChE and GST activities. One hypothesis for the origin of such different patterns is the differential impact of organophosphate metabolites on the detoxification process. Furthermore, EROD is a biomarker used for evaluating the response of the cytochrome P450 1A1 subfamily; another hypothesis is that other cytochrome P450 subfamilies may be involved in detoxifying OPIs in amphibians.
Morphological traits
In the chlorpyrifos test, larvae exposed to the highest concentration had a smaller body size. Similar results were reported by Richards and Kendall (2003), who showed a decrease in body length of what they defined as "metamorphs" (exposed for 96 h from stage NF 46) compared to "premetamorphs" (exposed for 96 h from stage NF 14). However, in their study the effects occurred at a chlorpyrifos concentration of 0.0001 mg/L, while they occurred at 0.0365 mg/L in the present study. This difference might be explained by the higher number of individuals per exposure condition in their work (a mean of 81 individuals per condition), which increased test power. Nevertheless, the lack of information in their study regarding the actual measured concentrations and the method for adjusting the p values does not allow us to evaluate the relevance of this assumption. In addition, our method demonstrated an impact of exposure on growth rate. Chlorpyrifos has previously been shown to disrupt the endocrine system (Ur Rahman et al., 2021); nevertheless, to our knowledge, only a few studies have investigated the impact of chlorpyrifos on amphibian growth rate (see Wijesinghe et al., 2011). Such changes are likely to have an impact on size at metamorphosis, a life-history trait known to influence behavior, fitness, and ultimately population dynamics (Bredeweg et al., 2019). Because of the decrease in SVL, the STL-to-SVL ratio was also affected, with larvae exposed to the highest concentration of chlorpyrifos demonstrating a higher ratio. No previous research has demonstrated such effects for OPIs, but Yu, Wages, Cobb, and Maul (2013) documented a lower STL/SVL ratio in X. laevis embryos exposed to environmentally relevant concentrations of chlorothalonil, an organochlorine fungicide. The embryo tail shape was also impacted at the highest chlorpyrifos concentration, suggesting possible alteration of locomotor capacities.

TABLE 4: Outputs of linear mixed-effects models testing effects of chlorpyrifos on the 4-day-old embryo and 8-day-old larval snout-to-tail length to snout-to-vent length (SVL) ratio, and on the 4-day-old embryo fin width-to-muscle width ratio.
Embryo sensitivity
Although no impact of chlorpyrifos on embryo AChE activity was demonstrated in the present study, Colombo et al. (2005) showed statistically significant changes in X. laevis embryo AChE activity from day 3 at a nominal concentration of 0.1 mg chlorpyrifos/L. Important differences between their measurement protocol and ours can explain this divergence: while those authors incubated pooled embryos (10 individuals per concentration) with DTNB for 10 min before measuring the absorbance, we incubated single-individual homogenates for 4 min. Our results are supported by Richards and Kendall (2002), who demonstrated a higher sensitivity of "metamorphs" compared to "premetamorphs," with the lowest nominal concentrations affecting whole cholinesterase activity being 0.01 and 0.1 mg chlorpyrifos/L, respectively. Regarding morphological traits, Richards and Kendall (2003) demonstrated significant decreases in "premetamorph" body length at nominal concentrations of 0.001 and 1 mg/L, while no changes were observed in the present study at similar concentrations. The higher sample size (an average of 83 individuals) in Richards and Kendall's study could explain such a difference; nevertheless, as mentioned above, the measured concentrations and the method for adjusting the p values are not reported in their article. Contrary to embryos, the use of larvae allows the measurement of multiple biochemical biomarkers on a single individual. These findings highlight the interest of extending the exposure duration to 8 days. To summarize, our results suggest that larvae are more sensitive to chlorpyrifos than embryos regarding AChE activity as well as morphological and life-history traits. More research is needed to extend this suggestion to other OPIs.
Environmental considerations
The concentrations at which the changes mentioned above occur are substantially higher than the concentrations usually measured in the environment. Nevertheless, some articles report chlorpyrifos environmental concentrations of up to 5.49 × 10⁻³ mg/L in Mexico (Ávila-Díaz et al., 2021) and 11.2 × 10⁻³ mg/L in Pakistan (Arain et al., 2018), while AChE inhibition occurred at 3.5 × 10⁻³ mg/L in the present study. This suggests a potential short-term risk for amphibian larvae in these regions. Moreover, diazinon and chlorpyrifos are usually applied together, and the literature suggests a synergistic effect of a mixture of these two pesticides on fish (Laetz et al., 2009).
Limits of the method
An issue highlighted in the present study is the different magnitudes of biochemical biomarker values between different runs of the method, suggesting differences in enzymatic activity between offspring from different breeders. Poor repeatability of biochemical biomarkers is a known issue in ecotoxicology, and factors such as sex ratio and genome are known to modulate responses between broods in fish, for instance (Wang, 2018). A commonly recommended means of diminishing this variability is to increase the sample size.
CONCLUSION
The method proposed in the present study is promising and demonstrates the capacity to evaluate the effects of two organophosphate pesticides on biochemical and morphological traits, both considered to provide possible insight into the mechanisms of behavioral alteration. To our knowledge, this is the first protocol proposing a set of different mechanistic endpoints related to behavior, a crucial component of amphibian ecology. However, this approach should be validated with other OPIs to test its suitability for this pesticide group. Moreover, although X. laevis seems to be very tolerant to pollutants (see Adams et al., 2021; Yu, Wages, Cai, et al., 2013), it is often used as a surrogate species for amphibians; its capacity to represent other species has barely been investigated, and transferring these results to other species without more comparative studies is clearly difficult. Nevertheless, transposing the present method to other amphibians would allow for comparative studies and should be tested. We believe that it can easily be optimized for many other amphibian species because most of them have an aquatic development (∼85% according to Nunes-de-Almeida et al. [2021]). Moreover, AChE, EROD, and GST have already been measured in other amphibian species (Venturino & de D'Angelo, 2005; Venturino et al., 2003), and the morphological traits measured in the present method are commonly used in amphibian ecology (see Van Buskirk & McCollum, 2000). Lastly, because this method demonstrated effects of two OPIs on both the nervous system and morphology, we think that behavioral tests should be implemented with the aim of studying how the biochemical and morphological alterations are linked to behavioral changes.
Supporting Information-The Supporting information is available on the Wiley Online Library at https://doi.org/10.1002/ etc.5436. | 2022-07-21T06:16:22.511Z | 2022-07-20T00:00:00.000 | {
"year": 2022,
"sha1": "795558f120bf325070ede88a2bdc9440b2a93684",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Wiley",
"pdf_hash": "541397a0029d18c3ae3bf6c7146f68378887978f",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
195798909 | pes2o/s2orc | v3-fos-license | Location Privacy in Cognitive Radios with Multi-Server Private Information Retrieval
Spectrum database-based cognitive radio networks (CRNs) have become the de facto approach for enabling unlicensed secondary users (SUs) to identify spectrum vacancies in channels owned by licensed primary users (PUs). Despite its merits, the use of spectrum databases incurs privacy concerns for both SUs and PUs. Single-server private information retrieval (PIR) has been used as the main tool to address this problem. However, such techniques incur extremely large communication and computation overheads while offering only computational privacy; moreover, some of these PIR protocols have been broken. In this paper, we show that it is possible to achieve high efficiency and (information-theoretic) privacy for both PUs and SUs in database-driven CRNs with multi-server PIR. Our key observation is that, by design, database-driven CRNs comprise multiple databases that are required, by the Federal Communications Commission, to synchronize their records. To the best of our knowledge, we are the first to exploit this observation to harness multi-server PIR technology and guarantee optimal privacy for both SUs and PUs, thanks to the unique properties of database-driven CRNs. We show, analytically and empirically with deployments on actual cloud systems, that multi-server PIR is an ideal tool for providing efficient location privacy in database-driven CRNs.
I. INTRODUCTION
The rapid growth of connected wireless devices has dramatically increased the demand for wireless spectrum and led to a serious shortage in spectrum resources. Cognitive radio networks (CRNs) [1] have emerged as a promising technology for solving this shortage problem by enabling dynamic spectrum access (DSA), which improves spectrum utilization efficiency by allowing unlicensed/secondary users (SUs) to exploit unused spectrum bands (aka spectrum holes or white spaces) of licensed/primary users (PUs).
Currently, two approaches are being adopted to identify these white spaces: spectrum sensing and geolocation spectrum databases. In the spectrum sensing-based approach, SUs need to sense the PU channel to determine whether the channel is available for opportunistic use. The spectrum database-based approach, on the other hand, waives the sensing requirement and instead enables SUs to query a database (DB) to learn about spectrum opportunities in their vicinity. This approach, already promoted and adopted by the Federal Communications Commission (FCC), was introduced as a way to overcome the technical hurdles faced by spectrum sensing-based approaches, thereby enhancing the efficiency of spectrum utilization, improving the accuracy of available spectrum identification, and reducing the complexity of terminal devices [2].
Moreover, it pushes the responsibility and complexity of complying with spectrum policies to the DB and eases the adoption of policy changes by limiting updates to just a handful of databases, as opposed to updating large numbers of devices [3].
FCC has designated nine entities (e.g., Google [4], iconectiv [5], and Microsoft [6]) as TV bands device database administrators, which are required to follow the guidelines provided by the PAWS (Protocol to Access White Space) standard [3]. PAWS sets guidelines and operational requirements for both the spectrum database and the SUs querying it: SUs need to be equipped with geolocation capabilities; SUs must query the DB with their specific location to check channel availability before starting their transmissions; the DB must register SUs and manage their access to the spectrum; and the DB must respond to SUs' queries with the list of available channels in their vicinity along with the appropriate transmission parameters. As specified by the PAWS standard, SUs may be served by several spectrum databases and are required to register with one or more of these databases prior to querying them for spectrum availability. The spectrum databases are reachable via the Internet, and SUs querying these databases are expected to have some form of Internet connectivity [7].
FCC has established a new service in the 3.5 GHz band, known as the Citizens Broadband Radio Service (CBRS), in which the spectrum is also managed through a central database-driven CRN, aka spectrum access system (SAS), to enable spectrum sharing between military and federal incumbents and SUs. A separate entity with Environmental Sensing Capability (ESC) is responsible for populating DBs with data regarding PUs that do not wish to reveal their operational information, such as their location or transmission characteristics. A similar concept, named licensed shared access (LSA), is also being developed in Europe for the 2.3-2.4 GHz band to enable SUs to opportunistically access spectrum resources in this band owned by incumbent military aircraft services and police wireless communications. A major difference compared to SAS is that in LSA, PUs are responsible for populating DBs by providing their a priori information; i.e., their activities and, therefore, the spectrum availability information are known upfront [8].
A. Location Privacy Issues in Database-Driven CRNs
Despite their benefits, database-driven CRNs suffer from serious security and privacy threats. Since they can be seen as a variant of location-based services (LBS), the disclosure of SUs' location information is the main threat SUs face when obtaining spectrum availability from the DB. Fine-grained location, when combined with publicly available information, can easily reveal other personal information about an individual, including behavior, health condition, personal habits, or even beliefs. For instance, an adversary can learn some information about the health condition of a user by observing that the user regularly visits a hospital. The frequency and duration of these visits can even reveal the seriousness of the user's illness, and even its type if the location corresponds to that of a specialty clinic. Matters get worse when SUs are mobile. As per the PAWS requirements, SUs need to query DBs whenever they change their location by at least 100 meters. This makes SUs constantly share their location as they move, which could be exploited by a malicious service provider for tracking purposes.
The location privacy of SUs is not the only privacy concern that database-driven CRNs suffer from. Indeed, the location privacy of PUs may also be critical in CRN systems such as SAS, in the 3.5 GHz CBRS band, and LSA, in the 2.3-2.4 GHz band, where PUs are not commercial but rather military and governmental entities. To achieve efficient spectrum sharing without interference to military and federal incumbents, these systems require PUs, or entities with sensing capabilities such as the ESC, to report PUs' operational data (including their location, frequencies, time of use, etc.) for inclusion in the spectrum databases, which may present serious privacy risks to these PUs.
Aware of such potential privacy threats, both SUs and PUs may refuse to share their sensitive information with DBs, which may present a serious barrier to the adoption of database-based CRNs and to the public acceptance and promotion of the dynamic spectrum sharing paradigm. Therefore, there is a critical need for techniques that protect the location privacy of both PUs and SUs while allowing the latter to harness the benefits of the CRN paradigm, without disrupting the functionalities these systems are designed for, so as to promote dynamic spectrum sharing.
B. Research Gap and Objectives
Despite the importance of the location privacy issue in CRNs, it has only recently started to gain interest from the research community [9]. Some works address this issue in the context of collaborative spectrum sensing [10]-[14]; others address it in the context of dynamic spectrum auctions [15]. Protecting SUs' location privacy in database-driven CRNs is a more challenging task, precisely because SUs are required, by protocol design, to provide their physical location to the DB to learn about spectrum opportunities in their vicinity. The heterogeneity of wireless devices and the versatility of services relying on CRN technology [16] also present challenges in designing privacy-preserving mechanisms for users in CRNs. In fact, privacy-preserving solutions need to embrace the different resource constraints of each SU device and the various requirements of each service in terms of data rates and delay sensitivities. This makes it hard to leverage general-purpose public-key encryption-based techniques due to their high computation and communication overheads, especially on resource-constrained devices. It is therefore crucial to design cost-effective protocols that offer strong privacy guarantees to users and adapt to different system requirements regardless of the constraints of the users.
The existing location privacy preservation techniques for database-driven CRNs (e.g., [2], [17]-[21]) generally rely on three main lines of privacy-preserving technologies: (i) k-anonymity [22], (ii) differential privacy [23], and (iii) single-server private information retrieval (PIR) [24]. However, direct adaptations of k-anonymity-based techniques have been shown to yield either insecure or extremely costly results [25]. The solutions adapting differential privacy (e.g., [20]) not only incur a non-negligible overhead but also introduce noise into the queries, and therefore may negatively impact the accuracy of spectrum availability information.
Among these alternatives, single-server PIR seems to be the most popular. PIR technology is a suitable choice for database-driven CRNs, as it permits privacy-preserving queries on a public database, and therefore can enable an SU to retrieve spectrum availability information from the database without leaking its location information. However, single-server PIR protocols rely on highly costly partially homomorphic encryption schemes, which need to be executed over the entire database for each query. Indeed, as we also demonstrate with our experiments in Section IV, the execution of a single query even with some of the most efficient single-server PIR schemes [26] takes approximately 20 seconds with an 80 Mbps/30 Mbps bandwidth on a moderate-size database (e.g., 10⁶ entries). An end-to-end delay on the order of 20 seconds may be undesirable for the spectrum sensing needs of SUs in real-life applications. Also, some of the state-of-the-art efficient computational PIR schemes [27] used in the context of CRNs have been shown to be broken [26]. Thus, there is a significant need for practical location privacy preservation approaches for database-driven CRNs that can meet the efficiency and functionality requirements of SUs.
C. Our Observation and Contribution
The objective of this paper is to develop efficient techniques for database-driven CRNs that preserve the location privacy of SUs during their acquisition of spectrum availability information. We also aim to protect the operational privacy of PUs in systems that require incumbents to provide spectrum availability information to DBs. Specifically, we aim for the following design objectives: (i) (location privacy of SUs) preserve the location privacy of SUs, whether fixed or mobile, while allowing them to receive spectrum availability information; (ii) (efficiency and practicality) incur minimum computation, communication, and storage overhead: the cryptographic delay must be minimal to permit fast spectrum availability decisions by SUs, and storage/processing costs must be low to enable practical deployments; (iii) (fault-tolerance and robustness) mitigate the effects of system failures or misbehaving entities (e.g., colluding databases); (iv) (location privacy of PUs) protect the location information of PUs while still providing spectrum availability information to DBs. It is very challenging to meet all of these seemingly conflicting design goals simultaneously.
The main idea behind our proposed approaches is to harness special properties and characteristics of database-driven CRN systems to employ private query techniques that can overcome the significant performance, robustness, and privacy limitations of the state of the art. Specifically, our proposed approach is based on the following observation: FCC requires that all of its certified databases synchronize their records obtained through registration procedures with one another [28], [29] and be consistent across databases by providing exactly the same spectrum availability information, in any region, in response to SUs' queries [30]. That is, the same copy of the spectrum database is available and accessible to SUs via multiple (distinct) spectrum database administrators/providers. Is it possible to exploit this observation to achieve efficient location privacy preservation techniques for database-driven CRNs?
In practice, as stated in the PAWS standard [3], SUs have the option to register with multiple spectrum databases belonging to multiple service providers. Currently, many companies (e.g., Google [4], iconectiv [5], etc.) have obtained authorization from FCC to operate geolocation spectrum databases upon successfully complying with regulatory requirements, and several other companies are still underway to acquire this authorization [31]. Thus, it is natural and realistic to take this fact into consideration when designing privacy-preserving protocols for database-driven CRNs. Based on this observation, our main contribution is as follows: to the best of our knowledge, we are the first to exploit the fact that multiple copies of the spectrum DB are available by nature in database-driven CRNs, making it possible to harness multi-server PIR techniques [24], [33] that offer information-theoretic privacy with substantial efficiency advantages over single-server PIR. This is achieved by relying on Shamir secret sharing-based techniques to divide the content of SUs' queries, the spectrum availability information, or both among the different DBs, thereby preventing these DBs from inferring SUs' locations from their queries or from learning PUs' sensitive operational data from the spectrum availability information.
We show, analytically and experimentally with deployments on cloud systems, that our adaptation of multi-server PIR techniques significantly outperforms the state-of-the-art location privacy preservation methods, as demonstrated in Table I and detailed in Section IV. Moreover, our adaptations achieve information-theoretic privacy, while existing alternatives offer only computational privacy. This feature provides assurance even against post-quantum adversaries [34] and avoids recent attacks on computational PIR [26].
Notice that multi-server PIR techniques require the availability of multiple (synchronized) replicas of the database. Therefore, despite their high efficiency and security, they have received little attention from practitioners. For instance, in traditional data outsourcing settings (e.g., private cloud storage), the application requires a client to outsource only a single copy of its database; the distribution and maintenance of multiple copies across different service providers brings additional architectural and deployment costs, which might not be economically attractive for the client.
In this paper, we showcase one of the first natural use cases of multi-server PIR, in which multiple copies of synchronized databases are already available by the original design of the application (i.e., spectrum availability information in multi-database CRNs), and therefore multi-server PIR does not introduce any extra overhead on top of the application. Exploiting this synergy between multi-database CRNs and multi-server PIR permits us to provide information-theoretic location privacy for SUs with significantly better efficiency compared to existing single-server PIR approaches.
Desirable Properties: We outline the desirable properties of our approaches below.
• Computational efficiency: The adapted approaches are much more efficient than existing location privacy-preserving schemes. For instance, as shown in Table I, LP-Chor and LP-Goldberg are more than three orders of magnitude faster than the schemes proposed by Troja et al. [18], [19], and 10 times faster than XPIR [26] and PriSpectrum [2].
• Information-theoretic privacy guarantees: They can achieve information-theoretic privacy, which is the optimal privacy level that can be reached, as opposed to the computational privacy guarantees offered by existing approaches. In fact, some of these approaches are prone to recent attacks on computational-PIR protocols [26] and are not secure against post-quantum adversaries [34].
• Low communication overhead: Our approaches incur a reasonable communication overhead that is a middle ground between the fastest computational PIR [26] and the most communication-efficient computational PIR [35].
• Fault-tolerance and robustness: Our proposed approaches are resilient to the issues associated with multi-server architectures: failures, byzantine behavior, and collusion. Even though collusion of all of the service providers is unlikely to happen due to the competing nature of these companies and due to regulatory enforcement from bodies such as FCC to protect user data, we have nevertheless considered collusion in our system and security model. All proposed approaches can handle collusion of multiple DBs up to a certain limit that differs for each approach. In addition, some of the proposed approaches can also handle faulty and byzantine DBs. Moreover, simply hacking DBs, when the proposed approaches are in place, will not be sufficient to learn users' information, since some of these protocols offer hybrid privacy protection by combining computational and information-theoretic PIR, enabling them to offer computational privacy even when all of the DBs are compromised.
• Experimental evaluation on actual cloud platforms: We deploy our proposed approaches on a real cloud platform, GENI [36], to show their feasibility. In our experiment, we create multiple geographically distributed VMs, each playing the role of a DB; a laptop plays the role of an SU that queries the DBs (i.e., the VMs). Our experiments confirm the superior computational advantages of adopting multi-server PIR over the existing alternatives.
D. Differences Compared to the Preliminary Version
The main differences between this paper and its preliminary versions [37], [38] are as follows: (i) we further consider the location privacy of mobile SUs and offer a way to amortize the cost incurred by mobility; (ii) we also leverage multi-server PIR to address the location privacy of PUs in database-driven CRN systems that require PUs to provide spectrum availability information to DBs; (iii) we discuss a way to reduce the cost of LP-Chor by partitioning the spectrum database instead of simply replicating it, using the RAID-PIR protocol [39], and we discuss the privacy-performance tradeoff of this approach; (iv) we provide a more detailed performance evaluation that takes into account the latest advances in PIR technology, namely SealPIR [32], which relies on fully homomorphic encryption.
II. PRELIMINARIES
A. Notation and Building Blocks
We summarize our notation in Table II. Our adaptations of multi-server PIR rely on the following building blocks.

Private Information Retrieval (PIR): PIR allows a user to retrieve a data item of its choice from a database, while preventing the server owning the database from gaining information on the identity of the item being retrieved [40].
One trivial solution to this problem is to make the server send an entire copy of the database to the querying user.
Obviously, this is a very inefficient solution to the PIR problem, as its communication complexity may be prohibitively large. However, it is the only single-server protocol that can provide information-theoretic (i.e., perfect) privacy for the user's query. There are two main classes of PIR protocols according to their privacy level: information-theoretic PIR (itPIR) and computational PIR (cPIR).
• Information-theoretic or multi-server PIR: This class guarantees information-theoretic privacy to the user, i.e., privacy against computationally unbounded servers. It can be achieved efficiently only if the database is replicated at k ≥ 2 non-communicating servers [24], [33]. The main idea behind these protocols is to decompose each user query into several sub-queries so that no individual server learns anything about the user's intent.
• Computational or single-server PIR: This class guarantees privacy against computationally bounded server(s). In other words, a server cannot learn anything about the identity of the item retrieved by the user unless it solves a certain computationally hard problem (e.g., prime factorization of large numbers), as is common in modern cryptography. These protocols thus offer weaker privacy than their itPIR counterparts [27], [41].

Shamir Secret Sharing: This concept, introduced by Shamir et al. [42], allows a secret holder to divide its secret S into ℓ shares S_1, ..., S_ℓ and distribute these shares to ℓ parties. In (t, ℓ)-Shamir secret sharing, where t < ℓ, if t or fewer parties combine their shares, they learn no information about S; if more than t come together, they can easily recover S. Given a secret S chosen arbitrarily from a finite field F, the (t, ℓ)-Shamir secret sharing scheme works as follows: the secret holder chooses ℓ arbitrary non-zero distinct elements α_1, ..., α_ℓ ∈ F, then selects t elements σ_1, ..., σ_t ∈ F uniformly at random, and finally constructs the polynomial f(x) = σ_0 + σ_1 x + σ_2 x² + ... + σ_t x^t, where σ_0 = S. The ℓ shares S_1, ..., S_ℓ given to the parties are (α_1, f(α_1)), ..., (α_ℓ, f(α_ℓ)). Any t + 1 or more parties can recover the polynomial f using Lagrange interpolation and thus reconstruct the secret S = f(0); t or fewer parties learn nothing about S.
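A minimal Python sketch of (t, ℓ)-Shamir sharing and reconstruction over a small prime field (the prime and parameter values are chosen for illustration; Goldberg's protocol works over GF(2^w)):

```python
import random

P = 2**13 - 1  # 8191, a small prime field for illustration

def share(secret: int, t: int, ell: int):
    """Split secret into ell shares; any t+1 shares reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):  # evaluate the degree-t polynomial at x
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, ell + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over GF(P)."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=1234, t=2, ell=5)   # 2-private, 5 servers
print(reconstruct(shares[:3]))            # any 3 shares recover 1234
```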
B. System Model and Security Definitions
We consider a database-driven CRN that contains ℓ DBs, where ℓ ≥ 2, and an SU registered with these DBs to learn spectrum availability information in its vicinity. We assume that these DBs share the same content and that they are synchronized, as mandated by the PAWS standard [3]. We also assume that DBs may collude in order to infer the SU's location. In the following, we present our security definitions.

Definition 1. Byzantine DB: A faulty DB that runs but produces incorrect answers, possibly chosen maliciously or computed in error. This might be due to a corrupted or obsolete copy of the database caused by a synchronization problem with the other DBs.
Definition 2. t-private PIR: A coalition of up to t DBs cannot learn any information about the content of the SU's query, and thus about the record it retrieves.

Definition 3. ϑ-Byzantine-robust PIR: The SU can reconstruct the correct output of its query, and identify the misbehaving DBs, even if ϑ of the responding DBs are byzantine.

Definition 5. Robust PIR: It can deal with DBs that do not respond to the SU's queries and allows the SU to reconstruct the correct output of the queries in this situation.

Definition 6. τ-independent PIR: The content of the database itself is information-theoretically protected from a coalition of up to τ DBs, where 0 ≤ τ < k − t.
III. PROPOSED APPROACHES
In the proposed approaches, we tailor multi-server PIR to the context of multi-DB CRNs. We start by illustrating the structure of the spectrum database that we consider; we then give several approaches, each adapting a multi-server PIR protocol with different security and performance properties and use cases. We model the content of each DB as an r × s matrix D of total size n bits, where s is the number of words of size w in each record/block of the database and r is the number of records in the database, i.e., r = n/b, where b = s × w is the block size in bits. The k-th row of D is the k-th record of the database.
We further assume that each row of the database corresponds to a unique combination of the tuple (l_x, l_y, C, ts), where l_x and l_y represent a location's latitude and longitude, respectively, C is a channel number, and ts is a time-stamp. We also assume that SUs can map their location information to the index β of the corresponding record of interest in the database using some inverted index technique agreed upon with the DBs. An SU that wishes to retrieve record D_β without any privacy consideration can simply send to the DB a row vector e_β consisting of all zeros except at position β, where it has the value 1. Upon receiving e_β, the DB multiplies it with D and sends the record e_β · D = D_β back to the SU. This trivial approach makes it easy for DBs to learn the SU's location from the vector e_β, as D is indexed based on location.
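To make this concrete, the following sketch shows one possible (hypothetical) grid-quantization inverted index and the trivial, non-private lookup; the grid origin, step, and dimensions are illustrative assumptions, not the PAWS encoding.

```python
import numpy as np

# Illustrative inverted index: quantize (lat, lon, channel, timeslot) -> row.
LAT0, LON0, STEP = 40.0, -75.0, 0.001     # hypothetical grid origin/step
N_LON, N_CH, N_TS = 1000, 50, 24          # hypothetical grid dimensions

def inv_index(lat, lon, channel, timeslot):
    i, j = int((lat - LAT0) / STEP), int((lon - LON0) / STEP)
    return ((i * N_LON + j) * N_CH + channel) * N_TS + timeslot

r, s = 2048, 16
D = np.random.randint(0, 2, size=(r, s), dtype=np.uint8)  # toy database

beta = inv_index(40.123, -74.5, channel=7, timeslot=3) % r  # toy wrap-around
e = np.zeros(r, dtype=np.uint8)
e[beta] = 1
record = e @ D            # equals D[beta]; the position of the 1 leaks beta
assert (record == D[beta]).all()
```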
In the following, we present two approaches that hide the content of e_β from the DBs and thus preserve the SU's location privacy. The approaches present a tradeoff between efficiency and additional security features.
A. Location Privacy with Chor (LP-Chor)
Our first approach, termed LP-Chor, harnesses the simple and efficient itPIR protocol proposed by Chor et al. [24]. We describe the different steps of LP-Chor in Algorithm 1 and highlight these steps in Fig. 1. Elements of D in this scheme belong to GF(2), i.e., w = 1 bit and b = s.
In LP-Chor, the SU starts by invoking the inverted index subroutine InvIndex(l_x, l_y, C, ts), which takes as input the coordinates of the user, its channel of interest, and a time-stamp, and returns a value β. This value corresponds to the index of the record D_β of D that the SU is interested in. The SU then constructs e_β, a standard basis vector in Z^r having 0 everywhere except at position β, which has the value 1, as discussed previously. The SU also picks ℓ − 1 r-bit binary strings ρ_1, ..., ρ_{ℓ−1} uniformly at random from GF(2)^r and computes ρ_ℓ = ρ_1 ⊕ ... ⊕ ρ_{ℓ−1} ⊕ e_β. Finally, the SU sends ρ_i to each DB_i. Each DB_i computes the response R_i = ρ_i · D, which can also be seen as the XOR of those blocks D_j in D for which the j-th bit of ρ_i is 1, then sends R_i back to the SU. The SU receives the R_i's from the DB_i's, 1 ≤ i ≤ ℓ, and computes R_1 ⊕ ... ⊕ R_ℓ = (ρ_1 ⊕ ... ⊕ ρ_ℓ) · D = e_β · D, which is the β-th block of the database that the SU is interested in, from which it can retrieve the spectrum availability information.
LP-Chor is very efficient thanks to its reliance on simple XOR operations only, as we discuss in Section IV. It is also (ℓ−1)-private, by Definition 2, as a collusion of up to ℓ−1 DBs cannot learn e_β, and consequently the SU's location; only if all ℓ DBs collude can they learn e_β, by simply XORing their vectors ρ_1, ..., ρ_ℓ. However, this approach suffers from two main drawbacks. First, it is not robust: if even one DB fails to respond, the SU will not be able to recover D_β. Second, it is not byzantine-robust: if one or more DBs return a wrong response, the SU will reconstruct a wrong block and will not be able to recognize which DB misbehaved so as not to rely on it for future queries. In Section III-B we discuss a second approach that improves on these two aspects, but with some additional overhead.
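A minimal Python sketch of the LP-Chor query flow (toy database sizes; numpy bitwise XOR stands in for GF(2) arithmetic):

```python
import numpy as np

rng = np.random.default_rng(1)
r, b, ell = 1024, 64, 3                  # records, block bits, servers
D = rng.integers(0, 2, size=(r, b), dtype=np.uint8)   # replicated database

def su_queries(beta):
    """Split e_beta into ell random binary vectors XOR-ing to e_beta."""
    e = np.zeros(r, dtype=np.uint8)
    e[beta] = 1
    rhos = [rng.integers(0, 2, size=r, dtype=np.uint8) for _ in range(ell - 1)]
    last = e.copy()
    for rho in rhos:
        last ^= rho
    return rhos + [last]

def db_answer(rho):
    """XOR of the blocks D_j for which rho[j] == 1."""
    rows = D[rho == 1]
    if len(rows) == 0:
        return np.zeros(b, dtype=np.uint8)
    return np.bitwise_xor.reduce(rows, axis=0)

beta = 777
answers = [db_answer(rho) for rho in su_queries(beta)]
rec = np.bitwise_xor.reduce(np.stack(answers), axis=0)
assert (rec == D[beta]).all()            # SU recovers block beta
```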
B. Location Privacy with Goldberg (LP-Goldberg)
Our second approach, termed LP-Goldberg, is based on Goldberg's itPIR protocol [33], which uses Shamir secret sharing to hide e_β, i.e., the SU's query. It is a modification of Chor's scheme [24] that achieves both robustness and byzantine robustness. Rather than working over GF(2) (binary arithmetic), this scheme works over a larger field F, where each element can represent w bits. The database D = (w_{jc}) ∈ F^{r×s} in this scheme is an r × s matrix of elements of F = GF(2^w). Each row represents one block of size b bits, consisting of s words of w bits each. Again, D is replicated among ℓ databases DB_i. We summarize the main steps of the LP-Goldberg protocol in Algorithm 2 and illustrate them in Fig. 2. To determine the index β of the record that corresponds to its location, the SU starts by invoking the subroutine InvIndex(l_x, l_y, C, ts), then constructs the standard basis vector e_β ∈ F^r as explained earlier. The SU then uses (t, ℓ)-Shamir secret sharing to divide the vector e_β into ℓ independent shares (α_1, ρ_1), ..., (α_ℓ, ρ_ℓ), ensuring a t-private PIR protocol as in Definition 2. That is, the SU chooses ℓ distinct non-zero elements α_i ∈ F* and creates r random degree-t polynomials f_1, ..., f_r satisfying f_j(0) = e_β[j]. The SU then sends to each DB_i its share ρ_i = (f_1(α_i), ..., f_r(α_i)), and each DB_i responds with R_i = ρ_i · D. Some DBs may fail to respond to the SU's query, and only k out of ℓ may send their responses. The SU collects the k responses from the responding DBs and tries to recover the record at index β from the R_i's by using the EASYRECOVER() subroutine from [33], which uses Lagrange interpolation to recover D_β from the secret shares (α_1, R_1), ..., (α_k, R_k). This is possible thanks to the use of (t, ℓ)-Shamir secret sharing as long as k > t and these k DBs are honest. In fact, by the linearity of Shamir secret sharing, since {(α_i, ρ_i)}_{1≤i≤ℓ} is a set of (t, ℓ)-Shamir secret shares of e_β, the pairs {(α_i, R_i)}_{1≤i≤ℓ} form a set of (t, ℓ)-Shamir secret shares of e_β · D, which is the β-th block of the database. Thus, it is possible for the SU to reconstruct D_β using Lagrange interpolation, as explained in Section II, by relying only on the k responses, which makes LP-Goldberg robust by Definition 5. Also, EASYRECOVER() can detect the DBs that responded honestly, and thus those that are byzantine, which should discourage DBs from misbehaving. More details about this subroutine can be found in [33].
[Algorithm 2 (LP-Goldberg, excerpted): 1: β ← InvIndex(l_x, l_y, C, ts); 2: the SU sets the standard basis vector e_β; 3: chooses ℓ distinct α_1, ..., α_ℓ ∈ F*; 4: creates r random degree-t polynomials f_1, ..., f_r with f_j(0) = e_β[j] and sends ρ_i = (f_1(α_i), ..., f_r(α_i)) to each DB_i; each DB_i replies with R_i = ρ_i · D; the SU then recovers each of the s words of D_β via EASYRECOVER(), falling back to HARDRECOVER() when recovery fails and ϑ < k − ⌊√(kt)⌋.]

Moreover, ϑ DBs among the k responding ones may even be byzantine, as in Definition 1, and produce incorrect responses. In that case, it is impossible for the SU to simply rely on Lagrange interpolation to recover the correct responses. Since Shamir secret sharing is based on polynomial interpolation, the problem of recovering the response in the presence of byzantine failures corresponds to noisy polynomial reconstruction, which is exactly the problem of decoding Reed-Solomon codes [43]. Thus, the SU instead relies on error-correcting codes, and more precisely on the Guruswami-Sudan list decoding algorithm [44], which can correct ϑ < k − ⌊√(kt)⌋ incorrect responses. In fact, the vector (R_1[q], R_2[q], ..., R_ℓ[q]) is a Reed-Solomon codeword encoding the polynomial g_q = Σ_j f_j w_{jq}, and the client wishes to compute g_q(0) for each 1 ≤ q ≤ s to recover all the s words forming the record D_β = (g_1(0), ..., g_s(0)). This is done through the HARDRECOVER() subroutine from [33]. This makes LP-Goldberg also ϑ-byzantine-robust, by Definition 3, and solves the robustness issues that LP-Chor suffers from; however, this comes at the cost of additional overhead, as we discuss in Section IV.

Corollary 1. LP-Chor and LP-Goldberg directly inherit the security properties of Chor's PIR [24] and Goldberg's PIR [33], respectively.
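A minimal Python sketch of the LP-Goldberg flow, with a prime field standing in for GF(2^w) for readability and with the robustness and byzantine-recovery subroutines omitted:

```python
import random

P = 2**13 - 1                       # illustrative prime field
r, s, ell, t = 64, 4, 5, 2          # records, words/record, servers, privacy
D = [[random.randrange(P) for _ in range(s)] for _ in range(r)]

def query_shares(beta):
    """One degree-t polynomial per record index; f_j(0) = e_beta[j]."""
    polys = [[int(j == beta)] + [random.randrange(P) for _ in range(t)]
             for j in range(r)]
    def eval_poly(c, x):
        return sum(a * pow(x, k, P) for k, a in enumerate(c)) % P
    return [(x, [eval_poly(polys[j], x) for j in range(r)])
            for x in range(1, ell + 1)]

def db_answer(rho):
    """R = rho . D over GF(P), one word at a time."""
    return [sum(rho[j] * D[j][c] for j in range(r)) % P for c in range(s)]

def interpolate_at_zero(points):
    acc = 0
    for j, (xj, yj) in enumerate(points):
        num = den = 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        acc = (acc + yj * num * pow(den, P - 2, P)) % P
    return acc

beta = 42
shares = query_shares(beta)
answers = [(x, db_answer(rho)) for x, rho in shares]
k = t + 1                             # honest responders needed
record = [interpolate_at_zero([(x, R[c]) for x, R in answers[:k]])
          for c in range(s)]
assert record == D[beta]              # SU recovers the record
```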
C. Location Privacy of Mobile SUs Through Batching
Thus far, we have been concerned only with non-mobile SUs that periodically submit an individual query to the DBs to learn spectrum availability at their fixed location. Things get more interesting with mobility: a mobile SU needs to query the DBs multiple times as its location changes. While the previous two approaches perform well for non-mobile SUs, they incur a significant overhead on both the SU and the DBs, especially when the SU is moving at a relatively high speed, which requires a large number of PIR queries.
Our third approach aims to protect the location privacy of mobile SUs while reducing this mobility-associated overhead. The idea is to exploit the fact that a mobile SU usually has a priori knowledge of its trajectory, so it can query the DBs for its current and future locations by batching these queries together instead of sending them separately. We achieve this by relying on the itPIR protocol of Lueks et al. [45], which extends the scheme of Goldberg [33] to support batching of queries using fast matrix multiplication mechanisms inspired by batch codes [46]. We refer to this approach as LP-BatchPIR and describe it in the following.
Each DB_i that receives q simultaneous queries ρ_i^(1), ..., ρ_i^(q) from an SU can process them using LP-Goldberg by simply multiplying each query with D, as illustrated in Step 8 of Algorithm 2. Alternatively, it can group these queries into a matrix Q_i of size q × r, where each row j corresponds to a query ρ_i^(j), before computing the matrix product Q_i · D. The careful reader will notice that this naive multiplication method costs around 2qrs operations (including multiplications and additions), which can be prohibitively expensive, especially for a large D or q. This boils down to a fast matrix multiplication problem and can therefore benefit from fast matrix multiplication algorithms such as Strassen's [47].
Strassen's algorithm consists of dividing both matrices Q_i and D into four equally sized block matrices. Then, instead of naively multiplying these submatrices, which would result in 8 submatrix multiplications (fundamentally equivalent to simple matrix multiplication), Strassen's algorithm creates linear combinations of blocks in a way that reduces the number of submatrix multiplications to 7. The same approach is then applied recursively to the multiplications of the submatrices of the previous step. This simple yet powerful matrix multiplication technique significantly reduces the overhead for the DBs, and therefore the delay that SUs experience to learn spectrum availability while moving, as illustrated in Section IV.
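The following is a sketch of the seven-product recursion just described, written for square power-of-two integer matrices with NumPy. It is illustrative only: a batched-PIR server would carry out these operations over GF(2^w) rather than over Python integers, and the rectangular Q_i (q × r) and D (r × s) would first need to be zero-padded to a common power-of-two dimension, as the usage at the bottom shows.

```python
import numpy as np

def strassen(A, B, leaf=64):
    """Strassen product of square power-of-two matrices; naive below `leaf`."""
    n = A.shape[0]
    if n <= leaf:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # seven submatrix products instead of eight
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

# hypothetical batched sizes: Q (64 x 128), D (128 x 32), padded to 128 x 128
Q = np.random.randint(0, 256, (64, 128))
Dm = np.random.randint(0, 256, (128, 32))
n = 128
Qp = np.zeros((n, n), dtype=np.int64); Qp[:64, :128] = Q
Dp = np.zeros((n, n), dtype=np.int64); Dp[:128, :32] = Dm
R = strassen(Qp, Dp)[:64, :32]
assert np.array_equal(R, Q @ Dm)
```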
A row j of the resulting matrix R_i = Q_i · D corresponds to DB_i's response to the j-th query. The SU then recovers the spectrum availability by combining same-index rows of the different R_i's, as in LP-Goldberg.
D. Location Privacy of PUs
As mentioned earlier, in database-driven CRNs, the DBs' content comprises operational information of PUs, which may be very sensitive in systems such as SAS in the 3.5 GHz CBRS band, where PUs are military and governmental entities. The service providers use this operational data to feed their models and populate the spectrum databases with availability information, but do not share the PUs' location information in response to SUs' queries. Therefore, SUs do not present a serious threat to PUs' privacy, as opposed to the service providers, which could be malicious and could misuse PUs' sensitive operational data.
In this subsection, we present another approach that takes the privacy of these PUs into account as well. For this, we make use of another extension of the Goldberg PIR scheme, known as τ-independence, to prevent the DBs from learning the content of D even if up to τ DBs collude, as defined in Definition 6. This is achieved by having the PUs, instead of the service providers, populate the DBs with spectrum availability information pertaining to their respective channels, by secret-sharing each record they want to add among the different service providers using Shamir secret sharing, similar to how SUs secret-share their queries. That way, no individual service provider is able to decode this data, and only SUs that have access to the secret can retrieve a record by combining the different shares from the different DBs. This is motivated by the fact that the DBs are expected to be populated by the PUs themselves, as is the case in LSA systems, or by a highly trusted independent entity, the ESC, as in SAS systems. Therefore, whenever a PU or an ESC submits a PU activity record of index j to the DBs, it divides it into s words W_j1, ..., W_js and distributes Shamir secret shares of every word among the ℓ DBs, as reflected in Algorithm 3. Each DB_i will now hold a different content D^(i) = (w^(i)_jc), where the shares {w^(i)_jc}_{1≤i≤ℓ} form a (τ, ℓ)-Shamir secret sharing of word W_jc. This requires that the random values α_i, used to create the Shamir secret shares as explained in Section II-A, be shared beforehand among SUs and PUs. This could be done by the FCC during the registration phase, for instance, and must not be communicated to the DBs. This way, records revealing operational data of PUs, which the DBs could otherwise use to build knowledge of the activity of these PUs and track them, are information-theoretically protected from the DBs as long as no more than τ of them collude. However, for this protocol to work, the following condition must hold: 0 < t ≤ t + τ < k ≤ ℓ. While this extension of LP-Goldberg has no impact on the performance on the SUs' and DBs' side, as we show in Section IV, it does have an impact on the t-privacy of the protocol. In fact, as the τ-independence level sought by the PUs increases (τ controls how many DBs can collude without learning the records submitted by a PU), the maximum achievable t-privacy level decreases, since t + τ < k must always hold.
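The PU-side step can be sketched as follows, again over a small prime field standing in for GF(2^w) and with hypothetical helper names. The assert mirrors the feasibility condition above; intuitively, a degree-t query against degree-τ shared data produces responses lying on degree-(t + τ) polynomials, so recovery needs more than t + τ honest points.

```python
# Sketch of the tau-independence step: each word of a record is Shamir-shared
# among the ell DBs with degree-tau polynomials, so any tau or fewer colluding
# DBs learn nothing about the PU's activity data.
import random

P = 2**13 - 1  # small prime field, a stand-in for GF(2^w)

def share_word(W, alphas, tau):
    """(tau, ell)-Shamir shares of word W at the FCC-distributed points alphas."""
    coeffs = [W] + [random.randrange(P) for _ in range(tau)]
    return [sum(c * pow(a, k, P) for k, c in enumerate(coeffs)) % P
            for a in alphas]

ell, tau, t, k = 6, 2, 1, 5
assert 0 < t <= t + tau < k <= ell          # the protocol's feasibility condition
alphas = random.sample(range(1, P), ell)    # shared with PUs and SUs, not DBs
record = [random.randrange(P) for _ in range(4)]   # s = 4 words
# db_contents[i] is DB_i's view D^(i) of this record: shares, not plaintext
db_contents = list(zip(*[share_word(W, alphas, tau) for W in record]))
```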
E. Location Privacy of SUs in Partitioned-database CRNs
In this section, we present another location privacy-preserving approach for SUs for the case where the spectrum database content is distributed among the different DBs instead of simply being replicated, as in the previous approaches.

Algorithm 3 (abridged): D_β ← τ-LP-Goldberg(ℓ, r, b, t, w). FCC: 1: chooses ℓ distinct α_1, ..., α_ℓ ∈ F*; 2: shares these α_i only with PUs and SUs. PU: 3: divides its activity record j into s words W_j1, ..., W_js; 4: creates s random degree-τ polynomials g_j1, ..., g_js ∈_R F[x] s.t. g_jc(0) = W_jc for all c ∈ [1, s]; 5: sends the shares w^(i) to each DB_i. SU: 7: β ← InvIndex(l_x, l_y, C, ts); 8: sets the standard basis vector e_β; 9: creates r random degree-t polynomials and proceeds as in Algorithm 2, except that the recovery fallback condition becomes: if recovery fails and ϑ < k − ⌊√(k(t + τ))⌋, then S_c ← (R_1c, ..., R_kc) is passed to HARDRECOVER.

This setting could be motivated by the fact that some database-driven CRNs may have multiple DBs covering different or slightly overlapping regions. It could also be a way to reduce cost by making each DB manage a portion of the database.
For that, we rely on the RAID-PIR protocol of Demmler et al. [39], which builds on Chor's scheme to reduce the communication overhead and the computation required at the server side. The idea here is very similar to Chor's, but the vector e_β is divided into ℓ chunks. Each query q_i sent to DB_i consists of π chunks, as illustrated in Figure 3, where π is a redundancy parameter that controls the minimum number of DBs that need to collude to recover the record D_β, with 2 ≤ π ≤ ℓ. This parameter also controls the number of chunks in every query and how often the chunks overlap across these queries [39].
The details of this approach are described in Algorithm 4. To optimize the cost, the SU can use a pseudo-random generator, PRG, to generate the π − 1 chunks of q_i, as illustrated in Algorithm 4. For that, the SU randomly generates ℓ seeds s_1, ..., s_ℓ of size κ bits each, where κ is the symmetric security parameter, and expands each seed s_i into π − 1 random chunks rnd_i[j] using the PRG, each of size r/ℓ, as depicted in Step 4 of Algorithm 4. (Fig. 3 illustrates this RAID-PIR query layout [39].) The first chunk of query q_i, denoted f_i, is computed to cancel out the π − 1 other i-th chunks rnd_i[j] contributed by the other DBs' queries, where applicable, and is obtained by XORing those π − 1 chunks with the i-th chunk of e_β. Thanks to the use of the PRG, the SU does not need to send the whole query; it only sends a compacted version of q_i, denoted q'_i, composed of f_i and the seed s_i used to generate the other chunks of the full query q_i, to DB_i. Then, DB_i uses the same pseudo-random generator, PRG, with the seed it received to generate the full query q_i. Once q_i is recovered, DB_i constructs its answer R_i by XORing the records in D whose indices match those of the set bits in q_i. Finally, the SU only needs to XOR the results from the different DBs to recover the β-th record.
Algorithm 4 (abridged): RAID-LP-Chor. SU: 1: β ← InvIndex(l_x, l_y, C, ts); 2: sets the standard basis vector e_β; 3: picks ℓ seeds s_i ∈_R {0,1}^κ; 4: expands each s_i into π − 1 chunks rnd_i[j] ← PRG(s_i, j) for the π − 1 chunk positions following i (mod ℓ); 5: sends the compact query q'_i, consisting of chunk f_i and seed s_i, to DB_i. Each DB_i: 8: expands its received s_i as in Step 4 to get the full query q_i; 9: R_i ← ⊕_{1≤j≤r, q_ij=1} D_j, where D_j is the j-th record of D; 10: sends R_i to SU. SU: 11: receives R_1, ..., R_ℓ; 12: D_β ← R_1 ⊕ ··· ⊕ R_ℓ. As the size of the query q_i is just π/ℓ · r, each DB now needs to store and process only π/ℓ · r records of D, which is beneficial to the DBs, especially as the number of these databases increases.
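An end-to-end toy round of this chunked construction is sketched below. It is assumption-laden rather than the RAID-PIR reference implementation: the SHA-256 counter expansion merely stands in for a proper keyed PRG (e.g., AES-CTR), chunk sizes are assumed byte-aligned with ℓ dividing r, and all function names are hypothetical.

```python
# Minimal RAID-PIR-style round: ell DBs, redundancy pi, r records of b bytes.
import hashlib, secrets
from functools import reduce

def prg(seed, j, nbytes):
    """Expand a seed into pseudo-random chunk j (toy stand-in for a keyed PRG)."""
    out, ctr = b"", 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + j.to_bytes(2, "big")
                              + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytearray(out[:nbytes])

def xor(a, b):
    return bytearray(x ^ y for x, y in zip(a, b))

def build_queries(beta, r, ell, pi):
    chunk = r // ell                          # bits per chunk (assume ell | r)
    cbytes = chunk // 8                       # bytes per chunk (assume 8 | chunk)
    e = bytearray(r // 8)
    e[beta // 8] |= 1 << (7 - beta % 8)       # standard basis vector e_beta
    seeds = [secrets.token_bytes(16) for _ in range(ell)]
    rnd = [[prg(seeds[i], j, cbytes) for j in range(1, pi)] for i in range(ell)]
    queries = []
    for i in range(ell):
        # flip chunk f_i cancels the random chunks other queries place at chunk i
        f = bytearray(e[i * cbytes:(i + 1) * cbytes])
        for d in range(1, pi):
            f = xor(f, rnd[(i - d) % ell][d - 1])
        queries.append((f, seeds[i]))         # compact query q'_i = (f_i, s_i)
    return queries, chunk, cbytes

def db_answer(i, f, seed, D, ell, pi, chunk, cbytes):
    """DB_i re-expands its seed, rebuilds its pi chunks, and XORs selected records."""
    chunks = [f] + [prg(seed, j, cbytes) for j in range(1, pi)]
    R = bytearray(len(D[0]))
    for d, ch in enumerate(chunks):
        base = ((i + d) % ell) * chunk        # chunk d covers this bit range
        for bit in range(chunk):
            if ch[bit // 8] >> (7 - bit % 8) & 1:
                R = xor(R, D[base + bit])
    return R

# toy run: 32 records of 8 bytes, 4 DBs, redundancy pi = 2
r, b, ell, pi, beta = 32, 8, 4, 2, 13
D = [bytearray(secrets.token_bytes(b)) for _ in range(r)]
qs, chunk, cbytes = build_queries(beta, r, ell, pi)
Rs = [db_answer(i, f, s, D, ell, pi, chunk, cbytes) for i, (f, s) in enumerate(qs)]
assert reduce(xor, Rs) == D[beta]
```

The final assert is the whole point of the flip chunks: across the ℓ queries, every PRG chunk appears exactly twice (once in a flip, once re-expanded by a DB) and cancels, leaving the XOR of the responses equal to e_β · D = D_β.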
IV. PERFORMANCE EVALUATION

A. Analytical Comparison
We start by studying the proposed approaches' performance analytically and comparing them to existing approaches. For LP-Goldberg, we choose w = 8 to simplify the cost of computations, as in [43]; in GF(2^8), additions are XOR operations on bytes and multiplications are lookup operations into a 64 KB table [43]. We summarize the overall communication complexity and the computation incurred by both DB and SU, and we illustrate the differences in architecture and privacy level of the different approaches in Table III. As mentioned earlier, existing research focuses on the single-DB setting. We compare the proposed approaches to existing techniques despite the difference in architecture to show the great benefits that multi-server PIR brings in terms of performance and privacy, as we discuss next. We briefly describe these existing approaches in the following.
Gao et al. [2] propose a PIR-based approach, termed PriSpectrum, that relies on the PIR scheme of Trostle et al. [27] to defend against a new attack that they identify, which exploits spectrum utilization patterns to localize SUs. Troja et al. [18], [19] propose two other PIR-based approaches that try to minimize the number of PIR queries, either by allowing SUs to share their availability information with other SUs [18] or by exploiting trajectory information to let SUs retrieve information for their current and future positions in the same query [19].
Despite their merit in providing location privacy to SUs, these PIR-based approaches incur high overhead, especially in terms of computation. This is because they rely on cPIR protocols, which are known to suffer from expensive computational costs. In fact, answering an SU's query through a cPIR protocol requires the DB to process all of its records; otherwise, the DB would learn that the SU is not interested in the unprocessed records and would thus learn partial information about the record D_β, and consequently about the SU's location. This makes the computational cost of most cPIR-based location-preserving schemes linear in the database size on the DB side, as we illustrate in Table III. This is not exclusive to cPIR protocols, as even itPIR protocols may require processing all the records to guarantee privacy; the main difference is that cPIR protocols have a very large cost per database bit, usually involving expensive group operations such as multiplication modulo a large modulus [26], as opposed to multi-server itPIR protocols. This can be seen clearly in Table III, as both LP-Chor and LP-Goldberg require the DB to perform only a very efficient XOR operation per bit of the database. The same applies to the overhead incurred by the SU, which performs only XOR operations in both LP-Chor and LP-Goldberg, while performing expensive modular multiplications and even exponentiations over large primes in the cPIR-based approaches.
In terms of communication overhead, the proposed approaches incur a cost that is linear in the number of records r and their size b. As an optimal choice of these parameters is usually r = b = √n [24], [26], [33], [43], this cost can be seen as O(√(nw)) to retrieve a record of size √(nw) bits, which is a reasonable cost for information-theoretic privacy. Moreover, as illustrated in Table III, existing approaches fail to provide information-theoretic privacy, as their underlying security relies on computational PIR schemes. The only approaches that provide information-theoretic location privacy are LP-Chor, LP-Goldberg, and RAID-LP-Chor, which are (ℓ − 1)-private, t-private, and (π − 1)-private, respectively, by Definition 2. It is worth mentioning that PriSpectrum [2] relies on the well-known cPIR of Trostle et al. [27], once regarded as the state of the art in efficient cPIR. However, this cPIR scheme has been broken [26], [48]. Since the security of PriSpectrum follows that of Trostle et al.'s [27] broken cPIR, PriSpectrum fails to provide the privacy objective it was designed for. We nevertheless include it in our performance analysis for completeness.
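As a quick sanity check of this scaling, the snippet below counts the bits exchanged in one LP-Chor-style round (ℓ queries of r bits up, ℓ answers of b bits down) for the square r = b = √n layout; it is the simple count implied above, not a measured figure.

```python
import math

def chor_comm_bits(n_bits, ell):
    """Bits exchanged per LP-Chor-style retrieval with r = b = sqrt(n):
    each of the ell DBs receives an r-bit query and returns a b-bit answer."""
    r = b = math.isqrt(n_bits)
    return ell * (r + b)

# e.g., a 100 Mbit database replicated over 4 DBs:
print(chor_comm_bits(10**8, 4) / 8 / 1024, "KiB per retrieval")  # ~9.8 KiB
```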
B. Experimental Evaluation
We further evaluate the performance of the proposed schemes experimentally to confirm the analytical observations. Hardware setting and configuration. We deployed the proposed approaches on the GENI [36] cloud platform using the percy++ library [49]. We created 6 virtual machines (VMs), each playing the role of a DB, all sharing the same copy of D. We deployed these GENI VMs in different locations in the US to account for network delay and to make our experiment closer to the real-world scenario, in which spectrum service providers are located in different places. These VMs run Ubuntu 14.04, each with 8 GB of RAM, a 15 GB SSD, and 4 vCPUs (Intel Xeon X5650 2.67 GHz or Intel Xeon E5-2450 2.10 GHz). To assess the SU overhead, we use a Lenovo Yoga 3 Pro laptop with 8 GB of RAM running Ubuntu 16.10 with an Intel Core m 5Y70 CPU at 1.10 GHz. The client laptop communicates with the remote VMs through ssh tunnels. We are also aware of the advances in cPIR technology, and more precisely of the fastest cPIR protocols in the literature: XPIR, proposed by Aguilar et al. [26], and SealPIR, due to Angel et al. [32]. We include these protocols in our experiment to illustrate how multi-server PIR performs against the best-known cPIR schemes if they were to be deployed in CRNs. We use the available implementations of these protocols provided in [50] and [51], deploying their server components on a remote GENI VM while the client component runs on the Lenovo Yoga 3 Pro laptop. Dataset. Spectrum service providers (e.g., Google, Microsoft) offer graphical web interfaces and APIs to interact with their databases, allowing users to retrieve basic spectrum availability information for a user-specified location. Access to full data from real spectrum databases was not possible; thus, we generated random data for our experiment. The generated data consist of a matrix that models the content of the database D, with a fixed block size b = 560 B and a varying number of records r. The value of b is estimated based on the public raw data provided by the FCC [52] on a daily basis, which service providers use to populate their spectrum databases. Results and comparison. We first measure the query end-to-end delay of the proposed approaches and plot the results in Fig. 4. We also include the delay introduced by the existing schemes, based on our estimation of the operations included in Table III. (Table III notation: t_⊕ is the execution time of one XOR operation; p is a large prime, and Mul_p and Exp_p are the execution times of one modular multiplication and one modular exponentiation, respectively; ψ denotes the number of bits that an SU shares with other SUs in [18], and n_g is the number of SUs within the same group in [18]; δ is the number of DB segments in [19]; d is the recursion level, α the aggregation level, C the Ring-LWE ciphertext size, λ the number of elements returned by the DB, F the expansion factor of the underlying cryptosystem, and ℓ_0 the number of bits absorbed in a ciphertext, all used in [26]; (Enc, Dec) are the encryption and decryption costs of the Ring-LWE cryptosystem used in [26]; (E, D) are those of the Fan-Vercauteren [53] cryptosystem used in [32]; N is the query size bound in XPIR and SealPIR and is typically 2048 or 4096, based on recommended security parameters.)
The end-to-end delay that we measure takes into consideration the time needed by the SU to generate the query, the network delay, the time needed by the DB to process the query, and finally the time needed by the SU to extract the β-th record of the database. We consider two different internet speed configurations in our experiment. We first rely on a high-speed internet connection of 80 Mbps downstream and 30 Mbps upstream for all compared approaches. Then, we use a low-speed connection of 1 Mbps upstream and downstream to assess the impact of the bandwidth on LP-Chor and LP-Goldberg, as well as on XPIR. Fig. 4 shows that the proposed schemes perform much better than the existing approaches in terms of delay, even with the low-speed internet connection. They also perform better than the fastest existing cPIR protocols, XPIR and SealPIR. This shows the benefit of relying on multi-server itPIR in multi-DB CRNs. Also, as expected, the LP-Chor scheme performs better than LP-Goldberg thanks to its simplicity. As we will see later, LP-Goldberg also incurs larger communication overhead than LP-Chor. This can be acceptable, knowing that LP-Goldberg can handle collusion of up to t DBs, and is robust in the case of (ℓ − k) non-responding DBs and ϑ byzantine DBs, as opposed to LP-Chor. This means that LP-Goldberg may be more suitable for real-world scenarios, as failures and byzantine behaviors are common in practice. Fig. 4 also shows that the network bandwidth has a significant impact on the end-to-end latency. This is due to the relatively large amount of data that needs to be exchanged during the execution of these protocols, which calls for higher internet speeds.
Fig. 5: Computation comparison. (a) SU computation overhead; (b) DB computation overhead.
We also compare the computational complexity experienced by each SU and DB separately for the different approaches, as shown in Table III. We further illustrate this through experimentation and plot the results in Fig. 5a, which shows that the proposed schemes incur lower overhead on the SU than the existing approaches. The same observation applies to the computation experienced by each DB, which again involves only efficient XOR operations in the proposed schemes, as illustrated in Fig. 5b.
We also study the impact of non-responding DBs on the end-to-end delay experienced by the SU in LP-Goldberg, as illustrated in Fig. 6. This figure shows that as the number of faulty DBs increases, the end-to-end delay decreases, since the SU needs to process fewer shares to recover the record D_β. As opposed to LP-Chor, in LP-Goldberg the SU is still able to recover the record at index β even if only k out-of-ℓ DBs respond. Recall also that our experiment was performed on resource-constrained VMs emulating the DBs; in reality, DBs should have much more powerful computational resources than the VMs used, which would further reduce the overhead of the proposed approaches. Figure 7 illustrates the impact of the SU's desired privacy level in LP-Goldberg on the processing time incurred by both the SU and the DBs. As expected, increasing the value of t, which controls the number of DBs that can collude without inferring the content of the query, has no impact on each DB, as the DBs always perform the same operations regardless of the privacy level. However, since the results sent by the DBs can also be viewed as a (t, ℓ)-Shamir secret sharing of the retrieved record, when t increases, the number of secret shares required to recover the record increases, resulting in more computation for the SU when performing Lagrange interpolation over higher-degree polynomials.
We further study the impact of the number of byzantine DBs on the SU-side processing time in LP-Goldberg, as depicted in Figure 8. As expected, having more byzantine DBs increases the complexity of decoding the different shares that the SU receives from the DBs using the relatively expensive HARDRECOVER subroutine from [33].
As for τ-LP-Goldberg, the τ-independence extension has no impact on the processing time of the DBs, and also none on the SUs as long as t + τ is constant. This means that both PUs and SUs will always seek the maximum privacy levels for their data and queries such that t + τ < k. This is reflected in Figure 9. The processing time, however, is linear in t + τ, similar to Figure 7a.
As for the case of mobile SUs, we compare the performance of batching multiple queries for the future locations of an SU to that of sending separate consecutive queries using LP-Goldberg, SealPIR, and XPIR, as depicted in Figure 10. (Fig. 9: Performance of τ-independent LP-Goldberg, with k = ℓ = 6 and t + τ < k. Fig. 10: Query RTT for a moving SU, comparing LP-BatchPIR, LP-Goldberg, XPIR, and SealPIR.) Batching mainly reduces the computation on the DBs' side and reduces the end-to-end delay for answering the queries of the moving SU. We also demonstrate the benefit of relying on RAID-LP-Chor and partitioning the database content among the DBs, instead of simply replicating it, on the DBs' side for several values of the redundancy parameter π. As expected, π = 2 yields the best performance, but it also offers the lowest level of resistance to collusion, while setting π equal to ℓ is equivalent to the original LP-Chor scheme and provides the best privacy. RAID-LP-Chor therefore offers a performance-privacy tradeoff that is controlled by the redundancy parameter π. As shown in Table III, what really makes a difference between these schemes' communication overheads is the associated constant factor, which can be very large for some protocols. Based on our experiment and the expressions displayed in Table III, we plot in Fig. 12 the communication overhead that the CRN experiences for each private spectrum availability query issued by an SU under the different schemes. The scheme with the lowest communication overhead is that of Troja et al. [19], especially for a large number of records, thanks to the use of Gentry et al.'s PIR [35], the most communication-efficient single-server protocol in the literature, having a constant communication overhead. However, this scheme is computationally expensive, just like most of the existing cPIR-based approaches, as we show in Fig. 4. RAID-LP-Chor is the second-best scheme in terms of communication overhead, followed by LP-Chor; both also provide information-theoretic privacy. As shown in Figure 12, RAID-LP-Chor is significantly more efficient than LP-Chor, which again shows the benefit, in terms of overhead, of distributing the spectrum availability information among multiple DBs. Fig. 12 also shows that LP-Chor incurs much lower communication overhead than LP-Goldberg, thanks to the simplicity of the underlying Chor PIR protocol. However, as discussed earlier, LP-Goldberg provides additional security features compared to LP-Chor. SealPIR has a relatively high communication overhead, especially for smaller database sizes, but its overhead becomes comparable to that of LP-Chor as the database grows, as shown in Fig. 12. It could thus be a good alternative to the cPIR schemes previously used in the context of CRNs, especially as it introduces much lower latency, which is critical in this context. Still, the proposed approaches have better performance and also provide information-theoretic privacy to SUs, which shows their practicality in the real world.
V. RELATED WORK
There are other approaches that address the location privacy issue in database-driven CRNs. However, for the reasons mentioned below, we did not consider them in our performance analysis. For instance, Zhang et al. [17] rely on the concept of k-anonymity to make each SU query the DB by sending a square cloak region that includes its actual location. k-anonymity guarantees that the SU's location is indistinguishable among a set of k points. This can be achieved through the use of dummy locations, by generating k − 1 properly selected dummy points and performing k queries to the DB using the real and dummy locations. Their approach trades off location privacy against utility: achieving a high location privacy level results in a decrease in spectrum utility. Moreover, k-anonymity-based approaches cannot achieve high location privacy without incurring substantial communication/computation overhead. Furthermore, a recent study led by Sprint and Technicolor [25] has shown that anonymization-based techniques are not effective in providing location privacy guarantees and may even leak some location information. Grissa et al. [21], [54] propose an information-theoretic approach, termed LPDB, which can be considered a variant of the trivial PIR solution. They achieve this by using set-membership probabilistic data structures (filters) to compress the content of the database and send it to the SU, which then needs to try several combinations of channels and transmission parameters to check their existence in the data structure. However, LPDB is only suitable for situations where the structure of the database is known to SUs, which is not always realistic. Also, LPDB relies on probabilistic data structures, making it prone to false positives that can lead to erroneous spectrum availability decisions and cause interference to PUs' transmissions. Zhang et al. [20] rely on the ε-geo-indistinguishability mechanism [55], derived from differential privacy, to protect the bilateral location privacy of both PUs and SUs, which is different from what we try to achieve in this paper. This mechanism helps SUs obfuscate their location; however, it introduces noise into the SU's location, which may impact the accuracy of the spectrum availability information retrieved.
VI. CONCLUSION
In this paper, with the key observation that database-driven CRNs contain multiple synchronized DBs having the same content, we harnessed multi-server PIR techniques to achieve optimal location privacy for both SUs and PUs, and for different use cases, with high efficiency. Our analytical and experimental analysis indicates that our adaptation of multi-server PIR to database-driven CRNs achieves an end-to-end delay that is orders of magnitude smaller than that of the fastest state-of-the-art single-server PIR adaptations, while providing an information-theoretic privacy guarantee. Given the demonstrated benefits of multi-server PIR approaches, attained without any extra architectural overhead on database-driven CRNs, we hope this work will provide an incentive for the research community to consider this direction when designing location privacy preservation protocols for CRNs. | 2019-07-03T17:12:48.000Z | 2019-06-12T00:00:00.000 | {
"year": 2019,
"sha1": "14736c388560e445a601f208a27f2f41fd0bf6ba",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://doi.org/10.1109/tccn.2019.2922300",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "4c41575b9c4e52389916848cfcc0958f32593b3e",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
58931534 | pes2o/s2orc | v3-fos-license | THE ABDUS SALAM INTERNATIONAL CENTRE FOR THEORETICAL PHYSICS MEASURING HIGGS BOSON ASSOCIATED LEPTON FLAVOUR VIOLATION IN ELECTRON-PHOTON COLLISIONS AT THE ILC
We study the LFV Higgs production processes eγ → lφ (l = μ, τ; φ = H, A) as a probe of Higgs-mediated LFV couplings at an electron-photon collider, where H and A are the extra CP-even and CP-odd Higgs bosons, respectively, in the two Higgs doublet model. Under the constraints from the current data on rare muon and tau decays, the cross section can be significantly large. Measuring these processes would improve the experimental upper bounds on the effective LFV coupling constants. In addition, the chirality nature of the LFV Higgs coupling constants can be measured by selecting electron beam polarizations. MIRAMARE – TRIESTE August 2010
Introduction
Lepton Flavour Violation (LFV) is clear evidence of new physics beyond the standard model (SM). It can be naturally induced in various new physics scenarios, such as supersymmetric extensions of the SM. The origin of LFV would be related to the structure of the fundamental theory at high energies. Therefore, new physics models can be explored by measuring LFV processes. In the minimal supersymmetric SM with heavy right-handed neutrinos (MSSMRN), the LFV Yukawa interactions can be radiatively generated via slepton mixing [2,3]. The slepton mixing can be induced by the running effect of the neutrino Yukawa interaction even when a flavour-blind structure is realized at the grand unification scale [2].
In this report, we discuss the physics potential of the LFV Higgs boson production processes e−γ → ℓ−ϕ (ℓ = µ, τ; ϕ = h, H, A), where h, H, and A are neutral Higgs bosons. These processes can be a useful tool for measuring Higgs-boson-mediated LFV parameters in two Higgs doublet models (THDMs), including minimal supersymmetric SMs (MSSMs). The total cross sections for these processes can be large for values of the LFV couplings allowed under the constraints from current experimental data. By measuring these processes, the bounds on the Higgs boson associated LFV coupling constants can be improved significantly. Furthermore, the chirality of these couplings can be determined by using a polarized initial electron beam.
Higgs boson associated LFV coupling constants
The effective Yukawa interaction for charged leptons in the general framework of the THDM is given in [8], where ℓ_Ri (i = 1–3) represent the isospin-singlet fields of the right-handed charged leptons, L_i are the isospin doublets of the left-handed leptons, Y_ℓi are the Yukawa coupling constants of ℓ_i, and Φ_1 and Φ_2 are the scalar iso-doublets with hypercharge Y = 1/2. The parameters ε^X_ij (X = L, R) can induce LFV interactions in the charged lepton sector in the basis of the mass eigenstates. In the Model II THDM [20], ε^X_ij vanishes at the tree level, but it can be generated radiatively by new physics effects [3]. The effective Lagrangian can be rewritten in terms of the physical Higgs boson fields. Assuming a CP-invariant Higgs sector, there are two CP-even Higgs bosons h and H (m_h < m_H), one CP-odd state A, and a pair of charged Higgs bosons H±. From Eq. (1), the LFV interaction terms can be deduced [3,8], where P_L is the projection operator onto the left-handed fermions, m_ℓi are the mass eigenvalues, α is the mixing angle between the CP-even Higgs bosons, and tan β ≡ ⟨Φ⁰₂⟩/⟨Φ⁰₁⟩. Once a new physics model is assumed, κ^X_ij can be predicted as a function of the model parameters. In supersymmetric SMs, the LFV Yukawa coupling constants can be radiatively generated by slepton mixing. The magnitudes of the LFV parameters κ^X_ij can be calculated as functions of the parameters of the slepton sector. For dimensionful parameters in the slepton sector of the TeV scale, we typically obtain |κ^X_ij|² ∼ (1–10) × 10⁻⁷ [2,3]. In the MSSMRN, only the κ^L_ij are generated by the quantum effect via the neutrino Yukawa couplings, assuming flavour conservation at the scale of the right-handed neutrinos.
LFV Higgs production processes
We now discuss the lepton-flavour-violating Higgs boson production processes e−γ → ℓ−ϕ (ℓ = µ, τ; ϕ = h, H, A) in eγ collisions. The differential cross section is calculated in terms of the effective LFV parameters κ^X_ij; the kinematic functions entering it are defined as η_± = 1 + z ± β_ℓϕ cos θ, where θ is the scattering angle of the outgoing lepton with respect to the beam direction. The effective LFV parameters depend on the polarization P_e of the incident electron beam: P_e = −1 (+1) means that the electrons in the beam are 100% left- (right-) handed. At the ILC, a high-energy photon beam can be obtained by Compton backward-scattering of a laser off an electron beam [23]. The full cross section can be evaluated from that of the subprocess by convoluting with the photon structure function f_γ(x) as [23] σ(s_ee) = ∫_{x_min}^{x_max} dx f_γ(x) σ̂(s_eγ = x s_ee), where x_max = ξ/(1 + ξ), x_min = (m²_ℓ + m²_ϕ)/s_ee, and ξ = 4E_e ω_0/m²_e, with ω_0 the frequency of the laser, E_e the energy of the incident electrons, and x = ω/E_e, where ω is the photon energy in the scattered photon beam. The photon distribution function is given in Ref. [23]. We note that when sin(β − α) ≃ 1 and m_H ≃ m_A (in the MSSM, this is automatically realized for m_A ≳ 160 GeV), the signals from both e−γ → ℓ−H and e−γ → ℓ−A can be used to measure the LFV parameters, while the cross section for e−γ → ℓ−h is suppressed.
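For illustration, the convolution can be evaluated numerically as sketched below. The spectrum used here is the standard unpolarized Compton backscattering spectrum often attributed to Ginzburg et al.; the actual distribution of Ref. [23] may differ (e.g., through polarization-dependent terms), and the subprocess cross section is a placeholder, so both are assumptions rather than the paper's formulas.

```python
from scipy.integrate import quad

def f_gamma(x, xi):
    """Unpolarized Compton backscattering spectrum (unnormalized) -- an assumption."""
    r = x / (xi * (1.0 - x))
    return 1.0 / (1.0 - x) + 1.0 - x - 4.0 * r * (1.0 - r)

def full_cross_section(sigma_hat, s_ee, xi, m_l, m_phi):
    """sigma(s_ee) = int_{x_min}^{x_max} dx f_gamma(x) sigma_hat(x * s_ee)."""
    x_max = xi / (1.0 + xi)
    x_min = (m_l**2 + m_phi**2) / s_ee
    norm, _ = quad(f_gamma, 0.0, x_max, args=(xi,))   # normalize numerically
    integrand = lambda x: f_gamma(x, xi) / norm * sigma_hat(x * s_ee)
    return quad(integrand, x_min, x_max)[0]

# toy subprocess cross section with a threshold shape (placeholder, in fb;
# masses in GeV: m_tau ~ 1.8, m_A = 350 as in the benchmark of the text)
sigma_toy = lambda s: 10.0 * (1.0 - (350.0 + 1.8)**2 / s) if s > (350.0 + 1.8)**2 else 0.0
print(full_cross_section(sigma_toy, 1000.0**2, 4.8, 1.8, 350.0), "fb (toy)")
```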
In FIG. 1, we show the full cross sections of e−γ → τ−A as a function of the center-of-mass energy of the e−e− system for tan β = 50 and m_A = 350 GeV. Scattered leptons mainly go into the forward direction; however, most events can be detected by imposing the escape cut ε ≤ θ ≤ π − ε with ε = 20 mrad [24]. The cross section can be around 10 fb for the maximal values of |κ_31|² allowed under the constraint from the τ → eη data. Assuming an integrated luminosity of 500 fb⁻¹ for the eγ collisions and tagging efficiencies of 60% for a b quark and 30% for a tau lepton, this corresponds to about 10³ observed τ−bb signal events, where we multiply by a factor of two by adding both e−γ → ℓ−A → ℓ−bb and e−γ → ℓ−H → ℓ−bb. Therefore, we can naively say that non-observation of the signal would improve the upper bound on the e–τ mixing by 2–3 orders of magnitude if the backgrounds are suppressed. In FIG. 1 (left), the cross sections for a set of typical values of |κ^L_31|² and |κ^R_13|² in the MSSMRN are shown for P_e = −0.9 (dashed), P_e = +0.9 (long dashed), and P_e = 0 (dotted), where we take (|κ^L_31|², |κ^R_13|²) = (2 × 10⁻⁷, 0). The cross sections are sensitive to the polarization of the electron beam: they can be as large as 0.5 fb for P_e = −0.9, while they are around 0.03 fb for P_e = +0.9. In FIG. 1 (right), the results with (|κ^L_31|², |κ^R_13|²) = (2 × 10⁻⁷, 1 × 10⁻⁷), as in general supersymmetric models, are shown for each polarization of the incident electrons. The cross sections are a few times 1 fb and not sensitive to the polarization. Therefore, by using a polarized electron beam, we can separately measure |κ^L_31|² and |κ^R_13|² and distinguish between fundamental models with LFV. In FIG. 2, the full cross sections of e−γ → µ−A are shown for tan β = 50 and m_A = 350 GeV. With the maximal values of |κ_21|² = |κ^L_21|² + |κ^R_12|² allowed by the µ → eγ data, the cross section can be 7.3 fb, where we adopted the same escape cut as discussed above. This means that about a few times 10³ µ−bb signal events can be produced for an integrated luminosity of 500 fb⁻¹, assuming tagging efficiencies of 60% for a b quark and 100% for a muon, and using both e−γ → µ−A and e−γ → µ−H. These results imply that an eγ collider can improve the bound on the e–µ mixing by a factor of 10²–10³. The obtained sensitivity can be as good as those of the ongoing MEG and projected COMET experiments. Because of the different dependencies on the model parameters, µ → eγ can be more sensitive than LFV Higgs boson production for very high tan β (≳ 50) at fixed Higgs boson mass. We also note that rare decay processes can probe the effects of other LFV origins when the Higgs bosons are heavy. Therefore, the direct and indirect measurements of LFV processes are complementary to each other. In FIG. 2 (left), the results in the MSSMRN are shown for P_e = −0.9 (dashed), P_e = +0.9 (long dashed), and P_e = 0 (dotted), where we take (|κ^L_21|², |κ^R_12|²) = (2 × 10⁻⁷, 0). The cross sections can be as large as a few times 10⁻³ fb for P_e = −0.9 and P_e = 0, while they are around 10⁻⁴ fb for P_e = +0.9. In FIG. 2 (right), the results with (|κ^L_21|², |κ^R_12|²) = (2 × 10⁻⁷, 1 × 10⁻⁷) in general supersymmetric models are shown in a similar manner.
These processes are also clean with respect to backgrounds. For the process e−γ → τ−ϕ → τ−bb, the tau lepton decays into various hadronic and leptonic modes. The main background comes from e−γ → W−Zν, whose cross section is of the order of 10² fb. This background can be strongly suppressed by an invariant mass cut on the bb system. The background for the process e−γ → µ−ϕ → µ−bb also comes from e−γ → W−Zν → µ−bbνν, which is small enough. The signal-to-background ratios are better than O(1) before kinematic cuts and are easily improved by the invariant mass cut, so that our signals can be almost background free.
Conclusion
We have studied the Higgs boson associated LFV at an electron-photon collider. Many new physics models predict LFV Yukawa interactions. The cross section for e−γ → ℓ−ϕ (ℓ = µ, τ; ϕ = H, A) can be significant for values of the effective LFV couplings allowed under the current experimental data. By measuring these processes at the ILC, the current upper bounds on the effective LFV Yukawa coupling constants are expected to be improved to a considerable extent. Such an improvement can be better than those expected at the MEG and COMET experiments for the e-µ-ϕ vertices, and at LHCb and SuperKEKB for the e-τ-ϕ vertices. Moreover, the chirality of the LFV Higgs coupling can be separately measured via these processes by using a polarized electron beam. An electron-photon collider can thus be a useful tool for measuring Higgs boson associated LFV couplings.
"year": 2010,
"sha1": "45488b50ecec9818b115cecf9a49ac4f3696c6bd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "48292604fb23a8cd7d680cc2626125365f1a2009",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
266597300 | pes2o/s2orc | v3-fos-license | The characteristics of elderly suicidal attempters in the emergency department in Korea: a retrospective study
Background Although Korea ranks first in the suicide rate of elderly individuals, there is limited research on those who attempt suicide, with preventive measures largely based on population-based studies. We compared the demographic and clinical characteristics of elderly individuals who attempted suicide with those of younger adults who visited the emergency department after suicide attempts and identified the factors associated with lethality in the former group. Methods Individuals who visited the emergency department after a suicide attempt from April 1, 2017, to January 31, 2020, were included. Participants were classified into two groups according to age (elderly, ≥65 years; adult, 18–64 years). Among the 779 adult patients, 123 were elderly. We conducted a chi-square test to compare the demographic and clinical features between these groups and a logistic regression analysis to identify the risk factors for lethality in the elderly group. Results Most elderly participants were men, with no prior psychiatric history or suicide attempts, and had a higher prevalence of underlying medical conditions and attributed their attempts to physical illnesses. Being sober and planning suicide occurred more frequently in this group. In the elderly group, factors that increased the mortality rate were biological male sex (p<0.05), being accompanied by family members (p<0.05), and poisoning as a suicide method (p<0.01). Conclusion Suicide attempts in elderly individuals have different characteristics from those in younger adults and are associated with physical illness. Suicides in the former group are unpredictable, deliberate, and fatal. Therefore, tailored prevention and intervention strategies addressing the characteristics of those who are elderly and attempt suicide are required.
Introduction
Despite remarkable advances in modern medicine and technology that have led to a decrease in mortality rate and an increase in life expectancy every year, the number of deaths by suicide has not diminished. Korea ranked first in suicide rate among the Organization for Economic Co-operation and Development (OECD) member countries in 2003 and has been in first or second place thereafter. The average suicide rate in OECD member countries in 2019 was 11.0 per 100,000 population, and Korea ranked first with 24.6 suicides per 100,000 population, more than twice the OECD average [1]. Suicide is a tragic issue with significant social and economic costs. In 2019, the Centers for Disease Control and Prevention of the United States estimated the social cost of suicide and suicide attempts in 2020 at approximately $165 billion [2]. Regarding Korea, the Health Insurance Policy Research Institute under the National Health Insurance Service published a socioeconomic cost analysis report of the ten major causes of death in 2015. According to this report, the socioeconomic costs related to suicide were estimated at 6.448 trillion won [3]. Korea has a high suicide rate in all age groups; however, the suicide rate among the elderly population is particularly high. Korea set a record for the highest suicide rate of elderly individuals among OECD member countries from 2013 to 2020. As of 2019, the suicide rate per 100,000 people by age group in Korea was 33.7, 46.2, and 67.4 for people in their 60s, 70s, and ≥80 years old, which is 2.2, 2.8, and 3.1 times higher than the OECD averages (15.2, 16.4, and 21.5), respectively [1]. According to the statistics on the causes of death announced by Statistics Korea in 2020, 3,392 people aged 65 years or older died from suicide attempts in Korea [4]. Although the global population is aging, the Korean population is aging at a faster rate than that of other countries. Most countries, including Korea, follow the United Nations (UN) definition of elderly as those over 65 years of age. The UN defines an aging society as one in which the population aged 65 years or older accounts for 7% or more of the total population, an aged society as 14% or more, and a super-aged society as 20% or more. Korea became an aging society in 2000, and this trend has accelerated remarkably. It is predicted that 20.6% of Koreans will be 65 years or older by 2025, making Korea a super-aged society at an unprecedented speed [5]. This demographic shift has significant implications for various social and economic sectors, and preparations are crucial for future development and well-being.

Research has shown that suicide attempts are more serious and more likely to result in mortality in those who are elderly than in those who are younger [6]. In addition, it has been reported that physical illness or disability in elderly individuals is strongly associated with suicide attempts and that limited social connections are associated with suicidal ideation, non-suicidal self-harm, and suicide [7]. There are several risk factors for suicide in the elderly population, including serious psychiatric disorders, depression, and a history of suicide attempts [8]. In a Korean study, suicidal ideation was significantly higher among elderly men living alone than among those not living alone. This study also revealed that higher levels of depression, lower self-esteem, and poor economic status were associated with suicide [9]. As part of regionally tailored suicide prevention projects, local governments are implementing preventive measures targeting elderly individuals and those living alone and are continuing attempts to lower the suicide rate in the elderly population [10]. However, systematic studies and indicators of the characteristics and risk factors of suicide attempts among elderly individuals are insufficient. Most existing studies are epidemiological, investigated sociodemographic characteristics, and revealed associations with suicidal thoughts through questionnaires. Therefore, studies involving sufficient numbers of people who attempted suicide are rare. This study investigated the demographic and clinical characteristics of elderly individuals who visited the emergency department after attempting suicide. Consequently, we confirmed existing research results, identified risk factors related to suicide in the elderly population, and used them to classify risk groups for prevention.
Methods
Ethical statements: This study was approved by the Institutional Review Board (IRB) of Yeungnam University Hospital (IRB No: 2023-01-016), and the requirement for informed consent was waived due to the retrospective nature of the study.
Patients
A total of 2,011 patients visited the emergency department of Yeungnam University Hospital between April 1, 2017, and January 31, 2020, after attempting suicide. Cases in which a suicide attempt was confirmed through information provided by the patient, or in which the patient denied having attempted suicide but a guardian or rescuer provided objective information confirming such an attempt, were included in the study. The exclusion criteria were as follows: children under 18 years of age and cases in which only suicidal thoughts were reported but no suicide attempt was made. A total of 779 individuals who attempted suicide, including 656 non-elderly (18–64 years) and 123 elderly (≥65 years) individuals, were studied.
Study procedure and assessment
This study used the interview records of case managers of the "Emergency Department-Based Suicide Attempts Post-Management Project," a national suicide prevention project, and the medical records of the Departments of Psychiatry and Emergency Medicine at Yeungnam University Hospital, which was designated as a regional emergency medical center in 2019 and receives approximately 25,000 patients annually. The institution has participated in this project since 2017. Through case management, this project promotes the emotional stability of those who attempt suicide and visit the emergency department, and it prevents the recurrence of suicide attempts by linking them with necessary treatment and counseling services.
If a patient who visits the emergency department of a research institution is recorded as having made a suicide attempt in the National Emergency Department Information System, the emergency medicine and psychiatric departments and the case managers are automatically contacted. Emergency medicine doctors provide physical treatment, and the psychiatric department records psychosocial and clinical factors, including the presence of mental illness, psychiatric symptoms, suicidal ideation, and suicide plans, through interviews, and then provides psychotherapy. Case managers receive education and records-management training through the Korea Respect for Life Hope Foundation (formerly the Central Suicide Prevention Center) and evaluate the items in the suicide attempt follow-up management manual. The demographic data of patients who attempted suicide, their history of suicide attempts, coexisting diseases, medical conditions, and the clinical data necessary for this study were included in the case manager's questionnaire prepared in advance.
Statistical analysis
Data obtained from the medical and clinical records were processed using IBM SPSS ver. 21.0 (IBM Corp., Armonk, NY, USA). Statistical significance was set at a p-value of less than 0.05. Adult patients aged 19 years or older were divided into elderly (≥65 years) and non-elderly adult (hereafter, adult; <65 years) groups, and the demographic and clinical characteristics of their suicide attempts were compared. When the dependent variable was categorical, the chi-square test or Fisher exact test was used. Post-hoc analyses were conducted using the Bonferroni correction. When the dependent variable followed a normal distribution, the Student t-test was used. In addition, logistic regression analysis was performed within the elderly group to identify the independent factors influencing the lethality of suicide attempts in these patients.
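For readers who want to mirror this analysis plan outside SPSS, the sketch below reproduces the same steps (a chi-square test with a Bonferroni-corrected alpha, and univariate logistic regressions for lethality in the elderly group) in Python. The file and column names are hypothetical, not from the study's dataset.

```python
# Hypothetical mirror of the analysis plan (the study itself used IBM SPSS 21.0).
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

df = pd.read_csv("suicide_attempts.csv")                    # hypothetical file
df["group"] = np.where(df["age"] >= 65, "elderly", "adult")

# chi-square test of independence for a categorical variable, e.g. sex;
# scipy.stats.fisher_exact would be used instead for sparse 2x2 tables
table = pd.crosstab(df["group"], df["sex"])
chi2, p, dof, expected = stats.chi2_contingency(table)

# Bonferroni-corrected threshold for post-hoc pairwise comparisons
alpha_corrected = 0.05 / table.size

# univariate logistic regressions for mortality within the elderly group
eld = df[df["group"] == "elderly"]
for var in ["male", "accompanied_by_family", "poisoning"]:  # hypothetical columns
    fit = sm.Logit(eld["died"], sm.add_constant(eld[[var]])).fit(disp=0)
    or_, ci = np.exp(fit.params[var]), np.exp(fit.conf_int().loc[var])
    print(f"{var}: OR={or_:.3f}, 95% CI=({ci[0]:.3f}, {ci[1]:.3f})")
```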
Comparison of demographic characteristics between elderly and adult groups
The elderly group had a significantly higher proportion of men and a lower proportion of women than the adult group (p < 0.001). The elderly group had a lower percentage of highly educated individuals with at least a college degree, a higher percentage of individuals who were illiterate (p < 0.001), and a higher percentage who held a job (p = 0.019) than the adult group. The proportion of unmarried participants was lower and that of married participants was higher in the elderly group than in the adult group (p < 0.001). In addition, no significant differences were found between the two groups in terms of cohabitation (p = 0.997), religion (p = 0.124), health insurance (p = 0.359), or monthly household income (p = 0.880). The results were comparable in the post-hoc analysis using the Bonferroni correction (Table 1, Supplementary Table 1).
Comparison of clinical characteristics between elderly and adult groups
The elderly group had a lower rate of previous suicide attempts (p = 0.007) and a significantly lower number of previous suicide attempts than the adult group (p < 0.001). The proportion of individuals who had never received psychiatric treatment was higher, and the percentage of individuals currently receiving psychiatric medication was lower, in the elderly group than in the adult group (p = 0.018). The number of past psychiatric admissions was lower in the elderly group than in the adult group (p < 0.001). There was a significant difference in suicide awareness; however, this was attributable to the proportion of individuals whose status could not be assessed (p = 0.006). The elderly group had more recent acute and chronic diseases than the adult group (p < 0.001). There were no significant differences in physical (p = 0.295) and psychiatric (p = 0.372) treatments after past suicide attempts or in family histories of psychiatric illness (p = 0.789) and suicide attempts (p = 0.542) (Table 1, Supplementary Table 2).
Comparison of suicide-related characteristics between elderly and adult groups
Among the suicide attempt methods, poisoning was more prevalent in the elderly group than in the adult group (p < 0.001). The proportion of those choosing their house or a hospital as the place of the suicide attempt was higher in the elderly group than in the adult group (p < 0.001). The proportion of patients accompanied to the hospital by family was higher, and that of patients accompanied by friends lower, in the elderly group (p = 0.021). The elderly group had fewer suicide attempts while intoxicated (p = 0.002) and more planned suicide attempts (p < 0.001) than the adult group. In the elderly group, the rate of not asking for help before attempting suicide was higher, and that of asking for help was lower (p < 0.001). Among the events that triggered suicide attempts, the proportion of physical illness was higher, whereas that of interpersonal, psychiatric, and socioeconomic problems was lower (p < 0.001). The sincerity of suicide attempts (p < 0.001) and the transfer or discharge rates (p < 0.001) were higher in the elderly group than in the adult group. In the elderly group, fewer individuals had clear consciousness, and more were in a comatose state, than in the adult group (p = 0.003). Elderly individuals had a lower incidence of no or slight injury but a higher rate of admission or mortality (p < 0.001). There was a difference in referrals to psychiatric treatment; however, this difference was due to mental deterioration or death (p < 0.001). No significant differences were found between the two groups regarding suicide notes (p = 0.371) or joint suicide attempts with other people (p > 0.99) (Table 2, Supplementary Table 3).
Factors affecting lethality of elderly individuals who attempted suicide
In the univariate logistic regression analysis performed on the variables used in the correlation analysis, biological male sex (odds ratio [OR], 5.804; 95% confidence interval [CI], 1.248–26.984), being accompanied to the emergency department by a family member (OR, 0.064; 95% CI, 0.005–0.760), and a suicide attempt by poisoning (OR, 0.191; 95% CI, 0.058–0.633) were identified as significant risk factors for mortality (Table 3).
Discussion
This study found differences in demographic, clinical, and suicide attempt-related characteristics between elderly and adult individuals who attempted suicide. The findings support the results of previous epidemiological studies showing differences in suicidal ideation and suicide attempts between individuals who are elderly and those who are younger [11,12].
The proportion of male participants was significantly higher in the elderly group than in the adult group. Women make more suicide attempts; however, the suicide mortality rate is higher among men [13]. Considering previous reports that the rates of suicidal ideation and attempts increase with age in men who are elderly [14], assessments of suicide risk and immediate interventions for this population are particularly necessary. There were fewer college graduates and more illiterate individuals in the elderly group than in the adult group. According to the 2020 elderly survey report [11], approximately 10.6% of elderly individuals aged 65 years or older had no education; 31.7%, 23.3%, and 28.4% had graduated from elementary, middle, and high schools, respectively; and only 5.9% had a community college or higher education. This could be considered a characteristic of the elderly group that is unrelated to suicide attempts. However, existing studies have shown that suicidal ideation, hopelessness, and depression are higher in elderly people with low education, and that low education [15] in the elderly population is related to low self-efficacy [16] and low subjective quality of life [17]. Therefore, lack of education and low educational attainment, which are more prominent in the elderly group, may have contributed to the increase in suicide attempts. The rate of suicide attempts among elderly individuals living alone was high (57.72%), but there was no difference compared with that of the adult group [11]. There was no significant difference between the two groups in terms of type of medical insurance or monthly household income; however, the proportion of participants receiving medical aid was similarly high in both groups. As of 2021, 592,807 of the 1,516,525 beneficiaries of medical aid, or approximately 39%, were seniors aged 65 years or older [18]. In this study, among the elderly participants who visited the emergency department because of a suicide attempt, 74.8% were medical aid beneficiaries, which is higher than the proportion in the general elderly population. This is consistent with previous findings that socioeconomic status in the elderly population is associated with depression and suicidal ideation [19,20]. When examining psychiatric history, the elderly group had a higher proportion of individuals with no history of psychiatric treatment and lower proportions of individuals currently receiving psychopharmacological treatment and with past psychiatric hospitalizations. This appears to contrast with established studies that identify psychiatric history as an important risk factor for reattempting suicide and a major factor in increasing suicide risk and completion rates [21]. However, this could be due to negative perceptions and neglect of mental health care in the past, as well as low accessibility to such services. It should also be considered that these societal impacts may be even more pronounced among older individuals. In the elderly group, a higher proportion of individuals had no history of suicide attempts, and the number of past suicide attempts was lower than in the adult group. In the elderly population, a previous suicide attempt is associated with an even higher suicide risk [22,23]. Connecting these findings to previous studies indicating lower levels of depression, anxiety, and suicide-related scale scores in individuals who attempted suicide and are elderly than in those who are non-elderly, suicide among elderly individuals may exhibit characteristics that make it more easily overlooked and difficult to predict
[24]. Therefore, heightened attention should be paid to elderly individuals who appear to have a lower suicide risk, considering their psychiatric history, rating-scale scores, and even suicide attempt history.
In our study, elderly individuals were more likely than adults to have underlying chronic diseases. Although physical diseases commonly increase with age, it is important to pay attention to the high prevalence of depression and suicidal ideation in hospitalized patients who are elderly [25]. In older adults, both physical and mental illnesses can independently increase the risk of suicide, and multiple diseases can further increase this risk [25][26][27]. This is consistent with the higher rate of physical illness as a reason for suicide attempts in the elderly group than in the adult group. Physical discomfort or underlying diseases in older adults are mediated by feelings of depression and hopelessness, which increase the severity of suicide attempts [26,27]. Additionally, considering that elderly individuals with depression often complain of physical rather than emotional discomfort [28], their suicide risk needs to be assessed not only in psychiatry but also in other departments. According to a psychological autopsy study, nearly 50% of those aged 60 years or older who died by suicide had visited a medical institution in the month of their death, 26% in the week before death, and 7% on the day before death, but more than half of these consultations were for physical discomfort [29].
The proportion of those who chose poisoning as the suicide method was higher in the elderly group than in the adult group. According to global statistics, hanging is the most common method of suicide, and the same is true in Korea [10,30]. However, because of its high fatality rate, the probability that a person who attempts hanging survives to be treated in the emergency department is significantly lower than for poisoning. Compared with younger adults, elderly patients who self-poison present with greater severity and worse prognoses; therefore, more attention and care are needed [12].
The finding that suicide attempts in the elderly group were more deliberate is consistent with previous reports [29]. This is because, in the case of suicide in the elderly population, attempts are often driven by existing suicidal thoughts that have persisted for a long time rather than being triggered by a specific event [31,32].
The rate of asking for help immediately before the suicide attempt was significantly lower in the elderly group. Paradoxically, suicidal ideation in the elderly population tends to be chronic rather than impulsive. In one study, 49% of individuals over the age of 60 years who died by suicide had revealed suicidal intentions within the year prior to death, and 18% had overtly expressed suicidal ideation [30]. Therefore, there is a period during which intervention is possible for elderly individuals, and it is necessary to devise timely and appropriate intervention methods.
Suicide attempts by the elderly group were more genuine in intent and more medically lethal. This supports previous findings that suicide by older individuals has a high fatality rate [23,33] and is consistent with previous reports that older adults have higher suicidal intent than younger adults [34].
In the elderly group, the factors that increased lethality were (1) biological male sex, (2) being accompanied by family members, and (3) poisoning as the suicide method. Although female participants had higher rates of suicidal thoughts and attempts than their male counterparts, male mortality rates were higher in previous studies [35]; the same results were confirmed in the elderly group in our study. In 37 OECD countries, persons aged 70 years and older are more likely to die by suicide than any other age group, and the tendency toward fatal suicidal behavior prevails in men aged 75 years and older, with rates six times higher than those in women [36].

Many younger adults visited the emergency department by themselves or with friends. However, among the elderly participants, there were many cases in which they could not reach the emergency department on their own because of physical fragility after a serious suicide attempt. Many patients were transferred from other facilities and, under these circumstances, being accompanied by a family member may be related to the mortality rate.

According to 2017 data from the Korea Emergency Medical Information System, poisoning was the most common suicide attempt method, and the rate of choosing poisoning for suicide attempts increased with age [37]. Another study that evaluated suicide attempts by poisoning showed that psychiatric drugs (43.4%) were the most common substances used across all age groups, whereas pesticides (50.3%) were the most common substances used for self-poisoning among elderly individuals [38]. Self-poisoning is also associated with poorer clinical outcomes in elderly patients than in younger adult patients, as older individuals often select more lethal substances. Additionally, toxic substances may cause more serious medical complications because of preexisting diseases and aging, which are believed to increase the mortality rate in the elderly population. In one study, demographic and clinical factors such as older age, biological male sex, interpersonal stress, and a diagnostic impression of schizophrenia were associated with mortality among those who attempted suicide and were younger than 65 years. In the same study, no factors affecting the mortality of suicide attempts in elderly individuals were identified, but this may reflect the limitation that only 37 suicide attempts by older individuals were included [6].
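To make the kind of analysis behind such lethality factors concrete, the sketch below fits a logistic regression of a binary lethality outcome on the three factors named above and reports odds ratios. The variable names, coding, and data are hypothetical, invented purely for illustration; they are not the study's dataset or analysis code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300

# Hypothetical binary predictors mirroring the three reported factors
male = rng.integers(0, 2, n)          # biological male sex
with_family = rng.integers(0, 2, n)   # accompanied by family members
poisoning = rng.integers(0, 2, n)     # poisoning as the suicide method

# Simulate a lethality outcome in which all three factors raise the odds
logit = -2.0 + 0.9 * male + 0.7 * with_family + 1.1 * poisoning
lethal = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([male, with_family, poisoning]))
fit = sm.Logit(lethal, X).fit(disp=0)
print("odds ratios:", np.exp(fit.params[1:]))  # values > 1 indicate increased lethality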
The limitations of this study were as follows. First, considering a previous finding that less than 30% of those attempting suicide visit a hospital, and that the current study was conducted only on those who visited the emergency department of a single university hospital, the generalizability of the results may be limited. Second, this study included critically ill participants who attempted suicide and visited the emergency department, which limited the use of validated scales owing to time and environmental constraints. However, this study was not performed indirectly via a questionnaire survey but rather by directly assessing high-risk patients who attempted suicide, including a sufficient number of elderly participants who were compared with younger adults. In addition, this study attempted to identify the predictors of suicide mortality in the elderly group and found significant results. These findings highlight the importance of conducting large, well-designed studies to replicate and validate our results.
In conclusion, our study revealed distinct characteristics of elderly individuals who attempted suicide compared with those of younger adults who attempted suicide.Physical illness plays a significant role in suicide attempts and related life events among older adults.Suicide attempts among the elderly were more premeditated and serious, employing lethal methods such as pesticide poisoning.Moreover, these patients were less likely to receive appropriate psychiatric treatment, were hesitant to seek help, and faced higher lethality due to underlying medical conditions.These findings underscore the need for a tailored preventive strategy aimed at addressing the specific needs of the elderly population.
Table 2. Suicide-related characteristics of elderly and adult groups
Table 3. Factors affecting lethality of suicide attempts in participants who are elderly
Gravity Modulation Effects of Hydromagnetic Elastico-Viscous Fluid Flow past a Porous Plate in Slip Flow Regime
The two-dimensional hydromagnetic free convective flow of an elastico-viscous fluid (Walters liquid Model B′) with simultaneous heat and mass transfer past an infinite vertical porous plate under the influence of gravity modulation has been analysed. The generalized Navier boundary condition has been used to study the characteristics of the slip flow regime. Fluctuating temperature and concentration are considered in the neighbourhood of the surface, which undergoes periodic suction. The governing equations of fluid motion are solved analytically using a perturbation technique. Various fluid flow characteristics (velocity profile, viscous drag, etc.) are analysed graphically for various values of the flow parameters involved in the solution. Special emphasis is given to the effects of gravity modulation on both Newtonian and non-Newtonian fluids.
Introduction
The analysis of viscoelastic fluid flow is one of the important fields of fluid dynamics. The complex stress-strain relationships of viscoelastic fluid flow mechanisms are used in geophysics, chemical engineering (absorption, filtration), petroleum engineering, hydrology, soil physics, biophysics, and paper and pulp technology. The viscosity of a viscoelastic fluid signifies the physics of the energy dissipated during the flow, and its elasticity represents the energy stored during the flow. Because Walters liquid (Model B′) exhibits both viscosity and elasticity, it differs from a Newtonian fluid, which involves only energy dissipation and no elasticity.
The phenomenon of transient free convection flow from a vertical plate has been analysed by Siegel [1]. Gebhart [2] has studied the fluid motion in the presence of natural convection from vertical elements. Chung and Anderson [3] have investigated the nature of unsteady fluid flow including the effects of natural convection. Schetz and Eichhorn [4] have investigated the above unsteady problem in the vicinity of a doubly infinite vertical plate. Goldstein and Briggs [5] have studied the problem of free convection about vertical plates and circular cylinders. Two- and three-dimensional oscillatory convection in gravitationally modulated fluid layers has been investigated by Clever et al. [6,7]. A study of thermal convection in an enclosure induced simultaneously by gravity and vibration has been done by Fu and Shieh [8]. Convection phenomena in materials processing in space have been described by Ostrach [9]. Li [10] has investigated the effect of magnetic fields on low-frequency oscillating natural convection. Deka and Soundalgekar [11] have analysed the problem of free convection flow influenced by gravity modulation using the Laplace transform technique. Rajvanshi and Saini [12] have studied the free convection MHD flow past a moving vertical porous surface with gravity modulation at constant heat flux. The influence of combined heat and mass transfer and gravity modulation on unsteady flow past a porous vertical plate in the slip flow regime has been examined by Jain and Rajvanshi [13].
In this study, an analysis is carried out of the effects of gravity modulation on the free convection unsteady flow of a viscoelastic fluid past a vertical permeable plate in the slip flow regime under the action of a transverse magnetic field. The velocity field and the magnitude of the shearing stress at the plate are obtained and illustrated graphically to observe the viscoelastic effects in combination with the other flow parameters.
The constitutive equation for Walters liquid (Model B′) is
\[
\sigma^{ik} = -p\, g^{ik} + \sigma'^{ik}, \qquad \sigma'^{ik} = 2\eta_0\, e^{ik} - 2k_0\, e'^{ik},
\]
where \(\sigma^{ik}\) is the stress tensor, \(p\) is the isotropic pressure, \(g^{ik}\) is the metric tensor of a fixed coordinate system \(x^i\), \(v^i\) is the velocity vector, and the contravariant form of \(e'^{ik}\) is given by
\[
e'^{ik} = \frac{\partial e^{ik}}{\partial t} + v^m e^{ik}_{\;,m} - v^k_{\;,m} e^{im} - v^i_{\;,m} e^{mk}.
\]
It is the convected derivative of the deformation rate tensor \(e^{ik}\), defined by
\[
2 e^{ik} = v^i_{\;,k} + v^k_{\;,i}.
\]
Here \(\eta_0\) is the limiting viscosity at small rates of shear, given by
\[
\eta_0 = \int_0^\infty N(\tau)\, d\tau, \qquad k_0 = \int_0^\infty \tau\, N(\tau)\, d\tau,
\]
\(N(\tau)\) being the relaxation spectrum. This idealized model is a valid approximation of Walters liquid (Model B′) for fluids with very short memories, so that terms involving
\[
\int_0^\infty \tau^n N(\tau)\, d\tau, \qquad n \ge 2,
\]
have been neglected. Walters [14] reported that a mixture of polymethyl methacrylate and pyridine at 25 °C containing 30.5 g of polymer per litre, with density 0.98 g/mL, fits this model very closely.
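The integrals defining \(\eta_0\) and \(k_0\) are easy to check numerically. Below is a minimal sketch assuming a single-mode exponential relaxation spectrum \(N(\tau) = a\,e^{-\tau/\lambda}\); this spectrum and the parameter values are illustrative assumptions, not taken from the paper. For this choice the closed forms are \(\eta_0 = a\lambda\) and \(k_0 = a\lambda^2\).

```python
import numpy as np
from scipy.integrate import quad

# Assumed single-mode exponential relaxation spectrum (illustration only)
a, lam = 2.0, 0.05                      # amplitude and relaxation time
N = lambda tau: a * np.exp(-tau / lam)  # N(tau), the relaxation spectrum

eta0, _ = quad(N, 0.0, np.inf)                 # limiting viscosity eta_0
k0, _ = quad(lambda t: t * N(t), 0.0, np.inf)  # first moment k_0

print(eta0, a * lam)    # numerical integral vs closed form a*lam
print(k0, a * lam**2)   # numerical integral vs closed form a*lam**2
```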
Mathematical Formulation
An unsteady two-dimensional free convective flow of an electrically conducting elastico-viscous fluid past a vertical porous plate has been analysed in the presence of gravity modulation and a slip flow regime. A magnetic field of uniform strength \(B_0\) is applied in the direction normal to the plate. The induced magnetic field is neglected by assuming very small values of the magnetic Reynolds number (Crammer and Pai [15]). The electrical conductivity of the fluid is also assumed to be of smaller order of magnitude. Let the \(x\)-axis be taken along the vertical plate and the \(y\)-axis normal to the plate. Let \(T_w\) and \(C_w\) be, respectively, the temperature and the molar species concentration of the fluid at the plate, and let \(T_\infty\) and \(C_\infty\) be, respectively, the equilibrium temperature and equilibrium molar species concentration of the fluid. The geometry of the problem is shown in Figure 1. The governing equations of the fluid motion follow from these assumptions. In fact, nearly two hundred years ago, Navier [16] proposed a more general boundary condition that includes the possibility of fluid slip; it assumes that the velocity at a solid surface is proportional to the shear rate at the surface.
The boundary conditions of the problem are prescribed accordingly: Navier slip together with the fluctuating surface temperature and concentration at the plate (\(y = 0\)), and a quiescent fluid at its equilibrium temperature and concentration far from the plate (\(y \to \infty\)).
Method of Solution
The gravitational acceleration is considered to be modulated periodically about its mean value. Let us introduce nondimensional quantities for the displacement variable, the time, the frequency of oscillation, the velocity, the temperature, and the concentration, together with the Grashof number for heat transfer Gr, the Grashof number for mass transfer Gm, a magnetic parameter, a viscoelastic parameter, the Prandtl number Pr, the Schmidt number Sc, a gravity modulation parameter, and the slip parameter \(h\).
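As a concrete check of these dimensionless groups, the sketch below evaluates the standard definitions \(\mathrm{Gr} = g\beta(T_w - T_\infty)L^3/\nu^2\), \(\mathrm{Pr} = \nu/\alpha\) and \(\mathrm{Sc} = \nu/D\); all property values are illustrative assumptions (roughly water at room temperature), not the paper's parameter choices.

```python
# Illustrative property values; assumptions for the example, not taken from the paper.
g = 9.81        # gravitational acceleration, m/s^2
beta = 2.1e-4   # thermal expansion coefficient, 1/K
dT = 10.0       # plate-to-ambient temperature difference, K
L = 0.05        # characteristic length, m
nu = 1.0e-6     # kinematic viscosity, m^2/s
alpha = 1.4e-7  # thermal diffusivity, m^2/s
D = 1.0e-9      # mass diffusivity of the solute, m^2/s

Gr = g * beta * dT * L**3 / nu**2  # Grashof number (heat transfer)
Pr = nu / alpha                    # Prandtl number
Sc = nu / D                        # Schmidt number
print(f"Gr = {Gr:.3g}, Pr = {Pr:.3g}, Sc = {Sc:.3g}")
```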
Then the nondimensional forms of the governing equations of motion (10) follow, together with the boundary conditions in dimensionless form. Assuming small-amplitude oscillations (\(\varepsilon \ll 1\)) in the neighbourhood of the plate, the velocity, temperature, and concentration are each expressed in (12) as a steady part plus an \(O(\varepsilon)\) oscillatory part. Using (12) in (10), equating like powers of \(\varepsilon\), and neglecting higher powers, we obtain a hierarchy of ordinary differential equations. Equations (14) are solved using the relevant boundary conditions (15), which yields the temperature and concentration fields in closed form.

The presence of elasticity in the governing fluid motion makes (13) a third-order differential equation, but for a Newtonian fluid the viscoelastic parameter vanishes and the equation reduces to second order. Since there are insufficient boundary conditions for solving (13) directly, we use a multiparameter perturbation technique. The viscoelastic parameter \(k_1\), a dimensionless measure of the relaxation time, is very small for a viscoelastic fluid with short memory; thus, following Ray Mahapatra and Gupta [17] and Reza and Gupta [18], we consider
\[
u_0 = u_{00} + k_1 u_{01} + O(k_1^2), \qquad u_1 = u_{10} + k_1 u_{11} + O(k_1^2).
\]
Using this perturbation scheme (17) in (13) gives equations (18) for the components \(u_{00}\), \(u_{01}\), \(u_{10}\) and \(u_{11}\). The modified boundary conditions for solving these equations are Navier-slip conditions at \(y = 0\), of the form \(u_{00} = h\,u_{00}'\) together with analogous (viscoelastically corrected) conditions on \(u_{01}\), \(u_{10}\) and \(u_{11}\), supplemented by decay of the velocity far from the plate. The solutions of (18) subject to these boundary conditions then follow; the constants of the solutions are not presented here for the sake of brevity.
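As a sketch of how a zeroth-order problem of this kind can be checked numerically, the code below solves a representative steady Newtonian momentum balance, \(u'' + u' - M u = -(\mathrm{Gr}\,\theta + \mathrm{Gm}\,\phi)\) with \(\theta = e^{-\mathrm{Pr}\,y}\) and \(\phi = e^{-\mathrm{Sc}\,y}\), subject to the Navier slip condition \(u(0) = h\,u'(0)\). The form of the equation and every parameter value are assumptions modelled on standard free-convection suction flows, not the paper's exact system.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Illustrative parameter values (assumptions for the example)
Gr, Gm, M, Pr, Sc, h = 5.0, 5.0, 1.0, 0.71, 0.6, 0.4

theta = lambda y: np.exp(-Pr * y)  # zeroth-order temperature profile
phi = lambda y: np.exp(-Sc * y)    # zeroth-order concentration profile

def rhs(y, U):
    # U[0] = u, U[1] = u'; momentum balance u'' + u' - M*u = -(Gr*theta + Gm*phi)
    return np.vstack([U[1], -U[1] + M * U[0] - Gr * theta(y) - Gm * phi(y)])

def bc(Ua, Ub):
    # Navier slip at the plate, u(0) = h*u'(0); quiescence far away, u -> 0
    return np.array([Ua[0] - h * Ua[1], Ub[0]])

y = np.linspace(0.0, 10.0, 200)  # truncate y -> infinity at y = 10
sol = solve_bvp(rhs, bc, y, np.zeros((2, y.size)))
print("slip velocity u(0) =", sol.y[0, 0])
print("wall shear u'(0)   =", sol.y[1, 0])
```

In the perturbation scheme above, each higher-order correction satisfies a similar linear two-point boundary value problem, so the same numerical machinery applies order by order.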
Knowing the velocity field, the shearing stress at the plate is obtained from the velocity gradient at the wall (\(y = 0\)).

Results and Discussion

The effects of gravity modulation on the free convective flow of an elastico-viscous fluid with simultaneous heat and mass transfer past an infinite vertical porous plate under the influence of a transverse magnetic field have been analyzed. It is seen from Figure 2 that both Newtonian and non-Newtonian fluid flows accelerate asymptotically in the neighbourhood of the plate and then decline as we move away from the plate. The nonzero value of the velocity profile at \(y = 0\) represents the strength of slip at the plate. It can also be concluded that growth in viscoelasticity slows down the fluid flow. The effects of the gravity modulation parameter on the fluid motion are shown in Figure 3: as the parameter increases through the values 0, 0.2 and 0.4, the speed of the fluid flow increases. Application of the transverse magnetic field leads to the generation of a Lorentz force, whose effect is characterised by the nondimensional magnetic parameter. Figure 4 shows the impact of the magnetic parameter on the fluid flow against the displacement variable: as the magnetic parameter increases, the strength of the Lorentz force rises and, as a result, the flow is retarded. It is also noticed that, as the magnetic parameter decreases, the effect of viscoelasticity becomes more prominent. The Grashof number is defined as the ratio of buoyancy force to viscous force; its positive values identify flow past an externally cooled plate, and its negative values characterize flow past an externally heated plate. Figure 5 shows that as Gr increases, the resistance to the fluid flow diminishes and, as a consequence, the speed increases for all values of the viscoelastic parameter considered. When the flow passes an externally heated plate (Gr = -10), a back flow is noticed for both Newtonian and non-Newtonian fluids.

Knowing the velocity field, it is important from a practical point of view to know the effect of the viscoelastic parameter on the shearing stress or viscous drag. The Prandtl number characterizes the simultaneous effects of momentum and thermal diffusion in the flow. The effect of the Prandtl number on the magnitude of the shearing stress is seen in Figure 6: the magnitude of the shearing stress increases rapidly for Pr < 8 for both Newtonian and non-Newtonian fluids, but for higher values of Pr (> 8) it increases steadily. In mass transfer problems, the Schmidt number cannot be neglected, as it captures the combined effects of momentum and mass diffusion. The same behaviour observed for increasing Prandtl number is seen for the Schmidt number at various values of the viscoelastic parameter (Figure 7). The effect of the gravity modulation parameter on the viscous drag is shown in Figure 8: as the parameter increases, the magnitude of the shearing stress increases.
The rates of heat and mass transfer are not significantly affected by the viscoelasticity of the governing fluid flow.
Conclusions
From the present study, we make the following conclusions. (i) Growth in viscoelasticity slows down the speed of the fluid flow. (ii) An increase in the gravity modulation parameter accelerates the fluid flow. (iii) The effect of viscoelasticity is prominent at lower magnitudes of the magnetic parameter.
(iv) A back flow is experienced in the fluid flow past an externally heated plate.
(v) As the gravity modulation parameter increases, the magnitude of the shearing stress experienced by both Newtonian and non-Newtonian fluid flows increases.
Conflict of Interests
The author does not have any conflict of interests regarding the publication of this paper.
Acknowledgment
The author acknowledges Professor Rita Choudhury, Department of Mathematics, Gauhati University, for her encouragement throughout this work.
Child Sexual Abuse Prevention Program: Reference to the Indonesian Government
Background: Child Sexual Abuse (CSA) is a global problem widespread in many countries. Komisi Perlindungan Anak Indonesia, or the Indonesian Children Protection Commission (KPAI), recorded as many as 1,880 children who became victims of sexual abuse such as rape, fornication, sodomy and paedophilia. The Government of Indonesia has made efforts on both national and international scales, but no effective and applicable program has been implemented. Objective: The purpose of this article was to analyse the programs that had been implemented to prevent sexual violence against children. Method: This article is a literature study examining 38 articles related to programs against child abuse. The researcher searched for reference sources in the Science Direct, SagePub and Google Scholar online databases. The keywords used were Child Sex Abuse Prevention Program, Parenting Program, Parent Training, Parent Intervention, Maltreatment, Violence, and Violence Prevention. Results: For children, programs that have been implemented include C-SAPE; IGEL; Train the Trainer; BST; a program for minorities in Australia; and Cool and Safe. For parents, programs that have been applied include ACT-RSK; Triple-P; RETHINK; The Incredible Years Parents, Teachers, and Children Training Series; PACE; The Making Choices and Strong Families; The African Migrant Parenting; Strengthening Families; 123 Magic; PDEP; and FAST. Conclusion: A sexual violence prevention program for children that the Indonesian government could implement would use teaching methods based on school curricula, delivered by teachers. For parents, a program the Indonesian government could implement would use positive parenting methods that focus on preventing sexual violence against children, delivered by expert facilitators. To reach children and families with different cultural backgrounds, the Indonesian government could adapt the sexual violence prevention program for Australian minorities and The African Migrant Parenting program. Keywords: child sexual abuse, prevention program
INTRODUCTION
A child is someone under 18 (eighteen) years of age, including a child in the womb (Perubahan Atas Undang-Undang Nomor 23 Tahun 2002 Tentang Perlindungan Anak, 2014). Children's rights have been recognised in international law since 1924, when the Geneva Declaration of the Rights of the Child was first adopted by the League of Nations. Subsequent human rights instruments from the United Nations, such as the Universal Declaration of Human Rights (1948), and regional instruments, such as the American Declaration of the Rights and Duties of Man adopted in the same year, acknowledge more generally the human right to be free from violence, abuse, and exploitation (Convention on the Rights of the Child, 1989; Czerwinski et al., 2018).
Between 2011 and 2016, the Indonesian Children Protection Commission (KPAI) recorded as many as 809 children who became victims of online sexual crimes and 1,880 children who became victims of sexual abuse such as rape, fornication, sodomy and paedophilia (Komisi Perlindungan Anak Indonesia, 2016). The Ministry of Social Affairs recorded 1,956 child victims of sexual abuse in 2016, increasing to 2,117 children in 2017 (Permani, 2018).
The Indonesian Government has made prevention efforts on both national and international scales. On a national scale, these efforts include ratifying the Convention on the Rights of the Child on September 2, 1990; establishing Law No. 23 of 2002, updated as Act No. 35 of 2014, on child protection; establishing the Ministry of Women's Empowerment and Child Protection as well as regional offices for women's empowerment, child protection, population control and family planning (DP3AP2KB); cooperating with both central and regional police on child protection and with immigration on the deportation of foreigners proven to be perpetrators of paedophilia; and working with child protection institutions such as KPAI, the Children's Forum and the Children's Council. Internationally, Indonesia also collaborates with NCB-INTERPOL on the prevention of international and transnational crime in Indonesia (Melati et al., 2015; Septia, 2016; Utami, 2018).
It can be concluded that efforts to prevent sexual abuse in children have been made by the Indonesian Government; however, KPAI commissioner Putu Elvina has stated that an effective and applicable program for preventing abuse is not yet available (Dedi Hendrian, 2018). Therefore, the researcher is interested in reviewing what has been applied as preventive efforts against sexual abuse in children. Of the 38 articles found, there were 17 sexual violence prevention programs, consisting of 6 programs for children and 11 programs for parents. Of the 17 programs found, 4 were initiated by governments and 13 by non-governmental organizations (NGOs).
METHOD
The 38 articles found were categorized into two groups, namely child prevention programs and parental prevention programs. These programs were implemented in Europe, America, Australia, Africa and Asia. The results of this literature study are expected to provide a picture for the Indonesian government regarding efforts to prevent sexual violence against children.
RESULTS AND DISCUSSION
In many countries, studies on policymaking and sexual violence prevention programs for children have been conducted. The researcher divides the results of the literature study into two categories, namely prevention programs for children and prevention programs for parents. A sexual violence prevention program for children is a program that focuses on providing interventions to children so that the child is able to protect themselves. A sexual violence prevention program for parents is a program that focuses on providing interventions to parents so that parents can prevent their children from becoming victims of sexual violence.
Child preventive programs: all the preventive efforts directed at the child are based on the child being the primary victim, who will face complex, lifelong public health problems after becoming a victim of sexual violence (Müller, Röder and Fingerle, 2014; Czerwinski et al., 2018; Bustamante et al., 2019). Table 1 shows that several programs to prevent sexual violence against children have been implemented in various countries. The programs include C-SAPE, IGEL, Train the Trainer, BST, the program for Australian minorities, and Cool and Safe.
C-SAPE (The Child Sexual Abuse Prevention Education) is a child sexual abuse prevention program that incorporates sexual education into the elementary school curriculum. This program aims to teach children about sexual harassment and to provide them with skills to avoid it (National Sexual Violence Resource Center, 2011). The benefits of the C-SAPE program include increasing children's knowledge about sexual harassment and self-protection, improving children's skills in reporting and asking for help, and increasing self-confidence (del Campo Sanchez and Sanchez, 2006; Walsh and Brandon, 2012; Kim and Kang, 2017).
Germany implements the IGEL program, which aims to increase children's strength and ability to protect themselves from sexual abuse. After implementation, children were better able to protect themselves from sexual harassment (Czerwinski et al., 2018).
Early sexual education for elementary school children has been implemented in several countries. Hawaii has developed a school-based train-the-trainer program that aims to increase children's awareness of situations that carry a risk of sexual harassment (Keeping Children Safe Coalition, 2011). The program has been implemented effectively and could increase children's knowledge about body boundaries and appropriate and inappropriate touches (Baker et al., 2012; Barron and Topping, 2013).
The Body Safety Training (BST) program is a child sexual abuse prevention program developed by Dr Wurtele in 1986 and updated in 2007. The program aims to help children recognize potentially abusive situations and teaches them to say no, to resist harassers, and to report their experiences (Lucy Faithfull Foundation, 2014). The outcome of this program is that children could protect themselves from sexual harassment by recognizing potentially abusive situations, saying no, resisting the offender, and reporting the sexual harassment they experienced (Zhang et al., 2014).
School-based early sexual education is a very common preventive method for children. However, several countries that have implemented such prevention programs did not consider cultural factors in their program designs. In Australia, the implementation of such a program further marginalized minorities and fostered racism. Cultural factors therefore also need to be considered in the design and evaluation of school-based prevention programs for children (Sawrikar and Katz, 2018).
Technological advances can also be harnessed for health program innovation. An innovative web-based program called "Cool and Safe" was created to prevent child sexual abuse, targeting children of elementary school age. The program aims to prevent child sexual abuse by providing knowledge about safe behavior and appropriate and inappropriate touches. The program has been tested, and the results show that it is worth applying and has no significant anxiety-related side effects (Müller, Röder and Fingerle, 2014). Table 1 shows that there are 6 sexual violence prevention programs for children.
Preventing children from becoming victims of sexual violence is the core objective of all of them. Of these programs, 3 were carried out in 2 sessions, 1 was carried out in 3 sessions, and 2 are not described in detail in the literature. The topics presented are divided into 2 categories. The first category covers basic education, including what sexual harassment is, the types of touch, how to behave safely, and which places are risky. The second category relates to how to avoid sexual harassment, how to protect oneself from perpetrators (daring to say no, to fight back, and to report sexual abuse experiences), and how to increase sensitivity to unsafe conditions.
From the 6 sexual violence prevention programs for children found, it can be concluded that the programs target children between 3 and 13 years of age; in other words, they focus on pre-school to elementary school age children. In Indonesia, cases of sexual violence against children recorded by the Ministry of Social Affairs in 2017 reached 2,117 (Permani, 2018). The age range of victims of sexual violence in Indonesia is between 0 and 16 years (VOA, 2019).
From the 6 programs found, it can also be concluded that the program presenters differ depending on the program: if the program is curriculum-based, school teachers, counselling teachers and religious teachers can be presenters; if it is not curriculum-based, the presenter is an expert trainer (Rahmaniah, 2014). All of the available programs have been proven effective and able to increase the targets' knowledge and skills regarding what sexual harassment is and how to protect themselves from it. Barriers to implementing the programs include poorly involved parents, poorly partnered schools, cultural factors, distance, information, infrastructure and policies. Indonesia faces similar conditions. Child sexual abuse in Indonesia partly results from a lack of parental attention to children because parents are busy (VOA, 2019). Cultural background also makes it difficult to teach sexuality material to children, and this is one of the factors that makes it easy for perpetrators of sexual abuse from abroad to enter Indonesia (Irawan, 2016).
Indonesia has not yet implemented an effective and applicable program for the prevention of sexual violence against children (Dedi Hendrian, 2018). The Ministry of Women's Empowerment and Child Protection implements the Three Ends program, which aims to end violence against women and children, end human trafficking, and end economic inequality (Kementerian Pemberdayaan Perempuan dan Perlindungan Anak, 2016). Ending violence against women and children was pursued by providing information on the rights of women and children to the entire Indonesian community, operating village-level institutions, operating the women's and children's protection task forces in the regions, and ensuring massive support from stakeholders (Kementerian Pemberdayaan Perempuan dan Perlindungan Anak, 2016). Efforts to prevent sexual violence against children are briefly alluded to in the Three Ends program, but it does not yet focus specifically on preventing sexual violence against children. The efforts made by Indonesia consist of making policies in the form of laws and forming partnerships with various organizations, both governmental and private (Septia, 2016).
Efforts to prevent sexual violence against children are not made only by the government. A non-governmental organization, ECPAT Indonesia, implemented a child sexual violence prevention effort with the Smart School Online Module for Children, "Eksploitasi Seksual Anak di Ranah Online". The topics presented by expert facilitators were what sexual exploitation of children in the online realm is, who is vulnerable to being a perpetrator or victim of such exploitation, why sexual exploitation of children in the online realm can occur, and what children can do to prevent it (ECPAT Indonesia, 2018).
Prevention programs for sexual abuse in children are often focused only on children; extending programs to the parental domain is also required. Knowledge that children obtain at school cannot be applied optimally without reinforcement at home, so the parental role is required as an amplifier of preventive efforts against sexual abuse in children (Rudolph and Zimmer-Gembeck, 2018; Rudolph et al., 2018). Table 2 shows that parents can be involved in efforts to prevent violence against children: parents can provide supervision and monitoring, protect children, and help increase children's knowledge about self-protection. Parental involvement is needed in efforts to prevent sexual violence against children, and it can be advocated to the government for inclusion in prevention programs (Letourneau et al., 2017; Rudolph and Zimmer-Gembeck, 2018; Rudolph et al., 2018; Jin, Chen and Yu, 2019). Programs to prevent sexual violence against children aimed at parents include ACT-RSK; Triple-P; The Incredible Years Parents, Teachers, and Children Training Series; PACE; The Making Choices and Strong Families; The African Migrant Parenting; Strengthening Families; 123 Magic; PDEP; and FAST.
The United States adopted the ACT-RSK (ACT-Raising Safe Kids) program as an effort to reduce early childhood violence; the program aims to reduce the incidence of child abuse (Knox and Burkhart, 2011, 2014; Knox, Burkhart and Hunter, 2011).
The Triple-P (Positive Parenting Program) has also proven very effective.
The Incredible Years Parents, Teachers, and Children Training Series is a reinforcement program for parents, teachers, children, and families. The program aims to improve the social, emotional, and academic competence of parents and teachers in order to prevent children from developing behavioral problems (Webster-Stratton, 2011). The effectiveness of this program has been tested, and the result is that fewer students have behavioral problems when they are taught by parents and teachers who have received program training. This happens because of an increase in the skills of parents and teachers in childcare and classroom management (Furlong and McGilloway, 2012; Wager, Wager and Wilson, 2015).
PACE (Parenting Our Children to Excellence) is a parenting training program designed to improve parents' coping skills and self-efficacy in childcare (Audience, 2017). This program has been tested on 610 parents in Indianapolis, United States. The results show that the PACE program can significantly improve harmonious relations between parents and children, especially in families at risk of child abuse (Begle and Dumas, 2011).
The Making Choices and Strong Families is a program designed to strengthen families by increasing parenting skills and developing emotional management in children. This program is effective in promoting harmonious relations between parents and children (Conner and Fraser, 2011; Fraser et al., 2014). The African Migrant Parenting is a childcare program implemented by the Spectrum Migrant Resource Center to ensure that new immigrants and refugees in Australia can maximize their potential to care for children and to strengthen their parenting role so as to produce positive childcare. As a result, immigrants and refugees in Australia can educate their children effectively even though the cultural background of their new home differs from that of their countries of origin. After receiving this education, parents had different perspectives on physical punishment and on children's access to food (Leone, 2014).
Strengthening Families is an internationally recognized family empowerment program that has proven effective in improving children's mental health. Effectiveness studies show that this program is able to reduce anger, improve the parent-child relationship, and give parents an understanding of child care (Riesch et al., 2012; Burn et al., 2019). 123 Magic is a parental strengthening program that focuses on emotional control; its purpose is to improve parenting skills through positive parenting. The program has been implemented effectively, with parents caring for children positively and the environment around the child becoming more harmonious (Phelan, 2016).
Positive Discipline in Everyday Parenting (PDEP) is a parenting-approach program that educates and guides children toward good behavior. The program was carried out in 4 sessions on the topic of parenting. Its benefits include parents no longer using physical punishment, increased parental self-efficacy, and reduced conflict between parents and children (Durrant et al., 2014; Durrant, 2017).
Families and Schools Together (FAST) is a program created by the United Nations Office on Drugs and Crime (UNODC). FAST is a multi-family intervention aiming to strengthen parental empowerment so that parents can build good relationships with their children (Maalouf and Campello, 2014). This program has been implemented in several countries, including Turkmenistan, Kyrgyzstan, Guatemala, Nicaragua, Albania, Serbia, Montenegro, Macedonia, Bosnia and Herzegovina, and Brazil. The result of implementing the program is that parents become active in activities involving their children (McDonald and Sayger, 1998; Maalouf and Campello, 2014).

Table 2 shows that there are 11 sexual violence prevention programs for parents. The core objective of the various programs is raising children to behave well. In achieving this core goal, each program takes a similar approach, namely teaching good parenting. This is done by managing emotions, strengthening parents' coping mechanisms and self-efficacy, and empowering parents. Children with a troubled childhood background (being maltreated or becoming victims of sexual violence) have the potential to become perpetrators of sexual violence in the future. The low self-quality of perpetrators of sexual violence against children shows that the family, which is expected to provide the basis for the development of the child's personality, does not function properly, including its control function, and that the family environment does not work well (Teja, 2016).
Of the 11 sexual violence prevention programs for parents found, the length of implementation depends on the topics provided and the target audience: if the target is multisectoral, more topics are taught and the teaching time is longer. In all available programs, the subject matter is taught by expert trainers, just as in the child-focused programs that are not based on school curricula. The target of all of these programs is parents. According to data on sexual violence against children in Indonesia, most perpetrators come from families with violent parenting. Conflict in the family prevents the perpetrator from correctly identifying the roles of men and women, which is what causes perpetrators to grow into paedophiles (Handayani, 2012).
All of the programs have been proven effective because they are able to reduce the incidence of violence against children, and parents become able to manage their emotions so that the parent-child relationship becomes more harmonious. Barriers to these programs are location, time, cost, and government commitment. These obstacles are similar to those of the sexual violence prevention programs for children. Indonesia is an archipelagic country with diverse cultural backgrounds (Lestari, 2015), and government commitment is one of many barriers to policy implementation in Indonesia (VOA, 2019).
Since 2016, Indonesia has implemented a positive parenting program whose purpose is to build a warm relationship between children and parents and to stimulate child development. The material taught covers the stages of child development, effective communication, and positive discipline (Kemendikbud, 2016). Compared with the prevention programs for child sexual violence for parents that have been carried out abroad, this positive parenting program is not yet specific to sexual violence; however, its implementation includes some material on reproductive health and the early detection of deviant behavior (Kemendikbud, 2016).
A non-governmental organization, ECPAT Indonesia, also implemented a prevention effort against violence toward children with the Smart School Online Module for Family and Teacher, "Eksploitasi Seksual Anak di Ranah Online". The topics presented by expert facilitators were a general understanding of the sexual exploitation of children in the online realm and what can be done to prevent it (ECPAT Indonesia, 2018).
A review of marine phylogeography in southern Africa
The southern African marine realm is located at the transition zone between the Atlantic and Indo-Pacific biomes. Its biodiversity is particularly rich and comprises faunal and floral elements from the two major oceanic regions, as well as a large number of endemics. Within this realm, strikingly different biota occur in close geographic proximity to each other, and many of the species with distributions spanning two or more of the region's marine biogeographic provinces are divided into evolutionary units that can often only be distinguished on the basis of genetic data. In this review, we describe the state of marine phylogeography in southern Africa, that is, the study of evolutionary relationships at the species level, or amongst closely related species, in relation to the region's marine environment. We focus particularly on coastal phylogeography, where much progress has recently been made in identifying phylogeographic breaks and explaining how they originated and are maintained. We also highlight numerous shortcomings that should be addressed in the near future. These include: the limited data available for commercially important organisms, particularly offshore species; the paucity of oceanographic data for nearshore areas; a dearth of studies based on multilocus data; and the fact that studying the role of diversifying selection in speciation has been limited to physiological approaches to the exclusion of genetics. It is becoming apparent that the southern African marine realm is one of the world's most interesting environments in which to study the evolutionary processes that shape not only regional, but also global patterns of marine biodiversity.
Introduction
Phylogeography is the study of the historical and phylogenetic components of the spatial distribution of gene lineages within and amongst closely related species. 1,2 Many phylogeographic studies have focused on species of conservation concern, 3 whilst others have used the approach to investigate species complexes 4 or address questions in invasion biology. 5 In many instances, results have been interpreted in a somewhat narrative manner, linking genetic disjunctions with past climates or physical barriers and limited dispersal. More rigorous interpretations can be obtained when phylogeography is used in a comparative context; if the genetic structure of codistributed but evolutionarily independent populations is congruent, then this reveals common processes that have driven genetic divergence. 6,7 If focused on multiple taxa, phylogeographic studies can thus be a very powerful tool in the identification of locations and processes central to the origin and maintenance of biological diversity. 8,9 In a recent review of phylogeographic studies, Beheregaray 10 highlighted challenges for the Southern Hemisphere, noting that 77% of all studies were on boreal taxa, whilst biodiversity-rich developing nations are lagging in their use of this powerful method. South Africa was listed as 21st out of the 100 most productive countries in terms of publishing phylogeographic studies, with a total of 68 papers at that time. South Africa was also the 4th most productive country in the Southern Hemisphere, after Australia, Brazil and New Zealand. In light of the imbalance between the North and South, it is timely and appropriate that an assessment of the discipline in southern Africa is undertaken. Here we present a synthesis of key findings and a candid look ahead for phylogeographic research on marine organisms, which we hope can be used to identify research gaps, motivate for new studies and drive new directions, not only in regional, but also global marine biological research.
Southern Africa has a long and diverse coastline, comprising rocky and sandy shores, kelp forests, estuaries and coral reefs, yet marine phylogeography lags behind phylogeographic research on terrestrial biota. Although papers that could be considered to have a phylogeographic component were sporadically published during the 1980s 11,12 and 1990s, 13,14 a concerted effort to study the region's marine biota began less than a decade ago. 15,16 In recent years, marine phylogeography has primarily been driven by three South African research groups, based at Rhodes University, Stellenbosch University and the University of Pretoria. Each group has its own focus: the group at Rhodes University focuses primarily on coastal invertebrates, the group at Stellenbosch University on coastal and deep-water fishes, commercially exploited crustaceans and other coastal invertebrates, and that at the University of Pretoria focuses exclusively on commercially important fish species. As southern Africa is of great interest from a biogeographic point of view because of its location at the transition zone between the Atlantic Ocean and Indian Ocean biomes, it has also featured prominently in a number of key phylogeographic studies with a global focus. 17,18 In this review, we highlight several areas of research where southern African marine organisms have featured prominently, identify significant gaps in terms of both sampling design and technical aspects, and discuss how these shortcomings can be addressed in the near future.
Coastal phylogeography
Of the different fields of marine phylogeography that are being studied in southern Africa, coastal phylogeography can be considered the one about which we know most. Since 2000, 23 papers dealing with the phylogeography of coastal taxa have been published and several more are either in press or in preparation. Papers authored by South African researchers understandably dominate the literature; access to coastal sites is easy, sampling is relatively simple and cheap, and usually many samples can be obtained in a short period of time. A recent review paper examined the phylogeographic patterning of southern African coastal taxa 19 and some of the general trends identified are briefly discussed below.
Location of coastal phylogeographic breaks
Most coastal species are divided into regionally confined genetic lineages whose distributions in many cases are linked with southern Africa's marine biogeographic provinces. 20,21 Phylogeographic breaks separating such lineages have been identified in three regions (Figure 1). 19 On the south-west coast, phylogeographic breaks that coincide with the biogeographic disjunction between cool-temperate and warm-temperate biota 22 have been reported near Cape Point 23,24 and Cape Agulhas. 25,26 The region between these sites is sometimes considered a transition zone, 27 and several species have phylogeographic breaks at both sites, with distinct lineages that are endemic to this transition zone. 20,23 Phylogeographic breaks on the south-east coast, at the disjunction between warm-temperate and subtropical biota, 22 have been difficult to define because their exact locations differ considerably for different species, and, in some, there is considerable overlap of genetic lineages. 23,28 The continental shelf in this region gradually widens from north to south, deflecting the warm Agulhas Current away from the coast and limiting its influence on coastal biota (Figure 1). 29 The northernmost breaks in this region have been identified on the central Wild Coast (Transkei region) 20,30 and the southernmost breaks were reported near Algoa Bay. 24,31 The third area where phylogeographic breaks have been identified coincides approximately with the transition zone between subtropical and tropical biotas on the east coast; 32 some species have phylogeographic breaks in north-eastern South Africa near St Lucia 21,33 and others have breaks farther north in Mozambique. 34,35 An important finding is that not all species that occur in more than one marine biogeographic province exhibit genetic structure, and those that do need not have phylogeographic breaks at the same localities. Some species are not genetically structured across one or more biogeographic disjunctions, 20 and several taxa show no genetic structuring along their entire ranges. 36,37 In addition, several species with low capacity for dispersal exhibit phylogeographic breaks that do not coincide with present-day marine biogeographic disjunctions, 20,23 suggesting that in these, historical patterns are retained by limited gene flow. 38 Also, although planktonic dispersers usually do not have any phylogeographic breaks within marine biogeographic provinces, this does not necessarily imply that all are panmictic within provinces. Whilst panmixia has been identified in a highly philopatric coastal fish that disperses primarily by means of planktonic larvae, 37 significant genetic structure was found in the brown mussel, Perna perna. 39 Populations of this species residing in different bays in the warm-temperate province were not only genetically distinct from each other on the basis of differences in haplotype frequencies, but they were also distinct from populations on the open coast.

Figure 1: Southern Africa's coast is influenced by the warm, southward-flowing Agulhas Current on the south-east coast and the cold, northward-flowing Benguela Current on the west coast. The region can be divided into four major marine biogeographic provinces (cool temperate, warm temperate, subtropical and tropical), each of which has its own assemblage of species. Coastal phylogeographic breaks between provinces have been identified at three major localities that in most cases coincide with the disjunctions between the provinces: south-west coast (westernmost: Cape Point; easternmost: Cape Agulhas), south-east coast (southernmost: Algoa Bay; northernmost: Wild Coast) and northern east coast (St Lucia).
Maintenance of coastal phylogeographic breaks
Even though many of southern Africa's coastal species have high dispersal potential because of well-developed locomotory abilities and/or extended planktonic dispersal phases, phylogeographic breaks are often surprisingly abrupt. By linking oceanography with life history, it should be possible to establish the relative importance of the interacting factors that contribute to population genetic structuring and population connectivity. 40 Hypotheses explaining how distinct genetic patterns are maintained fall into two major categories: (1) genetic lineages are separated by barriers that limit dispersal and (2) regional genetic lineages are adapted to the environmental conditions characteristic of their marine biogeographic province and in many cases are unable to establish themselves in adjacent provinces.
Oceanic dispersal barriers
Proposed dispersal barriers that limit mixing of adjacent genetic lineages include upwelling cells, 17 river discharge, 14 coastal currents or eddies 30,41 and even a coastal dunefield. 20

Cold-water upwelling: Numerous studies on marine species have indicated that cold-water upwelling can represent a strong dispersal barrier. 17,42 On the South African west coast, some coastal species have gaps in their distribution across a region with strong, persistent upwelling that may extend over hundreds of kilometres (e.g. the mussel Perna perna), 28 and dispersal of marine organisms from the Indian Ocean into the Atlantic Ocean is limited. 18,43 Some marine species show high levels of differentiation on the west coast (unpublished data), whilst in other studies the same genetic lineages were identified on either side of the cold-water barrier, 28,44 suggesting that in these, divergence was either very recent or that populations on either side are connected by ongoing gene flow. Whether genetic disjunctions on the west coast are solely linked to upwelling cells, or whether local oceanographic features such as eddies retain larvae in their natal environments, has yet to be examined.
Freshwater discharge: On the south-east coast, freshwater discharge from the Mbashe River has been invoked as a dispersal barrier that prevents mixing of subtropical and temperate biota. 14 However, in many invertebrate species, the phylogeographic breaks in this region are not located near this river. 4,30 It remains to be tested whether larger rivers, such as the Tugela or the Gariep, represent dispersal barriers that limit dispersal of marine organisms along the coast.
Currents: Currents may represent dispersal barriers when water and larvae are mostly displaced offshore, away from suitable habitat in which to settle. 45 The trajectories of drifters released on the South African south and east coasts showed remarkably little overlap. 41 None of the drifters released on the south coast moved close to the east coast, and drifters released on the east coast eventually became entrained in the Agulhas Current and were moved hundreds of kilometres offshore. This suggests that large-scale regional hydrodynamics significantly reduce mixing between the temperate and subtropical biotas. However, drifters were released several kilometres offshore, so it is likely that wind-driven inshore currents facilitate some northward dispersal on the south-east coast, which would explain the presence of the temperate lineages of some coastal invertebrates as far north as the central Wild Coast (Figure 1). 26,30 Indeed, in an experiment using plastic drift cards, it was found that twice as many cards were retained in this region compared with cards released from two sites on the east coast, the majority of which were caught in the Agulhas Current. 46

Genetic methods of analysing the strength and directionality of gene flow represent a useful additional tool for studying the role of currents in dispersal. They estimate long-term trends and only incorporate information from individuals that have dispersed and recruited successfully.
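As a minimal illustration of how such population genetic signals are quantified, the sketch below computes Wright's F_ST from haplotype frequencies at two sampling sites, using the expected-heterozygosity form F_ST = (H_T - H_S)/H_T. The haplotype counts are hypothetical and do not come from any of the studies cited here.

```python
import numpy as np

def expected_heterozygosity(freqs):
    """H = 1 - sum(p_i**2) for haplotype frequencies p_i."""
    freqs = np.asarray(freqs, dtype=float)
    return 1.0 - np.sum(freqs**2)

# Hypothetical haplotype counts at two sampling sites (rows = sites)
counts = np.array([[30, 10, 2],    # e.g. a warm-temperate population
                   [5, 12, 25]])   # e.g. a subtropical population

site_freqs = counts / counts.sum(axis=1, keepdims=True)
Hs = np.mean([expected_heterozygosity(f) for f in site_freqs])   # mean within-site
Ht = expected_heterozygosity(counts.sum(axis=0) / counts.sum())  # pooled total

Fst = (Ht - Hs) / Ht
print(f"F_ST = {Fst:.3f}")  # near 0: panmixia; near 1: strong structure
```

Estimating the direction of gene flow requires model-based approaches (e.g. coalescent migration estimators), but summary statistics like this one underlie the broad-scale comparisons discussed next.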
Broad-scale patterns from the different biogeographic areas show the influence of the major current systems on dispersal, and the evidence points to the importance of the interplay of the duration of larval dispersal with local current regimes. Four major gene-flow scenarios have been identified: (1) strong northward flow on the west coast with the Benguela Current, (2) strong southward flow on the east coast with the Agulhas Current, (3) some bidirectional gene flow inshore of the Agulhas Current on the south-east coast and (4) bidirectional gene flow on the south coast. These are discussed in more detail below.
The west coast, which is dominated by the northward-flowing Benguela Current, shows the strongest signal of asymmetrical or unidirectional gene flow patterns. 24 Very little information on gene flow is available from the east coast, but the limited data available support the idea that long-distance dispersal is mostly facilitated by the southward-flowing Agulhas Current. 23 On the south and south-east coasts, the pattern of migration is not as clear. In the barehead goby, Caffrogobius caffer, gene flow was shown to be predominantly with the Agulhas Current, 36 but in another rocky shore fish, the bluntnose klipfish (Clinus cottoides), most gene flow occurred in the opposite direction. 24 In invertebrates, bidirectional gene flow was identified on the south coast, with more eastward than westward dispersal, 23 and gene flow on the south-east coast was also bidirectional, indicating that much dispersal takes place by means of nearshore currents. 30 Together, these somewhat contradictory examples illustrate why life history plays an important role in determining population genetic structuring. For example, like many other gobioid fishes, C. caffer probably has a long larval dispersal phase, 47 whereas adult fishes are confined to high-shore rock pools and probably do not disperse at all. 48 Dispersal is therefore only by larvae that make use of the Agulhas Current. In contrast, clinid fishes have extremely limited larval dispersal and it is highly unlikely that young fish are able to disperse by means of the offshore Agulhas Current; they likely rather use the inshore Agulhas counter-current for dispersal. 24 Further evidence for counter-current driven dispersal comes from the eastward range expansion of the invasive Mediterranean mussel, Mytilus galloprovincialis. 49 It is also likely that strengthening of the current during the winter months facilitates the annual sardine run in South Africa. 50

Dunefields: Dunefields seem to be an unlikely dispersal barrier for marine species, but their importance has also been documented elsewhere. 51 A possible explanation is that, in addition to coastal dunefields representing long stretches of unsuitable habitat for rocky shore and estuarine species, regions where these are located are characterised by strong, persistent onshore winds, which may limit long-shore dispersal of plankton in the surface water.
Province-specific adaptations
Although there is little doubt that dispersal barriers limit gene flow between provinces, many can be considered to be incomplete. For example, many upwelling cells affect the surface waters for only short periods of time, 52 and many wind patterns (such as the shoreward south-easterly) are seasonal. 53 Maintenance of genetic structure in the absence of strong dispersal barriers is possible only when levels of selection are high. 54 An alternative hypothesis explaining the maintenance of coastal phylogeographic breaks suggests that although many species can reach adjacent provinces, they do not establish themselves permanently, either because they are ill-adapted to local environmental conditions or because they are outcompeted by their sister taxa. Adaptation of genetic lineages to environmental conditions that differ between provinces has been documented in several recent studies. The larvae of the subtropical lineage of the mudprawn, Upogebia africana, cannot survive the colder water temperatures that are typical of the temperate province during winter (Figure 2). 30 This observation suggests that, even though they can potentially settle outside their own province during summer, 23 they are unable to establish themselves in the temperate provinces. Differences in osmoregulatory abilities of warm-temperate and subtropical lineages of the estuarine sandprawn, Callianassa kraussi, may reflect adaptation to differences in the salinities of the estuaries of each region, and therefore limit dispersal of each genetic lineage into adjacent provinces. 21 Lastly, the fact that the temperate lineage of the brown mussel, Perna perna, is less tolerant of sand inundation and high temperatures than its subtropical sister lineage may partly explain its absence from the east coast. 41
Origin of coastal phylogeographic breaks
Most studies on southern African coastal taxa that describe phylogeographic breaks limit themselves to suggesting factors that are likely to maintain them. Explaining how such patterns have arisen is proving more challenging. Molecular dating indicates that coastal phylogeographic breaks are the result of historical processes that precede the beginning of the present interglacial period. 20,55 However, such estimates are mostly based on few loci, which limits accuracy, and the markers used may not provide sufficient resolution to detect very recent divergence events (discussed below). The ages of congruent genetic disjunctions may differ considerably for different species, and genetic differentiation between sister lineages in adjacent provinces may range from differences in haplotype frequencies in recently diverged lineages 24,34 to lineages being so distinct that each can be considered to be a distinct species. 21,33 For example, divergence time estimates that were based on more than one locus indicated that in the brachyuran crown crab species complex, Hymenosoma orbiculare, a split into temperate and subtropical lineages occurred at least 16 million years ago, 4 whereas congruent regional genetic units of the clinid fish C. cottoides diverged as recently as 60 000 years ago. 24 In contrast to south-eastern Australia 56 or Indonesia, 57 there are no geological features in southern Africa that could have acted as land bridges during episodes of low sea level and that could have completely isolated populations of coastal taxa. There is consequently no compelling evidence for any geological vicariance events along the coast that could have driven simultaneous divergence in multiple species. The region's coastal morphology nonetheless changed considerably as a result of climate oscillations during the Pleistocene.
For example, during the Last Glacial Maximum (26 500 - 19 000 years ago), 58 when the sea level was about 120 m lower than it is today, 59 large areas of continental shelf were exposed, particularly south of Cape Agulhas. 60 Also, the region's sea surface temperatures cooled as a result of intensified upwelling on the west coast 61 and a reduced influence of the Agulhas Current. 62 How these changes may have affected habitat availability and the amount of gene flow along the coast is poorly understood, but the role of oceanic dispersal barriers (discussed in the previous section) in driving the evolution of regional lineages needs to be assessed in this context. The exposure of the Agulhas Bank during the Last Glacial Maximum resulted in the southern tip of Africa being about 200 km south of where it is today, and, in combination with colder water temperatures in the region during that time, this may have presented a cold-water dispersal barrier similar to that on the west coast. The Agulhas Current weakened during glacial phases and may have ceased to flow during winter, 62 suggesting that advection of larvae away from the coast would have been considerably reduced, with stronger bidirectional longshore dispersal by means of nearshore currents. This possibility suggests that the role of the Agulhas Current in limiting mixing of regional biotas may never have been substantially more important than it is today. The fact that species from the east coast can temporarily establish themselves in the eastern portion of the temperate province during the summer months 63 indicates that, even today, it represents a highly permeable barrier. The same can be said of upwelling cells and the freshwater plumes of large rivers. Also, some dispersal barriers have formed more recently than the genetic lineages they separate, 20 suggesting that they only contribute towards maintaining genetic structure that was already present. Population genetic theory suggests that even a small amount of migration between populations will prevent genetic divergence by drift, 64 which indicates that southern Africa's historical oceanic dispersal barriers may be insufficient to explain the origin of marine phylogeographic breaks.
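The quantitative force of this last point can be seen from Wright's island model, a textbook population-genetics approximation that is not taken from the studies reviewed here. At migration-drift equilibrium the fixation index is approximately

\[
F_{ST} \approx \frac{1}{1 + 4 N_e m},
\]

where N_e is the effective population size and m the proportion of migrants per generation. Even a single effective migrant per generation (N_e m = 1) caps F_ST at about 0.2, so leaky barriers alone are indeed a weak explanation for deep phylogeographic breaks.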
The association of genetic lineages with marine biogeographic provinces could point to ecological factors driving genetic divergence. Two recent studies have indicated that divergence could be driven by climate oscillations. In the first study, a range expansion from the south-east coast to the south-west coast that occurred during the previous interglacial period (~120 000 years ago) was identified in the coastal snail Nassarius kraussianus by means of coalescent-based molecular dating of mitochondrial DNA (mtDNA) sequence data (Figure 3). 31 Westward range expansions of warm-water molluscs during this period are well documented in the fossil record. 65 However, in contrast to other coastal molluscs, subsequent climatic cooling did not result in a range contraction in N. kraussianus, whose shells were used as ornaments by humans living on the south-west coast during the last glacial period. 66 This lack of range contraction suggests that the species' western populations adapted to cooler water. The species is today represented by a younger temperate lineage and a more ancient lineage that occurs in the subtropical and tropical provinces, with a phylogeographic break located near Algoa Bay (Figure 1). Congruent but much older divergence events that could be linked to range expansions during warm climatic phases, followed by adaptation and speciation during cooler phases, have also been reported in the Hymenosoma orbiculare species complex using multilocus DNA sequence data. 4 Ecological divergence scenarios linked to climate oscillations may explain why divergence times differ considerably amongst species with congruent phylogeographic breaks.
Although most species may undergo range expansions in response to shifting boundaries between marine biogeographic provinces as a result of climate oscillations, 65 adaptations to unfavourable environmental conditions during a particular range shift may only have arisen in a small fraction of the species affected, whilst the ranges of most others would have contracted.
Offshore marine phylogeography
Genetic studies of offshore populations are rare compared to those of coastal species. This difference can be ascribed to a number of factors, including the expense of obtaining samples, the lack of taxonomic expertise for some groups, and the lack of inclusion in multinational, large-scale research programmes. Most genetic research on offshore species has concerned commercially exploited fish stocks in South Africa and Namibia. The major focus regarding offshore stocks is on transboundary management between Namibian and South African fish stocks. Interestingly, there appears to be genetic structure in the deep-water hake, Merluccius paradoxus, 55,67 between Namibia and South Africa, as well as between individuals within South Africa. However, as M. paradoxus was shown to have population genetic structuring only among adult fishes and no structure for juvenile non-spawning fish, 67 the focus to date has been on understanding the structuring of adult fishes. One study has examined the distribution of the eggs and larvae of M. paradoxus and those of its shallow-water sister taxon M. capensis (which in the early stages are morphologically identical) and found that spawning depth differs significantly, but that most larvae of a certain size are found on the continental shelf at depths of about 200 m. 68 Several other studies have used molecular techniques on offshore marine species in southern Africa; for example, the lanternfish, Lampanyctodes hectoris, showed no significant genetic differentiation between South African and Namibian stocks. 69 There also appears to be no differentiation in the squid Loligo vulgaris between disparate spawning areas in South Africa. 70 In contrast, horse mackerels, Trachurus capensis, show slight differentiation between Namibia and South Africa based on allozyme loci. 11 Preliminary research using allozymes also suggested slight stock differentiation of orange roughy, Hoplostethus atlanticus, in Namibian waters. 71 There have also been a number of studies on commercially important crustaceans, in particular rock lobsters of the genera Jasus and Palinurus. These have primarily focused on understanding population genetic structuring and demographic changes of lobster species in the region. Even though lobsters have extended planktonic dispersal phases, some genetic structuring between sampling areas was recovered for the Tristan rock lobster, Jasus tristani, 72 and for the Natal deep-sea lobster, Palinurus delagoae, which exhibits shallow but significant structuring between Mozambican and South African populations. 34 In the most important commercially exploited crustacean in South Africa, the west-coast rock lobster, Jasus lalandii, genetic diversity is highest on the south-west coast and decreases towards the edge of the species' distribution. 73 In contrast, the south-coast rock lobster, Palinurus gilchristi, shows no population genetic structuring along its range. 74
Indo-West Pacific phylogeography
Many marine organisms with high dispersal potential have long been considered to have distributions incorporating the entire Indian Ocean, whereas the presence of temperate low-dispersal species in southern Africa and Australasia has traditionally been attributed to morphological stasis in Gondwanan relics that diverged as a result of the break-up of the ancient supercontinent. Both paradigms have been challenged by genetic studies. Large-scale phylogeographic studies have indicated that the populations of many marine organisms can be divided into lineages that are confined to the western Indian Ocean and lineages that are associated with the eastern Indian Ocean and/or the West Pacific. 75,76 A recent study on fish species that occur both in southern Africa and in Australia indicated that, although there is little genetic structure in pelagic species, many inshore species are highly divergent between the regions. This study suggests that a third of the nearly 1000 fish species that occur in both regions may include cryptic species. 77 Molecular dating further indicates that many of the low-dispersal species thought to be Gondwanan relics diverged long after the break-up of the supercontinent, and post-Gondwanan transoceanic dispersal is considered to be a more appropriate hypothesis explaining the observed sister-taxon relationships. 78 Colonisation of Australia from southern Africa via the west wind drift has been proposed for low-dispersal species, 78 but colonisation patterns of highly dispersive taxa are not yet fully understood. 79
Coastal phylogeography in the western Indian Ocean
Despite the importance of the western Indian Ocean as a biodiversity hotspot and several major research initiatives, the region's phylogeography remains poorly explored. The majority of phylogeographic studies dealing with marine species from the western Indian Ocean have included samples from South Africa only. 28,80 Studies that included samples from other western Indian Ocean countries have mostly compared large-scale genetic structure between the western and eastern Indian Ocean, or throughout the Indo-West Pacific. 75,81 Very few studies have focused on genetic structure and gene flow along the East African coast, or between the African mainland and the region's islands. In those that have, low sample sizes and the inclusion of just a handful of sampling sites have made inferences about the location of phylogeographic breaks and levels of gene flow throughout the region problematic, 82,83 a notable exception being a recent study of the fiddler crab, Uca annulipes. 84 An earlier attempt at summarising what few data there are suggested that genetic structure in the western Indian Ocean exists mostly at tropical locations, whereas south-eastern African marine populations lacked genetic structure, 85 a trend that was rejected by several more recent studies. 34,35 To date, most of the phylogeographic studies that have employed a fine-scale sampling approach, and that have not dealt exclusively with South African fauna, involved extensive sampling in South Africa plus some additional Mozambican sites. 33,35 Even these studies have suffered from the problem of large gaps between the South African and Mozambican sites. To study the phylogeography of the western Indian Ocean more comprehensively requires that the level of sampling that has proven so useful to detect genetic structure in temperate and subtropical South Africa be extended to the tropical regions to the north-east. However, the size of this region, and the logistical difficulties involved in reaching sampling sites, will require not only substantial funding, but also a strengthening of collaborations amongst researchers from different western Indian Ocean countries.
Antitropical distributions in the eastern Atlantic Ocean
Several temperate southern African marine animals have sister-taxon relationships with species in the temperate north-eastern Atlantic. Examples include hake (Merluccius spp.), 86 anchovies (Engraulis spp.), 87 krill (Nyctiphanes spp.), 88 Octopus vulgaris, 89 spiny lobsters (Palinurus spp.) 90 and intertidal ascidians (Pyura herdmani). 91 Although it is possible that some of these disjunct distributions are the result of recent human-mediated transport from one region to the other, molecular dating indicates that most divergence events considerably predate the historical period. This suggests that migrants must have crossed the highly significant dispersal barrier represented by warm equatorial waters to establish themselves successfully.
Genetic markers used in marine phylogeography
Mirroring a global trend in phylogeographic research, 10 the majority of studies on southern African marine organisms have used mtDNA sequence data, with the cytochrome oxidase c subunit I gene being particularly popular. The reason for this is obvious: the primers for this marker are 'universal' and can be used for a wide variety of taxa. However, there are numerous disadvantages to using mtDNA exclusively, including that, with a few exceptions, it is only inherited in the female line and is thus unsuitable for the study of hybridisation or reproductive isolation amongst different genetic lineages, and that molecular dating based on a single marker is less accurate than dating based on multilocus data. 92 Several recent studies have used nuclear sequence data such as nuclear genes, introns or ribosomal RNAs in conjunction with mtDNA, and congruent genetic patterns were recovered for the two types of genetic markers. 24,33 These studies have so far rejected the notion that in species with low dispersal potential, haphazard genetic structure can readily arise in the absence of any underlying environmental factors. 93 To researchers who have exclusively used mtDNA sequence data until now, introns are likely to become the nuclear marker of choice. Not only are similar skills required in terms of data generation and interpretation, but the information content of introns is similar to that of mtDNA. Recent software developments for phasing the two sequences superimposed onto each other in trace files generated from heterozygous individuals 94,95 have rendered tedious cloning unnecessary, and a number of universal 96 and taxon-specific 33,97,98 primer sets have been developed. In non-model organisms for which no suitable primers are available for amplifying introns, the development of anonymous nuclear markers 99 may be a suitable alternative.
Whilst DNA sequence data from mtDNA or nuclear markers have proven suitable for detecting phylogeographic breaks and identifying cryptic speciation, they are of limited use in the study of very recently evolved genetic patterns, such as those that formed during or after the Last Glacial Maximum, or those that formed during historical times. Microsatellites (also known as short tandem repeats) are excellent markers for the study of such recent evolutionary events because of their high mutation rate. Even though a number of microsatellite libraries have been developed specifically for South African marine organisms, particularly for teleosts, 100,101,102,103 we are aware of only four research papers that have actually used these markers to study marine phylogeography in southern Africa. 25,35,37,104 Other types of genetic markers with considerable potential for the elucidation of marine phylogeography have yet to be used in southern Africa. For example, amplified fragment length polymorphism (AFLP) is now firmly established as a genetic marker for terrestrial plants, 105 but we are not aware of any studies on southern African algae, seagrasses or mangroves that have used them, and their use in animals is so far limited to aquaculture. 106

Where to from here?
Although there are substantial data on some aspects of marine phylogeography in southern Africa, other aspects require further attention. Firstly, despite considerable insight into marine phylogeographic breaks gained during the past decade, two regions have not received sufficient attention.
In the tropical north-east, phylogeographic breaks in species with low dispersal potential were identified near St Lucia, 21,33 and several planktonic dispersers have phylogeographic breaks in southern Mozambique. 34,35 Because of logistical difficulties in accessing sites, there were large gaps between sampling sites in all studies focusing on this region, and it is possible that there is in fact more than one phylogeographic break. Even less research has focused on the west coast, and more intensive sampling, which also includes sites in Namibia and Angola, is needed to better understand genetic structuring in this region. 19 In terms of the nearshore biota studied, most research has so far focused on rocky shore or estuarine species, and only two studies have been on sandy shore organisms. 13,16 As sandy beaches make up about 42% of the South African shoreline and are a dominant feature particularly on the east coast, 107 more research efforts should be concentrated on understanding the genetic structuring of sandy shore organisms.
Surprisingly little phylogeographic research has been conducted on commercially important species. 37,67,89,104 Given that commercially exploited coastal teleosts have primarily been used as model taxa to position Marine Protected Areas, more phylogeographic research evaluating the current Marine Protected Areas network is warranted. In addition, the dearth of offshore genetic research demands serious attention. In the light of increased commercial, artisanal and recreational fishing, as well as possible warming of ocean currents in the region, 108 it becomes all the more important to understand not only population structuring, but also the likely evolutionary response of offshore marine species to climatic change. 55 Phylogeographic studies have also inadvertently uncovered cryptic speciation in marine species. 109 With at least 25% of southern African endemic fishes yet to be described, 110 it is likely that biodiversity inventories not only of fishes, but of all marine taxa will greatly benefit from phylogeographic research.
In addition to focusing on neglected taxa and obtaining samples from regions where little research has been conducted, considerably more effort needs to be placed on generating not only multispecies, but also multilocus genetic data sets. In addition to the increased use of nuclear sequence data and AFLPs, the development of microsatellite libraries needs to be a major focus of southern African marine phylogeography in the coming years. As a result of their high mutation rate, microsatellites will allow researchers to study genetic patterns driven by factors such as fishing pressure and climate change, as well as to obtain more reliable information on gene flow. For example, migration rates estimated using coalescent-based methods such as those implemented in MIGRATE-N 111 or IMa 112 are often interpreted as reflecting contemporary gene flow, 23,24 but they may in fact be strongly influenced by historical events, 111 particularly when they are based on comparatively slowly evolving markers such as mtDNA or introns. Microsatellites would further allow the identification of cryptic species or stocks that have evolved too recently to be detectable using DNA sequence data. Recent advances in sequencing technology (e.g. 454 pyrosequencing) are likely to make the development of microsatellite libraries and single nucleotide polymorphism libraries more accessible to southern African researchers.
Conclusion
The southern African marine realm is an exceptionally interesting environment in which to study evolutionary processes. Because it is located at the transition zone between the Atlantic Ocean and Indo-Pacific biomes, the region's biodiversity is particularly high. Although South Africa has a very active marine biological community and conventional marine research is of a high standard, research addressing fundamental evolutionary concepts is still poorly developed.
In the coming years, marine phylogeographic research needs to move from being mostly descriptive to becoming more analytical. For example, most studies have been limited to interpreting phylogeographic patterns on the basis of oceanographic data, but it would be desirable to explore how marine organisms' evolutionary histories have shaped present-day patterns, which should include testing alternative hypotheses of when and how genetic structure evolved. 113,114 Oceanographic research in southern Africa has concentrated on offshore features, with an enormous emphasis on the economically important Benguela upwelling system, 115 and to a lesser degree on the Agulhas Current. 52 Although recent initiatives have begun to address this, 116 we have a relatively poor understanding of the hydrodynamically complex nearshore region, which hampers our ability to interpret genetic data from taxa that live in shallow waters and disperse within the nearshore arena. In addition, the fact that there is strong evidence for adaptive differentiation between recently evolved sister lineages in the region's different marine provinces suggests that a greater focus needs to be placed on studying selection pressure. In addition to conducting physiological studies on evolutionary lineages that have been identified using selectively neutral genetic markers, focusing on markers that are under selection would greatly improve our understanding of the relative importance of dispersal barriers and selection gradients in driving the evolution of new species. We believe that it is time to put southern Africa 'on the map' as one of the world's most interesting regions in which to study marine phylogeography, and help afford it a similar status to that presently occupied by the Cape Floristic Kingdom, the African Great Lakes and the terrestrial fauna of Madagascar.
FIGURE 1: Southern African oceanography and location of coastal phylogeographic breaks. The region is dominated by two boundary currents: the warm, southward-flowing Agulhas Current on the south-east coast, and the cold, northward-flowing Benguela Current on the west coast. The region can be divided into four major marine biogeographic provinces (cool temperate, warm temperate, subtropical and tropical), each of which has its own assemblage of species. Coastal phylogeographic breaks between provinces have been identified at three major localities that in most cases coincide with the disjunctions between the provinces: south-west coast (westernmost: Cape Point, easternmost: Cape Agulhas), south-east coast (southernmost: Algoa Bay, northernmost: Wild Coast) and northern east coast (St Lucia).
FIGURE 3: A hypothesis explaining how phylogeographic breaks associated with the disjunctions between marine biogeographic provinces can arise in the absence of absolute dispersal barriers. The example presented here is based on molecular dating and fossil data of the coastal snail Nassarius kraussianus. 31 Saldanha and Olifants River: approximate south-western distribution limits; Blombos Cave: Last Glacial Maximum fossil site.
The findings to date indicate that the dispersal direction may differ amongst taxa. Whereas dispersal in Nyctiphanes spp. and P. herdmani was most likely from southern Africa to the north-eastern Atlantic, Merluccius spp., Engraulis spp. and Palinurus spp. most likely originated in the Northern Hemisphere. | 2018-11-17T10:01:01.005Z | 2011-05-26T00:00:00.000 | {
"year": 2011,
"sha1": "9ed22741f2523ceedde1e38f494ffa2057d7f266",
"oa_license": "CCBY",
"oa_url": "https://sajs.co.za/article/download/10034/14432",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "b9203c7637406a0609b488ee0486f9883f233dff",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
41742286 | pes2o/s2orc | v3-fos-license | PRENATAL EXCLUSION OF HERLITZ SYNDROME BY ELECTRON MICROSCOPY OF FETAL SKIN BIOPSIES
Two women had each borne a child who had died of Herlitz syndrome, i.e., epidermolysis bullosa atrophicans generalisata gravis. In subsequent pregnancies, the women requested prenatal diagnosis. Samples of skin from the two fetuses were obtained at fetoscopy in the 19th week of gestation. In both cases the disorder could be ruled out prenatally on the basis of ultrastructural demonstration of the regular presence of normal hemidesmosomes with well-developed sub-basal dense plates at the dermo-epidermal junction. The infants were subsequently born and had normal skin, the sites of fetal skin biopsies showing no scarring.
Case 1
Neither the woman nor her husband were aware of any skin disease in their families. Their first child, born 1974, is a healthy boy without any skin abnormalities. The second child, a girl born 1979, had blisters around the nails and umbilicus at birth. She also had oral mucosal lesions. New blisters appeared continuously. Herlitz syndrome was diagnosed on the basis of the electron microscopic findings of junctional blister formations and hypoplasia of hemidesmosomes. Death of the infant within 19 days of birth, from sepsis, despite massive antibiotic medication, confirmed the grave prognosis of the Herlitz syndrome. Post-mortem skin biopsies again revealed junctional cleavage.
Both parents must be regarded as heterozygous carriers of the Herlitz gene, which is transmitted in an autosomal recessive pattern. The parents were informed of the 1:4 risk of their having a child affected by the disease.
In the third pregnancy, the woman requested prenatal diagnosis. In the 19th week of gestation (menstrual weeks), fetal skin specimens were obtained at fetoscopy. A preliminary report on this case has been published (11).
Case 2
This case resembled case 1, with a female baby suffering from Herlitz syndrome and succumbing at the age of 5 months. In the 19th week of a second pregnancy, fetal skin specimens were taken at fetoscopy.
Controls
The fetuses of 5 women who were to undergo elective abortion by hysterotomy in the 16th to 21st week of gestation served as controls. Immediately prior to the hysterotomy, fetal skin biopsy specimens were taken at fetoscopy.
The examinations were done after obtaining informed consent from the women and the approval of the Ethical Committee at the University Hospital in Lund.
Fetal skin sampling
A sharp trocar and cannula (diameter 2.2 mm) were introduced percutaneously into the uterus under local anesthesia (Fig. 1). The trocar was withdrawn and, in the 2 cases of diagnosis, 20 ml of amniotic fluid was aspirated for chromosomal analysis and α-fetoprotein determination. A 1.7 mm diameter "Needlescope" (Dyonics, Inc.) was inserted through the cannula. A suitable site for skin biopsy was chosen (thigh, buttock or back). The cannula was gently placed against the fetal skin at the selected site. The fetoscope was withdrawn, a biopsy forceps (2 x 2 mm jaws) was passed down the cannula and a biopsy specimen was obtained "blind".
Electron microscopy
The samples were placed immediately in fixation solution freshly prepared according to Peracchia & Mittler (12). This consisted of 3% glutaraldehyde in 0.1 mol cacodylate buffer solution, pH 7.4, partially oxidized by adding 6 drops of 30% hydrogen peroxide to 25 ml buffered glutaraldehyde solution and stirring for 10 min before use. Fixation was performed at room temperature for 2 h in this solution and continued after changing to 3% buffered glutaraldehyde without hydrogen peroxide at room temperature. The biopsies were sent to Heidelberg and there processed as described elsewhere (1).
Controls
Of 17 biopsy specimens obtained from the control group, only 2 consisted of skin, one from each of the 2 fetuses in the 21st week of gestation (see Table 1). Of the remaining 15 biopsy specimens, 9 consisted of fetal membranes; 3, of myometrium; and 3, of trophoblast. The electron microscopy examination in the control group was thus confined to only two skin specimens.
Hemidesmosomes, fully developed or still developing, were found along the dermo-epidermal junction. This region is of special interest since blisters form within that area in most types of epidermolysis bullosa. The ultrastructure of the dermo-epidermal junction was consistent with that in postnatal skin, although not fully developed in all details.
Case 1

Altogether 8 biopsy samples were obtained from the thighs and buttocks at fetoscopy in the 19th week of gestation. Two of the specimens turned out to be fetal membranes. Thus 6 biopsies of fetal skin were available for electron microscope examination.
Completely normal conditions comparable to those in control skin were found. There was no indication of junctional separation. At the dermo-epidermal junction, the basal lamina was thus continuous. The hemidesmosomes showed their normal ultrastructure, including well-developed sub-basal dense plates. Complete hemidesmosomes were less common than after birth. This was not different from the control samples studied in parallel, indicating that the dermo-epidermal junction at the 19th to 21st week of gestation is still undergoing development. The constant demonstrability of well-developed sub-basal dense plates in all hemidesmosomes cut perpendicularly was taken as evidence of the normal, non-Herlitz condition of the fetus. The time needed for processing the biopsies and for diagnostic electron microscopy was 7 days (9-16.7.1980). After an uneventful pregnancy, a healthy male infant was born in due course (41st week of gestation). After birth, he was carefully examined; no scars or sequelae were visible at the sites of the fetal biopsies. For confirmatory studies, two biopsies were taken on day 2, one knife biopsy and one shave biopsy. Normal ultrastructural skin development for a newborn was found in both biopsies. The child is now 1 year 7 months old and is developing normally.
Case 2
Altogether 8 biopsy specimens were obtained from the thighs at fetoscopy in the 19th week of gestation. Examination under the light microscope showed that only 2 of the samples were of skin; the remaining 6 were fetal membranes, myometrium and trophoblast (see table).
No split or cleft formation was found between the epidermis and the connective tissue, and the dermo-epidermal junction showed normal ultrastructural features (Fig. 2) similar to those in case 1. The hemidesmosomes were present in the same frequency as in the control specimens, and they regularly presented a well-developed sub-basal dense plate. Thus, the Herlitz syndrome could be ruled out regarding this fetus too. The time needed for prenatal diagnosis was 7 days (5-12.5.1981).
There were no early complications following the fetoscopy, but from the 26th week of gestation the woman was hospitalized because of intermittent leakage of amniotic fluid. Labour started in the 33rd week, and a healthy boy weighing 2.02 kg was born by cesarean section. No scars were seen at the sites of fetal biopsy. The infant is now 11 months old and is developing normally.
DISCUSSION
It is much easier to demonstrate a disease when specific changes such as blisters are present than it is to exclude it with certainty. Although blister formation can occur in the Herlitz syndrome even during fetal life, most Herlitz babies are born with intact skin and develop blisters only some hours or even days after birth. Blisters appear in clusters in most epidermolyses, can occur in some regions only, or be fortuitously absent at the time and site of biopsy. Blisters can also be produced by the sampling of the fetal skin. Therefore, reliable criteria for abnormality or normality are a necessary basis for the safe exclusion of the disease in a high-risk pregnancy.
The Herlitz syndrome offers such criteria, the most important of which is the hypoplasia of hemidesmosomes, which has proved constantly demonstrable in homozygous carriers of this recessive disorder (3). This hypoplasia includes a numerical reduction of hemidesmosomes and complete absence of the sub-basal dense plates below the basal cell plasma membrane in the space of the lamina rara. Such hypoplastic hemidesmosomes represent points of minor resistance and explain the site of separation within the junction area, i.e., in the space of the lamina rara.
Hemidesmosomes appear at the dermo-epidermal junction at about the 12th week of gestation (9,10). Since they increase in both quantity and size during the subsequent weeks of fetal development (1,9), prenatal diagnosis becomes more reliable with progressing pregnancy. Therefore, it would appear most advisable to do prenatal evaluation of a fetus at risk for Herlitz syndrome or related skin disorders between the 19th and 21st week of gestation. Even then sufficient time remains for legal termination in case of an affected fetus since, in our experience, the time needed for the preparation procedure and ultrastructural analysis is about 7 days after receipt of the biopsies.
The "blind" biopsy procedure is the conventional method for obtaining fetal skin specimens in utero (I, 2, 4, 5, 7, 8, 11). This method involves a risk of erroneously collecting fetal membranes (amnion, chorion) or placental and uterine wall fragments instead of skin (see table). This can occur if the tip of the cannula inadvertently slips off the fetus and onto the fetal membranes lining the uterine wall or lining the placcnta, f r om which the biopsy specimen is then removed. This happened especial ly in case 2. The damage to the amniotic sac caused by such unsuccessful biopsy attempts was probably responsible for the intermittent leakage of amniotic fluid that occurred in this woman f r om the 26th week of gestation and for her premature delivery in the 33rd week. Using the "blind" technique in a case for prenatal diagnosis, Elias et al. (5) ob tained only one single biopsy specimen and even that turned out to be a piece of the amniotic mem brane; 4 weeks later, they repeated the fetoscopy with success. Thus, there is an urgent need for a set of instruments that makes it possible to per form the biopsy under direct vision. | 2018-04-03T05:08:20.257Z | 1983-05-01T00:00:00.000 | {
"year": 1983,
"sha1": "a6ac65e592343e0d0ea757cdbdc5a7866eaca518",
"oa_license": "CCBYNC",
"oa_url": "https://medicaljournalssweden.se/actadv/article/download/7852/11328",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "bb0baeeb2c00c381b22e54f7641f766c3c5e40f5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14622602 | pes2o/s2orc | v3-fos-license | Kaluza-Klein Theory without Extra Dimensions: Curved Clifford Space
A theory in which 16-dimensional curved Clifford space (C-space) provides a realization of Kaluza-Klein theory is investigated. No extra dimensions of spacetime are needed: "extra dimensions" are in C-space. It is shown that the covariant Dirac equation in C-space contains Yang-Mills fields of the U(1)xSU(2)xSU(3) group as parts of the generalized spin connection of the C-space.
Introduction
There is more to spacetime than usually envisaged in special or general relativity. Even at the classical level, besides the bosonic coordinates it is customary to include Grassmann odd coordinates into the game (see, e.g. [1]). They provide a description of spinning degrees of freedom. An alternative way [2]-[7] of extending spacetime is to consider the corresponding Clifford space (shortly C-space) generated by basis vectors γ_µ. A point of C-space is described by a set of multivector coordinates (s, x^µ, x^{µν}, ...) which altogether with the corresponding basis elements (1, γ_µ, γ_{µν}, ...) form a Clifford aggregate or polyvector X.
It is well known [8,9] that the elements of the right or left minimal ideals of Clifford algebra can be used to represent spinors. Therefore, a coordinate polyvector X automatically contains spinor as well as bosonic coordinates. In refs. [10,11] it was proposed to formulate string theory in terms of polyvectors, and thus avoid using a higher dimensional spacetime.

† A revised version of this paper will appear in Physics Letters B.
Spacetime can be 4-dimensional, whilst the extra degrees of freedom ("extra dimensions") necessary for consistency of string theory are in Clifford space.
In this paper we propose to go even further: 16-dimensional curved Clifford space can provide a realization of the Kaluza-Klein idea [10]. We do not need to assume that spacetime has more than four dimensions. The "extra dimensions" are in Clifford space.
We will first investigate some basic aspects of the classical general relativity-like theory in C-space. Then we pass to quantum theory and rewrite the Dirac-like equation in curved C-space and show that the corresponding generalized spin connection contains Yang-Mills fields describing fundamental interactions.
Although other authors, in a number of very illuminating and penetrating papers [15], have investigated unified models of fundamental interactions within the framework of Clifford algebra, they have not fully employed the concept of Clifford space, together with the C-space metric, affine and spin connection, and polyvector-valued wave function [6], which all enable one to formulate a Kaluza-Klein-like theory in 16-dimensional Clifford space defined over 4-dimensional spacetime. As far as I know this is a novel approach (see also refs. [12,13,10,14]).
2 Clifford space as a generalization of spacetime

Since the pioneering works by Hestenes [16], Clifford algebra has been extensively investigated (see e.g. refs. [17]-[22]). Some researchers [2]-[7] proposed to replace spacetime with a larger geometric structure based on Clifford algebra. This has led to the concept of Clifford space (shortly C-space).
Suppose we have an n-dimensional space V_n, not necessarily flat. At every point x ∈ V_n we have a flat tangent space, its basis being given in terms of n orthonormal vectors γ_a, a = 1, 2, ..., n, satisfying the Clifford algebra relations

γ_a · γ_b ≡ (1/2)(γ_a γ_b + γ_b γ_a) = η_ab ,    (1)

where η_ab is a pseudo-Euclidean metric whose signature is kept arbitrary at this stage. The basis vectors γ_a form a local basis in V_n and they generate the Clifford algebra C_Mn. The basis of the latter algebra is given by the set

{1, γ_{a_1}, γ_{a_1 a_2}, ..., γ_{a_1 a_2 ... a_n}} ,   a_1 < a_2 < ... < a_r ,   r = 1, 2, ..., n ,    (2)

where γ_{a_1 a_2 ... a_r} ≡ γ_{a_1} ∧ γ_{a_2} ∧ ... ∧ γ_{a_r} ≡ (1/r!) [γ_{a_1}, γ_{a_2}, ..., γ_{a_r}] is the wedge product, the bracket denoting the antisymmetrized product of the enclosed vectors. From a local basis {γ_a} we can switch to a coordinate basis {γ_µ} according to the relation

γ_µ = e_µ^a γ_a ,    (3)

where e_µ^a = γ_µ · γ_a is the vielbein field.
The coordinate basis vectors satisfy

γ_µ · γ_ν ≡ (1/2)(γ_µ γ_ν + γ_ν γ_µ) = g_µν ,    (4)

where g_µν is the metric of V_n. We may use the γ_µ as generators of a Clifford algebra with the basis

{γ, γ_{µ_1}, γ_{µ_1 µ_2}, ..., γ_{µ_1 ... µ_n}} ,    (5)

where γ = 1 and γ_{µ_1 ... µ_r} ≡ γ_{µ_1} ∧ γ_{µ_2} ∧ ... ∧ γ_{µ_r}. Since γ_µ and g_µν depend on position, we have different Clifford algebras C_Vn at different points x ∈ V_n. The continuous set of all those algebras over a domain of V_n forms a manifold C_Vn(x), which is usually called a Clifford bundle or Clifford manifold.
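As a concrete illustration of this counting, the following Python sketch builds all 16 basis elements of the Clifford algebra of 4-dimensional spacetime from the Dirac matrix representation, checks the defining relations, and confirms the grade decomposition 1 + 4 + 6 + 4 + 1 = 16. The Dirac representation and the signature (+,-,-,-) are conventional choices made here purely for illustration; nothing in the paper fixes them.

import itertools
from functools import reduce
import numpy as np

# Dirac representation of gamma_a, signature (+,-,-,-): gamma_0 = diag(I,-I), gamma_k = [[0, s_k], [-s_k, 0]]
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
off = np.array([[0, 1], [-1, 0]], dtype=complex)
gamma = [np.kron(sz, I2)] + [np.kron(off, s) for s in (sx, sy, sz)]

# check the Clifford relations: gamma_a gamma_b + gamma_b gamma_a = 2 eta_ab
eta = np.diag([1.0, -1.0, -1.0, -1.0])
for a in range(4):
    for b in range(4):
        anti = gamma[a] @ gamma[b] + gamma[b] @ gamma[a]
        assert np.allclose(anti, 2 * eta[a, b] * np.eye(4))

# one basis element per ordered subset of {0,1,2,3}: gamma_{a1...ar} with a1 < ... < ar
basis = {idx: reduce(np.matmul, (gamma[a] for a in idx), np.eye(4, dtype=complex))
         for r in range(5) for idx in itertools.combinations(range(4), r)}

grades = [sum(1 for idx in basis if len(idx) == r) for r in range(5)]
print(grades, sum(grades))                     # [1, 4, 6, 4, 1] 16

# the 16 elements are linearly independent: flattened, they have full rank
stack = np.array([M.flatten() for M in basis.values()])
print(np.linalg.matrix_rank(stack))            # 16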
In this paper we propose to introduce a more general Clifford manifold (see also [3,6,12]). Let us start from the flat Clifford space with basis (2). We then perform a transition to a curved Clifford space with basis {γ_M} by means of the relation

γ_M = e_M^A γ_A ,    (6)

where e_M^A is the vielbein field in C-space. The latter relation is more general than (3).
From the basis elements γ_M we can define the metric of C-space according to

G_MN = γ_M^‡ * γ_N .    (7)

Here '‡' denotes the reversion, that is, the operation which reverses the order of the generators γ_a (for example, γ_{a_1 a_2 a_3}^‡ = γ_{a_3 a_2 a_1}), whilst '*' denotes the scalar product between two Clifford numbers A and B,

A * B = ⟨A B⟩_0 ,    (8)

⟨ ⟩_0 denoting the scalar part. The quantities γ_M, e_M^A, G_MN are now assumed to depend on position in C-space, which can be parametrized by the coordinates x^M. In C-space the multivector grade is relative to a chosen basis, and a coordinate transformation in C-space in general changes the grade of γ_{µ_1 ... µ_r}. Thus even if an object appears as a 1-vector with respect to a coordinate basis γ_µ, it is a polyvector (a superposition of multivectors) with respect to the local basis γ_a.
We have thus a curved Clifford space (C-space). A point of C-space is described by the coordinate polyvector

X = x^M γ_M = s·1 + x^µ γ_µ + x^{µ_1 µ_2} γ_{µ_1 µ_2} + ... + x^{µ_1 ... µ_n} γ_{µ_1 ... µ_n} .    (9)

The tetrad field is given by the scalar product e_M^A = γ_M^‡ * γ_A. The multivector coordinates s, x^{µ_1}, x^{µ_1 µ_2}, ..., x^{µ_1 ... µ_n} provide a description of oriented r-dimensional areas. In refs. [7] a physical interpretation was given, namely that the multivector coordinates can be used to describe extended objects, such as closed branes.
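To make the reversion '‡' and the scalar product '*' concrete, the sketch below evaluates the flat C-space metric G_AB = γ_A^‡ * γ_B in the same kind of matrix representation: reversion amounts to multiplying the vector factors in reverse order, and the scalar part of a Clifford number is (1/4) of the trace of its matrix. The representation is again an illustrative assumption; the computation shows that the flat C-space metric comes out diagonal with entries +1 or -1, i.e., that flat C-space has a pseudo-Euclidean signature.

import itertools
from functools import reduce
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
off = np.array([[0, 1], [-1, 0]], dtype=complex)
gamma = [np.kron(sz, I2)] + [np.kron(off, s) for s in (sx, sy, sz)]

subsets = [idx for r in range(5) for idx in itertools.combinations(range(4), r)]

def prod(idx):
    # gamma_{a1...ar} as an ordered matrix product (equals the wedge product for distinct indices)
    return reduce(np.matmul, (gamma[a] for a in idx), np.eye(4, dtype=complex))

def scalar(M):
    # scalar part <M>_0 of a Clifford number in this representation
    return np.trace(M) / 4

def rev(idx):
    # reversion reverses the order of the vector factors: gamma_{a1 a2 a3} -> gamma_{a3 a2 a1}
    return prod(idx[::-1])

# flat C-space metric G_AB = <gamma_A reversed times gamma_B>_0
G = np.array([[scalar(rev(A) @ prod(B)) for B in subsets] for A in subsets]).real

print(np.allclose(G, np.diag(np.diag(G))))     # True: G_AB is diagonal in flat C-space
print(np.diag(G).astype(int))                  # sixteen entries, each +1 or -1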
On the realization of Kaluza-Klein theory in curved Clifford space
The basic idea of Kaluza-Klein theory is that spacetime has more than four dimensions.
The extra dimensions of curved spacetime manifest as gauge fields describing the fundamental interactions. Instead of introducing extra dimensions, we can investigate a theory which starts from 4-dimensional spacetime and then generalize it to curved Clifford space.
Let us first consider the equation of geodesic in curved C-space. We can envisage that physical objects are described in terms of x^M = (s, x^µ, x^{µν}, ...). The first straightforward possibility is to introduce a single parameter τ and consider a mapping

τ → X^M = X^M(τ) ,    (10)

where the X^M(τ) are 16 embedding functions that describe a worldline in C-space. From the point of view of C-space, X^M(τ) describe a worldline of a "point particle": at every value of τ we have a point in C-space. But from the perspective of the underlying 4-dimensional spacetime, X^M(τ) describe an extended object, sampled by the center of mass coordinates X^µ(τ) and the coordinates X^{µ_1 µ_2}(τ), ..., X^{µ_1 µ_2 µ_3 µ_4}(τ). They are a generalization of the center of mass coordinates in the sense that they provide information about the object's 2-vector, 3-vector, and 4-vector extension and orientation. 1
The dynamics of such an object is determined by the action

I[X^M] = κ ∫ dτ (G_JK Ẋ^J Ẋ^K)^{1/2} ,    (11)

with κ a constant. Here Ẋ^J ≡ dX^J/dτ is the derivative with respect to an arbitrary monotonically increasing parameter τ. Varying (11) and choosing τ to be an affine parameter, one obtains the equation of geodesic

Ẍ^M + Γ^M_{JK} Ẋ^J Ẋ^K = 0 ,    (12)

where Γ^M_{JK} is the connection, defined according to 2

∂_M γ_N = Γ^K_{MN} γ_K .    (13)

The above relation is a generalization [12] of the well known relation ∂_µ γ_ν = Γ^λ_{µν} γ_λ [16].
When the derivative ∂_M acts on a polyvector A = A^N γ_N we obtain the covariant derivative D_M acting on the components A^N:

∂_M A = (∂_M A^N) γ_N + A^N ∂_M γ_N = (D_M A^N) γ_N ,   D_M A^N = ∂_M A^N + Γ^N_{MK} A^K .    (14)

Here the A^N are scalar components of A, and ∂_M A^N is just the ordinary partial derivative with respect to X^M:

∂_M A^N = ∂A^N/∂X^M .    (15)

The derivative ∂_M behaves as a partial derivative when acting on scalars, and it defines a connection when acting on a basis {γ_M}. It has turned out very practical 3 to use the easily writable symbol ∂_M which, when acting on a polyvector, cannot be confused with a partial derivative.
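The geodesic equation (12) can be integrated numerically once the metric is known. The following sketch does this for the simplest curved toy case, a 2-sphere instead of the 16-dimensional C-space, computing the connection from the metric by central finite differences and checking that the equator comes out as a geodesic. The metric, step sizes and integrator are illustrative stand-ins, not anything prescribed by the paper.

import numpy as np

def metric(x):
    # toy example: unit 2-sphere in coordinates (theta, phi), g = diag(1, sin^2 theta)
    return np.array([[1.0, 0.0], [0.0, np.sin(x[0]) ** 2]])

def christoffel(x, h=1e-6):
    # Gamma^m_jk = (1/2) g^{ml} (d_j g_{lk} + d_k g_{lj} - d_l g_{jk}), derivatives by central differences
    ginv = np.linalg.inv(metric(x))
    dg = np.zeros((2, 2, 2))                      # dg[l, a, b] = d_l g_ab
    for l in range(2):
        e = np.zeros(2)
        e[l] = h
        dg[l] = (metric(x + e) - metric(x - e)) / (2 * h)
    T = np.einsum('jlk->ljk', dg) + np.transpose(dg, (1, 2, 0)) - dg
    return 0.5 * np.einsum('ml,ljk->mjk', ginv, T)

def accel(x, v):
    # geodesic equation: d^2 x^m / dtau^2 = -Gamma^m_jk v^j v^k
    return -np.einsum('mjk,j,k->m', christoffel(x), v, v)

x = np.array([np.pi / 2, 0.0])                    # start on the equator ...
v = np.array([0.0, 1.0])                          # ... heading along it
dt = 0.01
for _ in range(1000):
    v = v + dt * accel(x, v)                      # simple Euler step, enough for the illustration
    x = x + dt * v
print(abs(x[0] - np.pi / 2) < 1e-6)               # True: the equator is a geodesic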
When inspected from the 4-dimensional spacetime, the equation of geodesic (12) contains, besides the usual gravitation, also other interactions. They are encoded in the metric components G_MN of C-space. Gravity is related to the components G_µν, µ, ν = 0, 1, 2, 3, while gauge fields are related to the components G_µM̄, where the index M̄ assumes the 12 possible values that remain after excluding the four values M = ν = 0, 1, 2, 3. In addition, there are also interactions due to the components G_M̄N̄, but they do not have the property of the ordinary Yang-Mills fields.
If we now consider the known fundamental interactions of the standard model, we see that besides gravity we have 1 photon described by the abelian gauge field A_µ, 3 weak gauge bosons described by the gauge fields W^a_µ, a = 1, 2, 3, and 8 gluons described by the gauge fields A^c_µ, c = 1, 2, ..., 8. Altogether there are 12 gauge fields.
Interestingly, the number of mixed components of the C-space metric tensor G_MN coincides with the number of gauge fields in the standard model. 4 For fixed µ, there are 12 mixed components G_µM̄ and 12 gauge fields A_µ, W^a_µ, A^c_µ. This coincidence is fascinating and it may indicate that the known interactions are incorporated in curved Clifford space.
A good feature of C-space is the following: those extra degrees of freedom are in principle not hidden from our direct observation, therefore we do not need to compactify such an "internal" space. Let Φ(X) be a polyvector-valued field over the coordinate polyvector field X = x^M γ_M:

Φ = φ^A γ_A ,    (16)

where γ_A, A = 1, 2, ..., 16, is a local (flat) basis of C-space (see eq. (2)) and the φ^A are the projections (components) of Φ onto the basis {γ_A}. We will suppose that in general the φ^A are complex-valued scalar quantities.
Instead of the basis {γ_A} one can consider another basis, which is obtained after multiplying γ_A by 4 independent primitive idempotents [9] of the form

P_i = (1/4)(1 + a_i u + b_i v + c_i u v) ,   i = 1, 2, 3, 4 ,

where u and v are two suitable commuting basis elements of the algebra and a_i, b_i, c_i are complex numbers chosen so that P_i^2 = P_i. For an explicit and systematic construction see [9,23].
By means of the P_i we can form minimal ideals of the Clifford algebra. A basis of a left (right) minimal ideal is obtained by taking one of the P_i and multiplying it from the left (right) by all 16 elements γ_A of the algebra:

I^L_i = {γ_A P_i} ,   I^R_i = {P_i γ_A} .

Here I^L_i and I^R_i, i = 1, 2, 3, 4, are four independent minimal left and right ideals, respectively. For a fixed i there are 16 elements γ_A P_i, but only 4 amongst them are different; the remaining elements are just repetitions of those 4 different elements.
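The ideal structure just described can be checked numerically. The sketch below builds one primitive idempotent of the Dirac algebra, P = (1/4)(1 + γ_0)(1 + iγ_1γ_2), a standard textbook choice that merely stands in for the P_i above, verifies P^2 = P, and confirms that the 16 products γ_A P span only a 4-dimensional space, i.e., that a left ideal carries a 4-component spinor.

import itertools
from functools import reduce
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
off = np.array([[0, 1], [-1, 0]], dtype=complex)
g = [np.kron(sz, I2)] + [np.kron(off, s) for s in (sx, sy, sz)]
I4 = np.eye(4, dtype=complex)

# a primitive idempotent built from two commuting square roots of unity, gamma_0 and i gamma_1 gamma_2
P = 0.25 * (I4 + g[0]) @ (I4 + 1j * g[1] @ g[2])
assert np.allclose(P @ P, P)                     # P is idempotent
print(int(round(np.trace(P).real)))              # 1: P is a rank-one projector

# left ideal: the span of { gamma_A P } over all 16 basis elements gamma_A
subsets = [idx for r in range(5) for idx in itertools.combinations(range(4), r)]
prod = lambda idx: reduce(np.matmul, (g[a] for a in idx), I4)
ideal = np.array([(prod(A) @ P).flatten() for A in subsets])
print(np.linalg.matrix_rank(ideal))              # 4: only 4 of the 16 products are independent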
Let us denote those different elements ξ_αi, α = 1, 2, 3, 4. They form a basis of the i-th left ideal. Every Clifford number can be expanded either in terms of γ_A = (1, γ_{a_1}, γ_{a_1 a_2}, γ_{a_1 a_2 a_3}, γ_{a_1 a_2 a_3 a_4}) or in terms of ξ_αi = (ξ_α1, ξ_α2, ξ_α3, ξ_α4):

Ψ = ψ^A γ_A = Σ_{α,i} ψ^{αi} ξ_{αi} = ψ^Ã ξ_Ã .    (20)

In the last step we introduced a single spinor index Ã, which runs over all 16 basis elements that span the 4 independent left minimal ideals. Explicitly, eq. (20) reads

Ψ = Σ_α (ψ^{α1} ξ_{α1} + ψ^{α2} ξ_{α2} + ψ^{α3} ξ_{α3} + ψ^{α4} ξ_{α4}) .    (21)

Eq. (20) or (21) represents a direct sum of four independent 4-component spinors, each living in a different left ideal I^L_i. In ref. [6] it was proposed 5 that the polyvector-valued wave function satisfies the Dirac equation in C-space:

γ^M ∂_M Ψ = 0 .    (22)

The derivative ∂_M is the same derivative introduced in eqs. (13) and (14). Now it acts on the object Ψ which is expanded in terms of the 16 basis elements ξ_Ã, which in turn can be written as a superposition of the basis elements γ_A of the Clifford algebra. The action of ∂_M on the spinor basis elements ξ_Ã gives the spin connection:

∂_M ξ_Ã = Γ_{M Ã}^{B̃} ξ_B̃ .    (23)

Using the expansion (21), eq. (22) becomes

γ^M (∂_M ψ^Ã + Γ_{M B̃}^{Ã} ψ^B̃) ξ_Ã = 0 .    (24)

We may now use the relations

ξ^Ã‡ * ξ_B̃ ≡ ⟨ξ^Ã‡ ξ_B̃⟩_S = δ^Ã_B̃    (25)

and

(γ^M)^Ã_B̃ = ⟨ξ^Ã‡ γ^M ξ_B̃⟩_S ,

where the operation ⟨ ⟩_S ≡ Tr ⟨ ⟩_0 takes the scalar part of the expression and then performs the trace. We normalize the ξ_Ã so that (25) is fulfilled. By means of (25) we can project eq. (24) onto the spinor basis, which yields the system of coupled equations

(γ^M)^Ã_B̃ (∂_M ψ^B̃ + Γ_{M C̃}^{B̃} ψ^C̃) = 0    (28)

for the components ψ^Ã.
Yang-Mills gauge fields as spin connection in C-space
Let us define the generators of the transformations (i.e., of local rotations in C-space) according to the commutators

Σ_AB = (1/4) [γ_A, γ_B] .

Since the commutator of two basis elements is itself a superposition of basis elements, we also have

Σ_AB = f_{AB}^C γ_C ,    (29)

where the f_{AB}^C are constants.
A generic transformation in C-space which maps a polyvector Ψ into another polyvector Ψ' is given by

Ψ' = R Ψ S ,    (30)

where

R = e^{α^{AB} Σ_AB} = e^{α^A γ_A}    (31)

and

S = e^{β^{AB} Σ_AB} = e^{β^A γ_A} .    (32)

Here α^{AB} and β^{AB}, or equivalently α^A = f_{CD}^A α^{CD} and β^A = f_{CD}^A β^{CD}, are parameters of the transformation.
In general, eq. (30) allows for a transformation which maps a basis element γ_A into a mixture of basis elements. In particular, we have the following three interesting cases:

(i) β^{AB} = -α^{AB}, i.e., S = R^{-1}, so that Ψ' = R Ψ R^{-1}. This is the transformation which preserves the structure of the Clifford algebra, i.e., it maps the basis elements γ_A into other basis elements γ_A' = R γ_A R^{-1} of the same Clifford algebra.
(ii) α^{AB} ≠ 0, β^{AB} = 0, so that Ψ' = R Ψ. This is the transformation which maps a basis spinor ξ_αi into another basis spinor ξ'_αi = R ξ_αi belonging to the same left ideal.

(iii) α^{AB} = 0, β^{AB} ≠ 0, so that Ψ' = Ψ S. This is the transformation that maps right ideals into right ideals.

In general, for the transformation (30) we have

ψ'^Ã ξ_Ã = R (ψ^B̃ ξ_B̃) S ,    (37)

that is,

ψ'^Ã = U^Ã_B̃ ψ^B̃ .    (38)

This transformation, in general, mixes right and left ideals. Eq. (38) can be considered as a matrix equation in the space spanned by the generalized spinor indices Ã, B̃:

ψ' = U ψ ,    (39)

where U is a 16 × 16 matrix, whilst ψ and ψ' are columns with 16 elements. From (37), (38) it follows that U = R̂ ⊗ Ŝ^T, where R̂ and Ŝ are the 4 × 4 matrices representing the Clifford numbers R and S. That is, U is the direct product of R̂ and the transpose Ŝ^T of Ŝ, and it belongs, in general, to the group GL(4,C) × GL(4,C). The group is local, because the basis elements γ_A entering the definition (32) depend on position X in C-space according to the relation analogous to (13), and also the group parameters α^A, β^A in general depend on X.
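The statement that U equals the direct product R̂ ⊗ Ŝ^T can be verified with the standard vectorization identity vec(R Ψ S) = (R ⊗ S^T) vec(Ψ), which holds for row-major flattening of the 4 × 4 component matrix into a 16-component column. The sketch below checks it on random complex matrices; these simply stand in for the representations R̂ and Ŝ and are not restricted to any particular subgroup.

import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
S = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Psi = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

U = np.kron(R, S.T)                  # 16 x 16 direct product of R and the transpose of S
lhs = (R @ Psi @ S).flatten()        # the 16 components of Psi' = R Psi S (row-major)
rhs = U @ Psi.flatten()              # U acting on the 16 components of Psi
print(np.allclose(lhs, rhs))         # True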
We now require that the C-space Dirac equation is invariant under the transformations (30), (39). After using eq. (23) we then find 6 the transformation law for the generalized spin connection (i.e., the connection in C-space), which in matrix notation 7 reads

Γ'_M = U Γ_M U^{-1} + U ∂_M U^{-1} .

We see that Γ_M transforms as a non-abelian gauge field. The most general gauge group here is 8 GL(4,C) × GL(4,C). As subgroups it contains, for instance, SL(4,C), as well as the group U(1) × SU(2) × SU(3) of the standard model.

7 The objects are considered as matrices in the generalized spinor indices Ã, B̃, C̃, D̃.
8 The group GL(4,C) is subjected to further restrictions resulting from the requirement that the transformations (30) should leave the quadratic form Ψ‡ * Ψ invariant. So we have Ψ'‡ * Ψ' = ⟨Ψ'‡ Ψ'⟩_S = ⟨S‡ Ψ‡ R‡ R Ψ S⟩_S = ⟨Ψ‡ Ψ⟩_S = Ψ‡ * Ψ, provided that R‡ R = 1 and S‡ S = 1. Explicitly, the quadratic form reads Ψ‡ * Ψ = ψ*^Ã ψ^B̃ z_ÃB̃, where z_ÃB̃ = ξ‡_Ã * ξ_B̃ is the spinor metric.

In the special case of free fields, the C-space Dirac equation (22) reads γ^M ∂_M Ψ = 0, or explicitly,

(γ^o ∂/∂s + γ^µ ∂_µ + γ^{µν} ∂_{µν} + γ^{µνρ} ∂_{µνρ} + γ^{µνρσ} ∂_{µνρσ}) Ψ = 0 .    (44)

A particular solution is

Ψ = u_{αi}(p_o, p_µ, p_{µν}, p_{µνρ}, p_{µνρσ}) exp[i(p_o s + p_µ x^µ + p_{µν} x^{µν} + p_{µνρ} x^{µνρ} + p_{µνρσ} x^{µνρσ})] ,    (45)
where u_{αi} satisfies

(γ^o p_o + γ^µ p_µ + γ^{µν} p_{µν} + γ^{µνρ} p_{µνρ} + γ^{µνρσ} p_{µνρσ}) u_{αi}(p_o, p_µ, p_{µν}, p_{µνρ}, p_{µνρσ}) = 0 .    (46)

A spinor ψ^{αi} incorporates, besides the linear momentum excitations, also the area and volume modes, determined by p_{µν}, p_{µνρ}, p_{µνρσ}. Those extra modes take into account the extended nature of the object. For a nice description of this latter concept on the example of the quenched minisuperspace propagator for p-branes see ref. [25].
However, in the interactive case (i.e., in curved C-space), we have the set of coupled equations (28) in which there occurs the C-space spin connection Γ_M. Using eq. (23) we can calculate the curvature according to

[∂_M, ∂_N] ξ_Ã = R_{MN Ã}^{B̃} ξ_B̃ ,    (47)

where

R_{MN Ã}^{B̃} = ∂_M Γ_{N Ã}^{B̃} - ∂_N Γ_{M Ã}^{B̃} + Γ_{M Ã}^{C̃} Γ_{N C̃}^{B̃} - Γ_{N Ã}^{C̃} Γ_{M C̃}^{B̃} .    (48)

This is the relation for the Yang-Mills field strength. From the curvature we can form invariant expressions, for instance R_{MN}^{ÃB̃} (γ^M‡ * ξ_Ã)(γ^N‡ * ξ_B̃) and R_{MN}^{ÃB̃} R^{MN}_{ÃB̃}, which can be used in the action as the kinetic term for the fields Γ_{M Ã}^{B̃}.
Using eq. (29) we can express the spin connection in terms of the generators 9 :

Γ_M = Γ_M^{AB} Σ_AB = Γ_M^A γ_A .    (49)

Inserting (49) into (48) we obtain

R_{MN}^A = ∂_M Γ_N^A - ∂_N Γ_M^A + C_{BC}^A Γ_M^B Γ_N^C ,    (50)

where the C_{BC}^A are the structure constants of the Clifford algebra:

[γ_B, γ_C] = C_{BC}^A γ_A .    (51)

The C-space Dirac equation (28) can be split according to

γ^M (∂_M + Γ_M) ψ = (γ^µ (∂_µ + Γ_µ) + γ^M̄ (∂_M̄ + Γ_M̄)) ψ = 0 ,    (52)

where M = (µ, M̄), and M̄ assumes all the values except M = µ = 0, 1, 2, 3.
The spin connection Γ_M thus contains, amongst others, the following fields:

(i) The fields A_µ^a γ_a, corresponding to the local index values A = a; these are related to the gravitational part of the connection.

(ii) The Yang-Mills fields A_µ^Ā γ_Ā, where we have split the local index according to A = (a, Ā). For Ā = o (i.e., for the scalar) the latter gauge field is just that of the U(1) group.
We see that the C-space spin connection contains all physically interesting fields, including the antisymmetric gauge fields which occur in string and brane theories.
Conclusion
The theory that we pursue here 10 seems to be a promising candidate for the unification of the fundamental interactions. Such a fresh approach to unification, which takes into account ideas from various fashionable theories, e.g., Kaluza-Klein theory, Clifford algebra, and string and brane theory (branes sampled by Clifford numbers), is in my opinion very promising and deserves further, more detailed investigation. (10 See also references [12,13,10,14].) | 2014-10-01T00:00:00.000Z | 2004-12-21T00:00:00.000 | {
"year": 2004,
"sha1": "3f9fbe255c5d7486ede9af992c449a417789af12",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/0412255",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2b20e0172bb3ec79e2858e26676b4a524515070a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
39528618 | pes2o/s2orc | v3-fos-license | The application of high-field magnetic resonance perfusion imaging in the diagnosis of pancreatic cancer
Abstract Pancreatic cancer is the fourth leading cause of cancer death in the world. It is a disease of insidious progression and high lethality. The present study aimed to investigate the diagnostic value of high-field magnetic resonance (MR) perfusion imaging in pancreatic cancer. Thirty-three patients with suspected pancreatic cancer were recruited in our study and underwent routine MR imaging. When compared with para-tumoral and normal tissue, the pancreatic lesions showed significantly lower slope, peak enhancement (PE), and signal enhancement ratio (SER), as well as higher time to peak (TTP). Para-tumoral tissue was found to have significantly lower slope and PE and slightly higher TTP than normal tissue. MR perfusion imaging displays hemodynamic alterations in both pancreatic cancer and the surrounding pancreatic tissue, and provides an indirect assessment of tumor vascularity. In conclusion, high-field MR perfusion imaging has important clinical significance in the early diagnosis of pancreatic cancer.
Introduction
Pancreatic cancer (PC) is the fourth leading cause of cancer death in the world. It is a disease of insidious progression and high lethality, with a 5-year survival rate of only 6%. [1] PC has remained challenging to treat, with few patients eligible for resection and median survivals of 6 to 12 months for those with metastatic disease, despite the use of multiagent chemotherapy. [2,3] It typically spreads rapidly and is seldom detected at an early stage because of its insidious onset. As a result, most patients, when diagnosed, are at advanced stages, and complete surgical resection is usually precluded. [4,5] The poor prognosis of patients with PC is attributed to the lack of effective means of early diagnosis; only 5% to 10% of patients are candidates for potentially curative resection at the time of diagnosis. [6] From this perspective, a thorough understanding of this lethal disease, targeting early detection, is required. Intra-tumor hemodynamics, or tumor perfusion, provides useful information for understanding the pathological background of cancers. [7] In particular, high-field magnetic resonance perfusion imaging, or perfusion-weighted imaging (PWI), is a noninvasive method recently introduced to assess intra-tumor hemodynamic changes. Thus, in this study, we used high-field magnetic resonance perfusion imaging to evaluate the hemodynamic alterations in pancreatic cancer. We aimed to evaluate the microscopic pathology changes of pancreatic cancer and to investigate the diagnostic value of perfusion imaging in patients with pancreatic cancer.
Patient demographics
Between February 2011 and September 2012, 33 patients with suspected pancreatic cancer (19 males and 14 females; age range: 41-76 years; median age: 56.2 years) in our hospital were retrospectively studied. All these patients underwent 3.0 Tesla MR perfusion imaging as part of their MR scan protocol. These patients presented with upper abdominal pain or abdominal discomfort. Among them, 17 patients experienced jaundice, 8 patients had left lower back pain, and all patients had certain degree of weight loss. Twenty-four lesions were located in the head of pancreas, and 9 lesions were in the body or tail of pancreas. The diagnosis of pancreatic cancer was confirmed by surgery and postoperative pathology in 19 patients. Clinical manifestations, elevated tumor biomarkers and imaging findings were considered to make the diagnosis in the remaining 14 patients. Twelve patients had liver metastasis at the time of presentation. The sixth edition of the tumor-node-metastasis (TNM) classification of the International Union against Cancer for pancreatic cancer (2009 version) was used to classify these patients: stage I (2 patients), stage II (11 patients), stage III (15 patients), and stage IV (5 patients). [8] The study was approved by the Research Ethics Committee of Henan University. Informed consent was obtained from all patients.
MR scanner and pulse sequences
Magnetic resonance imaging (MRI) was performed using a 3.0 Tesla superconducting MRI scanner (MAGNETOM Verio, Siemens Healthcare, Erlangen, Germany) with a phased-array body coil. No patient had MRI contraindications such as cardiac pacemakers or ferromagnetic surgical implants. Fasting for 6 to 8 hours was required in all patients, and all ferromagnetic items were removed before the examination. Patients were also instructed how to breathe and hold their breath in order to cooperate during the examination. With the patient in the supine position, the scan range extended from the upper border of the diaphragm to the lower border of the kidneys. Conventional transverse and coronal scans of the upper abdomen were performed first. Imaging included precontrast transverse 3D T1-weighted fat-suppressed volume interpolated body examination (VIBE) (TR/TE 3.92/1.39 ms), transverse T2-weighted fat-suppressed BLADE (TR/TE 3900/110 ms), diffusion-weighted imaging (b = 50 and b = 800), and a coronal T2-weighted half-Fourier acquisition single-shot fast spin-echo sequence (TR/TE 1100/90 ms). A slice thickness of 5 mm with a 1-mm gap was used. MR perfusion imaging was then performed, targeting the pancreatic lesions predetermined by the conventional scans in each patient. A 0.2 mmol/kg bolus of gadodiamide-DTPA was rapidly administered manually (at a rate of approximately 3.0 mL/s) by one investigator via a dorsal hand vein or the median cubital vein. Immediately afterwards, a 20-mL saline flush was administered at the same injection rate. Dynamic scanning started with the initiation of the contrast bolus injection, using a 2D turbo fast low-angle shot sequence (TR 347 ms; TE 2.08 ms; TI 178 ms; slice thickness 5 mm; interslice gap 1.5 mm; field of view 400 mm; flip angle 8°; matrix 192 × 192; bandwidth 900 Hz/pixel). Fifty consecutive 2.08-second acquisitions were acquired with 6 slices per scan, and a total of 300 images were obtained. During perfusion imaging, a belly band was used and thoracic breathing was recommended to reduce breathing-induced artifacts. A subsequent contrast-enhanced scan was performed with a transverse T1-weighted fat-suppressed VIBE sequence (TR/TE 3.92/1.39 ms), with breath-hold acquisitions covering the pancreas.
MR imaging acquisition and data postprocessing
Fifty images displaying the largest tumor cross-section were selected and sent to the Mean Curve software (Siemens) for postprocessing. Regions of interest (ROIs) were manually delineated in pancreatic lesions, peritumoral tissue, normal tissue, and the aortic region.
Peritumoral tissue was defined as pancreatic tissue within 5 mm of the lesion; pancreatic tissue beyond 5 mm from the lesion was defined as normal tissue. The aortic region was delineated with reference to signals from the lumen of the abdominal aorta. Care was taken to cover the largest possible region while excluding adjacent organs and large vessels. For pancreatic lesions, ROIs were placed to cover the solid components of the lesions. After ROI placement, time-intensity curves (TICs) and related intensity data were automatically created with the "curve" function key. Semiquantitative analysis of the TICs was performed using the perfusion parameters slope, peak enhancement (PE), time to peak (TTP), and signal enhancement ratio (SER), which were calculated from the signal intensity data.
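As an illustration, a minimal sketch of how these semiquantitative parameters can be derived from a TIC is given below (in Python). The paper does not spell out its exact formulas, so the definitions used here (baseline-subtracted peak enhancement, maximum wash-in upslope, and SER as peak enhancement relative to baseline) are assumptions chosen from common conventions:

import numpy as np

def tic_parameters(t, si, baseline_pts=5):
    # t: acquisition times in seconds; si: mean ROI signal intensity per frame.
    # Assumed definitions (one common convention, not necessarily the paper's):
    #   PE    = peak signal minus pre-contrast baseline
    #   TTP   = time from the first frame to the signal maximum
    #   slope = maximum upslope during wash-in (signal units per second)
    #   SER   = PE relative to baseline
    t, si = np.asarray(t, float), np.asarray(si, float)
    s0 = si[:baseline_pts].mean()              # baseline from early frames
    i_peak = int(np.argmax(si))                # frame of maximum enhancement
    pe = si[i_peak] - s0
    ttp = t[i_peak] - t[0]
    upslope = np.diff(si[: i_peak + 1]) / np.diff(t[: i_peak + 1])
    slope = float(upslope.max()) if upslope.size else 0.0
    ser = pe / s0
    return {"slope": slope, "PE": pe, "TTP": ttp, "SER": ser}

# Synthetic example: 50 frames, 2.08 s apart, as in the acquisition protocol.
t = np.arange(50) * 2.08
si = 100.0 + 80.0 * np.clip((t - 20.0) / 25.0, 0.0, 1.0)
print(tic_parameters(t, si))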
Statistical analysis
Continuous variables were expressed as mean ± SD (standard deviation) and compared using a two-tailed unpaired Student t test; categorical variables were compared using the χ² or Fisher exact test. All statistical evaluations were carried out using SPSS software (Statistical Package for the Social Sciences, version 12.0, SPSS Inc., Chicago, IL). The F test was used to compare mean slope, PE, TTP, and SER among the different tissues; if a significant difference was observed, the Student-Newman-Keuls (SNK) q test was further applied for pairwise comparisons. In addition, mean slope, PE, TTP, and SER of the pancreatic lesions were compared across clinical stages (stages I-II vs stages III-IV) with a two-independent-sample t test. A P value <.05 was considered statistically significant.
Results
Routine MR imaging revealed lesions with evident abnormal intensity in 28 patients. These lesions, without distinct borders, were located either in the head or in the body/tail of the pancreas. The solid component demonstrated slightly low signal intensity on T1-weighted images, slightly high intensity on T2-weighted images, high intensity on T2-weighted fat-suppressed images, and high intensity on diffusion-weighted imaging. In the remaining 5 patients, MR images showed diffuse enlargement of the pancreatic body and tail with low intensity. Necrosis was noted in 19 lesions, which showed patchy high-intensity signal on T2-weighted images. Regarding metastases, 12 patients had multiple liver metastases, 15 patients had lymph node metastases, and 11 patients had superior mesenteric venous or portal venous invasion. The TICs of the pancreatic lesions demonstrated gradual slow enhancement without an obvious peak. For normal tissue, the TICs showed an early rapid enhancement and washout pattern; for paratumoral tissue, a post-peak plateau or slow rise was observed after early rapid enhancement (Figs. 1-3). With regard to the perfusion parameters, lesions had significantly lower slope, PE, and SER as well as higher TTP than the other tissues (P < .05); paratumoral tissue showed lower PE and higher TTP than normal tissue. The detailed perfusion parameters are shown in Table 1. In addition, similar patterns (gradual slow enhancement) of pancreatic lesions were found in patients across clinical stages. Notably, the necrotic regions demonstrated an approximately flat TIC. Between stages I/II and stages III/IV, no significant differences in slope, TTP, and SER were found (Tables 2 and 3).
Discussion
First-pass contrast-enhanced MR perfusion imaging, one of the most common modalities, was used in our study. This technique takes advantage of the local intensity changes induced by the first pass of contrast and acquires a series of dynamic images with fast-imaging sequences by monitoring intensity changes at fixed slices. More specifically, when a paramagnetic contrast agent flows through the tissue capillary bed, the increased intravascular magnetic susceptibility alters the local magnetic environment. Significant T1/T2 shortening then occurs, due to induced resonance-frequency changes and spin dephasing of hydrogen protons in close proximity. [9] Thus, amplified signal intensity on T1-weighted images or reduced intensity on T2-weighted images is expected. In addition, the intensity changes of a given slice over time can be evaluated using time-intensity curves, which are based on the intensity changes obtained from a series of dynamic images. A variety of mathematical models are available to calculate the relevant perfusion parameters from TICs. In perfusion imaging, it is notable that first-pass data are used: during that phase, the intensity changes are least influenced by diffusion, as the contrast remains exclusively in the vessels and the greatest gradient across the capillary walls is achieved. Therefore, the TICs and relevant parameters, based on the acquired first-pass data, are a good reflection of real tissue perfusion and micro-vessel distribution. [10] In 1991, Ichikawa et al [11] first performed perfusion-weighted MR imaging of the upper abdomen in 61 patients. Afterwards, Coenegrachts et al [12-14] applied this technique in patients with pancreatitis and found significant differences in perfusion parameters in the multiple comparison among patients with acute pancreatitis, patients with chronic pancreatitis, and healthy volunteers. They also used perfusion imaging in patients with pancreatic cancer and demonstrated lower perfusion in pancreatic lesions compared with normal pancreatic tissue. With respect to normal pancreatic tissue, the study by Bali et al [15] showed that different regions of the pancreas, namely the head, body, and tail, may have different perfusion parameters. However, a similar regional perfusion difference was not observed in studies from Chinese authors. [16] One group of these Chinese authors also investigated the perfusion parameters of pancreatic lesions, non-lesion regions, and normal pancreatic tissue; paired comparison showed a significant difference between any two of them. The authors concluded that the perfusion difference between lesions and non-lesion regions may suggest the extent of invasion, while the difference in TTP between non-lesion regions and normal pancreatic tissue indicated the existence of potential malignancy. [17] Furthermore, Tajima et al [14] found that the TIC and TTP from dynamic contrast-enhanced MRI (DCE-MRI) provided reliable information to differentiate pancreatic cancer from tumor-forming pancreatitis: the TTP of the former was often beyond 2 minutes and that of the latter between 1 and 2 minutes, while the TTP of normal pancreatic tissue was less than 1 minute.
In perfusion imaging, the perfusion changes in the tumor are used to evaluate intra-tumor vascularity alterations in vivo. [18] Several perfusion parameters are available for semi-quantitative analysis. The slope of the TIC, correlated with vessel number and vascular permeability, reflects the degree of tissue vascularity. TTP, the time required to reach the peak, provides a comprehensive overview of both blood flow and blood volume. Another commonly used parameter, SER, is highly positively correlated with tissue perfusion and acts as a good reflection of blood flow. [19] In normal pancreatic tissue, homogeneous enhancement is usually expected, owing to its evenly arranged glandular tissue, intact endothelium, and rich blood supply. However, for pancreatic cancer, the degree and pattern of enhancement are far more complicated. Pancreatic cancer often has poor vascularization and micro-capillary patterns distinct from those of other tumors. Histologically, pancreatic cancer cells are interspersed among fibrous mesenchyme (the predominant component of pancreatic cancer) and remaining pancreatic tissue, and their relative percentages vary depending on the aggressiveness of the cancer. As a result, the unique distribution of these components in pancreatic lesions contributes to the overall enhancement pattern. [20] In our study of 33 patients, MR images showed the lowest perfusion in the pancreatic lesions, which was further confirmed by the lowest SER, slope, and PE and the highest TTP. This decreased blood flow and volume could be partially explained by local changes involving focal fibrosis and peripheral vessel sclerosis; increased vascular permeability, increased blood flow resistance, and decreased blood flow rate were also responsible for the perfusion changes. [21] Our study also found that paratumoral tissue had lower SER, slope, and PE and higher TTP than normal tissue, suggesting possible cancer cell invasion in the paratumoral region. The result was not surprising, as pancreatic cancer is highly invasive and paratumoral tissue is often found to be involved at initial diagnosis. Our results were consistent with those of Villringer and Belliveau. [22] In addition, the hemodynamic difference between pancreatic lesions and paratumoral tissue, as shown by the perfusion parameters, implied the extent of local tumor invasion. By the same token, the difference between paratumoral and normal tissue suggested that potential pathological changes might already be present.

Table 1 - Comparison of perfusion parameters among pancreatic lesion, peritumoral tissue, normal tissue, and aortic region.
Table 3 - Comparisons of perfusion parameters across clinical stages.

Our study further investigated the effect of clinical stage on the relevant perfusion parameters. No difference, however, was found between lesions in stages I/II and stages III/IV. This finding implied that intra-tumor blood volume, blood flow, and transit time were not directly related to clinical stage.
High-field MR perfusion imaging has advantages over conventional DCE-MRI. Although DCE-MRI is widely used to evaluate the overall blood supply to pancreatic cancer, it cannot provide accurate information about intra-tumor microcirculation or hemodynamic changes. Perfusion imaging, on the contrary, directly shows perfusion alterations in tumor tissue and serves as a noninvasive tool to assess micro-capillary distribution in vivo.
Currently, CT and DCE-MRI are both reliable methods for the diagnosis of pancreatic cancer, but MRI is the preferred imaging modality for primary pancreatic cancer. [23] In general, MRI has several advantages, such as multifunctionality, multiplanar imaging, high soft-tissue resolution, and freedom from radiation and trauma. [24] Besides, simultaneous anatomical and functional display is available in MR perfusion imaging, and repeated examinations can be performed to monitor therapy efficacy. Apart from clearer delineation of pancreatic lesions, attributable to the high soft-tissue resolution and sharp contrast, high-field MR perfusion imaging also provides useful information about intra-tumor perfusion and hemodynamic changes. From that perspective, perfusion imaging is expected to increase the tumor detection rate and improve qualitative diagnostic accuracy. [13] In addition to aiding diagnosis, the use of PWI as a noninvasive method to evaluate tumor angiogenesis in vivo might also aid in therapy selection, response prediction, and efficacy monitoring. [25,26] There are several limitations to this study: the sample size is small, and further studies with larger sample sizes are needed to confirm the present results. Since high-field MR perfusion imaging has advantages over DCE-MRI and CT, further comparison studies should be performed to determine which modality is superior in the early diagnosis of pancreatic cancer.
In conclusion, high-field MR perfusion imaging has important clinical significance in the early diagnosis of pancreatic cancer. | 2018-04-03T04:34:36.803Z | 2017-09-01T00:00:00.000 | {
"year": 2017,
"sha1": "d38cbceeca8a7d5a74a07d6c6ccb439a28008f6b",
"oa_license": "CCBYND",
"oa_url": "https://doi.org/10.1097/md.0000000000007571",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d38cbceeca8a7d5a74a07d6c6ccb439a28008f6b",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
137917605 | pes2o/s2orc | v3-fos-license | Effect of the photoinitiator system on the properties of a dental material based on a hybrid polymer
Abstract: Objective: In this study, the effects of two different curing protocols on the properties of a composite using a hybrid polymer as a dental resin were evaluated. Material and Methods: Two different composites were prepared, one containing a TEGDMA/Bis-GMA (50:50) mixture and the other containing a TEGDMA/p-MEMO (50:50) mixture [p-MEMO: inorganic oligomeric precursor]. Both composites were crosslinked with lucirin and camphorquinone. The composites were prepared with 70% by mass of inorganic filler. Flexural strength was evaluated with a universal testing machine, and the degree of conversion was calculated by infrared spectroscopy. A helium gas pycnometer was used to obtain the polymerization shrinkage data. Sorption tests were performed, and scanning electron microscopy was used to evaluate deleterious effects on the resin surfaces. Results: The sample composed of TEGDMA/p-MEMO crosslinked with lucirin (L-T/p) showed the best values for the monitored properties. Conclusion: Lucirin is the most suitable photoinitiator system for dental compounds containing hybrid polymers.
Objetivo: Nesse estudo, foram avaliados os efeitos de dois diferentes protocolos de cura sobre as propriedades de um compósito usando um polímero híbrido como resina dentária. Material e Métodos: Dois compósitos diferentes foram preparados, um contendo uma mistura de TEGDMA/Bis-GMA (50:50) e, outro contendo uma mistura de TEGDMA/p-MEMO (50:50), [p-MEMO: precursor oligomérico inorgânico]. Ambos compósitos foram reticulados com lucirin e canforoquinona. Os compósitos foram preparados com 70% em massa de carga inorgânica. Resistência flexural foi avaliada com uma máquina de testes universal e o grau de conversão calculado por espectroscopia na região do infravermelho. Um picnômetro a gás hélio foi usado para obter os dados de contração de polimerização. Testes de sorção foram feitos e microscopia eletrônica de varredura foi usada para avaliar efeitos deletérios sobre as superfícies das resinas. Resultados: A amostra constituída com TEGDMA/p-MEMO reticulada com lucirin (L-T/p) apresentou os melhores valores das propriedades monitoradas. Conclusão: Lucirin é o sistema fotoiniciador mais adequado para compostos dentários contendo polímeros híbridos. AbstRAct
Effect of the photoinitiator system on the properties of a dental material based on a hybrid polymer Venter SAS et al.
Introduction
A restorative dental composite consists mainly of three parts: (a) an organic matrix, which gives flexibility and good handling so that the cavity can be completely filled, and which connects the dispersed phase (silica); (b) inorganic inclusions, representing 65-85% of the composite, which improve the mechanical properties and give dimensional stability; and (c) a photoinitiator system that, under a radiation source, produces free radicals to initiate the polymerization of the monomer, leading to the formation of three-dimensional networks [1].
The stiffness of the composite is due to the presence of aromatic rings in the bis-GMA monomer. A faster curing process and less intense polymerization shrinkage result from its high molecular weight and bifunctionality. The higher the molecular weight, the lower the volumetric shrinkage, as monomers with relatively high molecular weight have low numbers of polymerizable groups per volume unit [3]. Furthermore, these properties also provide a three-dimensional network with greater hardness than the acrylate materials previously used [2].
The high viscosity of bis-GMA (800 to 1200 Pa·s) limits the addition of filler particles. To overcome this drawback, diluent monomers are frequently combined with it to reduce the viscosity of the matrix. This allows the addition of higher amounts of filler, leading to enhanced mechanical properties. Figure 1b also shows the chemical structure of the diluent monomer triethyleneglycol dimethacrylate (TEGDMA).
TEGDMA has a low viscosity and forms mixtures with bis-GMA in various mass proportions, yielding heavily loaded composites with increased hardness and mechanical strength. The diluent property of TEGDMA may be related to the flexibility of its chains, whose ether linkages allow free rotation of the methacrylate groups [3]. However, the addition of diluent monomers also causes an undesirable effect: increased polymerization shrinkage [4,5].
Research on related organic-inorganic hybrid polymers, also called organically modified ceramics (Ormocer®), suggests the application of these materials as an alternative to the composites conventionally used in dental restoration [6-8].
The properties of hybrid polymers are complementary, combining the mechanical stability and chemical inertness of ceramics with the flexibility inherent to organic polymers [9,10]. The compound (3-methacryloyloxypropyl)trimethoxysilane (MEMO), shown in Figure 1c, can be used as a monomer precursor to produce dental resins owing to its methacrylate groups.
Organically modified ceramics can be found in commercial resins used for dental restoration, such as Admira® (Voco, Germany), a product that has been available on the market for over a decade and possesses good mechanical properties, such as flexural strength, as well as adequate adhesion to teeth [11,12]. This study evaluated the possibility of using an inorganic oligomeric precursor, polycondensed (3-methacryloyloxypropyl)trimethoxysilane, hereafter called p-MEMO, as a dental resin. To achieve this goal, p-MEMO was mixed with a diluent monomer and tested with two different photoinitiator systems, i.e., camphorquinone and lucirin. The properties evaluated were flexural strength (FS, in MPa), degree of conversion (DC, in %), and polymerization shrinkage (PS, in %). Aspects related to leaching of the composites were monitored via water sorption (Wsp, in μg·mm⁻³) and water solubility (Wsl, in μg·mm⁻³) tests and by micrographs of the surface of the cured samples.
Preparation of composites
Specific amounts of the monomers were weighed in Petri dishes, and the photoinitiator system was subsequently added, as shown in Table 1.
For curing the composites, 0.8% w/w of camphorquinone and 3.2% w/w of DMAEMA relative to the mass of the monomers were used. Lucirin (a Norrish type I photoinitiator), in an amount of 1.68% relative to the weight of the organic matrix, was used as well. These amounts of photoinitiators were calculated so that the same molar amount was present in all formulations. Silica (70% of the mass of the composite) was then weighed and added in small portions to the Petri dish, and the mixture was homogenized manually with a stainless steel spatula.
FTIR: Degree of monomer conversion
To assess the degree of conversion, the samples were irradiated for 180 s (3 pulses of 60 s) on each side with a light-emitting diode
(LED) photopolymerization unit (440-480 nm, 1200 mW·cm⁻²), after being inserted into a mold 2.0 mm thick and 7.0 mm in diameter, for the samples containing camphorquinone. The degree of conversion of samples C-T/p and C-T/B had already been measured, as described by Venter et al. [13]. For the samples containing lucirin, the same curing protocol was used, but the curing unit was a halogen bulb (400-500 nm, 420 mW·cm⁻²). This irradiation protocol was chosen to ensure a maximum degree of conversion of the samples. The irradiated samples were left in a desiccator for 24 h and were then pulverized to produce KBr pellets. The spectra in the infrared region of the non-photopolymerized resins were recorded on NaCl windows. A Bomem MB-100 (Hartmann & Braun) spectrophotometer was used. To calculate the degree of conversion, the reduction in the intensity of the peak related to the stretching of the aliphatic C=C bond was compared with the internal reference band of C=O bond stretching. The degree of conversion (DC) was obtained according to Equation 1, where R = intensity of the band at 1640 cm⁻¹ / intensity of the band at 1720 cm⁻¹ [14].
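A commonly used form of Equation 1, consistent with the definition of R given above, is the following; the exact expression used by the authors is assumed to match this standard form:

\mathrm{DC}\,(\%) = \left(1 - \frac{R_{\text{cured}}}{R_{\text{uncured}}}\right)\times 100,
\qquad
R = \frac{I_{1640\,\mathrm{cm^{-1}}}\ (\text{aliphatic C=C})}{I_{1720\,\mathrm{cm^{-1}}}\ (\text{C=O reference})}.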
Flexural strength
The flexural strength of the dental materials was evaluated according to ISO 4049, using specimens in the form of bars with dimensions of 25 mm × 2 mm × 2 mm. After preparation, each specimen was immersed in water at 37 °C for 24 h. For the tests, the samples were dried with soft paper and taken to a universal testing machine (Lloyd Instruments LR 10K Plus).
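In three-point bending under ISO 4049, the flexural strength is typically obtained from the standard formula below; the 20 mm support span is the value prescribed by the standard, and its use here is an assumption, since the text does not state it:

\sigma_f = \frac{3\,F\,l}{2\,b\,h^{2}},

where F is the load at fracture, l the support span (20 mm), b the specimen width (2 mm), and h its height (2 mm).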
Polymerization shrinkage
Polymerization shrinkage was measured from the difference in volume of the composites before and after irradiation. For this analysis, a helium gas pycnometer (Multipycnometer, Quantachrome Instruments) was used. Initially, disposable aluminum cups were prepared and their volume was measured. Uncured samples were then added to these containers and the volume was measured again; the volume of the non-polymerized sample (V_m) was obtained by subtracting the volume of the aluminum cup. The cup with the uncured sample was then removed from the measuring chamber, and the curing procedure was performed with 6 pulses of 60 seconds. The cured sample was then left in a lightproof desiccator for 24 h. Afterwards, the cured sample was measured and, by again subtracting the volume of the aluminum cup, the cured sample volume (V_p) was obtained. The dimensional change, or polymerization shrinkage (PS), was then determined from the relation between the average volumes of the polymerized (V_p) and monomeric (V_m) materials. Thus, the change undergone by a material during curing was determined by applying Equation 2.
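Equation 2 presumably takes the usual relative-volume-change form; the percentage convention below is an assumption:

\mathrm{PS}\,(\%) = \frac{V_m - V_p}{V_m} \times 100.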
Water sorption and solubility tests
The preparation of the composites for sorption and solubility testing in water followed the ISO 4049 standard. Portions of the uncured composite were added in a single increment (to avoid the formation of bubbles) to a stainless steel mold, 1 mm in thickness and 15 mm in diameter. A Mylar® strip was placed on top of the mold surface to avoid adhesion to the glass plaque (5 mm thick), which was pressed against the mold/composite in order to obtain a smooth surface. The irradiation was performed according to the standard recommendations (ISO 4049) as follows: each sample received one central pulse and eight overlapping pulses on the surface, on both sides of the specimen, using the photopolymerization unit described earlier in the section on FTIR monomer conversion.
After irradiation, the samples were placed in a lightproof desiccator under controlled temperature (37 °C) and kept there until a constant mass (m₁) was achieved. The volume of the discs was calculated by measuring the diameter and thickness with the aid of a digital caliper. Each
disc was placed in a hermetically sealed container with 10 mL of distilled water and left for 168 h in a temperature-controlled bath at 37 °C. Excess water was removed with absorbent paper, and the samples were then blown with N₂ on both sides. The masses were recorded again (m₂). The procedure of placing the samples in a desiccator until they reached constant mass (m₃) was then repeated. All specimens were measured under controlled temperature and humidity (23 °C, 68%).
For the calculation of sorption and solubility, Equations 3 and 4 were used.
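Equations 3 and 4 are presumably the standard ISO 4049 definitions; the forms below, based on that standard, are assumed to match the authors' calculation:

W_{sp} = \frac{m_2 - m_3}{V}, \qquad W_{sl} = \frac{m_1 - m_3}{V},

where m₁ is the conditioned mass before immersion, m₂ the mass after 168 h in water, m₃ the reconditioned (dried) mass, and V the specimen volume.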
Results
Table 2 shows the results for the properties evaluated in the samples under the two different curing protocols.
Lower degrees of conversion were observed for the samples cured with CQ than for the lucirin-cured ones. ISO 4049 specifies 50 MPa as the minimum flexural strength for dental composites; L-T/p showed the best value among the samples. No significant difference in polymerization shrinkage was observed between the samples containing TEGDMA/bis-GMA regardless of the photopolymerization system used, and L-T/p again showed the best performance. All composites had similar and satisfactory sorption and solubility values.
Discussion
For the samples with CQ, high degrees of conversion were observed in both formulations. This fact is probably related to the use of the diluent monomer TEGDMA, a characteristic of which is the high mobility of its chains, causing undesirable cyclization. This phenomenon produces a high degree of conversion, but it is not reflected in improved mechanical properties, because such bonds occur between segments of the same chain or segments of chains that are already densely crosslinked, leading to microgel domains [15].
Figure 2 shows FTIR graphs of the samples with TEGDMA and p-MEMO.
As described above, Figure 2 shows a larger degree of conversion when lucirin was used. This can be related to a more marked decrease in the C=C peak intensity in graph (B) compared with graph (A). It should be highlighted that baseline corrections were performed before the intensity values were recorded [16]. The same behavior was observed in the samples containing the conventional monomers TEGDMA/bis-GMA (plots not shown).
The degrees of conversion of the lucirin-cured composites were superior to those of the CQ-cured samples with the same formulation (Table 2). These data were expected, since the photobleaching process of lucirin produces two reactive species and the pyramidal geometry of the phosphonyl radical delivers the electron more effectively for methacrylate-end polymerization [17,18].
High flexural strength values may be indicative of the degree of crosslinking between different segments in the polymer network and are thus associated with a better-structured three-dimensional network [19,20]. The low values obtained for the samples cured with camphorquinone suggest that the TEGDMA chains do not link different segments in the structure of the polymer matrix, owing to the cyclization discussed previously. The flexural strength of the L-T/p composite was the highest among all tested formulations. Unlike what was observed in the formulations containing camphorquinone, microgel domains were not formed in these samples, owing to the ability of lucirin to bind effectively in the curing of inorganic oligomeric precursors or conventional monomers.
Figure 3 shows the p-MEMO structure.
We suggest that this higher availability of methacrylate groups in the p-MEMO structure, compared with bis-GMA, led to a lower TEGDMA cyclization ratio in the L-T/p sample.
High polymerization shrinkage values are associated with poor adhesion of the restorations as well as with the formation of microcracks in the composites, which can cause secondary caries [21]. The value of about 5% for the C-T/B sample was in agreement with data in the literature and can be attributed to the high amount of TEGDMA [22].
The formation of an inorganic network via siloxane [Si-O-Si] bonds, which occurs in the polycondensed p-MEMO, suggests lower polymerization shrinkage of the composites due to the pre-existence of an independent three-dimensional network (an intrinsic characteristic of p-MEMO) [23]. However, the C-T/p sample appeared to contradict this statement. As explained by Möszner et al., methacrylate groups present in hybrid polymers have reduced mobility due to the intrinsic three-dimensional structure of these materials, for which camphorquinone is not a suitable photocuring agent [24]. Thus, it can be inferred that even more intense crosslinking occurs with the TEGDMA chains in the C-T/p composite, which could explain the high polymerization shrinkage value of this sample. The low polymerization shrinkage presented by the L-T/p sample can be related to the steric hindrance of the chains resulting from the inorganic network formed by the siloxane precursor.
As previously discussed, lucirin does not preferentially promote polymerization between TEGDMA molecules, thus leading to less-concentrated microgel domains compared with the camphorquinone-cured sample.
The ISO 4049 standard requires at most 40 μg·mm⁻³ (sorption) and 7.5 μg·mm⁻³ (solubility) for commercial composites. Due to the three-dimensional oligomeric structure of the inorganic precursor p-MEMO, lower sorption and solubility values were observed for the C-T/p samples. One aspect that may contribute to the higher sorption values of the formulation containing bis-GMA (C-T/B) is the presence of hydroxyl groups in its structure (Figure 1). Another remarkable finding was the lowest solubility value, seen in the L-T/p sample. This may be related to the linkage between TEGDMA and p-MEMO, lucirin being a more efficient photocuring system than camphorquinone. SEM micrographs of the samples cured with CQ are shown in Figure 4.
The C-T/p resin showed lower sorption and solubility values than the C-T/B resin. On the other hand, the presence of cracks provided evidence that the polymeric matrix was leached during the sorption process. Because p-MEMO consists of inorganic oligomers of high molecular mass, it can be suggested that the material loss was due to the solubilization of unreacted TEGDMA monomers or TEGDMA-rich microgel domains. This feature was not observed in the micrographs of the composites containing bis-GMA, because camphorquinone can bind the chains of these monomers with TEGDMA chains, forming crosslinks.
Comparing the two compositions, the sorption tests showed less deleterious effects on the sample containing p-MEMO than on the composition containing conventional monomers (Table 2). The oligomeric character of the inorganic hybrid precursor, associated with more efficient crosslinking when lucirin is used, resulted in solubility values below those observed for the CQ-cured samples. SEM micrographs of the samples cured with lucirin (shown in Figure 5), which presented solubility values lower than those of the CQ-cured samples, revealed no cracks on the composite surface. This result
suggests a lesser concentration of microgel domains (rich in TEGDMA), showing that lucirin is able to promote crosslinking between chains of p-MEMO and TEGDMA.
Conclusions
Owing to the free-radical generation process and the geometry of the resulting radicals, it can be suggested that lucirin is a more effective photoinitiator than camphorquinone in systems containing inorganic oligomeric precursors. The formation of cracks observed in the C-T/p sample may be related to leached unreacted TEGDMA monomers, suggesting that CQ presents lower activity towards p-MEMO-TEGDMA bonds. When cured with lucirin, samples containing p-MEMO in combination with the diluent monomer showed properties comparable to those of commercial dental composites. Still, this study suggests further investigations for the future development of dental materials based on this technology.
Figure 3 - Chemical structure of the inorganic oligomeric precursor p-MEMO.
Figure 4 - SEM micrographs (scale 5 μm) of the resins cured with camphorquinone: C-T/p (A) before and (B) after the sorption process; C-T/B (C) before and (D) after the sorption process.
Figure 5 - SEM micrographs (scale 5 μm) of the composites cured with lucirin: L-T/p (A) before and (B) after the sorption process; L-T/B (C) before and (D) after the sorption process.
Table 2 - Degree of conversion, flexural strength, polymerization shrinkage, sorption, and solubility evaluated in samples cured with camphorquinone and lucirin. | 2019-04-29T13:08:58.918Z | 2016-03-14T00:00:00.000 | {
"year": 2016,
"sha1": "040962f7f3fdf831a927041038d250070c50a1c0",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.14295/bds.2016.v19i1.1233",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "56e8224e9e9e43ab54ff9ca85938dda03a388cf5",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
14971759 | pes2o/s2orc | v3-fos-license | Identification of a large protein network involved in epigenetic transmission in replicating DNA of embryonic stem cells
Pluripotency of embryonic stem cells (ESCs) is maintained by transcriptional activities and chromatin modifying complexes highly organized within the chromatin. Although much effort has been focused on identifying genome-binding sites, little is known on their dynamic association with chromatin across cell divisions. Here, we used a modified version of the iPOND (isolation of proteins at nascent DNA) technology to identify a large protein network enriched at nascent DNA in ESCs. This comprehensive and unbiased proteomic characterization in ESCs reveals that, in addition to the core replication machinery, proteins relevant for pluripotency of ESCs are present at DNA replication sites. In particular, we show that the chromatin remodeller HDAC1–NuRD complex is enriched at nascent DNA. Interestingly, an acute block of HDAC1 in ESCs leads to increased acetylation of histone H3 lysine 9 at nascent DNA together with a concomitant loss of methylation. Consistently, in contrast to what has been described in tumour cell lines, these chromatin marks were found to be stable during cell cycle progression of ESCs. Our results are therefore compatible with a rapid deacetylation-coupled methylation mechanism during the replication of DNA in ESCs that may participate in the preservation of pluripotency of ESCs during replication.
INTRODUCTION
Pluripotent embryonic stem cells (ESCs) are highly proliferative cells that can expand indefinitely. This unlimited expansion is sustained by their self-renewal capacity, which relies on high-fidelity transmission of the genome and the epigenome during deoxyribonucleic acid (DNA) replication (1,2). The self-renewal capacity and the plasticity to differentiate into all the cell types of an adult organism are orchestrated and balanced by a unique protein interaction network. The network is centred on the pluripotency transcription factors OCT4, NANOG and SOX2, which act in a coordinated manner with chromatin-modifying complexes (1,3). These complexes include Polycomb repressor complexes (PRC) 1 and 2, the BRG1-associated factors (esBAF) complex and the nucleosome remodelling and deacetylase (NuRD) complex (1,4).
With the aim of elucidating the functionality of these complexes, intensive efforts have been undertaken to map precisely where these epigenetic complexes are positioned within the genome of ESCs using chromatin immunoprecipitation combined with massively parallel sequencing (ChIP-seq) (5-7). However, less is known about the dynamics of these interactions, in particular during cell cycle progression. This question is especially relevant for ESCs, which display a rapid cell cycle with a shortened G1 phase and a dominant DNA replication phase (2,8).
Recently, a novel technique was developed to isolate proteins on nascent DNA (iPOND). Using highly proliferative transformed cells, the iPOND technology has enabled the isolation of proteins already known to be associated with the replication fork (9), as well as, in combination with mass spectrometry, the identification of new replication-associated factors in HEK293T cells (10,11). Although core replication proteins were consistently identified, such transformed cell lines are expected to display abnormalities characteristic of the tumour cell state, including alterations of chromatin regulatory proteins. Therefore, results from these studies lend limited insight into replication-associated proteins in ESCs, which display unique characteristics compared to other cell types, such as a high fidelity of DNA replication.
Western blotting
The samples were analysed by western blot as described previously (18), and the data was quantified using Image Lab software (v4.0.1; Bio-Rad). The antibodies are listed in Supplementary Table S3.
Isolation of proteins on nascent DNA (iPOND)
The cells were pulsed for 10 min with 100 μM of the thymidine analogue ethynyl deoxyuridine (EdU). For the chase experiments, the pulse was followed by extensive washing with phosphate-buffered saline (PBS) + 100 μM thymidine (Sigma) and incubation in serum-free media with 100 μM thymidine. Subsequently, the cells were fixed in 1% paraformaldehyde (PFA) for 10 min at room temperature (RT) and quenched with 0.125 mM glycine (pH 7) for 5 min at RT. The cells were harvested, pelleted by centrifugation (720 × g, 10 min at 4 °C), and lysed in lysis buffer (ChIP Express kit, Active Motif) for 30 min at 4 °C. Lysates were passed 10× through a 21-gauge needle, and the nuclei were pelleted by centrifugation (2400 × g, 10 min at 4 °C), washed with PBS + protease inhibitor cocktail (PIC; Roche), then subjected to the Click reaction for 30 min at RT with 0.2 mM biotin-azide (Invitrogen). The Click reaction is based on an organic chemistry reaction in which an organic azide reacts with a terminal alkyne: the exposed ethynyl residue of the EdU nucleotide is derivatized by a copper-catalyzed cycloaddition, forming a covalent bond between the EdU and the biotin. The nuclei were re-pelleted by centrifugation (2400 × g, 10 min at 4 °C), washed with PBS + PIC, suspended in shearing buffer (ChIP Express kit, Active Motif) and sonicated (Bioruptor, Diagenode) for 15 min at high intensity (30-s/30-s on/off pulses). The lysates were cleared by centrifugation (20 800 × g, 20 min at 4 °C), diluted 1:1 with blocking buffer (1% Triton X-100, 2 mM EDTA [pH 8], 150 mM NaCl, 20 mM Tris-HCl [pH 8], 20 mM beta-glycerol phosphate, 2 mM sodium orthovanadate, PIC and 2 mg/ml salmon-sperm DNA [ssDNA]), then incubated with pre-equilibrated Dynabeads M-280 Streptavidin (Invitrogen) for 30 min at 4 °C. The beads were washed twice with blocking buffer (without ssDNA) and twice with high-salt blocking buffer (containing 500 mM NaCl). Finally, the beads were suspended in Laemmli buffer. For mass spectrometry (MS) analysis of iPOND samples, 30-40 × 10⁶ ESCs were lysed in 2 ml of lysis buffer, incubated in 2 ml of Click reaction buffer, sheared in 2 ml of shearing buffer, incubated with 1 ml of Dynabeads M-280 Streptavidin and resuspended in 200 μl of 2× Laemmli buffer.
Two-step (IP-iPOND) purification
Cells were trypsinized, pelleted by centrifugation (150 × g, 5 min at 4 °C) and lysed in hypotonic buffer A (10 mM HEPES [pH 7.9], 10 mM KCl, 1.5 mM MgCl₂, 0.34 M sucrose, 10% glycerol, 1 mM dithiothreitol [DTT], 10 mM beta-glycerol phosphate, 1 mM sodium orthovanadate and PIC) + 0.1% Triton X-100 for 5 min at 4 °C. Nuclei were collected by centrifugation (1300 × g, 4 min at 4 °C) and washed with PBS. The Click reaction was then performed using 0.2 mM biotin-azide for 30 min at RT. Nuclei were washed with PBS and lysed in buffer B (3 mM EDTA, 0.2 mM EGTA, 1 mM DTT, 10 mM beta-glycerol phosphate, 1 mM sodium orthovanadate and PIC) for 30 min at 4 °C, and the lysates were sonicated for 10 min at low intensity (30-s/30-s on/off pulses) and cleared by centrifugation (15 000 × g, 20 min at 4 °C). Two aliquots were taken for DNA extraction and dot-blot analysis, and the remainder was incubated overnight with 10 μg of normal goat IgG or goat anti-HDAC1 antibody. Dynabeads Protein G (50 μl; Invitrogen) pre-blocked with PBS + 2 mg/ml ssDNA were added to the samples for 1 h at RT. Beads were washed 4× with PBS + 2 mg/ml ssDNA, and the immunocomplexes were eluted with 50 μg of anti-HDAC1 competitor peptide (Santa Cruz Biotechnology) for 2 h at RT. Eluates were then incubated with Dynabeads M-280 Streptavidin for 30 min at RT, washed 3× with PBS + 2 mg/ml ssDNA, and suspended in Laemmli buffer.
Detailed methods, including FACS analyses, immunofluorescence, dot-blot analysis, DNA purification, immunoprecipitation, antibodies (Supplementary Table S3), mass spectrometry procedures and procedures for data analysis can be found in Supplementary Experimental Materials and Methods.
Efficient purification of proteins associated with nascent DNA in ESCs
Protein-protein interaction and immunolocalization studies have provided remarkable impetus for the identification of several DNA replication components (19-21). However, these strategies are hampered by being restricted to specific proteins or complexes and by depending on the availability and specificity of antibodies. A recently developed technology to isolate proteins on nascent DNA (iPOND) in an unbiased way overcomes these limitations (9). The iPOND technique utilizes the rapid incorporation of the thymidine analogue 5-ethynyl-2′-deoxyuridine during DNA replication, covalent cross-linking between DNA and proteins, the addition of a biotin moiety to the incorporated EdU under mild conditions, and streptavidin-biotin affinity capture of sheared EdU-labelled chromatin. The procedure was originally reported to yield the isolation of ∼0.5% of the replication-associated protein PCNA (9). This low efficiency can be a limiting factor that is at least partly compensated for by increasing cell numbers in order to identify low-abundance proteins (10,11). However, the long incubation with streptavidin-coupled agarose beads in the initial method can result in high background, and the requirement for large numbers of cells limits its usefulness as a standard biochemical technique.
To increase the purification efficiency, we set up conditions to use streptavidin-coupled superparamagnetic beads instead of streptavidin-coupled agarose beads (Figure 1A). This modification not only reduced the required incubation time (from overnight to 30 min) and decreased non-specific interactions, but also resulted in a total recovery of around 4% of total PCNA after a short EdU pulse of 10 min (Figure 1B, Supplementary Figure S1A). The increased efficiency in the isolation of nascent DNA was also confirmed by the purification of chromatin-bound proteins, such as heterochromatin protein 1 (HP1), the histone variant H2AX, and histone H3 (Figure 1B). Of note, the relative enrichment of the replication-associated protein PCNA was greater than that of the chromatin-related proteins, supporting an efficient capture of specifically nascent DNA (Figure 1B).
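As an illustration of how such a percent recovery can be estimated from western blot band intensities, a minimal sketch in Python is given below; the assumption that band intensity is linear in protein amount, and the example numbers, are ours rather than the study's:

def percent_recovery(capture_signal, input_signal, input_fraction):
    # capture_signal: band intensity in the iPOND (capture) lane
    # input_signal:   band intensity in the input lane
    # input_fraction: fraction of the total lysate loaded in the input lane
    # Assumes band intensity is linear in protein amount (an assumption).
    total_signal = input_signal / input_fraction   # extrapolate to whole lysate
    return 100.0 * capture_signal / total_signal

# Hypothetical numbers: input lane holds 1% of the lysate.
print(percent_recovery(capture_signal=40.0, input_signal=10.0, input_fraction=0.01))
# -> 4.0 (percent recovered), in line with the ~4% PCNA recovery quoted above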
In order to monitor the amount of EdU-labelled DNA, we introduced a sensitive dot blot assay that allows its quantification from sheared DNA samples (Supplementary Figure S1B-S1D). This dot blot assay was systematically used in all purifications, allowing the comparison not only of untreated pulse and pulse-chased cells, but also of cells pulsed under different experimental conditions, thereby increasing the versatility of the technology. Combined, these modifications of the iPOND technology should establish the method as a standard biochemical technique to study chromatin composition at replication sites.
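One simple way to use the dot blot readout when comparing purifications is to scale each condition's protein signals by the amount of EdU-DNA it captured. The sketch below assumes a linear correction and uses hypothetical intensity values:

def normalized_ipond_signal(protein_band, edu_dot, reference_edu_dot):
    # protein_band:      western blot intensity of the protein in this purification
    # edu_dot:           dot blot intensity of EdU-DNA recovered in this purification
    # reference_edu_dot: dot blot intensity in the reference (e.g. untreated) condition
    # Scales the protein signal to equal amounts of captured nascent DNA,
    # assuming a linear relation between dot intensity and DNA amount.
    return protein_band * (reference_edu_dot / edu_dot)

# Hypothetical example: a treated pulse captured 80% as much EdU-DNA as control.
print(normalized_ipond_signal(protein_band=120.0, edu_dot=0.8, reference_edu_dot=1.0))
# -> 150.0, the treated signal after correcting for lower DNA recovery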
Identification of the protein network associated with nascent DNA in ESCs
With the aim of an unbiased identification of all protein complexes specifically enriched at nascent DNA in ESCs, we used the modified iPOND technology in combination with high-resolution LTQ-Orbitrap mass spectrometry. To differentiate between proteins specifically associated with nascent DNA, such as the sliding clamp protein PCNA, and constitutive chromatin-bound proteins, such as histones, we compared proteins isolated after a short EdU pulse of 10 min, when the recently replicated DNA is labelled, with proteins isolated after a short EdU pulse followed by a 90-min chase, when the replication fork is expected to have progressed, leaving behind the labelled DNA (Figure 1C). Comparison of silver-stained polyacrylamide gels loaded with unlabelled and EdU-labelled samples, collected either directly after a 10-min pulse or following a 90-min chase, showed a high specificity and sensitivity of the technique for the isolation of proteins associated with EdU-labelled DNA from ESCs (Figure 1D).
As a validation of our experimental conditions, we used well-known protein markers of both the replication fork and mature chromatin. As shown in Figure 1E, while the replication-associated proteins PCNA and RPA32 were efficiently isolated only from pulsed cells, histone H3 was equally purified from both pulsed and pulse-chased cells. We also used the level of acetylation of histone H4 at lysines 5 and 12 (H4K5Ac and H4K12Ac, respectively) as a marker, since high levels are characteristic of newly synthesized histones, which are rapidly deacetylated upon deposition (22). Indeed, H4K5Ac and H4K12Ac were significantly enriched in the pulse sample compared with the pulse-chase sample (Supplementary Figure S2A), in accordance with the process of chromatin maturation (22). Consistent with normal progression of the replication fork, we did not observe any increase in the phosphorylation of H2AX at serine 139 (γH2AX), an early marker of DNA damage, in cells pulsed and pulse-chased with EdU (Supplementary Figure S2B).
Proteins associated with nascent DNA were isolated from gels in four biological replicates and identified by high-resolution mass spectrometry. Proteins identified in at least two of the four independent experiments were included for further analysis. A total of 207 proteins were considered nascent DNA-bound proteins in ESCs, since they showed a relatively higher enrichment in Mascot score in the samples pulsed with EdU (Figure 2A and Supplementary Table S1) compared with the corresponding pulse-chased samples (Supplementary Table S1). Some of the proteins identified (57 of 207) have previously been linked to DNA replication (Supplementary Table S1); in addition, only 44 of the 207 proteins detected in ESCs have been linked to nascent DNA in previous iPOND studies on HEK-293T cells (10,11) (Supplementary Figure S3 and Supplementary Table S1), supporting the existence of a large number of novel proteins with putative roles during DNA replication in ESCs.
To get further insight into the nascent DNA-associated proteins in ESCs, we examined their potential relationships using the STRING database (Figure 2A). We first manually classified the proteins according to their best-known function, resulting in a network with a total of 11 functional clusters centred around the DNA replication cluster (Figure 2A). As expected, gene ontology (GO) analysis revealed an enrichment of proteins associated with DNA replication (Figure 2B), and four DNA replication proteins appeared among the top five proteins with the most putative partners identified in our data set (Figure 2C).

Figure 1. (A) Cells were incubated with a short pulse of EdU, which was then incorporated into nascent DNA. The DNA and associated proteins were cross-linked (i), cells were lysed (ii), labelled DNA was conjugated to a biotin group by a Click reaction (iii) and then fragmented by sonication (iv). Labelled DNA fragments were isolated using streptavidin magnetic beads (v) and eluted using Laemmli buffer (vi). Eluted samples were analysed by western blot or high-resolution mass spectrometry. ESC, embryonic stem cells; ESC-EdU+, embryonic stem cells pulsed with EdU. (B) Input and the proteins isolated on nascent DNA (iPOND) from ESCs non-pulsed (-) and pulsed for 10 min (10′) with EdU were analysed by western blot using the indicated antibodies. The sonicated DNA was analysed by agarose gel electrophoresis (GelRed). The histogram shows the relative fold recovery of the indicated proteins by iPOND. (C) Schematic representation of EdU-labelled (red) replication fork progression after a short pulse of EdU. (D) Nascent DNA was isolated using the modified iPOND technique from ESCs incubated with EdU as indicated. The samples were separated by SDS-PAGE, and the gel was silver stained. Molecular weights (kDa) of the protein marker (M) are indicated. (E) ESCs were pulsed with EdU for 10 min. Where indicated, pulsed cells were chased for 90 min after washing out the EdU from the media. The input (Inp) and the iPOND samples were analysed by western blot using the indicated antibodies.
iPOND mass spectrometry of fibroblasts (NIH 3T3 cells) (Supplementary Table S1) revealed that the nascent DNA-bound proteins constituting the DNA replication cluster were broadly represented both in ESCs and in mouse fibroblasts (Supplementary Figure S4). The cluster included components or subunits of virtually all replisome activities, including the helicase complex, topoisomerase, DNA polymerases, the fork-protecting complex, the sliding clamp protein (PCNA), the clamp loader complex (RFC), the RPA complex, nucleases, ligases, RNases and histone chaperones (Figure 2D and Supplementary Figure S4). These results confirm a high conservation of the core replication module between cell types and further validate the sensitivity of our methodology for isolating replication-associated proteins.
GO term analysis of the proteins identified in the ESC profiling also showed a significant enrichment of proteins annotated as functionally affecting cell cycle progression and/or genomic stability (Figure 2B). To further explore these associations, pair-wise analysis was carried out on data from several genome-wide siRNA studies performed in HeLa and U2OS cells for the identification of genes involved in cell division and genomic stability, and in ESCs for the identification of pluripotency genes (Table 1). Of the 207 proteins, 45 and 28 had previously been found to affect cell cycle progression and/or genomic stability in siRNA screens, respectively (Table 1). However, only proteins affecting progression through S-phase were significantly enriched in the iPOND-MS data set (Table 1), supporting a functional link between the proteins identified and the replication of cells. Most strikingly, a number of proteins that participate in the control of ESC pluripotency were also found in our data set (Table 1). These significantly enriched proteins include, among others, the YY1 DNA-binding protein that mediates DNA targeting of the chromatin-modifying Polycomb repressive complex 2 (PRC2) (23,24), the catalytic subunit of protein phosphatase 4 (PPP4C) that affects histone acetylation (25), and the NuRD complex subunit methyl-CpG-binding domain 3 (MBD3) that regulates transcriptional heterogeneity in ESCs (7). Hence, these results reveal that the synthesis of new chromatin and the maintenance of pluripotency during cell division might be intimately linked.
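Pair-wise over-representation analyses of the kind summarized in Table 1 are commonly performed with a one-sided hypergeometric test. The sketch below is a generic example; the background universe size and screen hit counts are hypothetical placeholders, not values from the study:

from scipy.stats import hypergeom

def overlap_enrichment(n_background, n_screen_hits, n_ipond, n_overlap):
    # One-sided test: probability of observing >= n_overlap shared proteins
    # when n_ipond proteins are drawn without replacement from a universe of
    # n_background proteins containing n_screen_hits screen hits.
    return hypergeom.sf(n_overlap - 1, n_background, n_screen_hits, n_ipond)

# Hypothetical placeholder numbers (the 207 is from this study; the rest are not):
p = overlap_enrichment(n_background=10000, n_screen_hits=600, n_ipond=207, n_overlap=45)
print(p)  # a small p-value indicates over-representation of screen hits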
Differential recruitment of MMR proteins at nascent DNA
In addition to the DNA replication cluster, the protein network at nascent DNA in ESCs encompasses 10 further functional clusters (Figure 2A). The DNA repair cluster included proteins such as members of the mismatch repair (MMR) system (MSH2, MSH6, MLH1) and the double-strand break repair protein MRE11A. These proteins were confirmed to be enriched at nascent DNA in ESCs by western blot with specific antibodies (Figure 3A). The enrichment of MMR proteins at nascent DNA concurs with previous studies in yeast (26) and with the more recent iPOND mass-spectrometry data on HEK-293T cells (10,11). These results suggest that DNA repair systems could be an integral part of the replication machinery across different cell types, although analysis of the mutation frequency (12) and of total protein levels in fibroblasts and ESCs (Supplementary Figure S5) indicates quantitative differences in MMR system activity between ESCs and somatic cells.
In order to enable the comparison between different cell types, ESCs were differentiated by a few days of LIF (leukemia inhibitory factor) withdrawal. We observed that the repair protein MLH1 was markedly decreased at nascent DNA upon differentiation, while similar amounts of PCNA, histone H3 and newly assembled histone H4 (H4K5Ac and H4K12Ac) were isolated from ESC and differentiated ESC samples (Figure 3B and C). This result confirms that the association with replication sites of proteins not belonging to the DNA replication core can be regulated in a cell type-dependent manner. Moreover, the results indicate that the high DNA replication fidelity observed in ESCs, in contrast to differentiated cells (12)(13)(14), could be explained molecularly by the distinctive recruitment of MMR proteins at nascent DNA.
The HDAC1-NuRD complex is enriched at nascent DNA in ESCs
The class I histone deacetylase protein HDAC1 was among the top five proteins with the most known interacting partners identified by STRING analysis in the ESC screen (Figure 2C). The protein was also identified in fibroblasts (Supplementary Table S1) and confirmed to be present at nascent DNA not only in ESCs but also in differentiated ESCs (Figure 3A and C), indicating that it could play a central role in chromatin organization during replication.
HDAC1 is known to be a part of several multiprotein complexes, including the chromatin remodelling complexes NuRD, SIN3 and CoREST (27), which have been classically associated with the regulation of transcription in many mammalian cell types. To address whether HDAC1-specific protein complexes are associated with nascent DNA in ESCs, we used a two-step purification protocol to specifically isolate HDAC1-containing nucleosomes from nascent DNA under native conditions, combined with high-resolution mass spectrometry for the identification of the bound proteins (Figure 4A). ESCs pulsed with EdU for 10 min were collected and lysed under non-fixative conditions. Cell extracts were sonicated to obtain chromatin of a size comparable to mononucleosomes (Figure 4A, GelRed). The extracts were used for affinity purification with a specific anti-HDAC1 antibody. The purified immunocomplexes were eluted with a competitor peptide and re-purified with streptavidin magnetic beads using the modified iPOND technology. Analysis of the immunoprecipitation-iPOND (IP-iPOND) by mass spectrometry showed the association of HDAC1 with all the components of the NuRD complex (28), including the ATPase CHD4, lysine demethylase 1 (LSD1), MBD3, the metastasis-associated gene 1, 2 and 3 proteins (MTA1-3), retinoblastoma-binding proteins 4 and 7 (RBBP4 and RBBP7), and the GATA zinc finger domain-containing proteins 2A and 2B (GATAD2A and GATAD2B) (Figure 4B and Supplementary Table S2). The interaction of HDAC1 with the NuRD components at nascent DNA was confirmed by IP-iPOND followed by western blot analysis with specific antibodies (Supplementary Figure S6).
These data identify all NuRD complex proteins both by iPOND and by HDAC1 IP-iPOND in ESCs, suggesting that NuRD is enriched at nascent DNA in ESCs. The enrichment at nascent DNA of the key subunits of the NuRD complex in ESCs was confirmed by iPOND and western blot (Figure 4C and D). Interestingly, the key subunits of the NuRD complex, CHD4 and HDAC1, displayed a remarkably greater association with nascent DNA in ESCs as compared to fibroblasts (NIH 3T3 cells) (Figure 4E). Isolation of equivalent amounts of nascent DNA by iPOND was confirmed by the detection of comparable amounts of PCNA and histone H3 (Figure 4E). These results suggest a cell-type-dependent regulation of the association of the NuRD complex with nascent DNA. The functional interaction of the complex was further confirmed by knockdown of HDAC1 using small interfering RNA (siRNA). Depletion of HDAC1 did not affect the rate of EdU incorporation in ESCs (Figure 4G), consistent with previous studies showing that the rate of replication is not substantially different between control and HDAC1 knockout ESCs (29). Furthermore, we did not observe changes in the total amount of NuRD components (Figure 4F, input). However, the recruitment of NuRD components to nascent DNA was severely compromised upon HDAC1 depletion (Figure 4F, iPOND). Quantification of proteins isolated by iPOND and normalized to isolated PCNA confirmed this conclusion (Figure 4H). Combined, these data evidence the recruitment of a functional NuRD complex at nascent DNA in ESCs. Our proteomic data also revealed the association of HDAC1 with other chromatin remodelling complexes that have been shown to play a role in the restoration of transcriptionally repressed heterochromatin, such as the NoRC complex (30), the WICH complex (31), the DNMT1/PCNA complex (32) and the SMARCAD1/PCNA complex (33) (Figure 4B). Combined, these data point to HDAC1 as a hub for a protein network at nascent DNA required for epigenome maintenance during replication of ESCs.

[Table 1 legend. Pair-wise comparison between the iPOND-MS data set and genetic screens designed for the identification of proteins involved in pluripotency, cell cycle progression and genome instability.]
NuRD complex interacts with the hemimethylated DNA-bound protein UHRF1
The iPOND analysis also showed the presence of the epigenetic regulator UHRF1 specifically at nascent DNA (Figure 3A) and in the HDAC1 complexes associated with nascent DNA (Figure 4C). The association of these proteins with nascent DNA is functionally linked to fork progression, as revealed by iPOND on ESCs treated with aphidicolin, a reversible inhibitor of the DNA polymerase complex. Acute inhibition of replication fork progression by aphidicolin reduced the association of the NuRD subunits CHD4, HDAC1 and LSD1, as well as UHRF1, while increasing that of the damage-recognition protein RPA32 (Supplementary Figure S7). The interaction of these proteins appears to be highly dynamic, because when DNA replication fork progression was restored upon aphidicolin removal, the association of the NuRD complex with nascent DNA was normalized (Supplementary Figure S7). UHRF1 was previously shown to bind hemimethylated DNA in association with PCNA at replication sites (32) and to interact with HDAC1 at the p21 gene promoter (34). By RNAi, the recruitment of UHRF1 to replicating DNA was shown to depend on the presence of HDAC1 (Figure 4F and G). Interestingly, immunoprecipitation of UHRF1 from soluble extracts of ESCs, in the presence of ethidium bromide to exclude an indirect DNA-mediated interaction, showed HDAC1 as well as the main component of the NuRD complex, CHD4, in the immunocomplexes (Figure 5A), suggesting that UHRF1 and NuRD are part of the same protein complex. Furthermore, the protein stability of UHRF1 was markedly reduced when the NuRD complex was compromised by CHD4 RNAi, as seen from the estimation of the half-life of UHRF1 in the presence of the translation inhibitor cycloheximide (Figure 5C and D) and the restoration of normal levels in CHD4 siRNA samples in the presence of the proteasomal inhibitor MG132 (Figure 5E). These data provide a functional relationship between NuRD and the replication-associated protein UHRF1 that could participate in re-establishing histone epigenetic marks and regulating chromatin organization following replication fork passage in ESCs.
Deeply repressed heterochromatin is rapidly restored upon replication fork passage in ESCs
New and old histones are rapidly deposited on nascent DNA after replication fork passage (35). New histones need to acquire the same posttranslational modification pattern as the parental histones in order to maintain the epigenetic code across cell division. However, several reports using cell cycle-synchronized HeLa cells conclude that, in contrast to the acetylation of new histones, which is rapidly adjusted upon deposition, the methylation of new histones, including H3K9 trimethylation (H3K9me3), is delayed and not fully restored until the G1 phase of the next cell cycle (36)(37)(38). Thus, histone methylation appears uncoupled from replication in HeLa cells. This transient imbalance between histone deacetylation and methylation, which normally affect transcription in opposite ways, may underlie the observed oscillations of gene activity across cell cycle phases.
To investigate whether histone deacetylation and methylation are functionally linked after fork passage in ESCs, we used valproic acid (VPA), a specific inhibitor of class I HDACs with high affinity for HDAC1 (39). VPA holds promise in regenerative medicine, since it has been shown to be a potent inducer of pluripotency from somatic cells (40,41). ESCs were treated with a short pulse (30 min) of VPA at low concentration, with an EdU pulse during the last 10 min. Nascent chromatin was thereafter purified by iPOND, and selected histone modifications were analysed by western blot with specific antibodies (Figure 6A). As expected, acute treatment with VPA led to a marked increase of H3K9Ac levels, both in the input and at nascent DNA (Figure 6A and B). Interestingly, H3K9 mono- and trimethylation were markedly reduced specifically on nascent DNA, in contrast to the input chromatin, which remained unaffected. However, the other major repressive histone mark, methylated lysine 27 of histone H3, and its acetylated form were not significantly affected. These results show that VPA has pronounced effects on the deposition of epigenetic marks during DNA replication, and suggest that HDAC1 could act at nascent DNA by regulating the rapid deacetylation of H3K9 in ESCs, which is necessary for its subsequent methylation during replication. A rapid H3K9 deacetylation-coupled methylation mechanism is predicted to maintain stable levels of H3K9 modification during cell cycle progression. To test this hypothesis, we set up a system to isolate ESCs in different phases of the cell cycle. ESCs were stably transfected with the fluorescent cell cycle indicator consisting of human Geminin fused to the green fluorescent protein mAG1 (monomeric Azami-Green1) (FUCCI system) (42) (Figure 6C). This indicator is expressed in a cell cycle-dependent manner, being absent during G1 phase, gradually expressed in S phase and rapidly degraded at the end of M phase (Figure 6D) (42), allowing for the fluorescence-based separation of three populations of cells. ESCs in G1, S and late S/G2/M were prospectively isolated by fluorescence-activated cell sorting (FACS) (Figure 6E), and the levels of H3K9Ac and H3K9me3 were assessed by western blot analysis. Unlike in HeLa cells (36)(37)(38), no major alterations of H3K9me3 were observed when cells in G1 (green−), S (green+) and late S/G2/M (green+++) cell cycle phases were compared (Figure 6F). Furthermore, H3K9Ac was largely unchanged, with only a modest reduction in cells in late S/G2/M (green+++) (Figure 6D). These results suggest that, in contrast to tumour cells, the heterochromatin marker H3K9me3 is stably maintained across the cell cycle phases in ESCs.
DISCUSSION
In this study, we have profiled the proteins associated with the replication fork in ESCs, under the assumption that ESCs contain a partly unique protein interaction network. Combined with our studies of their behaviour during DNA replication, we conclude that: (i) nascent DNA of ESCs is enriched with proteins associated with pluripotency; (ii) the recruitment at replication sites of proteins not belonging to the replisome, such as the MMR proteins and NuRD complex subunits, varies in a cell-type specific manner; (iii) complexes involved in determining the unique epigenetic landscape of ESCs, including the HDAC1-NuRD complex, are dynamically associated with chromatin during replication progression; and (iv) restoration of the repressive epigenetic mark H3K9me3 in ESCs is very rapid and coupled to a HDAC1-dependent deacetylation process.
Although two independent laboratories studied HEK-293T cells using the same iPOND method (10,11), only 19 proteins were common to both studies, out of a total of 52 and 84 identified proteins, respectively. In the present study, 37 of the 52 and 19 of the 84 proteins identified in (10) and (11), respectively, were also found in our data (Supplementary Table S1). The differences between the studies could be caused by variations of the iPOND methodology, the number of cells used in the studies (3.5 × 10⁹ by the Fernandez-Capetillo group, 2.7 × 10⁸ by the Cortez group and 3 × 10⁷ in the present study) and/or differences associated with the cell type studied (HEK-293T versus ESCs). Limitations in sensitivity resulting in false negatives, as well as unspecific noise leading to false positives, might also contribute. Nevertheless, among the proteins identified, 75% of the replisome proteins and 60% of the DNA repair proteins identified in ESCs were also common to HEK-293T cells (10,11). These include, among others, the catalytic subunit and primase (POLA1 and PRIM2) of the polymerase alpha complex, which initiates DNA synthesis; subunits of the DNA polymerases delta and epsilon (POLD1, D3 and E), which extend the synthesis of DNA on the leading and lagging strands; the clamp loader replication factor C complex (RFC1-5); subunits of the CAF1 (CHAF1A and B) and FACT (SUPT16H, SSRP1) histone chaperone complexes; the MCM helicase complex (MCM2, 3, 4, 6 and 7); and mismatch repair proteins (MSH2, 3 and 6). Considering that ours and these previous studies on HEK-293T cells are based on an unbiased method, which covers all active DNA replication from early to late S phase, and on a hypothesis-free proteomic approach, the high overlap in functional clusters confirms the existence of a core of replication-associated proteins shared between different cell types. In addition to the core proteins shared with HEK-293T cells, our side-by-side comparison of ESCs and fibroblasts suggests the existence of variable modules that associate with DNA in a cell-type-dependent manner, like the NuRD complex. We believe that further validation of candidate proteins in different cell types will clarify the common and unique proteins at nascent DNA.
While the iPOND technique enables an unbiased identification of the proteome associated with replicating DNA, some limitations should be considered. The iPOND technique fails to capture the dynamics of DNA replication along the S phase, and therefore it is not possible to ascertain that all the proteins identified in our or other studies are present at replication sites at the same time. However, a combination of iPOND with preparative flow cytometry techniques would allow for studies of the dynamics of the proteome around nascent DNA at selected stages of the S phase. Moreover, the identification of proteins associated with nascent DNA by iPOND does not necessarily demonstrate a physical interaction between them. The use of alternative methods, such as Förster resonance energy transfer (FRET), or adaptations of iPOND such as our two-step purification procedure (IP-iPOND), will be crucial as a systematic strategy to confirm the coexistence of different components at the same nascent DNA.
Our study identified a large number of proteins associated with nascent DNA in ESCs, encompassing different functional clusters. As expected, the DNA replication cluster was enriched in our data set; unexpectedly, a number of other functional clusters were also present, including metabolic enzymes, ribosomal proteins and structural proteins. Although more experimental data are required to independently confirm the presence of these proteins at nascent DNA in ESCs, previous work has already pointed to a possible functional relationship with DNA replication. For instance, the metabolic cluster includes enzymes of the mevalonate pathway, such as phosphomevalonate kinase and hydroxymethylglutaryl-CoA synthase. The inhibition of the mevalonate pathway by statins is known to cause a rapid block of DNA synthesis in both transformed cell lines and ESCs, which is followed by a loss of pluripotency (43,44). These effects are reversed by the addition of mevalonate; however, the mechanism of action of statins on DNA replication and the molecular mechanism by which mevalonate reverses these effects are unknown. Our results showing an interaction of these enzymes at nascent DNA warrant further studies identifying their role in the maintenance of pluripotency across the cell cycle. Ribosomal proteins have previously been shown to have functional and physical associations with DNA replication proteins in bacteria (45,46). In mammals, the ribosomal protein S27L is recruited to DNA breaks upon DNA damage and modulates the DNA damage response in human colorectal cancer cells (47). Our finding of the ribosomal protein S27L at nascent DNA, together with the close relationship between the DNA repair response pathway and DNA replication, supports a putative functional role of this protein in genome stability during replication in ESCs. Furthermore, biochemical studies in HeLa cells indicate that the lamina-associated polypeptide 2 isoform beta, an inner nuclear membrane protein, and seemingly other splicing isoforms, including the alpha isoform identified in our study, regulate the initiation of DNA replication (48). Hence, we have identified connections between DNA replication and disparate, unexpected cellular functions that open up future studies resolving their functional roles in the replication machinery.
Although epigenetic modifications in ESCs are highly specific and underlie the plasticity necessary for gene expression to support development while retaining pluripotency (1), little is known about whether the genetic structures selected by chromatin-modifying enzymes are targeted during replication and/or in mature chromatin (49). Here, we show that HDAC1-NuRD complexes are enriched at chromatin during replication in ESCs. Moreover, we show that, in addition to HDAC1, the NuRD subunit CHD4 associates with UHRF1. These results point to a previously unidentified multi-enzyme complex ensuring epigenetic memory in ESCs. The abundance and versatile nature of the protein complexes at nascent DNA identified in the present study reveal the importance of replication-associated protein complexes for the establishment and maintenance of stem cell characteristics in ESCs.
The sequential recruitment of chromatin-modifying enzymes often reflects a temporally ordered modification of chromatin. For instance, the observed interdependence between NuRD and PRC2 in propagating epigenetic marks (7,50,51) suggests a hierarchical relationship, by which deacetylation of H3K27Ac by NuRD is required to recruit PRC2 to the target sequences in order to methylate H3K27 (50,51). Similar to H3K27 methylation, NuRD activity seems important for the deacetylation of H3K9, which promotes its methylation and the formation of silent chromatin containing H3K9me3 (51,52). Our finding of H3K9 methylation associated with DNA replication is consistent with the identification of the G9a/EHMT2 methyltransferase, which associates with UHRF1 (34,53), and the SETDB1 methyltransferase at the replication fork in our and other studies (54,55). The failure of H3K9 methylation during replication after VPA treatment suggests that licensing by deacetylation precedes and is required for methylation and the formation of silenced chromatin. Hence, in contrast to somatic cells, our results support a mechanism for restoring H3K9me3 in ESCs that is very rapid, and which could participate in preventing the unscheduled expression of repressed chromatin.
Finally, we envisage that our results, showing the existence of cell-type variations in replication-associated proteins, will encourage new directions of research in other fields, including cancer biology. Tumour and normal cells display different behaviour in the DNA repair response (56); furthermore, it is possible that changes in replication-coupled mechanisms necessary for successful fork progression participate in therapy resistance and therapy-driven evolution of tumour recurrence, as exemplified recently in glioma (57,58). Identification of the mechanisms participating in tumour evolution and therapy resistance could open the way to new targets for cancer therapy.
"year": 2014,
"sha1": "4a1ef69e372802a229375627ec1ae570392562c8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1093/nar/gku374",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4a1ef69e372802a229375627ec1ae570392562c8",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Hemothorax following Uncomplicated Endoscopic Variceal Sclerotherapy and Ligation for Esophageal Varices
Endoscopic variceal sclerotherapy and ligation are standard treatment modalities used for the management of esophageal varices. Reportedly, sclerotherapy and ligation are associated with complications such as hematuria, pulmonary thrombus formation, pleural effusion, renal dysfunction, and esophageal stenosis. However, hemothorax following sclerotherapy and ligation has not yet been reported. We treated a patient who presented with liver cirrhosis and polycythemia vera and later developed hemothorax following the above-mentioned procedures. An 86-year-old man diagnosed with liver cirrhosis due to chronic hepatitis type B and alcohol abuse underwent variceal sclerotherapy using ethanolamine oleate to treat his esophageal varices. Oozing from the esophageal varices continued even after the sclerotherapy procedure; therefore, we performed endoscopic variceal ligation. The patient developed left-sided hemothorax within 24 h after treatment of his varices, and an emergency thoracotomy was performed. The pulmonary ligament of the left lung was bulging and torn because of a mediastinal hematoma, and oozing was noted. Cessation of bleeding was noted after the laceration of the left pulmonary ligament had been sutured. Ours is the first reported case of hemothorax in a patient following an uncomplicated procedure of sclerotherapy and ligation.
Introduction
Endoscopic variceal sclerotherapy and ligation are common treatment modalities used to control and manage esophageal variceal bleeding [1,2]. Despite the proven efficacy of both sclerotherapy and ligation for the management of acute variceal bleeding [1,2], ligation is the first-line therapy because of its safety and ease of use [3]. However, variceal sclerotherapy and ligation are associated with several adverse effects. While chest pain, pulmonary embolism, renal dysfunction, and esophageal stenosis are known to occur as major adverse events associated with sclerotherapy [4][5][6], ligation could lead to complications such as esophageal laceration, transient dysphagia, chest pain, esophageal stricture, and ulcer-related bleeding [7].
There is growing concern regarding pulmonary complications associated with sclerotherapy, and nonhemorrhagic pleural effusion has been reported after variceal sclerotherapy [8]. A retrospective study revealed that the incidence of pleural effusion is about 27% [8]. Of note, hemothorax has been reported in patients following traumatic accidents, including massive hemothorax after blunt trauma causing injury to the inferior phrenic artery [9]. However, there is little information regarding the occurrence of hemothorax in association with uncomplicated esophageal variceal sclerotherapy.
We report the occurrence of hemothorax in a patient diagnosed with esophageal varices following an uncomplicated esophageal variceal sclerotherapy and ligation procedure. The patient presented with liver cirrhosis and polycythemia vera with concomitant esophageal varices. After undergoing endoscopic variceal sclerotherapy and ligation, he complained of dull left-sided thoracic pain. Based on the findings of a computed tomography (CT) examination, he was diagnosed as having left-sided hemothorax. Ours is the first report to describe a case where endoscopic variceal sclerotherapy and ligation possibly contributed to the development of hemothorax in a patient.
Case Presentation
An 86-year-old man diagnosed with liver cirrhosis due to chronic hepatitis type B and alcohol abuse was investigated for the presence of esophageal varices at the time of a follow-up visit to the Department of Gastroenterology at our hospital. He had a history of left-sided intramuscular hemorrhage of unknown etiology a year prior to presentation. He had been diagnosed with polycythemia vera at the age of 74 years; his family history was unremarkable. A physical examination revealed that he was 150 cm tall and weighed 55 kg. An examination of his palpebral conjunctiva did not reveal an anemic state, and his bulbar conjunctiva did not show signs of icterus. His heart and respiratory sounds were normal, and his liver and spleen were not palpable. A laboratory workup revealed a red blood cell count of 5.36 × 10⁶/μL, a hemoglobin level of 13.8 g/dL, and a platelet count of 42.8 × 10⁴/μL, and his prothrombin time was prolonged, at 60%. However, his von Willebrand factor (vWF) was normal. The serum aspartate aminotransferase and alanine aminotransferase levels were elevated to 44 and 40 U/L, respectively; however, the serum albumin, total cholesterol, and triglyceride levels were decreased (Table 1). His hepatic reserve was Child-Pugh class B.
We performed an upper gastrointestinal endoscopy to assess for any gastrointestinal complications associated with liver cirrhosis. The endoscopy showed erythema and cherry red spots in the lower part of the esophagus (Fig. 1a). On the second day after admission, he underwent endoscopic variceal sclerotherapy with injection of ethanolamine oleate into the variceal veins to prevent bleeding from the esophageal varices (Fig. 1b, c). After the endoscopic variceal sclerotherapy, the patient developed epigastric abdominal pain and reported tarry stool. The following day, we performed an upper gastrointestinal endoscopy to identify the source of the bleeding leading to the tarry stool. A huge hematoma was detected at the puncture site through which the sclerotherapy had been administered, and we performed endoscopic variceal ligation using an O-ring (Fig. 1d, e). A laboratory workup revealed that his red blood cell count had decreased to 4.6 × 10⁶/μL, his hemoglobin level had dropped to 11.7 g/dL, and his platelet count had increased significantly to 98.8 × 10⁴/μL, for which he received an urgent blood transfusion.
The following day, the patient complained of severe left-sided dull pain and had difficulty breathing. An emergency enhanced CT revealed a massive left-sided pleural effusion, which was suspected to be caused by extravasation from vessels along the left pulmonary ligament (Fig. 2a). Another laboratory workup revealed that his red blood cell count had further decreased to 3.4 × 10⁶/μL, his hemoglobin level had further dropped to 8.9 g/dL, and his platelet count was 95.1 × 10⁴/μL.
On the third day of hospitalization, the patient underwent an emergency thoracotomy, which revealed massive bloody pleural effusion and a huge hematoma in the left thoracic cavity (Fig. 2b). The pulmonary ligament was bulging and torn because of the mediastinal hematoma, and oozing was noted. The vessel responsible for the hemothorax was not clearly identified. We sutured the lacerated portion of the left pulmonary ligament. After removal of the huge hematoma, he received a blood transfusion, and no further bleeding was observed. A further laboratory workup revealed that his red blood cell count had returned to 3.7 × 10⁶/μL, his hemoglobin level had returned to 10.7 g/dL, and his platelet count was 31.3 × 10⁴/μL. The patient was placed in the intensive care unit for observation and medical management.
On the eighth day of hospitalization, he was transferred to the general ward. A CT examination showed that his left-sided massive pleural effusion had decreased (Fig. 3a), and endoscopy revealed thrombus formation in the variceal veins in addition to the presence of a post-banding ulcer after use of the elastic O-ring (Fig. 3b). He was discharged from the hospital 10 days after surgery for evacuation of the mediastinal hematoma.
Discussion
Pulmonary embolism, renal dysfunction, and esophageal stenosis are known to be major adverse effects associated with sclerotherapy [4][5][6]. Variceal ligation is associated with complications such as esophageal laceration, transient dysphagia, chest pain, esophageal stricture, and ulcer-related bleeding [7]. However, hemothorax following uncomplicated esophageal variceal sclerotherapy and ligation has not yet been reported.
Life-threatening hemothorax has been reported due to injury to the inferior pulmonary ligament after trauma: a patient hit by a car developed active extravasation of the contrast medium [9]. Massive hemothorax has also been reported due to inferior phrenic artery injury after blunt trauma. However, in our case, no trauma occurred during sclerotherapy and ligation, nor was there any direct injury to the pulmonary ligament.
To date, only 1 case of hemothorax has been reported following a sclerotherapy procedure performed for esophageal varices [10]. That patient developed left-sided bloody pleural effusion within 12-72 h after sclerotherapy. The site of bleeding that led to the hemothorax following esophageal variceal sclerotherapy remains unclear. It was hypothesized that hemothorax reflects the severity of inflammation after paravariceal extravasation of the sclerosant. Alternatively, the patient could have had abnormally dilated vessels on the outer wall of the esophagus secondary to portal hypertension. However, in our case, paravariceal extravasation of the sclerosant was not detected. Sclerotherapy itself might induce portal hypertension associated with thrombosis in the treated veins. Changes in hemodynamic status might be a mechanism that contributes to the development of hemothorax after variceal sclerotherapy and ligation.
Polycythemia vera is associated with bleeding primarily involving the skin and mucous membranes, suggesting defective primary hemostasis [11]. Although gastrointestinal hemorrhage occurs less frequently, it can be severe and is often associated with the use of aspirin [12,13]. This type of bleeding pattern is consistent with qualitative or quantitative defects in platelets or the presence of von Willebrand disease. Some episodes of hemorrhage may be directly or indirectly related to concomitant thrombotic complications. Previous reports have indicated that bleeding gastric and esophageal varices usually result from portal hypertension associated with thrombosis of abdominal veins [14]. In the present case, the patient had a history of left-sided intramuscular hemorrhage. His platelet count increased after esophageal variceal sclerotherapy and rose further after esophageal variceal ligation. Furthermore, his platelet count dropped, with no further episodes of bleeding, following surgery for the treatment of the hemothorax and hematoma evacuation. Previous reports have indicated that an elevated platelet count may be associated with an abnormal vWF multimer distribution in plasma [15]: an elevated platelet count correlated with a decrease in the largest multimers of plasma vWF, and an inverse correlation is known to exist between the proportion of large vWF multimers and the platelet count. These findings indicate that an increased platelet count following sclerotherapy might induce qualitative defects in platelets, leading to a greater tendency towards bleeding.
In conclusion, we described a case of hemothorax in a patient following uncomplicated endoscopic variceal sclerotherapy and ligation for the management of esophageal varices. The hemothorax could be attributed to portal hypertension caused by sclerotherapy.

[Fig. 1 legend. Endoscopic findings showing tense and nodular varices with cherry red spots at the locus inferior (a). Endoscopic variceal sclerotherapy was performed via intravariceal injection of 5% ethanolamine oleate using a 25-G needle injector (b). Endoscopic varicelography shows the 5% ethanolamine oleate injected into the veins (c). After the endoscopic sclerotherapy procedure performed for the esophageal varices, the endoscopic findings show a blood clot covering the ruptured esophageal varices (d). (e) Endoscopic variceal ligation.]
"year": 2017,
"sha1": "1a2ba93e09f190374896d213287aca2434a82a99",
"oa_license": "CCBYNC",
"oa_url": "https://www.karger.com/Article/Pdf/480378",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1a2ba93e09f190374896d213287aca2434a82a99",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Valuation Model for Adding Energy Resource into Autonomous Energy Cluster
With the availability of distributed generation (DG), clusters that can autonomously manage their energy profile are emerging in the power grid. These autonomous clusters manage their load profiles by orchestrating their energy resources, such as DG, storage, and flexible energy-consuming appliances. The performance of such an autonomous cluster depends on the composition of its energy resources. In this paper, we study how the performance of a cluster is affected by adding energy resources such as generating units, storage systems, or consuming appliances. First, we characterize the energy resources by parameters that describe their relevant properties. Afterwards, we describe a comprehensive set of performance indicators of a cluster that capture the economic, environmental, and social aspects. We present a model that shows how the energy resources influence the performance indicators of the cluster. We have tested our model with a case study, demonstrating its effectiveness in evaluating the value added by an energy resource to a cluster.
Introduction
The electricity power system is in transition. Driven by the growing need for clean, reliable and affordable electricity supply, more renewable and distributed energy sources are penetrating the distribution power grid, i.e., close to the end-consumers [1][2][3]. For instance, according to the European Parliament, all new buildings built after 2019 will have to produce energy on site [4]. In addition to generation capability, distributed electricity storage systems are also becoming available [5][6][7][8][9]. Moreover, massive adoption of electric vehicles is anticipated, which will have a huge impact on the distribution grid [10][11][12]. In parallel with these trends, significant effort is being made to develop intelligent solutions that could help to coordinate the system [13,14].
The availability of distributed generation, the flexibility provided by distributed storage and other flexible devices, as well as the accessibility of intelligent mechanisms to coordinate these resources make local matching of supply and demand more appealing. With more resources becoming locally available and with the growing intelligence of coordination, the lower parts of the electricity power grid tend to become energy autonomous. Accordingly, various types of autonomous clusters are developing in the power system, namely virtual power plants [15], microgrids [3], autonomous networks [16], energy communities [17], etc. Common to these forms of clusters is that they autonomously manage their resources and exchange power bidirectionally with the rest of the power grid.
A synthetic neighborhood autonomous cluster is shown in Figure 1. The cluster consists of different types of energy resources, including power sources, electricity storage systems, and different types of electricity-consuming appliances in and around the houses. The energy resources in the cluster can be coordinated using appropriate strategies to achieve a desired performance. The autonomous cluster is also connected to the external grid, enabling it to exchange power with the rest of the grid bidirectionally.
It is desirable to optimize the performance of autonomous clusters with regard to economic, environmental, and social values. The performance measures depend on the composition of the energy resources in the cluster, since the energy resources contribute differently to the different performance measures of the cluster. Therefore, finding the right composition of energy resources plays a significant role in obtaining the desired performance of the cluster. To find the right composition of the energy resources in the cluster, the influence of each energy resource on the performance of the cluster needs to be clearly identified.
In this paper, we present a novel study that investigates how the energy resources in a generic autonomous cluster influence the performance of the cluster. To do this, we identify the characteristics of the energy resources that influence the performance of the cluster. Further, we describe a comprehensive set of relevant performance indicators of the cluster, and then model how these performance indicators are influenced by the characteristics of the energy resources. This enables us to model the value added to the cluster by adding an energy resource.
The rest of this paper is organized as follows. We present the related work in Section 2. After presenting the characteristics of the energy resources in Section 3, we present our model of the performance indicators of a cluster in Section 4. In Section 5, we present the case study used to test our model. Finally, concluding remarks are presented in Section 6.
Related Work
With the trend of increasing availability of distributed energy resources, various forms of autonomous clusters have been proposed. A Virtual Power Plant (VPP) [15] is a collectively managed cluster of distributed power sources. A microgrid [3] is a low-voltage distribution system comprising distributed generation, storage systems and controlled loads that are coordinated to achieve controllable operation, either as an island or connected to the power grid. An autonomous network [16] is a part of the power grid whose behavior is more or less independent from the rest, and whose primary aim is optimizing its normal operation. An energy community [9] is a cluster of prosumers that exchange power with the rest of the system as a single unit.
In autonomous clusters, desirable performance is achieved by orchestrating the energy resources. Cost, emission and reliability/robustness are common performance indicators in the power system. There are several works in the literature that attempt to optimize some of these performance indicators on specific systems; a review is provided in [18]. However, a comprehensive model that evaluates the performance indicators of a cluster in terms of the properties of its constituent energy resources is missing, and this work attempts to fill this gap.
In this work, we propose a model that evaluates the value gained by adding an energy resource to an autonomous cluster. In addition to the common performance indicators mentioned before, we propose two further relevant performance indicators of an autonomous cluster, namely independence and convenience, that capture additional performance aspects of a cluster, as will be described later.
Characterizing the Energy Resources
In this paper, an energy resource of a cluster refers to a generation unit, an energy storage system, or a consuming appliance that is part of the cluster. Energy resources have different characteristics that influence the performance of the cluster. In this section, eight characteristics are identified, namely cost, emissions, failure rate, responsiveness, controllability, predictability, availability, and convenience. These characteristics are described subsequently.
Cost
An energy resource has a fixed cost, which represents the investment incurred to install it. Over a given period of time T, the fixed cost can be translated into a depreciation cost. Depreciation costs arise from the degradation in value of the energy resource as a result of aging and usage; the more intensively a resource is used, the faster it depreciates. Thus, the depreciation cost of an energy resource over an interval of time T is obtained by multiplying its fixed cost f_c by its depreciation D over T:

c_dep = f_c · D. (1)

In addition, an energy resource has a variable cost associated with its operation. The variable cost c_var of an energy resource over a period T is

c_var = c_v · E, (2)

where c_v is its average cost of supplying a unit of energy, and E is the total amount of energy supplied by the energy resource in the time interval T.
Emission
An energy resource usually has greenhouse gas emissions associated with it, which can be divided into fixed and variable emissions. The total fixed emission m_f is the emission associated with the manufacturing and installation process of the energy resource. The variable emission is the emission resulting from the operation of the energy resource.
Similar to the cost, the depreciation and variable emissions of an energy resource over a period T (m_dep and m_var) can be obtained as shown in Equations (3) and (4):

m_dep = m_f · D, (3)

m_var = m_v · E, (4)

where m_f is the fixed emission of the energy resource, and m_v is the emission of the energy resource per unit of the energy it supplies.
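As a concrete illustration, the following Python sketch implements Equations (1)-(4); the class layout, field names and example numbers are our own illustrative assumptions, not values from the paper.

from dataclasses import dataclass

@dataclass
class EnergyResource:
    fixed_cost: float      # f_c, installation cost
    unit_cost: float       # c_v, cost per kWh supplied
    fixed_emission: float  # m_f, manufacturing/installation emission (kg CO2)
    unit_emission: float   # m_v, emission per kWh supplied (kg CO2/kWh)

    def depreciation_cost(self, depreciation: float) -> float:
        # Eq. (1): c_dep = f_c * D, with D the fractional depreciation over T
        return self.fixed_cost * depreciation

    def variable_cost(self, energy_kwh: float) -> float:
        # Eq. (2): c_var = c_v * E
        return self.unit_cost * energy_kwh

    def depreciation_emission(self, depreciation: float) -> float:
        # Eq. (3): m_dep = m_f * D
        return self.fixed_emission * depreciation

    def variable_emission(self, energy_kwh: float) -> float:
        # Eq. (4): m_var = m_v * E
        return self.unit_emission * energy_kwh

# Illustrative example: a small PV installation over one year.
pv = EnergyResource(fixed_cost=10_000, unit_cost=0.01,
                    fixed_emission=5_000, unit_emission=0.02)
print(pv.depreciation_cost(0.05), pv.variable_cost(4_000))  # 500.0 40.0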
Failure Rate
Failure rate expresses the probability of failure of an energy resource. Given an expected rate of failure per year λ, a continuous probability distribution function can be used to model the failure probability. Commonly, the exponential distribution function is used. Accordingly, the probability that a failure occurs within a time duration T can be expressed as

F(T) = 1 − e^(−λT). (5)
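A minimal Python sketch of Equation (5), assuming the exponential failure model stated above; the rate in the example is hypothetical.

import math

def failure_probability(rate_per_year: float, duration_years: float) -> float:
    # Eq. (5): probability of at least one failure within T for an
    # exponentially distributed time-to-failure with rate lambda
    return 1.0 - math.exp(-rate_per_year * duration_years)

# A resource expected to fail once every 10 years, over a 1-year horizon:
print(failure_probability(0.1, 1.0))  # ~0.095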
Predictability
The predictability of an energy resource indicates how accurately its power supply or demand can be forecasted.
For example, a prediction statement may assert with 30% confidence that at time t = 4 the value will lie within an uncertainty interval of ±5 around the expected value.
Based on these, we quantify the predictability factor r of an energy resource as

r = (1/U) ∫_0^T ∫ ΔP(ρ, t) dρ dt, (6)

where T is the length of the time period over which the prediction is made, and U is the capacity of the energy resource. r is computed by integrating the prediction uncertainty interval ΔP over the time period T and over all coefficients of reliability ρ, and then normalizing by U. The normalization is done so that r gives the amount of uncertainty per unit capacity of the energy resource. A lower predictability factor r indicates a higher predictability.
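Numerically, the double integral in Equation (6) can be approximated with a Riemann sum over a sampled uncertainty grid. The Python sketch below assumes ΔP is sampled on a uniform grid over time and reliability levels; the grid resolution and data are hypothetical.

import numpy as np

def predictability_factor(delta_p: np.ndarray, dt: float, drho: float,
                          capacity: float) -> float:
    # Eq. (6): integrate the uncertainty interval DeltaP(rho, t) over the
    # prediction horizon T and all reliability coefficients rho, then
    # normalize by the resource capacity U; delta_p has shape (n_rho, n_t)
    return delta_p.sum() * dt * drho / capacity

# Illustrative: 24 hourly steps, 10 reliability levels, a 5 kW resource.
rng = np.random.default_rng(0)
delta_p = rng.uniform(0.0, 1.0, size=(10, 24))  # hypothetical uncertainty data
print(predictability_factor(delta_p, dt=1.0, drho=0.1, capacity=5.0))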
Availability
Availability of an energy resource tells whether it is available for use when it is needed. This characteristic also captures the usefulness of its availability. For example, a power source that is available during periods of surplus production but not available at times when there is a deficiency of supply has low availability. When quantifying availability, two parameters are of importance: the expected availability A(t), which tells when the energy resource is available, and its uncertainty ΔA(t), which represents the level of uncertainty of the availability.
To capture the usefulness of the availability of an energy resource, we propose an availability factor that depends on the situation in the cluster. We denote the situation in the cluster by S, which represents the amount of extra power production/consumption needed, based on whether there is a shortage or surplus of power generation in the cluster, normalized by the largest instantaneous power demand in the cluster. To determine the availability of an energy resource to supply power, S is computed based on the need for extra power supply, whereas to determine the availability of an energy resource to consume/store power, S is computed based on the need for extra power consumption. Accordingly, we propose to compute the availability factor a over a period of time T as shown in Equation (7).
Controllability
Controllability refers to the extent to which the power supply/consumption of an energy resource can be controlled. In our case, controlling means making it produce or consume a required amount of electricity on demand. For example, the charging rate of a battery storage can be tuned below the maximum possible charging rate. The controllability of an energy resource is subject to its inherent constraints. For instance, charging of a storage is constrained by its maximum charging rate, state of charge, and storage capacity. Therefore, we propose to measure controllability b as the length of the interval over which the power supply/consumption of an energy resource can be varied, as restricted by its inherent constraints.
Responsiveness
Responsiveness represents the duration of time it takes an energy resource to respond to a power production/consumption request from the cluster. Some energy resources respond in a few seconds, while others take a few minutes or more. For example, a battery storage can respond to a request in a few seconds, while a fuel cell responds in a couple of seconds to minutes. Thus, the responsiveness of an energy resource, x, is expressed as the length of the time interval between receiving the request and responding to it.
Convenience
Convenience refers to the perception of people about an energy resource regarding its disruption of their comfort. Comfort can have various dimensions, such as noise, visual disturbance, etc. For example, installing wind turbines in a residential neighborhood could lead to visual disturbance. People can have different opinions about the importance of a comfort dimension. The importance can be rated with integers ranging from 0 to 3. An energy resource can be evaluated against each comfort dimension with a score ranging from, say, 1 to 10. Therefore, convenience can be measured by surveying the opinion of the people about the importance and score of each comfort dimension. Afterwards, the convenience factor v is computed as shown in Equation (8), where h_j and l_j are the importance and score, respectively, of the j-th comfort dimension.
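As a small Python sketch: since Equation (8) itself is not reproduced above, we assume it is the importance-weighted sum of the per-dimension scores; the survey values in the example are hypothetical.

def convenience_factor(importance: list[int], score: list[float]) -> float:
    # Assumed form of Eq. (8): sum over comfort dimensions j of h_j * l_j
    return sum(h * l for h, l in zip(importance, score))

# Two comfort dimensions: noise (importance 3, score 7) and
# visual disturbance (importance 1, score 4).
print(convenience_factor([3, 1], [7.0, 4.0]))  # 25.0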
Performance Indicators of a Cluster
In this work, a cluster is a general term that refers to a part of the power grid that autonomously manages its own resources and is capable of exchanging power bidirectionally with the rest of the power grid. The performance of a grid cluster can be evaluated by comprehensively considering the economic, environmental, and societal aspects. This approach enables a holistic evaluation of the cluster. Accordingly, we present a comprehensive set of performance indicators that cover the economic, environmental and societal aspects. These performance indicators include cost, emission, robustness, independence, and convenience.
The value gained by adding an energy resource to the cluster depends on the precedence of usage of the energy resources in the cluster. For example, in case of excess power production, using a flexible load to match demand and supply could be given priority over storing the excess power in a battery storage. Thus, adding a flexible load to a cluster could alter the contribution a previously existing battery storage makes to the cluster. When the contributions of the energy resources change, the performance indicators of the cluster might change as well.
Next, we will present the performance indicators together with how they are influenced by the characteristics of the energy resources.
Cost
Evaluating the cost of a cluster is very relevant because it affects what consumers pay for electricity. The cost of the cluster per kWh, C, in time interval T is computed as shown in Equation (9). The terms in the square brackets make up the net cost of the cluster in T; it consists of the total depreciation and variable costs of all N energy resources in the cluster (obtained from Equations (1) and (2)), together with the cost of power exchanged with the external grid. The net cost is then divided by the total energy supplied in T to give the cost per kWh. To evaluate the impact of adding a new energy resource on the cost of the cluster, Equation (9) should be recomputed with the new energy resource incorporated into the cluster. Thus, the difference between the original cost and the new cost represents the added value of the new energy resource with respect to the cost of the cluster.
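A Python sketch of Equation (9) under the assumptions just stated; the helper takes per-resource costs precomputed with Equations (1) and (2), the assumed grid-exchange terms are labeled in the comments, and all numbers in the example are illustrative.

def cluster_cost_per_kwh(dep_costs, var_costs, import_cost,
                         export_revenue, energy_supplied_kwh):
    # Assumed form of Eq. (9): net cost (resource depreciation + variable
    # costs + grid-import cost - export revenue) per kWh supplied in T
    net_cost = sum(dep_costs) + sum(var_costs) + import_cost - export_revenue
    return net_cost / energy_supplied_kwh

# Illustrative: two resources over one year.
print(cluster_cost_per_kwh([500.0, 120.0], [40.0, 10.0],
                           import_cost=200.0, export_revenue=80.0,
                           energy_supplied_kwh=12_000.0))  # ~0.066 per kWh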
Emission
In line with growing environmental concerns, the emission of greenhouse gases associated with the electricity system needs to be minimized. Emission measures the cleanness of electricity with respect to greenhouse gases. The emission of a cluster incorporates the emissions associated with its energy resources. We assume that a cluster is responsible for the emission associated with the energy it supplies, both for local consumption and for export.
Accordingly, we quantify the emission of a cluster per kWh, M, in time interval T as shown in Equation (10). The quantity in the square brackets represents the total emission in period T associated with the cluster. The total emission is then divided by the sum of the energy supplied from the cluster, both for local consumption and export, in time interval T.
The impact of adding a new energy resource on the emission of the cluster can be evaluated by recomputing Equation (10) with the new energy resource included, in a similar way as was done for cost.
Robustness
An energy cluster needs to supply reliable power to the end-consumers; hence, it is desirable to minimize the chance of power outages. We express the robustness of a cluster in terms of the chance of power outages the consumers experience. We consider three possible causes of power outage: first, the scenario when a producing energy resource fails and there are no other energy resources to cope with the reduction in supply; second, a large and rapid fluctuation of the supply/consumption from the expected values that the cluster cannot cope with; and third, the situation when demand is higher than the maximum power supply.
We define three vulnerability measures of a cluster corresponding to these causes of power outage, namely failure vulnerability, fluctuation vulnerability, and power-shortage vulnerability. The failure vulnerability ν_failure of a cluster depends on the probability of failure of each energy resource, as well as on the potential impact π_i of the failure of each energy resource on the possibility of a power outage in the cluster:

ν_failure = Σ_i F_i(T) · π_i, (11)

where F_i(T) is the failure probability of energy resource i over period T (Equation (5)), and π_i represents the probability that the failure of energy resource i leads to a power outage in the cluster. Clearly, π_i depends on the composition of the cluster.
The impact of simultaneous failures of multiple energy resources can be obtained by multiplying the product of the failure rates of the individual energy resources by the probability that their combined failures lead to a power outage. This combined effect can be added to Equation (11), but the chance of simultaneous failures of multiple energy resources is practically very small.
The fluctuation vulnerability of a cluster depends on its maximum fluctuation tolerance, which is determined by the technique used to overcome fluctuations. In conventional power systems, three stages are involved in overcoming large fluctuations: the primary, secondary and tertiary control stages [19]. When a fluctuation arises, the primary control is initiated, whereby highly responsive energy resources are used to cope with the fluctuation within a short period of time (a few to several seconds). Afterwards, the secondary control stage takes over from the primary control (in a couple of seconds to a minute), using the less responsive resources, and the resources used in the primary stage are freed. Finally, the tertiary control takes over and brings the system back to an equilibrium position, thereby freeing the resources used in the secondary control stage.
For each control stage, the system has a fixed assimilation capacity to absorb fluctuations. If a fluctuation exceeds any of these assimilation capacities, then a power outage could result. Thus, the maximum absorbable fluctuation Δ_max can be expressed as the minimum of the assimilation capacities of the three stages:

Δ_max = min(Δ_primary, Δ_secondary, Δ_tertiary). (12)

The fluctuation vulnerability of the cluster over a period T is then obtained from the probability that a fluctuation exceeds Δ_max (Equation (13)). On the other hand, the power-shortage vulnerability over a given period T is computed by integrating over T the probability that demand exceeds supply:

ν_shortage = ∫_0^T P(demand > supply) dt. (14)

Finally, the overall vulnerability of the cluster is obtained by adding the individual vulnerabilities together:

ν = ν_failure + ν_fluctuation + ν_shortage. (15)

Then, the overall robustness of the cluster is computed as the inverse of the overall vulnerability:

R = 1/ν. (16)

A cluster can have a certain level of tolerance for the occurrence of power outages. For example, a single power outage per year could be tolerable in a cluster. We refer to the maximum vulnerability that is tolerated by the cluster as the power outage tolerance ν_tol. Thus, the condition

ν ≤ ν_tol (17)

should always be maintained.
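The vulnerability aggregation and tolerance check of Equations (15)-(17) reduce to a few lines of Python; raising an error on a tolerance violation is an illustrative choice, and the numbers are hypothetical.

def overall_robustness(nu_failure: float, nu_fluctuation: float,
                       nu_shortage: float, nu_tol: float) -> float:
    nu = nu_failure + nu_fluctuation + nu_shortage    # Eq. (15)
    if nu > nu_tol:                                   # Eq. (17) violated
        raise ValueError(f"vulnerability {nu:.3g} exceeds tolerance {nu_tol:.3g}")
    return 1.0 / nu                                   # Eq. (16)

print(overall_robustness(0.02, 0.01, 0.005, nu_tol=0.05))  # ~28.6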
When a new energy resource is added to the cluster, the values of the parameters in Equations (11)-(16) could change. For instance, the failure rate, availability, controllability, responsiveness and predictability of the energy resource could affect the vulnerability of the cluster. Hence, the change in robustness, ΔR, gives the added value of the new energy resource.
Independence
A cluster may depend on the rest of the power grid for various reasons. When imported electricity is cheaper than the local electricity supply from its own power sources, the cluster might resort to importing electricity from the rest of the power grid even though the demand could be supplied locally. We refer to this optional kind of dependency as economical dependency. On the other hand, when the local demand exceeds the maximum capacity of the local supply, the cluster is forced to import electricity. We refer to this kind of dependency as mandatory dependency. The independence performance metric addresses the mandatory dependence of the cluster on the rest of the grid.
There could be various reasons why a cluster would minimize its mandatory dependence on the rest of the power grid. For instance, if the cluster is largely dependent on the rest of the grid, then disturbances in the rest of the grid could have a larger impact on the cluster. Accordingly, we represent independence as one performance indicator of a cluster. We employ two types of metrics to capture the mandatory dependence of a cluster on the rest of the grid, namely aggregate dependence and instantaneous dependence. Aggregate dependence D_aggregate captures the dependence in terms of the total mandatory energy imported from the rest of the grid relative to the total energy consumed in the cluster in period T, as shown in Equation (18).
Instantaneous dependence D_instantaneous captures the dependence of a cluster on the rest of the grid in terms of the instantaneous power imported. Let X be the maximum mandatory instantaneous power imported from the rest of the grid in period T, and let Y be the average power consumed in the cluster in the same period. Then, D_instantaneous is computed as the ratio of the two:

D_instantaneous = X / Y. (19)

The characteristics of the energy resources, such as predictability, controllability, responsiveness and availability, affect the independence of the cluster. For example, if a cluster has more predictable energy resources, then possible supply shortages can be predicted early enough, and the controllable energy resources can be managed appropriately to compensate the supply shortage locally, thereby reducing dependence on the external grid. The impact of adding a new energy resource can be computed in the same fashion as for the previous cluster performance indicators.
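Both dependence metrics can be evaluated on sampled time series, as in the Python sketch below; the aggregate form (mandatory imported energy over consumed energy) is our assumption for Equation (18), while the instantaneous form follows the X/Y ratio defined above. The sample values are hypothetical.

import numpy as np

def independence_metrics(imported_kw: np.ndarray, consumed_kw: np.ndarray):
    # Assumed form of Eq. (18): total mandatory imported energy relative
    # to total energy consumed in period T (uniform sampling assumed)
    d_aggregate = imported_kw.sum() / consumed_kw.sum()
    # Eq. (19): peak mandatory import X over average consumption Y
    d_instantaneous = imported_kw.max() / consumed_kw.mean()
    return d_aggregate, d_instantaneous

imports = np.array([0.0, 2.0, 5.0, 1.0])    # hypothetical kW samples
loads = np.array([10.0, 12.0, 15.0, 11.0])
print(independence_metrics(imports, loads))  # (~0.167, ~0.417)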
Convenience
Convenience of a cluster measures the perception of the people about the suitability of the energy resources to maintain their comfort, as mentioned in Section 3.8. As shown in Equation (20), the convenience of a cluster V can be obtained by summing up the individual convenience factors v_k of all the energy resources in the cluster, calculated using Equation (8):

V = Σ_k v_k. (20)

Thus, the value added by a new energy resource can be obtained by recomputing Equation (20) with the new energy resource incorporated in the cluster.
A Case Study
In order to verify the theoretical model developed in the preceding sections, we present a simplified case study. The clusters used in our case study are modeled on the design of the green village project of TUDelft [20]. The green village project aims at building a sustainable village on the TUDelft campus based on green energy and intelligent technological developments.
For our case study, we make three variants of clusters with different compositions, which are simplified versions of the green village design. The first cluster, cluster1, represents a regular cluster whose composition is shown in Table 1. The second cluster, cluster2, is a modified version of cluster1 obtained by removing the battery. Similarly, the third cluster, cluster3, is obtained by modifying cluster1 such that the quantities of both the wind turbines and the solar PV are reduced by half, and the power capacity of the battery is increased to 50 kW.
We evaluate the gain with respect to cost and robustness of adding a storage system to the three clusters. As stated earlier, the value gained by adding an energy resource to the cluster depends on the precedence of usage of the energy resources in the cluster. While different precedence strategies are possible, we adopt a simple one. The precedence of usage of resources assumed for the three clusters is as follows. If there is a shortage of supply, then power is supplied from storage. If the storage supply alone cannot cope with the shortage, then additional power is supplied from the fuel cell. If the shortage exceeds the combined capacity of the storage and the fuel cell, then power is imported from the rest of the grid. On the other hand, if there is surplus production of power, then storage is used to store it. If the surplus production exceeds the storage capacity, then power is exported to the rest of the grid.
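The stated precedence can be written as a simple dispatch rule evaluated at each time step. The following sketch is a minimal illustration under our own simplifying assumptions (a single battery with an energy state and a power cap, a fuel cell with a power cap, and one-hour steps); it is not the exact logic used in the case study.

def dispatch(shortage_kw, battery_kwh, battery_cap_kw, fuel_cell_cap_kw, dt_h=1.0):
    """Return (from_battery, from_fuel_cell, imported, exported, new_battery_kwh).

    shortage_kw > 0 means local supply falls short of demand;
    shortage_kw < 0 means surplus production."""
    if shortage_kw > 0:
        # Precedence: 1) storage, 2) fuel cell, 3) import from the rest of the grid.
        from_batt = min(shortage_kw, battery_cap_kw, battery_kwh / dt_h)
        remaining = shortage_kw - from_batt
        from_fc = min(remaining, fuel_cell_cap_kw)
        imported = remaining - from_fc
        return from_batt, from_fc, imported, 0.0, battery_kwh - from_batt * dt_h
    else:
        # Surplus: 1) charge storage, 2) export the rest.
        surplus = -shortage_kw
        # A fixed storage capacity of 250 kWh is assumed, as in the case study.
        to_batt = min(surplus, battery_cap_kw, (250.0 - battery_kwh) / dt_h)
        return 0.0, 0.0, 0.0, surplus - to_batt, battery_kwh + to_batt * dt_h

print(dispatch(shortage_kw=40.0, battery_kwh=10.0,
               battery_cap_kw=20.0, fuel_cell_cap_kw=15.0))
# -> (10.0, 15.0, 15.0, 0.0, 0.0): battery first, then fuel cell, then import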
To accurately test our theoretical model, stochastic data, such as means and standard deviations, are needed to model the distributions of the profiles of the energy resources. Since such stochastic data are difficult to obtain, we resort to a simplified alternative whereby the profiles of the energy resources are approximated based on empirical data. We employ the Renewable Energy Grid Simulator (REGS) [21] for this purpose.
The REGS tool takes as input the average load, the average electricity from the wind turbines, and the average electricity from the solar PV, and outputs the corresponding time-series profiles of the load, wind energy supply, and solar energy supply over a period of time. The outputs of the simulator are tuned by intelligent pattern learning from rich empirical data about load and renewable energy supply patterns in The Netherlands from 2000 to 2010, obtained from TenneT. Using the outputs of REGS as input to our model, we apply the aforementioned precedence of usage of our resources.
Figure 2(a) shows the gain in the cost of the cluster obtained by adding battery storage of different storage capacities and power capacities to the three clusters described above. The storage capacities used are 100 and 250 kWh, while power capacities ranging from 1 to 80 kW are used. The power capacity of the battery refers to its maximum charging/discharging rate.
As can be observed from the figure, adding a battery yields the largest gain in cluster2 (the cluster without storage) compared to doing the same for the other clusters. In cluster2, the imbalance between demand and supply is compensated by the fuel cell and transactions with the external grid, because the cluster has no storage. After a battery is added to this cluster, the imbalance is primarily compensated by the battery, thereby significantly reducing the expensive fuel cell usage and the imported power.
On the other hand, a moderate cost gain is observed for cluster1 (the regular cluster) after adding a battery. The moderate gain stems from the fact that the cluster already had a battery that could compensate part of the power imbalance, with the remaining imbalance compensated by fuel cells and transactions with the rest of the grid. Thus, the extra battery is only used to cope with the imbalance that remains after using the existing battery, leading to a smaller gain.
In both cluster1 and cluster2, the gain in cost first rises rapidly with increasing power capacity of the added battery and later saturates even though the power capacity is increased further. Moreover, the gain in cost saturates at a smaller power capacity when the battery storage capacity is smaller, and vice versa. Thus, given a fixed storage capacity of a battery, the benefit of the battery can be improved by increasing its power capacity to a certain extent. However, increasing the power capacity beyond a certain level does not yield further gain, because the storage capacity of the battery constrains the maximum power that can be stored. Hence, a battery with an optimal combination of storage capacity and power capacity needs to be chosen.
On the contrary, cluster3 (a cluster with renewable energy reduced by half and larger storage capacity) did not show any gain from adding a battery. This cluster has lower variability on the supply side because of its lower share of variable renewable sources. Thus, the comparatively small surplus production from the renewable sources can already be completely absorbed by its larger battery storage capacity and supplied later when there is a shortage of supply. Accordingly, there is no remaining potential to reduce the use of fuel cells and power imports from the external grid. Therefore, adding an additional battery does not reduce cost, as it will not be used anyway.
Figure 2(b) shows the effect of adding batteries (with a storage capacity of 250 kWh and different power capacities) on the robustness of the three clusters under consideration. Improvement in robustness is measured by the increase in the number of days before the occurrence of a power outage. As can be observed, adding a battery did not improve the robustness of cluster3. Given the low share of variable renewable sources, the existing battery can already provide enough flexibility to store the surplus production of the renewable sources and reuse it later to improve the robustness of the cluster. Hence, adding a new battery does not improve the robustness because there is no extra surplus production to store and reuse. On the other hand, adding a battery yielded a larger robustness improvement in cluster1 (the regular cluster) than in cluster2 (the cluster with no battery). Although this sounds counterintuitive, it can be explained as follows. Batteries improve robustness if they are not being used at full capacity when the events (failure, fluctuation, or power shortage) occur in the cluster. At the occurrence of these events, the reserve capacity of the batteries can be exploited to minimize the vulnerability of the cluster. Cluster1 already has a battery, hence the probability that the newly added battery is used at full capacity is smaller. Thus, the newly added battery will have a larger reserve capacity that can be used to improve the robustness of the cluster. In contrast, cluster2 did not have a battery, and thus the newly added battery is more likely to have a smaller reserve capacity, leading to a smaller robustness improvement.
The results in Figures 2(a) and 2(b) clearly confirm that the value gained by adding an energy resource to a cluster depends on the composition of the cluster, as well as on the precedence of usage of its energy resources. Thus, our proposed valuation model enables the operator of the cluster to choose wisely the appropriate energy resources to add in order to achieve the desired performance improvement. Similar simulations could be repeated for the other performance indicators of the cluster.
Discussions and Conclusions
In this paper, we have developed a valuation model for evaluating the value gained by adding an energy resource to an autonomous energy cluster. Our model characterizes energy resources using a wide range of parameters, namely cost, emission, failure rate, predictability, availability, controllability, responsiveness, and convenience. Moreover, a comprehensive set of performance indicators of a cluster, relating to environmental, economic, and social values, is considered and modeled.
Based on this model, the impact of adding an energy resource to a cluster is analyzed. We also presented a case study to test our proposed theoretical model, which endorsed the strength of the model in evaluating the value an energy resource adds to a cluster. Our model also reveals that the value added by an energy resource depends both on the composition of the cluster and on the precedence of usage of the energy resources in the cluster.
Developing appropriate stochastic data that better capture the behavior of the energy resources could help analyze the benefits of the valuation model more thoroughly. Further, more realistic as well as synthetic test cases could be employed to evaluate the proposed valuation model.
Our proposed valuation model can be used as a basis to design an optimal composition of a cluster, whereby certain energy resources are added to or removed from the cluster depending on their impact on the desired performance indicators.
The cost metric also accounts for the cost of the yearly electricity import, C_im, and the benefit obtained from the yearly electricity export, C_ex; the total net cost is then divided by the sum of the energy supplied from the cluster, both for local use and for export. The fluctuation at time t depends on the profiles of all the energy resources in the cluster.
| 2017-12-16T02:55:00.132Z | 2013-08-19T00:00:00.000 | {
"year": 2013,
"sha1": "ca01e39e6e961820e40090ac7e985102edb913c1",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=35991",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "ca01e39e6e961820e40090ac7e985102edb913c1",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
237321016 | pes2o/s2orc | v3-fos-license | Predicting molecular mechanisms, pathways, and health outcomes induced by Juul e-cigarette aerosol chemicals using the Comparative Toxicogenomics Database
Introduction
Though electronic cigarettes were introduced as aids for smoking cessation more than a decade ago (Dinardo and Rome, 2019), youth-targeted flavors, packaging, and marketing have contributed to epidemic levels of teen vaping in recent years (Farzal et al., 2019). The health consequences of exposure to vaping chemicals and the molecular mechanisms underlying vaping-related illnesses are largely unknown.
Juul e-cigarettes are compact, closed-system nicotine-delivery devices that were introduced in 2015 and quickly became the most popular vaping device, with year-over-year growth of nearly 700% and a 50% market share (Kavuluru et al., 2019), prompting concern that their popularity among youth is a public health crisis (Walley et al., 2019). Because of their dominant market position and popularity among youth, we chose Juul e-cigarettes as the primary subject of this analysis. We set out to investigate the gene interactions, phenotype, pathway, and disease associations of chemicals detected in aerosols generated by heating Juul e-cigarettes connected to a digital puffing machine (Talih et al., 2019) or a Human Puff Profile Cigarette Smoking Machine (Reilly et al., 2019). Chemicals detected by these two methods include nicotine, acetaldehyde, formaldehyde, free radicals, crotonaldehyde, acetone, pyruvaldehyde, and particulate matter. Additional evidence for the presence of nicotine, acetaldehyde, formaldehyde, reactive oxygen species, acetone, pyruvaldehyde, and particulate matter in Juul aerosols has also recently been described (Muthumalage et al., 2019, Mallock et al., 2020, Azimi et al., 2021). Concentrations of these chemicals in Juul emissions are listed in Table 1. CTD is a public scientific resource wherein PhD-level scientists manually curate the scientific literature for data on chemicals, genes, and diseases, and integrate these data with select public data sets to help determine the molecular mechanisms underlying chemically influenced diseases (Davis et al., 2019). The external data sets integrated with CTD-curated data include the OMIM, MeSH, NCBI Gene, GO, KEGG, and Reactome Pathway databases (Davis et al., 2009, Davis et al., 2019). Thus, when CTD biocurators curate data providing direct evidence that a chemical interacts with a gene, those data are linked to other associated gene attributes, such as annotated molecular functions, cellular location, phenotypes, pathways, and diseases. We distinguish between the concepts of phenotype and disease, designating a phenotype as a biological outcome that is not inherently a disease, such as an alteration in blood pressure, whereas hypertension is a disease. This operational distinction facilitates integration of chemical-induced phenotypic and disease outcomes from the literature, and provides insight into the pre-disease state (Davis et al., 2018). CTD also generates transitive chemical-disease inferences by integrating independently curated chemical-gene, gene-disease, and chemical-disease interactions. Therefore, a previously unrecognized relationship may become evident when a direct chemical-gene statement is combined with a direct gene-disease statement to generate a chemical-disease inference (inferred via the shared gene). CTD statistically ranks these inferences to facilitate hypothesis development (King et al., 2012).
To facilitate identification of intermediate steps in the pathway from a chemical exposure to a disease outcome, CTD can be used to computationally link Chemical-Gene interactions with Phenotypes and Disease outcomes ("CGPD-tetramers"). These tetramers represent building blocks that can be assembled into larger chemical-induced pathways to propose potential mechanisms/modes of action (MOAs), progressing from subcellular to system-wide processes (Davis et al., 2020). As several recent publications describe multiple impacts of e-cigarettes on pulmonary endpoints (Thirión-Romero et al., 2019), we used CTD to analyze relationships among Juul aerosol chemicals, interacting genes, phenotypes, and respiratory illnesses. Such interactions provide intermediate points that may represent key events in the building of chemical-directed pathways linking Juul e-cigarettes to respiratory diseases.
CTD data version and web tools
Analysis was performed using CTD public data available in October 2020 (revision 16329). CTD is updated with new content on a monthly basis; consequently, counts described herein may change over time. CTD's public analytical and visualization tools were used (http://ctdbase.org/tools/) in subsequent analysis, including Batch Query, Set Analyzer, MyVenn, and Chemical-Phenotype Interaction Query. Default values were used for corrected p-values (threshold 0.01). For all data downloads, a filter was used to return data for exact input query terms only.
Chemical relationships in CTD
CTD's 'Batch Query' tool (http://ctdbase.org/tools/batchQuery.go) was used to retrieve chemical-disease relationships by inputting the list of NAFFCAPP chemical terms (nicotine, acetaldehyde, formaldehyde, free radicals, crotonaldehyde, acetone, pyruvaldehyde, and particulate matter) and selecting 'Disease Associations' as output; chemical-disease relationships were then sorted by direct evidence for marker/mechanistic relationships among the respective chemicals and diseases. To retrieve chemical-gene relationships, the NAFFCAPP chemical terms were used as input in the 'Batch Query' tool, and curated gene interactions were downloaded. The output was sorted and duplicate genes were eliminated to yield 8,256 unique genes, which were subsequently used as input in CTD's 'Set Analyzer' tool (http://ctdbase.org/tools/analyzer.go) and queried for enriched pathways, as previously described (Davis et al., 2013). Statistical enrichment of a pathway indicates that the fraction of genes annotated to it in a test set is significantly larger than the fraction of genes annotated to it in the genome.
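As an illustration of this post-processing, the sketch below de-duplicates gene symbols from a downloaded report and scores pathway enrichment with a hypergeometric test, our stand-in for the Set Analyzer statistic; the CSV column name and the genome size are assumptions, not CTD specifications.

import csv
from scipy.stats import hypergeom

def unique_genes(csv_path):
    """De-duplicate gene symbols from a downloaded chemical-gene report.
    The column name 'GeneSymbol' is a hypothetical schema."""
    with open(csv_path, newline="") as f:
        return sorted({row["GeneSymbol"] for row in csv.DictReader(f)})

def pathway_enrichment(test_genes, pathway_to_genes, genome_size=20000):
    """P(overlap >= observed) under a hypergeometric null, per pathway."""
    test = set(test_genes)
    pvals = {}
    for pathway, annotated in pathway_to_genes.items():
        k = len(test & set(annotated))  # observed overlap
        # sf(k-1) = P(X >= k) for X ~ Hypergeometric(M=genome, n=annotated, N=test)
        pvals[pathway] = hypergeom.sf(k - 1, genome_size, len(annotated), len(test))
    return dict(sorted(pvals.items(), key=lambda kv: kv[1]))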
Determination of key event relationships
CTD tools 'Batch Query' and 'MyVenn' (http://ctdbase.org/tools/myVenn.go) were used to identify relationships that link NAFFCAPP chemicals to respiratory outcomes for three respiratory tract diseases: Pulmonary Fibrosis (D011658), Asthma (D001249), and Lung Neoplasms (D008175). Specifically, direct disease associations obtained from CTD's 'Batch Query' of the NAFFCAPP chemical set were sorted by disease name, and genes in the inference network that infer the respective chemicals to these diseases were combined into a single data set of 295 genes. These genes support the direct curated 'M' (marker/mechanism) relationship between Juul aerosols and these specific diseases, and identify potential molecular initiating events. Prioritized phenotypes were determined as those that are independently associated with the NAFFCAPP chemicals and the respiratory disease examples by combining output from CTD's 'Batch Query' and 'MyVenn' analysis tools. First, a batch query was performed with the four chemicals that show direct relationships to the disease examples (nicotine, acetaldehyde, formaldehyde, and particulate matter) as input, selecting curated phenotype associations as output, resulting in 552 unique phenotypes. Second, GO/phenotype terms annotated to each of the 295 genes were downloaded (940 phenotypes). 'MyVenn' was used to compare phenotypes annotated to the chemicals with phenotypes annotated to the genes, selecting 'Other' as input type, to identify 248 prioritized phenotypes common to both sets. CGPD-tetramers are novel information blocks that link Chemical-Gene interactions with Phenotype and Disease outcomes. They are computationally generated by integrating five independently curated data sets in CTD: chemical-gene interactions, chemical-phenotype interactions, gene-GO/phenotype associations, chemical-disease associations, and gene-disease associations (Davis et al., 2020). We constructed CGPD-tetramers for the NAFFCAPP set of chemicals with respect to the three respiratory tract diseases (pulmonary fibrosis, asthma, and lung neoplasms). Shared chemicals, genes, and phenotypes among the CGPD-tetramers were compared to help elucidate potential mechanistic pathways.
There were 81 phenotypes common to the 248 prioritized phenotypes and the 112 phenotypes shared among the CGPD-tetramers for pulmonary fibrosis, asthma, and lung neoplasms. CTD's "Chemical-Phenotype Interaction Query" tool was used to select phenotypes that have been annotated to Respiratory System.
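The tetramer construction can be sketched as a chain of joins over the five curated relationship sets, followed by a frequency count of phenotypes of the kind used below to rank candidates; the toy records are illustrative, not CTD entries.

from collections import Counter

def cgpd_tetramers(chem_gene, chem_pheno, gene_pheno, chem_disease, gene_disease):
    """Emit (chemical, gene, phenotype, disease) tuples supported by all
    five relationship sets, each given as a set of pairs."""
    tetramers = []
    for (c, g) in chem_gene:                      # curated chemical-gene interaction
        for (c2, p) in chem_pheno:
            if c2 != c or (g, p) not in gene_pheno:
                continue                          # needs chemical-phenotype AND gene-phenotype
            for (c3, d) in chem_disease:
                if c3 == c and (g, d) in gene_disease:
                    tetramers.append((c, g, p, d))
    return tetramers

def top_phenotypes(tetramers, k=20):
    """Rank phenotypes by how many tetramers they appear in."""
    return Counter(p for (_c, _g, p, _d) in tetramers).most_common(k)

tets = cgpd_tetramers(
    chem_gene={("nicotine", "TNF")},
    chem_pheno={("nicotine", "inflammatory response")},
    gene_pheno={("TNF", "inflammatory response")},
    chem_disease={("nicotine", "asthma")},
    gene_disease={("TNF", "asthma")},
)
print(tets)                  # [('nicotine', 'TNF', 'inflammatory response', 'asthma')]
print(top_phenotypes(tets))  # [('inflammatory response', 1)]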
Disease associations of Juul aerosol chemicals
The NAFFCAPP set of chemicals detected in Juul aerosols has direct relationships with 400 diseases in CTD, which can be grouped into 35 disease categories, with some diseases mapping to more than one category (e.g., lung neoplasms maps to both respiratory tract diseases and cancers). The top five disease groups are cardiovascular, nervous system, respiratory tract, cancers, and mental disorders (Fig. 1A). Cardiovascular disease, the disease group with the highest number of relationships, includes 95 marker or mechanistic relationships between six chemicals and 69 cardiovascular outcomes such as hypertension, stroke, myocardial infarction, and atherosclerosis. The highest number of chemical-disease relationships in this category is attributed to particulate matter, followed by nicotine. Examples of direct disease relationships in the category of nervous system diseases are Alzheimer Disease, Parkinson Disease, and seizures, with the greatest number of relationships for nicotine. Respiratory tract diseases such as lung neoplasms, pulmonary fibrosis, asthma, and pneumonia show more than 70 direct relationships with NAFFCAPP chemicals, with the highest number attributed to particulate matter. All eight of the NAFFCAPP chemicals contribute to direct relationships with one or more cancers, such as lung, breast, and stomach neoplasms, with the greatest number of direct relationships for nicotine and formaldehyde. In the category of mental disorders, there are numerous direct associations between NAFFCAPP chemicals and autistic disorder, cognition disorders, and depressive disorders, with the highest number of direct associations for nicotine. All individual marker or mechanistic chemical-disease relationships for these chemicals are provided (Supplemental Table S1).
Besides the top disease groups related to NAFFCAPP chemicals, individual chemicals were examined for disease associations and the total number of direct relationships (Fig. 1B). Nicotine is associated with the most curated disease interactions (202), followed by particulate matter (181). Formaldehyde and free radicals are associated with 50-100 diseases, while the remaining chemicals each have fewer than 20 curated disease relationships.
In addition to directly curated relationships between Juul aerosol chemicals and diseases, all eight NAFFCAPP chemicals have inferred relationships to diseases in CTD, generated by integration of chemical-gene and gene-disease statements. These transitive inferred associations between NAFFCAPP chemicals and 3,262 additional diseases provide indirect evidence of chemical-disease relationships and are statistically ranked with an inference score. While the greatest number of inferred associations overall is attributed to nicotine, formaldehyde, and particulate matter, all NAFFCAPP chemicals contribute to these inferred relationships, providing a rationale for further investigation.
Enriched data sets
Molecular pathways that may contribute to Juul aerosol-induced diseases were identified using CTD's 'Set Analyzer' tool. Collectively, the NAFFCAPP chemical set interacts with 8,256 genes. These genes were analyzed to survey the pathways annotated to them. Of 916 significantly enriched pathways, five of the top ten are related to the immune system and/or signaling, while additional pathways include those related to metabolism and gene expression. Many similarities exist among the gene sets contributing to these pathways. For example, all genes annotated to Signaling by Interleukins (REACT:R-HSA-449147) belong to a subset of the genes annotated to Cytokine Signaling in Immune System (REACT:R-HSA-1280215), which in turn is a subset of the genes contributing to the Immune System (REACT:R-HSA-168256) pathway. Genes that interact with nicotine, formaldehyde, and particulate matter account for 434/442 (98%) of the genes annotated to Cytokine Signaling in Immune System, yet genes that interact with all eight NAFFCAPP chemicals contribute to this pathway, and some genes, such as TNF, interact with all eight NAFFCAPP chemicals. To provide insight into the pathways that may be affected by each of the NAFFCAPP chemicals, this analysis was repeated with genes that interact with each of the Juul aerosol chemicals individually. Genes annotated to each of the top 10 significantly enriched pathways were summed for each chemical (Table 2).
Developing mechanistic pathways for Juul aerosol-induced adverse outcomes
We leveraged CTD-curated content with two methods to help prioritize relationships that link Juul chemical stressors with adverse respiratory outcomes, using pulmonary fibrosis, asthma, and lung neoplasms as specific examples of respiratory tract diseases.

Table 2. Significantly Enriched Pathways of NAFFCAPP-interacting Genes.

First, in CTD, there are direct relationships between NAFFCAPP chemicals and pulmonary fibrosis, asthma, and lung neoplasms via four chemicals: nicotine, formaldehyde, acetaldehyde, and particulate matter. In addition, these four chemicals collectively interact with 295 genes that also have direct relationships with these same diseases, enabling the identification of potential molecular initiating events in pathways that connect Juul aerosol chemicals with these respiratory tract diseases. To identify underlying biological processes that contribute to these gene-disease relationships, we compared the 552 phenotypes curated to these four chemicals and the 940 phenotypes annotated to the 295 genes. There were 248 phenotypes common to both sets that may contribute to underlying disease pathways; they are directly influenced by the chemicals in Juul aerosols and independently annotated to genes in the inference network that link vaping aerosols to these respiratory tract diseases (Fig. 2). These 248 phenotypes are prioritized as biological processes that may participate in the MOA of Juul aerosols; representative phenotypes include oxidative demethylation, T cell migration, and mucus secretion.
Second, we used CTD to predict molecular initiating events and phenotypes that link Juul aerosols to respiratory tract disease outcomes by generating computational associations between Chemical-Gene interactions and associated Phenotypes and Diseases (CGPD-tetramers), using NAFFCAPP chemicals and the same respiratory tract disease examples: pulmonary fibrosis, asthma, and lung neoplasms. This bioinformatics approach yielded 830, 505, and 1,401 CGPD-tetramers for the three diseases, respectively (Supplemental Table S2). CGPD-tetramers for the three respiratory diseases share 112 phenotypes, of which 81 (72%) were identical to those found by filtering priority phenotypes for respiratory tract diseases using the first method. These 81 phenotypes were further restricted to those that have been previously annotated to 'Respiratory System' using CTD's Anatomy module, resulting in 65 highly prioritized phenotypes (Fig. 2). These 65 phenotypes are highlighted by several key relationships: first, they are shared among pulmonary fibrosis, asthma, and lung neoplasm CGPD-tetramers; second, they contain curated chemical-phenotype annotations with nicotine, acetaldehyde, formaldehyde, or particulate matter; third, they contain curated gene-phenotype annotations in CTD with NAFP-interacting genes; and fourth, they are supported by imported GO annotations for NAFP-interacting genes that align with phenotypes in CTD. These phenotypes were ranked by the frequency with which they appear in pulmonary fibrosis, asthma, and lung neoplasm CGPD-tetramers, and the top 20 most commonly predicted phenotypes are shown (Fig. 3). In addition to important roles for cell proliferation and apoptosis, this analysis highlights potentially important roles for inflammatory response, response to oxidative stress, cell migration, cytokine production involved in inflammatory response, and chemotaxis in linking Juul aerosols and respiratory tract diseases; these highly prioritized phenotypes represent potential candidate events in the MOA of Juul aerosol chemicals.
To further address molecular initiating events contributing to the prioritized phenotypes, genes affected by Juul aerosol chemicals were analyzed. A total of 20,073 chemical-gene interactions were downloaded, corresponding to 8,256 unique genes and 459 different types of chemical-gene interactions, such as chemical-induced changes in mRNA expression, protein phosphorylation, protein activity, and secretion. Curated interactions between the four NAFP chemicals and the 197 genes prioritized as potential molecular initiating events were aligned with the computationally generated CGPD-tetramers (Supplemental Table S3). By integrating CTD chemical-gene interactions with these prioritized phenotypes, predictive mechanistic pathways can be constructed that associate nicotine, acetaldehyde, formaldehyde, and particulate matter with these three respiratory outcomes (Fig. 4). For example, these four chemicals interact with 197 genes, with potential molecular initiating events represented by 26 genes that interact with all four chemicals and are annotated to one or more of the priority phenotypes. These phenotypes were mapped to subcellular, cellular, or system processes that align with the locations of potential key events in the sequential molecular pathways. Genes that are annotated to multiple phenotypes interrelate and connect phenotypes along intermediate pathways to the disease outcomes, helping to identify candidate events in predictive Juul MOAs.
Disease associations of Juul aerosol chemicals
This study investigates predictive disease associations of chemicals in Juul aerosols and underlying pathways. Using CTD, eight chemicals detected in Juul aerosols (nicotine, acetaldehyde, formaldehyde, free radicals/reactive oxygen species, crotonaldehyde, acetone, pyruvaldehyde, and particulate matter) were analyzed for interacting genes, intermediary phenotypes, pathways and disease associations. Top disease categories associated with these chemicals in CTD are cardiovascular diseases, nervous system diseases, respiratory tract diseases, neoplasms, and mental disorders, and several recent studies support associations between e-cigarettes and disease risks in these same categories, including thrombosis (Ramirez et al., 2020), ischemic stroke (Sifat et al., 2018), asthma (Clapp and Jaspers, 2017), cancer risk (Canistro et al., 2017), and depression (Leventhal et al., 2016).
Specific examples of Juul aerosol-induced disease parameters have been reported in humans, rats, and mice. In a randomized crossover design, young, healthy nonsmokers showed increased mean arterial pressure and heart rate, and decreased muscle sympathetic nerve activity, following inhalation of Juul e-cigarettes but not non-nicotine placebo e-cigarettes (Gonzalez and Cooke, 2021). Acute exposure to Juul aerosols led to impaired endothelial function in rats, comparable to cigarette smoke (Rao et al., 2020). Three months of Juul aerosol exposure in mice induced dysregulation of glutamatergic system activity in mesolimbic brain regions, as evidenced by differential effects on several targets of the glutamatergic system in the nucleus accumbens and hippocampus (Alhaddad et al., 2020).
Chemicals contributing to disease associations.
Based on curated chemical-disease interactions in CTD, three chemicals emerge as key contributors to potential Juul-induced disease outcomes: nicotine, formaldehyde, and particulate matter (Supplemental Tables S1-S3). These three chemicals show mechanistic relationships with 364 diseases in CTD.
Recent evidence has shown that the cytotoxicity of Juul aerosols strongly correlates with nicotine concentration (Omaiye et al., 2019). In CTD, nicotine is associated with 1,170 genes and 202 diseases. Nicotine has long been known to be addictive (National Academy of Sciences, 2018), as well as to play a key role in the induction and progression of cardiovascular disorders (Balakumar and Kaur, 2009). Nicotinic acetylcholine receptors (nAChR) have been shown to regulate cell proliferation and inhibit apoptosis, key biological processes that are related to cancer (Gotts et al., 2019). Altered expression of the ACE2 protein (the putative receptor for the COVID-19 virus) in TH2 cells exposed to nicotine, in addition to changes in nicotinic receptor signaling and activation of inflammatory cytokines, has led to recent speculation that nicotine exposure may increase cardiopulmonary risk from COVID-19 (Olds and Kabbani, 2020).
The second key contributor to potential Juul aerosol-induced outcomes, formaldehyde, interacts with 3,927 genes in CTD, detailed in 4,502 gene interactions, and shows marker/mechanistic relationships with 95 unique diseases. Formaldehyde is an established carcinogen, according to the International Agency for Research on Cancer (National Academy of Sciences, 2018). In long-term studies, formaldehyde has shown carcinogenic effects on various organs and tissues and produced an increase in the total number of malignant tumors in experimental animals (Soffritti et al., 2002). In addition, formaldehyde has been shown to cause an increased risk of myeloid leukemia (Schwilk et al., 2010), and there is substantial evidence that it is capable of causing DNA damage and mutagenesis (National Academy of Sciences, 2018).

Fig. 2. The four Juul aerosol chemicals nicotine, acetaldehyde, formaldehyde, and particulate matter (NAFP) show marker or mechanistic associations with pulmonary fibrosis, asthma, and lung neoplasms, which are independently supported by inferred relationships via 295 genes that interact with one or more Juul aerosol chemicals and are independently curated to one or more of the three diseases. As well, the four chemicals directly modulate 552 phenotypes, while the 295 genes are independently annotated to 940 phenotypes. There were 248 phenotypes common to both sets. Furthermore, chemical-gene-phenotype-disease (CGPD) tetramers were computationally generated among the four NAFP chemicals and pulmonary fibrosis, asthma, and lung neoplasms, resulting in 112 shared phenotypes among the three sets of CGPD-tetramers. Comparison between the two sets of phenotypes reveals an intersection of 81 phenotypes, of which 65 are annotated to the respiratory system and are associated with 197 genes. These genes represent potential molecular initiating events in the mode-of-action of Juul chemicals on respiratory outcomes.

Fig. 3. The top 20 most common prioritized phenotypes for Juul aerosol chemicals associated with pulmonary fibrosis, asthma, and lung neoplasms. CGPD-tetramers were computationally generated for four chemicals in Juul aerosols (nicotine, formaldehyde, acetaldehyde, and particulate matter), interacting genes, intermediate phenotypes, and three respiratory tract diseases (pulmonary fibrosis, asthma, and lung neoplasms). A total of 65 phenotypes were prioritized as shared among the CGPD-tetramers for the three target respiratory diseases and annotated to the specific chemicals, genes in the inference network, and respiratory system; the 20 most frequent phenotypes are presented as the number of CGPD-tetramers per phenotype for each disease.
Particulate matter has long been recognized as a leading contributor to the global disease burden (Costa, 2018), with impacts on cardiovascular and respiratory health (Fiordelisi et al., 2017, Losacco and Perillo, 2018, Rajagopalan et al., 2018). In CTD, particulate matter has been shown to interact with 4,739 genes, participate in 10,187 gene interactions, and contribute to 181 direct disease relationships with marker/mechanism evidence (child terms of particulate matter such as smoke, soot, and dust were excluded from this analysis).
The remaining chemicals detected in Juul aerosols (acetaldehyde, free radicals, crotonaldehyde, acetone, and pyruvaldehyde) are associated with some of the same diseases as nicotine, formaldehyde, and particulate matter, but also show direct relationships with 39 unique diseases, including diabetes mellitus type 1 and amyotrophic lateral sclerosis.
Individual chemical-gene interactions.
Analysis of genes that interact with NAFFCAPP chemicals identifies potential molecular initiating events along pathways of vaping-induced disease outcomes and enables the generation of testable hypotheses. For example, particulate matter has been shown to induce EGR1 expression, leading to inflammatory cytokine production and mucus hyperproduction in airway epithelium via the NF-κB and activator protein pathways (Xu et al., 2018). Analysis of individual chemical-gene interactions of the NAFP chemical set in CTD (Supplemental Table S3) reveals that nicotine is also capable of inducing EGR1 mRNA and protein expression, while formaldehyde and acetaldehyde can increase EGR1 mRNA expression. This suggests that Juul pods with reduced nicotine (3% vs 5%) may continue to induce EGR1 and consequent downstream effects due to the formation of the carbonyls acetaldehyde and formaldehyde.
Inspection of individual chemical-gene interactions can also lead to testable hypotheses about alternate e-liquids in vaping products. Aerosols generated by the Juul device with a modified e-liquid containing 60:40 propylene glycol:glycerol, with and without citral (to compare with other commercially available e-liquids), showed significantly increased levels of formaldehyde and free radical production (Reilly et al., 2019). Data in CTD show that particulate matter, formaldehyde, and reactive oxygen species collectively interact with nine mucin genes (MUC1, MUC16, MUC19, MUC2, MUC3A, MUC4, MUC5AC, MUC5B, and MUCL3), altering both mRNA and protein expression. Thus, increases in formaldehyde and free radical production may also alter mucin production and downstream effects. These chemical-mediated changes in mucin expression provide mechanistic steps that may contribute to the alterations in mucin secretion observed in cigarette and e-cigarette users (Reidel et al., 2018).
Potential mechanistic pathways
Contributing factors to underlying pathways between Juul aerosols and representative adverse respiratory outcomes were analyzed in three ways: selection of significantly enriched pathways of the genes that interact with NAFFCAPP chemicals, determination of priority phenotypes that are annotated to NAFFCAPP chemicals as well as to the genes they interact with, and computational generation of CGPD-tetramers. These three methods were used to strengthen the evidence for contributing events in the pathways, and to avoid missing terms that may not yet have all five lines of supporting evidence required to generate CGPD-tetramers. Significantly enriched pathways include several related to the immune system and cytokine signaling in the immune system.

Fig. 4. Predictive mechanistic pathways that relate Juul aerosol chemicals to representative respiratory outcomes, generated by integrating CTD content. Chemical-gene interactions between nicotine, acetaldehyde, formaldehyde, and particulate matter and 197 genes represent potential molecular initiating events (MIE) that link the chemical toxicants to pulmonary fibrosis, asthma, and lung neoplasms, and are represented by 26 genes that interact with all four of the chemicals. Nineteen phenotypes that are directly modulated by these chemicals and are annotated to genes they interact with represent potential intermediate steps along predictive mechanistic pathways, and align with intracellular, cellular, and system processes. All of the phenotypes were prioritized as key contributors to the pathways via four types of supporting evidence: 1) curated chemical-phenotype interaction, 2) curated gene-phenotype annotation, 3) imported gene-GO annotation, and 4) computational generation of chemical-gene-phenotype-disease tetramers. Phenotypes shown in bold italic were among the 20 most frequent phenotypes in computationally generated CGPD-tetramers. Numbers in parentheses represent the total number of genes of the 197 potential MIEs associated with each phenotype, with associations designated by solid black lines. Curved gray arrows indicate phenotypes that are interrelated via shared genes.
Beyond the ubiquitous roles of cell proliferation and apoptosis in disease pathways, several phenotypes emerged as playing potential key roles in the link between Juul aerosols and respiratory outcomes: oxidative stress, inflammatory responses, and cell signaling. Biological processes related to oxidative stress emerged as having the most annotations to NAFFCAPP chemicals. These were supported by NAFFCAPP-interacting gene annotations and CGPD-tetramers. While numerous reviews detail the contributions of the oxidative stress pathway to lung diseases such as asthma and chronic obstructive pulmonary disease (COPD) (Barnes, 2017, de Groot et al., 2019), this work integrates these oxidative stress phenotypes in the broader context of potential upstream and downstream events in the pathway. Phenotypes that contributed the highest number of terms to CGPD-tetramers were related to inflammation and immune responses. Immune System (REACT:R-HSA-168256) was also the most significantly enriched pathway of all the NAFFCAPP-interacting genes.
Several lines of evidence support a potential role for aberrant signaling underlying Juul-induced adverse outcomes, including significant enrichment of signaling pathways of NAFFCAPP-interacting genes, changes in gene expression, protein activity and secretion of cytokines by NAFFCAPP chemicals, and computational generation of CGPD-tetramers that include numerous signaling phenotypes. Numerous cytokines and chemokines including CXCL8, IFNG, IL1B, IL2, IL4, IL6, IL10 and TNF interact with five or more of the chemicals in the NAFFCAPP set, suggesting that multiple chemicals contribute to the pathway and disease endpoints. Cytokine-cytokine receptor interaction (KEGG:hsa04060) is also a significantly enriched pathway with genes annotated to all eight NAFFCAPP chemicals, with nearly half (47%) of the genes annotated to this pathway overlapping with genes annotated to Signaling by Interleukins (REACT:R-HSA-449147). Importance of the cytokine-cytokine receptor interaction pathway is supported by clinical studies showing that it is one of four pathways that overlapped between comparisons of differentially expressed genes in nasal biopsies of e-cigarette users vs. non-smokers and cigarette users vs. non-smokers (Martin et al., 2016).
In addition to supporting evidence for priority phenotypes generated from pathway enrichment and chemical- and gene-phenotype annotations, CGPD-tetramers linking Juul aerosol chemicals to respiratory disease outcomes also revealed new priority phenotypes that can be tested. For example, 'memory' and 'learning' emerged as phenotypes associated with NAFFCAPP chemicals and adverse respiratory outcomes (Supplemental Table S2). Cigarette smoking has also been shown to negatively impact executive function in older adults, an effect that is synergized by lung diseases (Amini et al., 2020). Thus, CGPD-tetramers linking Juul aerosol chemicals to interacting genes and cognitive phenotypes can identify specific genes to further study and explore for smoking-induced and vaping-induced cognitive issues.
Integration of curated chemical-gene interactions in CTD with prioritized phenotypes can help construct predictive MOAs that link molecular initiating events with key events towards disease outcomes (Davis et al., 2018). Here, we show representative interactions between nicotine, acetaldehyde, formaldehyde and particulate matter with 26 genes that affect phenotypes along predictive disease pathways linking these chemicals to pulmonary fibrosis, asthma and lung neoplasm endpoints. These biological processes are directly influenced by one or more of the four chemicals and are independently associated with the same genes by GO annotations. Alignment of phenotypes from subcellular events to system-wide processes can highlight relationships between potential key events that can be tested as intermediate end points for risk assessment, and help to fill in mechanistic gaps between these chemicals and disease outcomes.
Strengths and limitations
Environmental exposures to toxic insults such as e-cigarette aerosols affect human health, but the mechanisms are largely unknown. CTD provides a unique resource that integrates primary literature on chemical-gene, chemical-disease, and gene-disease interactions, with phenotype information, GO annotations, pathway information, and exposure studies, culminating in over 45 million toxicogenomic relationships (Davis et al., 2021). Analysis of eight chemicals detected in Juul aerosols yielded interactions with 8,256 unique genes, described in over 20,000 chemical-gene interactions. CTD tools promote the analysis of these interactions in ways that can build mechanistic pathways and help to fill in molecular knowledge gaps between the chemical toxicants and disease outcomes.
While this study begins to look at the gene, phenotype, pathway, and disease relationships associated with chemical constituents of e-cigarette emissions, several limitations remain. The chemical composition of the inhaled aerosol, and the levels of its constituents, are still under investigation and may depend on several factors. Several studies have shown that potentially toxic metals (nickel, chromium, lead, manganese, and zinc) are detected in e-cigarette emissions and may originate from the coils that heat the e-liquids as well as from joints and wires (Aherrera et al., 2017, Olmedo et al., 2018), yet toxic metals were not assessed by Talih (Talih et al., 2019). Though acrolein has been detected in vaping aerosols at a concentration of 0.07-4.19 micrograms per 15 puffs (National Academies of Sciences et al., 2018), and in aerosols from initial and modified Juul devices in Europe (Mallock et al., 2020), acrolein in Juul emissions was not assessed by Reilly or detected by Talih, and may depend on the puffing regimen analyzed (Reilly et al., 2019, Talih et al., 2019). Further, this analysis does not take into account possible interactions among the chemicals studied. Besides the chemical composition of the aerosol, levels of these chemicals and their metabolites in e-cigarette users are also under investigation. Numerous studies have measured the concentration of nicotine in humans after vaping Juuls, ranging from 9.8 ng/ml plasma per 10 puffs (Maloney et al., 2021) to 31 ng/ml serum after 10 min of puffing (Yingst et al., 2019). Biomarkers of exposure and cardiopulmonary injury were measured for acetaldehyde and formaldehyde in mice after exposure to propylene glycol:vegetable glycerin-derived (PG-VG) aerosols (ingredients of e-cigarette liquids, including Juul). PG-VG exposure significantly increased post-exposure urinary acetate (a metabolite of acetaldehyde), and exposure to formaldehyde or PG-VG-derived aerosol stimulated significant pulmonary irritation and endothelial dysfunction (Jin et al., 2021). These findings support the presence of these chemicals in vivo following exposure to Juul aerosols.
Effects of vaping depend on many factors in addition to the chemical constituents and the type of device, including nicotine concentration (Juul is available in 3% and 5% nicotine strengths: https://www.juul.com/resources/all-about-tobacco-menthol-juulpods), vaping patterns (such as length of inhale/exhale, puff volume, frequency of vaping, and time to first puff), coil resistance, product age, composition, battery output (ohms), and user age, weight, metabolism, health, and genetics (National Academies of Sciences et al., 2018). In addition to variations in device and user, analyses of chemical-gene-phenotype-disease associations are limited by CTD-curated content. CTD is updated monthly, and with continued curation of publications related to vaping chemicals, associations among these chemicals and phenotypes, pathways, and diseases will continue to be updated.
Conclusions
We describe an analysis of Juul aerosol chemicals in CTD, including disease associations, gene interactions, enriched phenotype and pathway relationships, and prioritized events along predictive pathways to representative respiratory adverse outcomes. Cardiovascular diseases, nervous system diseases, respiratory tract diseases, cancers, and mental disorders were the most abundant categories of disease associations, with the highest number of relationships attributed to nicotine, particulate matter, and formaldehyde. Several predictive mechanistic pathways were generated, based on chemical- and gene-annotated phenotypes in conjunction with CGPD-tetramers. Integration of CTD data and computational generation of CGPD-tetramers can help to fill molecular knowledge gaps and generate testable hypotheses to better understand the effects of Juul aerosol chemicals and of vaping.
Declaration of competing interest
The authors declare no conflicts of interest with respect to financial interests, research, authorship, and/or publication of this article. | 2021-08-27T17:01:15.366Z | 2021-08-05T00:00:00.000 | {
"year": 2021,
"sha1": "d7c4c91dcfa3c338085388e84edd4614ebae2282",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.crtox.2021.08.001",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ceb4da16db0e697f01a55a11a87f907d00cc3e26",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine",
"Chemistry"
],
"extfieldsofstudy": []
} |
10731738 | pes2o/s2orc | v3-fos-license | Universal Covertness for Discrete Memoryless Sources
Consider a sequence $X^n$ of length $n$ emitted by a Discrete Memoryless Source (DMS) with unknown distribution $p_X$. The objective is to construct a lossless source code that maps $X^n$ to a sequence $\widehat{Y}^m$ of length $m$ that is indistinguishable, in terms of Kullback-Leibler divergence, from a sequence emitted by another DMS with known distribution $p_Y$. The main result is the existence of a coding scheme that performs this task with an optimal ratio $m/n$ equal to $H(X)/H(Y)$, the ratio of the Shannon entropies of the two distributions, as $n$ goes to infinity. The coding scheme overcomes the challenges created by the lack of knowledge about $p_X$ by a type-based universal lossless source coding scheme that produces as output an almost uniformly distributed sequence, followed by another type-based coding scheme that jointly performs source resolvability and universal lossless source coding. The result recovers and extends previous results that either assume $p_X$ or $p_Y$ uniform, or $p_X$ known. The price paid for these generalizations is the use of common randomness with vanishing rate, whose length scales as the logarithm of $n$. By allowing common randomness larger than the logarithm of $n$ but still negligible compared to $n$, a constructive low-complexity encoding and decoding counterpart to the main result is also provided for binary sources by means of polar codes.
I. INTRODUCTION
We consider the problem illustrated in Figure 1, in which $n$ realizations of a Discrete Memoryless Source (DMS) $(\mathcal{X}, p_X)$, with finite alphabet $\mathcal{X}$ and unknown distribution $p_X$, are to be encoded into a vector $\widehat{Y}^m$ of length $m$. While $m$ should be as small as possible, the vector $\widehat{Y}^m$ should not only allow asymptotic lossless reconstruction of $X^n$ but also be asymptotically indistinguishable, in terms of Kullback-Leibler (KL) divergence, from a sequence $Y^m$ emitted by a DMS $(\mathcal{Y}, p_Y)$ with known distribution $p_Y$.

Fig. 1. Universal covertness: an adversary observing a sequence $\widetilde{Y}^m$ performs the hypothesis test $\mathcal{H}_0$: $\widetilde{Y}^m$ stems from $(\mathcal{Y}, p_Y)$, versus $\mathcal{H}_1$: $\widetilde{Y}^m$ does not.

The relevance of the KL divergence as a measure of indistinguishability for hypothesis tests performed by the adversary follows from standard results on hypothesis testing, e.g., [2], [3]. Universal covertness generalizes and unifies several notions of random number generation and source coding found in the literature. For instance, 1) uniform lossless source coding [4] corresponds to known $p_X$ and uniform $p_Y$; 2) random number conversion and source resolvability [5], [6] correspond to known $p_X$ and no reconstruction constraint; 3) universal source coding [7] is obtained with known source entropy $H(X)$ and without the distribution approximation constraint; 4) universal random number generation [8] is obtained with known source entropy $H(X)$, uniform $p_Y$, and without the reconstruction constraint. Universal covertness may also be viewed as a universal and noiseless counterpart of covert communication over noisy channels [9]-[11]. Most importantly, universal covertness relates to information-theoretic studies of information hiding and steganography [12], [13], yet with several notable differences that we now highlight.
• The problem in [12] consists in embedding a uniformly distributed message into a covertext without changing the covertext distribution, under a distortion reconstruction constraint. Universal covertness omits the distortion reconstruction constraint but relaxes the assumption of a uniformly distributed message; this is motivated by the fact that message distributions encountered in practice are seldom uniform, and even optimally compressed data is only uniform in a weak sense [14], [15]. We point out that the perfect undetectability requirement enforced in [12] is stronger than our asymptotic indistinguishability but largely relies on the presence of a long shared secret key.
• The setting in [13, Section 4] is similar to universal covertness but does not address the problem of obtaining an optimal compression rate $m/n$, and indistinguishability is only measured in terms of normalized KL-divergence. The extension in [13, Section 5] assumes that, unlike the adversary, the encoder only knows the entropy $H(Y)$ of the covertext and explicitly addresses the problem of estimating $p_Y$ from $n$ samples of the DMS $(\mathcal{Y}, p_Y)$. We recognize that, in practice, the covertext distribution $p_Y$ should be estimated from a finite number of samples, which necessarily limits the precision of the estimation. We take the view that the samples are public and in sufficient number so that all parties obtain the same estimates within an interval of confidence whose length is negligible compared to the uncertainty when estimating $p_Y$ from $m$ symbols of a DMS.
• As in [12], [13], universal covertness relies on a seed, i.e., common randomness shared by the encoder and the decoder only; however, we shall see that the seed length used in our coding scheme is $\Theta(\log n)$, which is negligible compared to $n$. This contrasts with seed lengths $\Theta(n \log n)$ in [12] and $\Theta(n)$ in [13], although it is fair to mention that these larger key sizes enable perfect undetectability or perfect secrecy, which we do not require.
Beyond the generalizations offered by universal covertness described above, we note that the special case of uniform lossless source coding for DMSs with unknown distributions, i.e., the case when $p_Y$ is uniform, is of particular interest to the design of secure communication schemes in settings where the uniformity of messages transmitted over a network is often a key assumption [16], [17].
The idea of our proposed coding scheme is to approach the problem of universal covertness in two steps. In Step 1, universal uniform lossless source coding is performed through a type-based source coding scheme that makes the encoder output almost uniform. In Step 2, source resolvability, with the additional constraint that the input be reconstructed from the output, is performed with the result of Step 1 as input, so that the output of Step 2 approximates a given target distribution and allows recovery of the input of Step 1.
We formally describe the problem in Section II. We study the special case of uniform lossless source coding for DMSs with unknown distribution in Section III. Building upon the results of Section III, we present our main result for universal covertness in Section IV. By allowing a larger amount of common randomness, whose rate still vanishes with the blocklength, we provide a constructive and low-complexity encoding and decoding scheme for universal covertness in Section V. Finally, we provide concluding remarks in Section VI.
A. Notation and basic inequalities
For a, b ∈ R^+, we define ⟦a, b⟧ ≜ [a, b] ∩ N. For two functions f, g from N to R^+, we use the standard notation f(n) = o(g(n)) if lim_{n→∞} f(n)/g(n) = 0, f(n) = O(g(n)) if lim sup_{n→∞} f(n)/g(n) < ∞, and f(n) = Θ(g(n)) if lim sup_{n→∞} f(n)/g(n) < ∞ and lim inf_{n→∞} f(n)/g(n) > 0. For two distributions p and q defined over a finite alphabet X, we define the variational distance V(p, q) ≜ Σ_{x∈X} |p(x) − q(x)|, and we denote the KL-divergence between p and q by D(p‖q), with the convention D(p‖q) = +∞ if there exists x ∈ X such that q(x) = 0 and p(x) > 0. Unless otherwise specified, capital letters denote random variables, whereas lowercase letters represent realizations of associated random variables, e.g., x is a realization of the random variable X. We denote the indicator function by 1{ω}, which is equal to 1 if the predicate ω is true and 0 otherwise. For any x ∈ R, we define [x]_+ ≜ max(0, x). For a sequence of random variables (Z_n)_{n∈N} that converges in probability towards a constant C, i.e., for any ε > 0, lim_{n→∞} P(|Z_n − C| > ε) = 0, we use the notation p-lim_{n→∞} Z_n = C. We will also use the following inequalities for the KL-divergence: Eq. (1) is from [18], Eq. (3) is from [19], and Eq. (2) can easily be derived from the definition of the KL-divergence and Pinsker's inequality.
Fig. 2. Universal covertness assisted with a seed (common randomness).
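For concreteness, the two measures just defined can be computed directly; the sketch below (Python, hypothetical helper names) uses base-2 logarithms, an assumption since the text does not fix the base, and implements the stated +∞ convention for the KL-divergence.

```python
import math

def variational_distance(p, q):
    """V(p, q) = sum over x of |p(x) - q(x)| on a shared finite alphabet."""
    support = set(p) | set(q)
    return sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def kl_divergence(p, q):
    """D(p || q) in bits, with the convention +inf if q(x) = 0 < p(x)."""
    d = 0.0
    for x, px in p.items():
        if px == 0.0:
            continue
        if q.get(x, 0.0) == 0.0:
            return math.inf
        d += px * math.log2(px / q[x])
    return d

p, q = {"a": 0.5, "b": 0.5}, {"a": 0.9, "b": 0.1}
print(variational_distance(p, q))   # 0.8
print(kl_divergence(p, q))          # ~0.737 bits
```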
Lemma 1. Let p, q, r be distributions over the finite alphabet X. Let H(p) and H(q) denote the Shannon entropies associated with p and q, respectively, and let µ_q ≜ min_{x∈X} q(x). We have (1), (2), and (3).
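Of the three inequalities in Lemma 1, the text notes that (2) follows from Pinsker's inequality; for reference, with V as defined above and the KL-divergence taken in nats, Pinsker's inequality reads:

```latex
V(p, q) \;\le\; \sqrt{2\, D(p \,\|\, q)}
```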
B. Model for universal covertness
Consider a discrete memoryless source (X, p_X). Let n ∈ N, d_n ∈ N, and let U_{d_n} be a uniform random variable over U_{d_n} ≜ ⟦1, 2^{d_n}⟧, independent of X^n. In the following, we refer to U_{d_n} as the seed and to d_n as its length. As illustrated in Figure 2, our objective is to design a source code to compress and reconstruct the source (X, p_X), whose distribution is unknown, with the assistance of a seed U_{d_n}, and such that the encoder output approximates a known target distribution p_Y with respect to the KL-divergence.
Definition 1. An (n, 2^{d_n}) variable-length universal covert source code for a DMS (X, p_X) with respect to the DMS (Y, p_Y) consists of
• A seed U_{d_n} (of length d_n) chosen uniformly at random in the set U_{d_n} ≜ ⟦1, 2^{d_n}⟧ and independent of all other random variables;
• An encoding function φ_n : X^n × U_{d_n} → Y^m that takes as input the seed U_{d_n} and the sequence X^n emitted by the DMS (X, p_X);
• A decoding function ψ_n : Y^m × U_{d_n} → X^n.
Remark 1. We assume that p_X is unknown; hence, φ_n and ψ_n do not depend on prior knowledge about p_X but are allowed to depend on the specific sequence of realizations of the DMS (X, p_X), i.e., (φ_n, ψ_n) describes a variable-length code. Hence, m is a random variable that is a function of X^n, and it is written as m(X^n) to emphasize this point.
The performance of a universal covert source code is measured in terms of (i) the average probability of error P[X^n ≠ ψ_n(φ_n(X^n, U_{d_n}), U_{d_n})]; (ii) covertness, i.e., the closeness, in KL-divergence, of the encoder output to the target distribution p_Y^{⊗m(X^n)}; (iii) its output-length to input-length ratio m(X^n)/n, which should be minimized; (iv) the seed length d_n, which should be negligible compared to n.
Definition 2. Consider universal covertness for a DMS (X, p_X) with respect to the DMS (Y, p_Y). A rate R is achievable if there exists a sequence of (n, 2^{d_n}) variable-length universal covert source codes for which the above performance criteria are met with m(X^n)/n converging in probability to R. We are interested in determining the infimum of all such achievable rates.
Remark 2. Usually, for variable-length settings, asymptotic average rates are considered (see, e.g., [20], [21] in the context of random number generation), i.e., convergence in mean is considered for coding rates. In this paper, we consider convergence in probability for the rate m(X^n)/n for convenience; this also implies convergence in mean, since the ratio m(X^n)/n will be bounded in our setting. Our results will also show that the length of the encoder output concentrates with high probability around its optimal value H(X)/H(Y) for large n.
Remark 3. Note that, in the covertness condition, the KL-divergence term is a random variable, as the length m(X^n) of the encoder output is itself a random variable.
III. SPECIAL CASE: UNIFORM LOSSLESS SOURCE CODING FOR DMSS WITH UNKNOWN DISTRIBUTION
In this section, we study the problem described in Section II-B in the case in which p_Y is the uniform distribution over Y. We refer to this special case as uniform lossless source coding for DMSs with unknown distributions. We build upon the solution proposed for this special case to provide a solution for the general case, i.e., arbitrary p_Y, in Section IV.
The results of this section generalize and complement an earlier result for DMSs with known distributions [4], [22], [23] when fixed-length source coding is considered.
A. Definition of uniform lossless source coding
For sources with unknown distributions, the problem of uniform lossless source coding aims at jointly performing universal lossless source coding [7], [24] and universal randomness extraction [8]. More formally, universal uniform source coding is defined as follows.
Definition 3. An (n, 2^{d_n}) variable-length universal uniform source code is an (n, 2^{d_n}) variable-length universal covert code for a DMS (X, p_X) with respect to the DMS ({0, 1}, p_U), where p_U is the uniform distribution over {0, 1}. We define its rate as m(X^n)/n. Similar to a universal covert code, the performance of a uniform source code is measured in terms of (i) the average probability of error; (ii) the uniformity of its output, where p_{U_{ℳ_n(X^n)}} denotes the uniform distribution over the set ℳ_n(X^n) ≜ ⟦1, M_n(X^n)⟧, with M_n(X^n) ≜ 2^{m(X^n)}; (iii) the rate, which should be close to H(X); (iv) the seed length d_n, which should be negligible compared to n.
Definition 4. Consider universal uniform source coding of a DMS (X, p_X). A rate R is achievable if there exists a sequence of (n, 2^{d_n}) variable-length universal uniform source codes for which the above performance criteria are met with m(X^n)/n converging in probability to R.
B. Method of types
We here recall known facts about the method of types [7]. Let n ∈ N. For any sequence x^n ∈ X^n, the type of x^n is its empirical distribution, given by p̄_{x^n}(x) ≜ (1/n) Σ_{i=1}^{n} 1{x_i = x} for x ∈ X. Let P_n(X) denote the set of all types over X, and let T^n_{p̄_X} denote the set of sequences x^n with type p̄_X ∈ P_n(X). We will use the following lemma extensively.
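A small brute-force illustration of these objects for a binary alphabet (a minimal sketch; function names are ours): it computes the type of a sequence, the size of its type class, and the polynomial bound |P_n(X)| ≤ (n + 1)^{|X|} on the number of types.

```python
from collections import Counter
from itertools import product
from math import comb

def type_of(xs, alphabet):
    """Empirical distribution (type) of a sequence over a fixed alphabet."""
    counts = Counter(xs)
    return tuple(counts[a] / len(xs) for a in alphabet)

alphabet = (0, 1)
x = (0, 1, 1, 0, 1, 0, 1, 1)                     # n = 8, type (3/8, 5/8)
n, k = len(x), sum(x)

type_class_size = comb(n, k)                     # |type class| = C(8, 5) = 56
num_types = len({type_of(s, alphabet) for s in product(alphabet, repeat=n)})
bound = (n + 1) ** len(alphabet)                 # (n + 1)^{|X|} = 81

print(type_of(x, alphabet), type_class_size, num_types, num_types <= bound)
```

The polynomial number of types against the exponentially large typical type classes is what lets type-based compression approach the empirical entropy.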
Proof. See Appendix B.
Using Lemma 4, we deduce the following result, which no longer conditions on the type of the sequence to compress.
Lemma 5. Define p_{U(X^n)} as the uniform distribution over ⟦0, c_n(X^n) − 1⟧.
Proof. See Appendix C.
From Lemma 5, which quantifies the uniformity of the encoder output φ^{(1)}_n(X^n, U_{d_n}) in terms of variational distance, we deduce the following result, which now quantifies uniformity in terms of KL-divergence.
Lemma 6. We have
Proof. See Appendix D.
We now prove (6). Let T^n_X(X^n) denote the type class to which X^n belongs, and let Ĥ(X^n) denote the plug-in estimate of H(X) based on X^n [25]. The encoder output length is log|T^n_X(X^n)| + β log n + 1 + γ_n, where (a) holds by the definition of c_n(X^n), (b) holds by Lemma 2 and because Ĥ(X^n) = H(p̄_X(X^n)), and (c) holds by the definition of γ_n. Hence, we conclude that (6) holds by [25].
IV. UNIVERSAL COVERTNESS FOR DMSS WITH UNKNOWN DISTRIBUTION
Our coding scheme for universal covertness uses two building blocks, which are two special cases of the model described in Section II-B: (i) Uniform source coding for DMS with unknown distribution, studied in Section III; (ii) Source resolvability with the additional constraint that the input should be recoverable from the output, which corresponds to the case in which p X is known to be the uniform distribution.
A. Results
Building upon our construction in Theorem 1, we obtain the following results.
Theorem 2. There exists a sequence of (n, 2^{d_n}) variable-length universal covert source codes for the DMS (X, p_X) with respect to the DMS (Y, p_Y), defined by the encoding/decoding functions (φ_n, ψ_n) with encoder output length m(X^n), such that, if one defines Y^{m(X^n)} ≜ φ_n(X^n, U_{d_n}), then (11)-(13) hold.
Proof. See Section IV-B.
Proposition 2. The asymptotic rate p-lim_{n→∞} m(X^n)/n in Theorem 2 is optimal.
Proof. See Appendix E.
B. Proof of Theorem 2
We first perform source resolvability with lossless reconstruction of the input from the output by means of "random binning" [5], [26], [27]. Note that standard resolvability results [5] do not directly apply to our purposes as they do not support the recoverability constraint of the input from the output.
Let m ∈ N, to be specified later, and define R_Y ≜ H(Y) − ε, ε > 0, where H(Y) is the entropy associated with the target distribution p_Y. To each y^m ∈ Y^m, we assign an index B(y^m) ∈ ⟦1, 2^{mR_Y}⟧ uniformly at random; this defines the joint probability distribution between Y^m and B(Y^m) given in (14)-(15). We then consider the random variable Ỹ^m that is distributed according to (15), where p_U is the uniform distribution over ⟦1, 2^{mR_Y}⟧. We thus have that the distribution of Ỹ^m approximates p_Y^{⊗m} with an error o(m^{−r}), where the last equality holds by [26, Theorem 1] for any r > 0. Observe also that when y^m is drawn according to p_{Y^m | B(Y^m) = b} for some b, then b can be perfectly recovered from y^m by (14) and (15). All in all, (17) and (18) mean that there exists a specific choice B_0 for the binning B such that, if b is a sequence of length mR_Y distributed according to p_U and y^m is drawn according to p_{Y^m | B(Y^m) = b}, then the approximation holds for any r > 0. By the triangle inequality, the result stays true if p_U is replaced by a distribution p̃_U that satisfies V(p̃_U, p_U) = o(m^{−r}) for any r > 0. Note that the construction requires randomization at the encoder; however, the randomness need not be known by the decoder.
We now combine source resolvability with lossless reconstruction of the input from the output and universal uniform source coding as follows. Let n ∈ N, and consider a variable-length uniform source code obtained from Theorem 1 and described by the encoding/decoding pair (φ_n, ψ_n), where φ_n(X^n, U_{d_n}) = (φ^{(1)}_n(X^n, U_{d_n}), φ^{(2)}_n(X^n, U_{d_n})) as described in Section III-C, and define M_1 ≜ φ^{(1)}_n(X^n, U_{d_n}) and M_2 ≜ φ^{(2)}_n(X^n, U_{d_n}). We then define the length of our universal covert source encoder output such that m(X^n) R_Y = |M_1| + |M_2| + T for some T ∈ ⟦0, R_Y⟧, where |·| denotes the length of a sequence. We also define the sequence M ≜ (M_1 ∥ C ∥ M_2), where ∥ denotes the concatenation of sequences and C is a sequence of T uniformly distributed bits.
Finally, the encoder of our universal covert source code forms Y^{m(X^n)} by source resolvability as previously described, with the substitutions b ← M and m ← m(X^n). The decoder of our universal covert source code then determines from Y^{m(X^n)}, in this order: M; then M_2 (since the length of M_2 is known to be γ_n); then |M_1| (since M_2 reveals the type of the compressed sequence X^n given the seed); then M_1; and finally an approximation of X^n, obtained by applying ψ_n to (M_1, M_2, U_{d_n}). Hence, (11) and (12) hold.
Remark 5. Note that the sequence C does not carry information and is only used to pad the sequence M such that |M| = m(X^n) R_Y.
Next, a direct computation, in which (a) holds by the definition of m(X^n) and (b) holds by the definition of R_Y, gives the rate of the scheme. Hence, by Theorem 1, p-lim_{n→∞} m(X^n)/n = H(X)/(H(Y) − ε). Finally, since the encoder output of the universal source code is almost uniform, as described in Theorem 1, we have also obtained V(p_{Y^{m(X^n)}}, p_Y^{⊗m(X^n)}) = o(m^{−r}) for any r > 0, which implies (13) by Lemma 1.
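The random-binning step at the start of this proof can be imitated numerically. The toy below (hypothetical variable names, tiny block length) assigns every sequence a uniformly random bin index at rate R_Y = H(Y) − ε, forms the output marginal obtained by drawing the bin uniformly and then a sequence from p_{Y^m | B}, and measures its variational distance to p_Y^{⊗m}.

```python
import math, random
from itertools import product

random.seed(0)
pY = {0: 0.7, 1: 0.3}
m = 10
H_Y = -sum(p * math.log2(p) for p in pY.values())      # ~0.881 bits
num_bins = 2 ** math.floor(m * (H_Y - 0.1))            # rate R_Y = H(Y) - eps

seqs = list(product((0, 1), repeat=m))
prob = {y: math.prod(pY[s] for s in y) for y in seqs}  # p_Y^{tensor m}
bins = {y: random.randrange(num_bins) for y in seqs}   # the random binning B
mass = [0.0] * num_bins
for y in seqs:
    mass[bins[y]] += prob[y]                           # P(B = b)

# B is a function of the sequence, so the bin index is always recoverable
# from the output. Marginal of the output when the bin is drawn uniformly
# (restricted to nonempty bins, a vanishing-probability event at scale):
nonempty = sum(1 for b in mass if b > 0)
induced = {y: prob[y] / mass[bins[y]] / nonempty for y in seqs}
V = sum(abs(induced[y] - prob[y]) for y in seqs)
print(f"variational distance to the target: {V:.4f}")
```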
V. A CONSTRUCTIVE AND LOW-COMPLEXITY CODING SCHEME
Theorem 1 provides a coding scheme for universal uniform source coding, but implementing it is intractable, since it relies on the method of types. As for Theorem 2, it only provides an existence result (i.e., a non-constructive coding scheme) for universal covertness. In this section, we present a constructive and low-complexity counterpart to Theorems 1 and 2 for a binary source alphabet, i.e., |X| = 2. The seed length required in our coding scheme will be shown to be negligible compared to the length of the sequence to compress, but larger than the seed lengths in Theorems 1 and 2.
In Definition 3, assume that the DMS (X, p_X) is Bernoulli with parameter p ≠ 1/2, unknown to the encoder and decoder, and that the DMS (Y, p_Y) is such that |Y| is a prime number. Let n ∈ N*, N ≜ 2^n, and consider a sequence x^{LN} of LN independent realizations of (X, p_X) that need to be compressed, where L ∈ N* will be specified later. Let ∥ denote the concatenation of sequences, \ denote set subtraction, and H_b denote the binary entropy. Also define the polarization matrix G_n ≜ [1 0; 1 1]^{⊗n} from [28] and, for any set I ⊆ ⟦1, N⟧ and any sequence X^N, the subsequence X^N[I] ≜ (X_i)_{i∈I}.
A. Coding scheme
Encoding: We proceed in three steps.
Step 1 is the estimation of p.
Step 2 corresponds to universal uniform source coding. It is performed with polar codes and generalizes both the coding scheme in [29], which cannot account for uncertainty on the source distribution, and the coding scheme in [30], which can only account for a compound setting.
Step 3 corresponds to source resolvability with lossless reconstruction of the input from the output. It is also performed with polar codes with methods similar to those used in [31] but with the additional difficulty that the exact length of the input is unknown to the decoder.
Step 1. Let t < 1/2 and define q ≜ ⌈N^t⌉ and δ ≜ N^{−t}. We also define a_i ≜ iδ for i ∈ ⟦0, q − 1⟧, a_q ≜ 1, a_{−1} ≜ a_0, and a_{q+1} ≜ a_q, such that {[a_i, a_{i+1}]}_{i∈⟦0,q−1⟧} is a partition of [0, 1]. We estimate p by the empirical estimate p̂ computed from x^{LN}. There exists i_0 ∈ ⟦1, q⟧ such that p̂ ∈ [a_{i_0−1}, a_{i_0}]. Next, we define p̃ as the arg max of the binary entropy over the grid points around a_{i_0}. Let I_0 be the binary representation of i_0 and form I_N ≜ I_0 ⊕ K_0, where K_0 is a sequence of uniform bits with length log(q) = O(log N) that is shared by the encoder and decoder.
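A hedged sketch of Step 1: the empirical estimate and the entropy-maximizing choice of p̃ below are our reading of the truncated displays (the paper's exact estimator and arg-max may differ), so treat the details as assumptions.

```python
import math, random

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def step1_estimate(x, N, t=0.4):
    delta = N ** (-t)                      # grid width delta = N^{-t}
    q = math.ceil(N ** t)                  # number of grid cells
    p_hat = sum(x) / len(x)                # empirical (unbiased) estimate
    i0 = min(max(math.ceil(p_hat / delta), 1), q)  # cell [a_{i0-1}, a_{i0}]
    a_lo, a_hi = (i0 - 1) * delta, min(i0 * delta, 1.0)
    # Assumed arg-max: the entropy-maximizing parameter over the cell,
    # i.e., 1/2 if the cell contains it, otherwise the best endpoint.
    candidates = [a_lo, a_hi] + ([0.5] if a_lo <= 0.5 <= a_hi else [])
    p_tilde = max(candidates, key=binary_entropy)
    return p_hat, i0, p_tilde

random.seed(1)
x = [1 if random.random() < 0.3 else 0 for _ in range(256)]
print(step1_estimate(x, N=256))            # 256 Bernoulli(0.3) samples
```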
Step 2. Let X^N (X̄^N, X̃^N, respectively) be a sequence of N independent Bernoulli random variables with parameter p (p̄, p̃, respectively). We perform universal uniform source coding on X^N in this second step. Define U^N ≜ X^N G_n, Ū^N ≜ X̄^N G_n, Ũ^N ≜ X̃^N G_n and, for β < 1/2 and δ_N ≜ 2^{−N^β}, define the sets H_X ≜ {i ∈ ⟦1, N⟧ : H(U_i | U^{i−1}) > δ_N} and V_X ≜ {i ∈ ⟦1, N⟧ : H(U_i | U^{i−1}) > 1 − δ_N}, and similarly for X̄ and X̃. We compress X^N as A ≜ (U^N[V_X̃], U^N[H_X̃ \ V_X̃] ⊕ K), where K is a sequence of |H_X̃ \ V_X̃| uniformly distributed bits shared between the encoder and decoder.
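The polarization transform used in Steps 2 and 3 can be implemented directly from its definition; the assertion at the end checks that G_n is an involution mod 2, which is what makes keeping the transform losslessly invertible (a minimal sketch; the index-set selection is only indicated in a comment, since computing H(U_i | U^{i−1}) requires a separate density-evolution step).

```python
import numpy as np

def polar_matrix(n):
    """G_n = [[1, 0], [1, 1]] tensored with itself n times (N x N, N = 2^n)."""
    G = np.array([[1]], dtype=np.uint8)
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F) % 2
    return G

n = 3
N = 2 ** n
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=N, dtype=np.uint8)   # one source block X^N
u = (x @ polar_matrix(n)) % 2                    # U^N = X^N G_n over GF(2)

# Involution check: applying G_n twice recovers X^N, so the transform is
# lossless; compression then keeps only the indices in the high-entropy set.
assert np.array_equal((u @ polar_matrix(n)) % 2, x)
```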
Step 3. We now repeat Step 2 L times and perform source resolvability with lossless reconstruction of the input from the output. We choose M ≜ N² and let Y^M be a sequence of M independent and identically distributed random variables with distribution p_Y. We define V^M ≜ Y^M G_{2n}, the associated sets H_Y and V_Y, and the parameter L in (23). We apply Step 2 to L independent sequences X^N to form A_i, i ∈ ⟦1, L⟧. Note that this requires L sequences (K_i)_{i∈⟦1,L⟧} of shared randomness between the encoder and the decoder. We denote the concatenation of these L compressed sequences by A^L. Next, we let R be a sequence of |V_Y| − L|A| − |I_N| uniformly distributed bits (known only by the encoder) and define V^M as follows. We set V^M[V_Y] ≜ (A^L ∥ R ∥ I_N) and successively draw the remaining components of V^M, indexed by V_Y^c, according to (24). Finally, the encoder returns Y^M ≜ V^M G_{2n}.
Decoding. Upon observing Y^M, the decoder computes V^M = Y^M G_{2n} and recovers I_N from the last log(q) bits of V^M[V_Y]. Next, with K_0 and I_N, the decoder can recover p̄ and p̃ (from (19), (20), and (21)), determine H_X̃ and V_X̃, and recover A^L from the first L|A| bits of V^M[V_Y]. With (K_i)_{i∈⟦1,L⟧} and A^L, the decoder can also recover (U^N_i[V_X̃ ∪ (H_X̃ \ V_X̃)])_{i∈⟦1,L⟧} by (22). Finally, the decoder runs the successive cancellation decoder of [28] to estimate X^N_i, i ∈ ⟦1, L⟧, from U^N_i[H_X̃].
Remark 6. (24) can be slightly simplified. Specifically, the randomizations could be replaced by deterministic decisions for j ∈ H_Y^c, i.e., randomized decisions are only needed for j ∈ V_Y^c \ H_Y^c, as shown in [32].
Remark 7. In the special case of source resolvability, i.e., when the source is known to have a uniform distribution, no seed is required in our coding scheme. This was already known; see, e.g., [19, Remark 16].
Remark 8. In the special case of uniform compression when the distribution of the source to compress is known, polar coding schemes can also be obtained in the presence of side information [30].
B. Analysis
1) Reliability: Note that the estimator p̂ used for the parameter p is unbiased and has variance σ² = O((LN)^{−1}). Define the events E ≜ {(p > a_{I_0+1}) or (p < a_{I_0−2})} and Ẽ ≜ {(p̃ < p − N^{−2t}) or (p̃ > p + N^{−2t})}. We then have that both events have vanishing probability, where (a) holds because, for N large enough, (25) holds. Recall that when p is known to the encoder and decoder, [28] shows that it is possible to reconstruct X^N from U^N[H_X] with error probability bounded by O(N δ_N), where U^N ≜ X^N G_n. The following lemma shows that even when p is unknown, there is no loss of information in compressing X^N as U^N[H_X̃]. Moreover, using the successive cancellation decoder of [28], by [33, Lemma 4], one can reconstruct X^N from U^N[H_X̃] with error probability bounded by O(N δ_N).
Lemma 7. We have H_X ⊂ H_X̃.
Proof. We closely follow [33]: there exists α ∈ [0, 1] from which the inclusion follows. Consequently, the decoding scheme of Section V-A succeeds in reconstructing X^{NL} with error probability bounded by O(N L δ_N), which vanishes as N → ∞ since L = O(N) by (23).
2) Covertness: Similar to the analysis of reliability, to show that the covertness condition holds in probability, it is sufficient to show covertness when H_b(p) ≤ H_b(p̃). In this case, similar to Lemma 7, we have the following lemma.
Lemma 8. We have V_X̃ ⊂ V_X and H_X ⊂ H_X̃.
Hence, by Lemma 8, V_X̃ ⊂ H_X ⊂ H_X̃, and |A| = |H_X̃|. Then, let p_{U_{V_X̃}} and p_{U_{H_X̃}} be the uniform distributions over ⟦1, 2^{|V_X̃|}⟧ and ⟦1, 2^{|H_X̃|}⟧, respectively. Observe that A_i, i ∈ ⟦1, L⟧, is nearly uniform in the sense of (26), where (a) holds by the chain rule for KL-divergence and the uniformity of K, (b) holds by the chain rule and because conditioning reduces entropy, and (c) holds by the definition of V_X.
Next, define p_{U^L} as the uniform distribution over ⟦1, 2^{L|H_X̃|}⟧ and define V̂^M in the same way as V^M but with A^L in the description of Step 3 in Section V-A replaced by a sequence distributed according to p_{U^L}. We then have the chain of bounds in (27), where (a) holds by the chain rule and the positivity of the KL-divergence, (b) holds by the chain rule and since V̂^M and V^M are produced in the same way given U^L or A^L, (c) holds by the chain rule, and (d) holds by (26).
Finally, we have the bound in which (a) holds by Lemma 1 with µ_V ≜ min_{v∈V} p_V(v), and (b) holds by (27) and because D(p_{V̂^M} ‖ p_{V^M}) ≤ M δ_M, which can be shown by arguments similar to [19, Lemma 1]; the limit holds since L = O(N) and M = N².
4) Length of the shared seed: Finally, we verify that the length of the shared seed needed in the coding scheme of Section V-A is negligible compared to the total length LN of the sequence that is compressed. Note that in Step 2 we have |K| = o(N), since |H_X̃ \ V_X̃| = |H_X̃| − |V_X̃| (because V_X̃ ⊂ H_X̃ by Lemma 8) and lim_{N→∞} |H_X̃|/N = lim_{N→∞} H_b(p̃) = H_b(p) = lim_{N→∞} |V_X̃|/N (by [28] and [34, Lemma 1]). Hence, the total length of the shared seed is Σ_{i=0}^{L} |K_i| = |K_0| + L|K| = o(LN).
VI. CONCLUDING REMARKS
Our proposed coding scheme consists of the combination of (i) a type-based coding scheme able to simultaneously perform universal lossless source coding and ensure an almost uniform encoder output, and (ii) source resolvability with lossless reconstruction of the input from the output. Our coding scheme uses a seed, i.e., a uniformly distributed sequence of bits shared by the encoder and the decoder. Although our seed has length Θ(log n), so that its rate vanishes as n grows to infinity, it is not clear whether a smaller seed could offer similar convergence rates.
Finally, we have proposed an explicit low-complexity encoding and decoding scheme for universal covertness of binary memoryless sources based on polar codes. Our coding scheme requires a seed length that grows faster than log n, yet its rate still vanishes as n grows. Note that in the special case of source resolvability, i.e., when the source is known to have a uniform distribution, no seed is required in our coding scheme.
ACKNOWLEDGMENT
The authors would like to thank the Associate Editor and the anonymous reviewers for their valuable comments. In particular, the authors thank one reviewer for suggesting an alternative achievability scheme that led to Theorem 1 and established sufficiency of a logarithmic amount of common randomness.
APPENDIX A PROOF OF LEMMA 3
Fix p̄_X ∈ P_n(X). We write b_n, V_n, V̂_n instead of b_n(p̄_X), V_n(p̄_X), V̂_n(p̄_X), respectively, to simplify the notation. By Euclidean division, there exist q ∈ N and r ∈ ⟦0, b_n − 1⟧ such that a_n − 1 = b_n q + r. Next, for v ∈ ⟦0, b_n − 1⟧, the relevant count takes one value if v ≤ r and another if v > r.
APPENDIX B PROOF OF LEMMA 4
Fix p̄_X ∈ P_n(X). We write b_n, c_n, V_n, V̂_n, p_U, k instead of b_n(p̄_X), c_n(p̄_X), V_n(p̄_X), V̂_n(p̄_X), p_{U(p̄_X)}, k_{s(p̄_X)}, respectively, to simplify the notation. Define φ̂ | 2017-03-27T19:47:19.879Z | 2016-09-01T00:00:00.000 | {
"year": 2018,
"sha1": "ba1f39b49e2e195c5749cf11fb20b8549872a40d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1808.05612",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5b85d31604ff209ae5a24c501fc5827d45176a6a",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
17433493 | pes2o/s2orc | v3-fos-license | A Virtual Reality-Cycling Training System for Lower Limb Balance Improvement
Stroke survivors may lose their walking and balancing abilities, but many studies have pointed out that cycling is an effective means of lower limb rehabilitation. However, during cycling training, the unaffected limb tends to compensate for the affected one, which results in suboptimal rehabilitation. To address this issue, we present a Virtual Reality-Cycling Training System (VRCTS), which senses the cycling force and speed in real-time, analyzes the acquired data to produce feedback to patients through a controllable VR car in a VR rehabilitation program, and thus specifically trains the affected side. The aim of the study was to verify the functionality of the VRCTS with ten stroke patient participants and to compare the Asymmetry Ratio Index (ARI) between the experimental group and the control group after training, using the bilateral pedal force and a force plate to determine any training effect. The results showed that after VRCTS training the bilateral pedal force ARI had improved by 0.22 (p = 0.046) and the standing balance ARI on the force plate had also improved by 0.29 (p = 0.031); both measures thus showed a significant difference.
Introduction
Advancements in medical technology have improved the survival rate of stroke patients. However, stroke survivors may have many complications, such as abnormal muscle tone and hemiparesis [1,2]. The two sides of a stroke patient's body are often asymmetric or imbalanced, which limits balance ability and hinders coordination of both sides of the body during movement [3]. The lack of these basic functions compels stroke patients to rely heavily on external assistance for their daily activities. An important rehabilitation goal is to help stroke patients train the hemiparetic side of their bodies and to enhance their motor control and coordination, which eventually improves their independence.
One of the most important objectives of stroke rehabilitation is to restore a participant's walking ability [4][5][6][7], which often requires a significant increase in strength and coordination. Many studies have suggested that cycling, used as a rehabilitation tool, can significantly improve lower extremity function in stroke patients [8][9][10][11][12]. The pattern of cycling is very similar to that of walking [13][14][15][16]: both are cyclical and require reciprocal flexion and extension movements of the hip, the knee, and the ankle. Moreover, these exercises alternately activate agonist and antagonist muscles with regular intervals of activity and coordination [17][18][19], which mitigates balancing problems and provides a safer means of rehabilitation. Cycling exercise has great potential as a preambulation training method, since it can commence as soon as the patient is able to sit.
Though cycling exercise can potentially restore muscle strength, some problems still remain with this rehabilitation method. Cycling requires participants to move both of their lower limbs alternately with equal force, but, for hemiparesis patients, the lack of activity of the affected limb is often compensated for by the unaffected limb. The unaffected limb may mask the insufficiency of the affected limb and result in uncoordinated training, which may reduce potential benefits and intensify the gait asymmetry of the hemiparesis patient [20,21]. To attempt to solve this problem, a real-time feedback mechanism that provides information concerning the cycling process would be helpful, reminding patients to focus on the task.
Virtual Reality (VR) provides patients with a more realistic, varied, and enhanced sensory perception experience and also facilitates motor learning based on various feedback mechanisms [22,23]. It can simulate body movements of daily life, making rehabilitation more entertaining. VR training may also improve cortical reorganization and neuroplasticity by encouraging movement [24][25][26]. Related studies have proposed several advantages concerning the combination of VR and existing rehabilitation methods [27,28]. It has been shown that combining VR with treadmill use, or other mechanical assistance, can increase stroke patients' walking speed and distance on even ground [29,30].
VR-based treadmill ambulatory training provides suspension to support the participant's trunk. However, the training sessions are often time-consuming and cumbersome, which not only makes participants feel uneasy but also reduces their motivation. One study proposed a novel design combining VR technology with bicycles, which effectively improved cardiopulmonary function, muscle strength, and operational performance [31]. However, the participants in that study were healthy participants.
Combining cycling with VR has shown positive improvements for stroke patients. A recent study showed that when visual feedback is provided during cycling exercise, cycling smoothness is better and the average power output is greater than in exercise without visual feedback [32]. Bilateral leg force output balance directly indicates cycling performance and is easy for patients to understand [33]. The force output status can be indicated with visual feedback during cycling training, which helps to improve pedaling balance and walking ability. However, some VR-cycling systems only show force output; although such a display can guide users to achieve training effects, the entertainment value and gaming design still require further improvement [32,34]. Therefore, combining VR with a proper feedback device can provide a safer and more interesting training method for stroke patients, which in turn allows users to learn proper postures and functional abilities [35][36][37][38][39].
Therefore, the purpose of this study is to evaluate the training effect of a customizable VR-Cycling Training System (VRCTS) for the rehabilitation of stroke patients, by comparing the Asymmetry Ratio Index (ARI) of the bilateral pedal force and the force plate before and after training. This system should allow clinical therapists to quantitatively measure the difference in force between both legs during cycling exercise, helping stroke patients to train the affected leg and track their progress. The interactive VR rehabilitation program gives feedback to patients on their balance and speed condition for further customization, and its visual feedback may also help patients focus on the training.
Figure 2: VRCTS system experiment setup, which includes the VR rehabilitation program, the Cycling CR System, and a cycling device.
Cycling Device. The cycling device, shown in Figure 3, is composed of three parts: the cycling frame itself, two load cells (one placed inside each pedal) to detect the force from the user's feet, and an angle encoder attached to the crank of the cycling device.
To determine the stepping force, two load cells (MLP-100; Transducer Techniques, Inc., USA) were installed inside each pedal of the cycling device, and a splint was placed on the top of each pedal. When patients perform the cycling exercise, the output of cycling force can be accurately measured by using an amplifier and an adjustable circuit.
An encoder (MES 30-p; Microtech Laboratory Inc., Kanagawa, Japan) with three output signals, phases A, B, and Z, was applied to determine the cycling speed. The encoder was located in the middle of the cycling device and connected to the crank arm. Phases A and B are used for counting the angle and determining the direction of rotation. Each time the crank rotates by 1 degree, the phase A and phase B output signals switch from low voltage to high voltage. The relationship between the angle and the cycling position is shown in Figure 4. If a user moves the pedals clockwise, phase A is half a duty cycle ahead of phase B; otherwise, phase B is half a duty cycle ahead of phase A. In a full pedal circle, the encoder counts 360 times, each count representing 1 degree; accordingly, the angle of the pedals can be determined. The 0-degree position is shown in Figure 4. When the encoder has counted 360 times, the angle returns to 0 degrees, and phase Z simultaneously switches to low voltage; phase Z is used to count the number of pedal cycles. The cycling speed is calculated by the Cycling CR System; if the speed is too low, it is treated as 0 RPM.
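A minimal software sketch of the A/B/Z decoding just described (Python, hypothetical function names). It treats each detected quadrature transition as one of the encoder's 360 counts per revolution, a simplification of the hardware counter, and uses the Z pulse to re-zero the angle at the 0-degree position:

```python
# Gray-code order of the (A, B) phase pair; with phase A leading phase B
# (clockwise, per the text) the state index advances by 1 per transition.
GRAY = {(0, 0): 0, (1, 0): 1, (1, 1): 2, (0, 1): 3}

def quadrature_step(prev_ab, ab):
    """+1 count when A leads B (clockwise), -1 when B leads A, 0 when
    unchanged; a delta of 2 indicates a missed sample and is dropped."""
    delta = (GRAY[ab] - GRAY[prev_ab]) % 4
    return {0: 0, 1: 1, 3: -1}.get(delta, 0)

def crank_angle(ab_samples, z_samples):
    """Accumulate the crank angle in degrees, one count per degree
    (360 counts per revolution); a Z pulse re-zeros the angle."""
    angle, prev = 0, ab_samples[0]
    for ab, z in zip(ab_samples[1:], z_samples[1:]):
        angle = 0 if z else (angle + quadrature_step(prev, ab)) % 360
        prev = ab
    return angle
```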
Cycling Graph User Interface Control and Data Record System. A Cycling Graph User Interface Control and Data Record System (Cycling CR System) was developed based on an NI-FPGA system (National Instruments, TX, USA; compactRIO 9014) and LabVIEW software; it allows the clinician to set up the parameters of the Virtual Reality rehabilitation program and to follow the state of the participants. The Cycling CR System analyzed signals from the encoder and the load cells at a sample rate of 1 kHz. The encoder signal was used to calculate the cycling speed in rotations per minute (RPM). With t_0 the time at which a rotation starts counting (at the 0-degree point) and t_1 the time at which it ends (at the 359-degree point), both in seconds, the cycling speed formula is
Cycling speed (RPM) = 60 / (t_1 − t_0).
For the load cells, the Cycling CR System was designed to calculate the force in each leg and the Difference Force (DF) between the two legs over each 0.1 s window:
DF = (1/n) Σ_{i=1}^{n} (R_force,i − L_force,i),
where R_force is the right-leg cycling force (Figure 5(a)), L_force is the left-leg cycling force (Figure 5(b)), and n is the number of sample points in 0.1 seconds. When DF is a positive number, the right-leg cycling force is stronger than the left-leg cycling force; when DF is a negative number, the left-leg cycling force is stronger than the right-leg cycling force. The RTP (right turning point) is the threshold value for a right turn, defined as the average positive DF plus one standard deviation (SD). The LTP (left turning point) is the threshold value for a left turn, defined as the average negative DF minus one SD. Both the RTP and the LTP are calculated in the pretest. When participants try to turn the VR car in the VR rehabilitation program, they have to generate greater strength with the left or the right leg; the DF value during a right or left turn is therefore calculated, producing the right-turn DF or left-turn DF value. When the right-turn DF is greater than the RTP, or the left-turn DF is smaller than the LTP, a right or left turning signal is generated and transmitted to the VR rehabilitation program, and the VR car turns 15 degrees. All control signals pass from the Cycling CR System to the VR rehabilitation program.
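The computations above can be sketched as follows. The DF expression (the mean of right-minus-left force over each 0.1 s window) and the LTP definition (the average negative DF minus one SD) are our readings of the original formulas, so treat them as assumptions rather than the paper's exact expressions:

```python
import statistics

def rpm(t0, t1):
    """Cycling speed for one revolution; t0 = time (s) at 0 degrees,
    t1 = time (s) at 359 degrees."""
    return 60.0 / (t1 - t0)

def difference_force(r_force, l_force):
    """DF over one 0.1 s window of n synchronous load-cell samples:
    positive means the right leg is stronger, negative the left."""
    n = len(r_force)
    return sum(r - l for r, l in zip(r_force, l_force)) / n

def turning_points(pretest_df):
    """RTP = mean positive DF plus one SD; LTP = mean negative DF minus
    one SD, both computed from the pretest DF windows."""
    pos = [d for d in pretest_df if d > 0]
    neg = [d for d in pretest_df if d < 0]
    rtp = statistics.mean(pos) + statistics.stdev(pos)
    ltp = statistics.mean(neg) - statistics.stdev(neg)
    return rtp, ltp

def turn_signal(df, rtp, ltp):
    """Right/left turning signal sent to the VR program (car turns 15 deg)."""
    if df > rtp:
        return "right"
    if df < ltp:
        return "left"
    return None
```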
Virtual Reality (VR) Rehabilitation Program. A Virtual Reality (VR) rehabilitation program was applied in this system to provide participants with visual feedback. The VR rehabilitation program was created with Virtools. A $1000 New Taiwan dollar bill is set in the middle of the road as a target, and a VR car is placed in the middle of the screen. We provided two courses, a right-curve course and a left-curve course, for collecting the bills, thus creating more opportunities for participants to train their affected leg, as shown in Figure 6. At the top of the VR rehabilitation program, the game time, crash number, crash time, score, and round are shown to give participants the motivation to improve with each attempt in the training process. The game time shows how long the participant has been training; after 15 minutes, the program stops. The crash number shows how many times the car has hit a house, and the crash time shows for how long, in seconds, the VR car has been in contact with a house. Gathering a dollar bill adds 5 points to the score and indicates the user's ability to control the car's position, since the participant has to keep the VR car in the middle of the road, whether on a straight or a curved section, by learning how to turn the car. This forces them to train the weak leg, thus satisfying the requirements of their rehabilitation.
The Cycling CR System was also used to control the VR car. A few parameters were applied to the setup in the Graph User Interface (GUI), as shown in Figure 7. The parameters were used for right-turn control, left-turn control, and speed control. When the Cycling CR System sent a turning signal to the VR rehabilitation program, the VR car turned 15 degrees. Figure 8(a) shows the VR car in a right curve: if the right-turn DF is greater than the RTP, the VR car turns 15 degrees (Figure 8(b)) and passes the curve; otherwise, the VR car becomes stuck in the curve (Figure 8(c)).
There were three levels for speed control, with the high speed bound and low speed bound set from the pretest parameters. If the cycling speed was within the high and low speed bounds, the VR car moved at speed level 2 (middle speed). If the cycling speed was faster than the high speed bound, the VR car moved at speed level 1 (fast speed). If the cycling speed was slower than the low speed bound, the VR car moved at speed level 3 (slow speed). If the cycling speed dropped below 15 RPM, the VR car stopped. All cycling signals are processed and controlled by the Cycling CR System.
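A minimal sketch of this three-level mapping (function name and return encoding are ours):

```python
def speed_level(rpm_value, low_bound, high_bound):
    """Map cycling speed (RPM) to the VR car's speed level; the bounds come
    from the pretest. Returns 0 when the car stops (below 15 RPM)."""
    if rpm_value < 15:
        return 0        # VR car stops
    if rpm_value > high_bound:
        return 1        # fast
    if rpm_value < low_bound:
        return 3        # slow
    return 2            # middle
```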
Evaluation Task and Statistics.
This study adopted a quasi-experimental, pretest-posttest, nonequivalent control group design, using a nonblind method to compare the control group and the experimental group. Participants were tested before and after the intervention, and in a follow-up assessment one week after the end of the treatment, by means of the following assessment tests. (1) A bilateral pedaling test, in which patients cycled for 2 rounds of 17 cycles: the first round let the patient get used to the pedaling exercise, and cycles 5-10 of the second round were used to record the pedaling force of the affected and unaffected legs and the corresponding ARI (Asymmetry Ratio Index). (2) To measure the standing balance ARI, a force plate (zebris force measure platform, zebris Medical GmbH, Germany) and zebris WinFDMS (zebris Medical GmbH) were used, with a sample rate of 1000 Hz. The patient stands and distributes his or her weight across the force plate; once in place, the patient stands for thirty seconds to adjust position, then closes the eyes and remains still for ten seconds while the data, including the average COP length and COP area for both legs, are recorded.
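The paper's exact ARI formula is not given in the text above; purely for illustration, one common symmetry index takes the form |U − A|/(U + A), where U and A are the unaffected- and affected-side measurements, with 0 indicating perfect symmetry. The values below are hypothetical:

```python
def ari(unaffected, affected):
    """Hypothetical symmetry index |U - A| / (U + A); 0 means perfect
    symmetry. Assumed form only: the paper's exact ARI definition is
    not reproduced in the text."""
    return abs(unaffected - affected) / (unaffected + affected)

print(ari(24.0, 11.85))   # ~0.34, hypothetical pre-training pedal forces (kg)
print(ari(19.2, 15.09))   # ~0.12, hypothetical post-training pedal forces
```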
To analyze the pedal force and force plate results, SPSS 14.0 (SPSS Inc., Chicago) was used to compare the data from before and after training with a paired t-test; p < 0.05 indicates a significant difference.
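A sketch of the paired comparison described above, using SciPy's paired t-test on hypothetical pre/post ARI values:

```python
# Requires SciPy (pip install scipy); ARI values below are hypothetical.
from scipy import stats

pre  = [0.41, 0.35, 0.28, 0.33, 0.30, 0.37]   # pre-training ARI per patient
post = [0.15, 0.10, 0.12, 0.08, 0.14, 0.13]   # post-training ARI per patient

t_stat, p_value = stats.ttest_rel(pre, post)  # paired t-test
print(p_value, p_value < 0.05)                # significant at the 0.05 level
```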
2.6. Participants. The feasibility of the VR-Cycling Training System (VRCTS) was tested with ten stroke patients, who were separated into two groups: a control group and an experimental group. Table 1 shows the characteristics of the participants in the experiment.
In the control group, there were three participants, two females and one male; the average age was 61.3 ± 6.1 years, ranging from 56 to 68, and the average time since stroke was 13.3 ± 4.16 months. The functional ability of the participants' lower limbs was classified by the Brunnstrom stage classification; the three participants were in stages III and IV. In the experimental group, there were six participants, five females and one male, two with the right side affected and four with the left side affected; the average age was 54 ± 9.14 years, and the average time since stroke was 15 ± 10.6 months. The functional ability of their lower limbs was in stages III and IV of the Brunnstrom stage classification.
Design and Procedure.
Stroke patients were recruited from the Chung Shan Medical University Hospital to participate in this study. Patients with a diagnosis of stroke, Brunnstrom stage III of the lower extremity, and no significant perceptual, cognitive, or sensory problems were selected. The Institutional Review Board for Human Studies at Chung Shan Medical University Hospital approved this protocol (number CS11034), and all the participants and their caregivers provided informed consent.
The control group and the experimental group both received the same rehabilitation treatments five times per week, one hour each time. The experimental group additionally received VRCTS training three times a week, 15 minutes each time, for a total of ten sessions. For the VRCTS training, a pretest was performed to adjust the parameters of the VRCTS system.
In the pretest, all participants were asked to perform 15 cycles. The initial cycles 0-4 helped the participants turn the pedals smoothly and get accustomed to the exercise. Cycles 5-10 were recorded and used to measure and calculate the parameters. Cycles 11-15 served to prevent deceleration of the cycling speed during the recorded period. When the cycling angle was 0-30 degrees, the average force output of the left leg was extracted; when the cycling angle reached 180-210 degrees, the average force output of the right leg was extracted. All of the above values were measured by the Cycling CR System to distinguish the difference in force output between the two legs. The DF was then computed, along with the average cycling speed (RPM) and the turning threshold values, the RTP and the LTP. The parameter range of high and low speed was also calculated in the pretest. The participants were then asked to look at the VR rehabilitation program and control the VR car with the cycling device for 15 minutes.
DF in VR Rehabilitation Program. The LTP and the RTP were determined in the pretest. During the VR rehabilitation program, the DF of the right and left turns was evaluated; the results are shown in Table 2.
In the pretest, when cycling in a straight line, the average DF values for the left-affected and right-affected sides were 2.96 ± 0.31 kg and 0.49 ± 0.08 kg, respectively. The average RTP values for the left-affected and right-affected sides were 3.82 ± 0.32 kg (the unaffected side) and 1.34 ± 0.75 kg (the affected side), and the average LTP values were −0.79 ± 0.27 kg (the affected side) and −2.77 ± 0.48 kg (the unaffected side). In the straight-line condition, users could go straight by keeping the DF between the RTP and the LTP.
In the VR rehabilitation program, at the moment of a right turn, the average right-turn DF values for the left-affected and right-affected sides were 6.43 ± 2 kg (the unaffected side) and 1.89 ± 0.83 kg (the affected side). At the moment of a left turn, the average left-turn DF values were −2.19 ± 0.46 kg and −5.07 ± 0.07 kg, respectively. The total average cycling speeds were 63.25 ± 2.36 RPM and 53 ± 8.48 RPM. If patients can cycle faster and control the VR car's direction skillfully, they can finish more rounds and obtain higher scores; the different speed levels also motivate patients to cycle faster, for better clinical effects.
Evaluation.
The stroke patients' bilateral pedal force results from before and after the ten VRCTS training sessions are shown in Table 3. The average ARI values for symmetry between the legs before training were 0.24 ± 0.22 in the control group and 0.34 ± 0.2 in the experimental group, while the results after training were 0.22 ± 0.20 and 0.12 ± 0.07, respectively. In the control group, symmetry changed by only 0.02 (p = 0.5), which was not a significant difference. In the experimental group, symmetry improved by 0.22 (p = 0.031), a significant difference, with the results moving closer to 0; an ARI closer to 0 indicates better symmetry. This showed that pedal force symmetry improved after training with the VRCTS.
The results of the percentage parameter detected by the force plate are shown in Table 4. The average force plate ARI results after training were 0.23 ± 0.01 and 0.05 ± 0.03. In the control group, the force plate ARI changed by 0.01 (p = 0.73), which was not a significant difference. In the experimental group, the ARI improved by 0.29 (p = 0.046), a statistically significant difference, showing that standing balance ability increased after the VRCTS training.
Descriptive statistics were used to compare the changes in bilateral pedal force coordination and force plate measures between the two groups after the VRCTS training. The bilateral pedal force and force plate results improved after training in the experimental group, while those in the control group remained almost the same.
Discussion
In this study, we presented a VR-Cycling Training System (VRCTS), which consists of a VR rehabilitation program and a cycling device. In this system, cycling is used to control a VR car and to give users feedback. To verify the system's functionality, 10 stroke patients participated in this experiment. The results showed that, after the calibration of the pretest, users could control the direction and the speed of the VR car, which demonstrated that the system can work for stroke patients.
Many studies have reported that immediate rehabilitation training after the onset of stroke can help patients restore functional ability faster. Cycling is similar to gait [8][9][10], while requiring only that performers be able to sit. Therefore, cycling rehabilitation training of the lower limb may be an effective way to improve muscle strength and balancing ability. However, there are still some problems in cycling exercise; for example, stroke patients will sometimes use the unaffected side to compensate for the affected side [8].
To remedy the flaws of other cycling systems for the lower limb, we designed a cycling device equipped with load cells and an encoder to detect the cycling force and speed of users in real-time. In the force output analysis of the lower limbs, a pretest was used for calibration, so the force output of users could be successfully distinguished in the following tests. DF values could determine whether users were using their unaffected limb to compensate for the affected one. Thus, the problems of cycling for the lower limb could be solved by analyzing and evaluating the detected force output from both legs, and users received feedback reminding them to use the affected leg.
This study combines cycling with VR to help users enhance concentration and motivation during rehabilitation training. One common problem of VR is that the display is often plain and unattractive [22]. In a previous study [34], a VR feedback display was used to remind users of their cycling force balance; in that system's VR display, the force output was shown as two bars. Although this display could guide users to achieve training effects, its entertainment value and gaming design still require further improvement. Moreover, one system [31] used a bicycle and several sensors with a VR game. Although the game was quite entertaining, the system applied a conventional bicycle for the training: the seat of the bicycle was relatively small, which might not provide the stability required by stroke patients, and the height of the seat was too high, which made it difficult for patients to mount and dismount. The VRCTS used Virtools to design a VR rehabilitation program in which cycling controls the VR car for training, which enhances the entertainment effect and the motivation of patients. The cycling device provides a convenient and safe environment with any kind of chair; even a wheelchair can be used with this cycling device for patients who have sitting balance problems. The VR rehabilitation program also schedules more turns toward the affected side of patients, so they can train the muscle strength and balance of the affected side.
In many existing cycling systems, the normal leg often compensates for the affected one during cycling training. Moreover, the parameter settings of the VR rehabilitation program in such systems are often difficult to operate. The VRCTS developed in this study uses a pretest for calibration, so the system can be customized according to the needs of each individual patient. The data obtained through the 15 pedal cycles performed in the pretest were analyzed to obtain the right-turn and left-turn settings, so that cycling alone is used to control the speed and direction of the VR car in the program.
During the 15 minutes of the training process, the Cycling CR System sends the VR rehabilitation program signals that determine the speed and direction of the VR car. Participants receive feedback on the direction and speed of the VR car; the car's direction lets the participant know which leg should increase its cycling force to steer toward the $1000 New Taiwan dollar bills and successfully negotiate curves. The VR rehabilitation program then shows the game time, crash number, crash time, score, and round, which summarize the participant's performance in the program.
However, the parameter settings are determined by each individual's pretest. For the left-affected and right-affected sides, the average right-turn DF values in the VR program were 6.43 ± 2 kg and 1.89 ± 0.83 kg, and the RTP values were 3.82 ± 0.32 kg and 1.34 ± 0.75 kg. The average left-turn DF values were −2.19 ± 0.46 kg and −5.07 ± 0.07 kg, and the LTP values were −0.79 ± 0.27 kg and −2.77 ± 0.48 kg. The results show that, among the six patients, the unaffected leg was stronger than the affected one, but also that stroke patients could perform the controlling function: during curve turning, the DF of the affected side could still be greater than that of the unaffected side. Since the game is training-oriented, this system can be challenging for hemiparesis patients, because the force output of the affected side needs to be greater than the turning point for the system to provide therapeutic effects.
In 2011, 153 patients with chronic stroke participated in [34], performing visual feedback cycling training for 14 minutes twice a week; the training continued for two weeks, six sessions in total. In the evaluation, pedal torque symmetry was assessed by maintaining a rotating speed of 30 RPM for two minutes while the torque of each leg was calculated separately. After the training, the patients were divided into three groups, and one participant from each group was selected for discussion. After the treatment, the pedal force was more symmetrical than in the pretest, a significant difference (p < 0.01) [34]. The results of the experimental group in our study also support the hypothesis that the adoption of visual feedback can improve cycling performance.
The participants of the experimental group in this study received the VRCTS treatment. Their average pedal force on the affected side increased from 11.85 ± 3.92 kg to 15.09 ± 1.86 kg, and the Asymmetry Ratio Index (ARI) improved from 0.34 to 0.12, a significant difference (p = 0.031). For the participants of the control group, the average pedal force of the affected side increased from 9.27 ± 1.88 kg to 9.42 ± 2 kg, which was not a significant difference (p = 0.59). From the results of previous studies, we can infer that the improvement in the experimental group arises because, after the visual input reaches the brain, the participants practice repeatedly to avoid colliding with objects at the edge of the road in the Virtual Reality environment. In addition, the visual reinforcement feedback of receiving virtual $1000 New Taiwan dollar bills by keeping the VR car in the middle of the road activates the premotor cortex of the brain, supporting real-time action and reaction. Therefore, the coordination between the lower limbs increases, and the visual reinforcement feedback can stimulate brain remodeling, enabling the brain to maintain good symmetry even without visual feedback.
For the force plate, a plantar pressure distribution approaching 50% indicates the best symmetry. For the participants of the experimental group in this study, after the treatment, the plantar pressure distribution of the affected side improved from 40.75 ± 10.51% to 48.73 ± 0.79%, and the Asymmetry Ratio Index (ARI) showed a significant difference (p = 0.046). However, for the participants of the control group, the plantar pressure distribution of the affected side improved from 43.2 ± 1.33% to 44.1 ± 0.49% and did not show a significant difference (p = 0.73).
Both evaluations showed that this system improves the problem of leg compensation and achieves customized training that stroke patients can operate. The VRCTS also makes the cycling training more entertaining and helps users concentrate more on the rehabilitation training.
Conclusions
The Virtual Reality-Cycling Training System developed in this study significantly improved the symmetry of the bilateral pedal force measured during cycling, and the performance was better than that of the control group. In addition, the ARI of both the bilateral pedal force and the force plate distribution improved significantly. The results show that treatment with the Virtual Reality-Cycling Training System can effectively increase weight-bearing symmetry in static balance, which provides a new option for future clinical rehabilitation treatment. However, since the study adopted convenience sampling, a nonblind design, and nonrandomized grouping of the participants, deviation in the results caused by interaction between participants cannot be totally excluded. Moreover, there is a maturation effect from each participant's personal recovery status, the sample sizes of the two groups were not equal, and the control group sample was insufficient, so we cannot be sure that the results follow a normal distribution. | 2018-04-03T00:57:30.569Z | 2016-03-06T00:00:00.000 | {
"year": 2016,
"sha1": "422f0d07a725b25fa7634a4e04698c81dd5f0e2c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2016/9276508",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "91e9c63fb6a7f5e0b2240e58b0a2aa80cf3c8886",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
125445682 | pes2o/s2orc | v3-fos-license | ESTIMATION OF INDIVIDUAL HEIGHT IN CORRELATION WITH FEMUR LENGTH Dharahaas
The study aims to establish the relationship between body height and the greatest length of the femur. The motive for undertaking these examinations was the lack in the literature of studies that allow the reconstruction of height while alive based on measurements of a skeleton. It was decided to examine isolated bones from human remains undergoing forensic autopsy, belonging to individuals of both sexes whose growth processes had stopped. Examinations were conducted on 91 human bodies in a hospital in Tirupathi. The research points to a very close relationship between the length of a dead body and the measured greatest length of the femur. This relationship was expressed in nine coefficients of correlation calculated for both sexes.
INTRODUCTION
The length of the body while alive is one of the key parameters of identity established in the course of the identification of unknown remains. The reconstruction of body length has been a subject of study since the beginning of the nineteenth century. The authors of the oldest methods, which are of purely historical importance, are Orfilla [1], Langer [2], Toldt [3], Topinard [4] and Beddoe [5]. In 1888, Rollet, on the basis of measurements of long bones taken from French remains, established the common factors by which their length is to be multiplied in order to work out the length of the remains [6]. He indicated that dry bones are shorter, relative to fresh bones, by approximately 2 mm. Manouverier began more recent research into height reconstruction in 1893. He studied part of the bone material examined by Rollet [7]. In his calculations, he took into consideration the differences in the proportions of limbs and height depending on body length, and also the influence of involutionary processes on body length. He was the first to assert, contrary to opinion up to that time, that height is a function of the length of the long bones. Rollet's material was further evaluated by Pearson in 1899. In his study, Pearson employed statistical methods not used by his predecessors: the correlation of features and linear regression [8]. This allowed him to introduce new formulae that permitted the calculation of human height on the basis of bone measurements. He recognized, however, that as a result of the sharp differentiation of body length among people, these formulae can only be used when they are applied to the population groups on the basis of whose data they were developed. Pearson further drew attention to the lengthening of the body that occurs after death: by 1.2 cm in the case of men, and by 2 cm in the case of women. In 1950, Telkkä, recognizing the necessity of applying differing formulae to different populations in order to reconstruct height, proposed new models for the population of Northern Europe and, concretely, for the Finns that had been the subject of his research [10]. He was the first researcher to introduce into his calculations corrections resulting from the differences he had observed between the length of bones on the right and left sides of the body. In 1951, Dupertuis and Hadden published their study, which also took into consideration black individuals [11]. These authors' method, however, turned out to be rather useless: its failing was that it took measurements of height from remains suspended by the external auditory meati; this meant that the body lengths established by this method were greater than height when alive. Some of the best studies of the reconstruction of body length while alive are those of Trotter and Gleser from 1952 to 1958 [12,13]. In the first stage of their research (1952), these authors had at their disposal the remains of black and white American soldiers who had died during World War II [12]. Body length had been measured while the subjects were alive; bones taken from the remains were macerated. The authors of the study established that, beginning from the thirtieth year, the height of a human being lessens each year by 0.06 cm, and they proved that after death body length increases by 2.5 cm.
Trotter and Gleser (1958) conducted similar research based on the quantitatively large amount of bone material from the dead of the Korean War, who belonged to varying ethnic groups: White, African-American, Asian, Mexican and Puerto Rican [13]. The authors demonstrated significant differences in height-limb proportions between the materials from both studies, pointing to the necessity of periodic verification of the equations that serve to reconstruct height.
MATERIALS AND METHODS
Examinations were conducted on 91 human bodies from the current Indian population, undergoing forensic examination in a hospital in Tirupati. Bodies were chosen that were subject to rigor mortis, without obvious bodily deformation, and with clearly formed features of skeletal maturity. Because, in the case of women, the state of the body after the removal of bones would be easily visible, the number of female individuals examined had to be limited out of consideration for the families of the deceased. The remains studied belonged to 71 men with body lengths from 157.5 to 192.7 cm, between the ages of 19 and 87, and also to 20 women with body lengths from 155.7 to 168 cm, between the ages of 28 and 74.
The following equipment was used: an osteometer specially devised for measuring long bones, permitting measurements with a precision of 0.1 mm, and two steel squares with 40 and 60 cm sides.
The naked bodies were placed on their backs on the flat steel surface of the dissecting table. The lower limbs were straightened at the joints. The Achilles tendon was cut through on both sides. A block 3.5 cm in width was placed under the head in order to place the Frankfurt plane perpendicular to the surface of the table.
One of the squares was placed so that the outer edge of its shorter side lay on the surface of the dissecting table; the inner edge of the longer side touched the vertex point at the top of the head. The square was stabilized. The surface of the feet was placed on one of the surfaces of the polyethylene block lying on the table. The block was stabilized. The second square was placed on the table in a similar way to the first; in this case, the inner edge of its longer side ran along the surface of the block touching the feet. The square was stabilized. By means of the measuring tape, with the participation of two people, the distance between the inner edges of the squares was measured.
The femurs on both sides were exposed by means of a longitudinal incision and removed from the soft tissue. The femur was separated at the knee joint from the shank bone. The head of the femur was enucleated from the acetabulum of the hip joint. Soft tissue was removed from the bone without disturbing the joint cartilage.
According to Martin's criteria, the greatest length of the femur was measured [1]; in other words, the rectilinear distance between the top of the head and the furthest point of the paracentral condyle (29).
RESULTS
The research points to a very close relationship between the length of a dead body and the measured, greatest length of the femur. This relationship was expressed in nine coefficients of correlation calculated for both sexes. Their value in the case of male femurs was greater. For the right- and left-side bones, the coefficient of correlation was identical and amounted to 0.923; for the average length of the bones of both sides of the body, its value was 0.925. In the case of female femurs, the coefficients of correlation were as follows: for right femurs, 0.892; for left femurs, 0.833; for the average length of right and left bones, 0.869. The highest coefficient of correlation (0.950) was obtained after measuring all the examined bones taken from remains of both sexes. Calculated errors in reconstruction (standard errors/standard deviations from the line of regression, S_ab), which cover 68% of cases, were lower with regard to the measurements taken from female bodies. All results are set out together in the following tables.
Fdx, right femur; Fsin, left femur; r_ab, coefficient of correlation; S_ab, reconstruction error (standard deviation from the line of regression).
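To make the reported statistics concrete, the following sketch shows how a coefficient of correlation (r_ab) and the standard deviation from the regression line (S_ab) can be computed for a stature-femur regression. The femur lengths and statures below are hypothetical illustrative values, not the study's data.

```python
import numpy as np

# Hypothetical femur lengths (cm) and statures (cm); NOT the study's data.
femur = np.array([42.1, 44.0, 45.3, 46.8, 48.2, 49.5, 50.7])
stature = np.array([158.0, 163.5, 166.0, 171.2, 175.0, 178.4, 181.1])

n = len(femur)
b, a = np.polyfit(femur, stature, 1)     # slope and intercept of the regression line
predicted = a + b * femur

r_ab = np.corrcoef(femur, stature)[0, 1]  # coefficient of correlation
residuals = stature - predicted
s_ab = np.sqrt(np.sum(residuals ** 2) / (n - 2))  # std. deviation from regression line

print(f"stature = {a:.2f} + {b:.3f} * femur length")
print(f"r_ab = {r_ab:.3f}, S_ab = {s_ab:.2f} cm")
```

An interval of one S_ab on either side of the regression line is what covers roughly 68% of cases, which is the sense in which the reconstruction errors above are reported.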
DISCUSSION
The practical use of existing formulae for reconstructing height while alive from the measurement of long bones is limited. Even the studies that are recognized as the most exact, those of Trotter and Gleser [12,13] and Fully and Pineau [15,16], as a result of their age and of their use of material from ethnic groups differing from those of Poland, do not have any substantial diagnostic significance for the Polish population, and cannot be applied to that population without serious reservations. This criticism applies even more to the still older methods of Manouverier [7] and Pearson [8], which are, however, still in general use in Poland. The extension of the long bones, connected with the constant growth and destruction, lasting until the achievement of skeletal maturity, of the simultaneously ossifying basal cartilage, is conditioned by the defined proportions of the processes that take place in them: chondroblastic, chondroclastic, and osteoblastic. These processes are subject to disturbance through the influence of a host of exogenous and endogenous factors that result in changes in length and, in consequence, in the proportions of the body. These factors are intrapopulation, interpopulation, and intergenerational factors [27]. This means that it is necessary to periodically verify the models serving to fix height while alive. It also explains the lack of a universal method that could be applied in every case of establishing the identity of unknown remains. Trotter and Gleser have proved that intrapopulation changes in length affecting body proportions emerge in the course of a relatively short period of time [13]. It is necessary to consider modified formulae that serve to fix height depending on racial identity or on the constitutional type of body build [30,31]. It is emphasized that when reconstructing body height while alive, the calculated height does not express the actual value of that height, but rather that which the examined individual would have if he/she belonged to the population that served to establish the applied formulae. Thus the reconstructed height is, to a substantial degree, a function of the method used [30]. The data cited indicate that the reconstruction of body length from the long bones is only seemingly a simple task. In reality, the matter is much more complicated. In order to obtain results that are closest to height while alive, it is necessary to consider all the elements of the skeleton that determine height [15,20]. In reality, this possibility only rarely occurs.
The results of the measurements taken pointed to the asymmetry of the bones of the right and left sides of the body, which had been suggested by other authors [10,18,30]. The average length of left femurs was greater than that of right femurs: among women by 1.76 mm; among men by 0.54 mm. Such differences were, however, statistically insignificant. Telkkä also noted the greater length of left femurs; in his study, the difference between right and left femurs among women was similar and amounted to 1.6 mm, while among men it had a value of 1.5 mm [10]. Suggestions present in earlier studies indicated that it would be reasonable to establish separate formulae for right-side bones, for left-side bones and for the average length of the bones on both sides.
CONCLUSION
From this research we can conclude that there is a close relationship between individual height and femoral length. | 2019-04-22T13:06:05.260Z | 2017-03-28T00:00:00.000 | {
"year": 2017,
"sha1": "e5ffcb768df3913c20c9ebdc0d1eb4127f792476",
"oa_license": "CCBYSA",
"oa_url": "http://journalijcar.org/sites/default/files/issue-files/1487-A--2017.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "212415d43c6163799244f401f16875a4508444c6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
247124229 | pes2o/s2orc | v3-fos-license | Special Issue “The Use of Recycled Materials to Promote Pavement Sustainability Performance”
Recycling road pavement materials allows for a more sustainable use of raw materials and contributes to creating a circular economy [...]
Introduction
Recycling road pavement materials allows for a more sustainable use of raw materials and contributes to creating a circular economy. Addressing the entire life cycle of pavement products, focusing on their design, promoting circular economy processes, and fostering sustainable consumption aims to ensure that the resources used are kept in the economy for as long as possible. Carrying out recycling policies may significantly impact civil engineering activities, including the construction and exploitation of transport infrastructure. Pavement engineering can also contribute to successfully achieving the sustainable development goals proposed by the United Nations [1], through a global framework supported by sustainable production employing green technologies.
Implementing consumption and production patterns based on recycling and adopting an industrial symbiosis approach can promote sustainable urban development under a carbon-neutral economy through green technologies. Due to intensive research and practice, recycling has been used in road construction, maintenance, and rehabilitation in the last few decades. Recycling pavement materials prevents the extraction of non-renewable resources and minimizes waste production and landfilling. It can save energy and decrease greenhouse gas emissions, thereby reducing pollution. Recycling effectively helps to reduce environmental impacts and combat global climate change.
The purpose of this Special Issue was to collect and publish technical and research papers, including review papers, focusing on the recycling of road pavement materials to promote pavement sustainability performance. Ten papers were published in total, covering the use of construction and demolition waste (reclaimed asphalt pavement, recycled concrete aggregate and glass) and industrial waste (plastic and slag). The application of recycled materials concerns bituminous mixtures, concrete mixtures, and non-traditional interlocking blocks or cobbles. The most relevant contributions of each paper are briefly described in the following sections. The papers involved thirty-four authors from eleven countries in Europe (Belgium, Finland, Italy, the Netherlands, and Portugal), Africa (Nigeria), Asia (Malaysia and Saudi Arabia), Australia, and South America (Brazil and Colombia).
Use of Construction and Demolition Waste
Reclaimed Asphalt Pavement
Bituminous pavement courses are designed to present adequate characteristics, in terms of safety and comfort, during their period of life. After this period, construction, maintenance, and rehabilitation operations must be performed and, as a result, very high amounts of reclaimed asphalt pavement (RAP) are usually produced.
Reclaimed asphalt pavement (RAP) is a 100% recycled material obtained from road maintenance and rehabilitation operations. After adequate processing, such as crushing and screening, RAP can present high-quality and well-graded aggregates coated by bituminous mastic, thus becoming a secondary raw material suitable to replace bitumen and virgin aggregates [2,3]. However, RAP applications without downgrading, i.e., incorporation in similar applications, still face several barriers, due to some lack of confidence in RAP recycling in new bituminous mixtures. The common maximum RAP incorporation rates vary between 10% and 50%, and between 0% and 20% for wearing courses [3]. The incorporation of high rates of RAP in new bituminous mixtures is still a challenge to be overcome to minimize life cycle costs and environmental impacts.
Vandewalle et al. [2] developed a comparative analysis of a real road pavement section, in which the actually applied solutions were compared to alternative ones combined with the incorporation of five RAP rates into new bituminous mixtures (0%, 25%, 50%, 75%, 100%) in production, construction, and rehabilitation activities. The life cycle assessment (LCA) methodology was applied, and the results were expressed in four damage categories: human health, ecosystem quality, climate change, and resources, together with 15 impact factors. The results demonstrated that both recycled and multi-recycled bituminous mixtures led to a decrease in the environmental impact when RAP was reused once or multiple times. The benefits are greater for higher RAP rates, presenting an average decrease of 19%, 23%, 31%, and 33% across all four impact categories for 25%, 50%, 75%, and 100% RAP rate incorporation, respectively [2].
Antunes et al. [3] studied high RAP incorporation rates in new bituminous mixtures for wearing courses based on their long-term mechanical behaviour, taking into consideration the RAP bitumen mobilization degree, the evaluation of the RAP fractioning and mixing conditions, and both the mechanical and long-term behaviour of RAP mixtures. The behaviour of high RAP mixtures (75%) and virgin bituminous mixtures was compared. A crude tall oil rejuvenator was used to promote bitumen mobilization. The ageing that occurs during mixture production and in-service life was simulated by short- and long-term oven ageing procedures. Laboratory tests for the mechanical assessment were performed. The RAP bitumen mobilisation degree was evaluated, and a mixing protocol was developed and validated.
As a major conclusion, it was found that, in general, the high RAP mixtures presented equivalent or even improved behaviours when compared with virgin bituminous mixtures. The performance of the high RAP mixtures remained good even after ageing, allowing the conclusion that these mixtures can present good long-term performance.
Recycled Concrete Aggregate
Since the cement industry is a major contributor to greenhouse emissions on a worldwide level, alternative materials are studied for the partial or complete substitution of cement in concrete. Recycled concrete aggregate (RCA) obtained from the demolition of old reinforced concrete structures is one of the recycled materials that can be reused to produce concrete and thus reduce the negative environmental impact of cement production [4,5]. However, some barriers need to be overcome regarding the use of RCA, namely the low demand for these materials and the customers' unwillingness to pay more for them [4]. Many studies have considered the partial or complete replacement of cement in concrete. The use of fly ash and other by-products from the energy and mineral industries as additional cementitious materials in cement has a significant potential for reducing the carbon footprint of concrete [5].
Katar et al. [4] evaluated the application of construction demolition waste produced in Riyadh to manufacture high-strength concrete. Self-compacting concrete with 100% natural aggregate and three replacement levels (25%, 50%, 75%) of RCA was produced. Fly ash and a superplasticizer were added to obtain adequate flowability and cohesion in the fresh-state mixtures. The authors evaluated both the fresh and hardened properties of the mixes, and J-ring, V-funnel, and slump flow tests were performed. Compressive strength tests after seven, 14, and 28 days were performed. The results confirmed that RCA can produce concrete with a reasonable compressive strength, its use being acceptable for structural applications. Rintala et al. [5] presented a case study, as part of the EU-funded research project "Urban Infra Revolution", that estimated the cost prices of four different geopolymer concretes (cement-free binders) with different material compositions and carbon footprints, considering raw material price fluctuation and the potential impact of carbon emission regulation through carbon pricing. Two major questions were presented: "What are the benefits of using the materials?" and "How much does it cost?". The authors concluded that the results seem to indicate that carbon pricing, at the actual rates, does not significantly change the cost-price difference between traditional and geopolymer concrete. This means that the cost-competitiveness of low-carbon concrete depends on the material mix type and the availability of critical side streams.
Glass
Glass waste is suitable for various applications, including in the cement and concrete industries, due to its pozzolanic properties, which are more intensive in fine-grained form. Megna et al. [6] described research on combining glass and marble wastes to produce a new sustainable mortar for non-structural pavement solutions. Based on the experimental characterization of different types of mortars, the authors confirmed the pozzolanic properties of the glass waste, which led to the production of a hydraulic binder suitable to replace conventional cement in concrete production.
Plastic
Plastic wastes are a major global environmental issue, and their recycling and reuse are becoming more and more investigated. The diversity of plastic properties is enormous, and different approaches can be adopted to incorporate plastic wastes into pavement materials. The types of plastic covered in the Special Issue are polyethylene terephthalate (PET) [7], high-density polyethylene (HDPE) [8], acrylonitrile butadiene styrene (ABS) [9], polystyrene polymers (PS) [9], and low-density recycled polyethylene (LDPE) [10]. The authors have studied plastic waste applications in bituminous mixtures [7], concrete mixtures [8], interlocking plastic blocks [9], and sand/recycled-plastic cobbles [10] for pavements of roads [7,8] and other trafficked areas (e.g., parking areas, sidewalks, bike paths) [9,10].
In general, bituminous mixtures are considered a promising application for plastic wastes to achieve more sustainable pavements. Plastic wastes are being addressed as modifier agents of bituminous binders or as substitutes for aggregates. Mashaan et al. [7] investigated the effect of PET from plastic bottles on modifying a bitumen binder to be used in a 14 mm dense-graded asphalt for wearing courses, composed of granite aggregates and a 4.9% optimum binder content. The authors studied the rheological properties of the plastic-modified bitumen and the mechanical properties of the plastic-modified bituminous mixture. Improved stability and resistance to permanent deformation were observed, most significantly for 8% PET by weight of the bituminous mixture.
Other pavement applications of plastic wastes were concrete mixtures [8] and non-conventional blocks or cobbles [9,10]. In these applications, plastic waste was used for partial or total replacement of natural aggregates. Tamrin and Nurdiana [8] studied the incorporation of HDPE lamellar particles from diverse origins in concrete mixes for non-structural pavement applications. The authors concluded that the concrete with a 10 MPa compressive strength had the best resistance to the addition of HDPE, and that 5% and 5 × 20 mm were the optimal content and size, respectively. Gabriel et al. [9] investigated interlocking plastic pavers composed of 70% ABS and PS from electronic equipment (e.g., computers) and 30% of other polymers and residual materials (e.g., other plastic or metal wastes). The 100% recycled blocks resulted from shredding, agglutination, and pressing procedures performed under specific temperature conditions. The authors carried out laboratory tests that confirmed similar properties compared to traditional blocks made of concrete for light traffic conditions. Sanchez-Echeverri et al. [10] evaluated the use of LDPE from recycled plastic bags to manufacture cobbles with 10 cm × 20 cm × 4 cm dimensions. The cobbles were composed of 25% plastic and 75% sand. The experimental research performed in the laboratory demonstrated the adequacy of the cobbles for pedestrian and lightweight traffic pavements. In addition, those authors also presented a market study for implementing a factory in Colombia to produce these recycled cobbles.
Slag
Slags, in particular steel slag, lead slag, copper slag, and tin slag, are some of the industrial waste materials that have been studied to validate their application as a replacement for natural aggregates. Tin slag (TS) is an industrial waste that is accessible and still underutilized. About 2 million tons of this waste is landfilled worldwide. Olukotun et al. [11] studied the use of TS as a substitute for fine aggregates in cement mortar, considering different percentages of incorporation (0%, 25%, 50%, 75%, 100%). Three water/cement ratios of 0.5, 0.55, and 0.6 were used to prepare the tested specimens. Laboratory evaluation was conducted in the fresh and hardened states and after 3, 7, and 28 days of water-curing of the testing specimens. The workability and the mechanical properties of the mortar specimens were evaluated. According to the results, the authors considered that TS could be applied as a substitute for natural sand to produce mortars, thus promoting a reduction in costs and in natural resource depletion and contributing to the sustainability of natural fine aggregates.
Final Remarks and Future Trends
The guest editors believe that this group of ten papers, published in this Special Issue, made a significant contribution to promoting the circular economy through the pavement sustainability performance of recycled materials. Other studies, namely regarding the application and validation of different types of alternative raw materials in pavements, may also be published in the following Special Issue: "The Use of Recycled Materials to Promote Pavement Sustainability Performance II".
Funding: This research received no external funding. | 2022-02-26T16:16:27.018Z | 2022-02-23T00:00:00.000 | {
"year": 2022,
"sha1": "fad37f4ad9c1ca970cb11220a1641ae8c9ff19a2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2313-4321/7/2/12/pdf?version=1645605094",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8ac1a52a30ed0438b24ceadac4081658e0a7f5b1",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": []
} |
131958803 | pes2o/s2orc | v3-fos-license | Ocean Observation from Haiyang Satellites: 2012-2014
Between 2012 and 2014, China had two Haiyang (which means ocean in Chinese, referred to as HY) satellites operating normally in space: HY-1B and HY-2A. HY-1B is an ocean color environment satellite which was launched in April 2007 to observe global ocean color and sea surface temperature, and HY-2A is an ocean dynamic environment satellite which was launched in August 2011 to obtain global marine dynamic environment parameters including sea surface height, significant wave height, ocean wind vectors, etc. Ocean observation data provided by HY-1B and HY-2A have been widely used by both domestic and international users in extensive areas such as ocean environment protection, ocean disaster prevention and reduction, marine environment forecast, ocean resource development and management, ocean investigations and scientific research, etc.
The HY-1 satellite series is ocean color environment satellites designed to observe ocean color, sea surface temperature and coastal zone dynamic changing information of all Chinese seas. The remote sensing payloads onboard both HY-1A and HY-1B are the Chinese Ocean Color and Temperature Scanner (COCTS) and the Coastal Zone Imager (CZI). HY-1A is an experimental satellite with a design life of 2 years, and it stopped working in April 2004. HY-1B, which was launched on April 11, 2007, is the successor of HY-1A with several improvements, such as an increased swath of COCTS, an extended design life of 3 years, larger storage capacity, higher transmission rates, etc. As of January 2014, HY-1B has been operating normally for 6 years and 9 months, which sets a new record among Chinese low Earth orbit small satellites for the longest effective working life.
The HY-2 satellite series is ocean dynamic environment satellites which are designed to obtain ocean surface wind vectors, sea surface height, significant wave height and sea surface temperature of global oceans. HY-2A, which was launched on August 16, 2011, carries four microwave payloads and is capable of observing oceans in all-weather, all-time conditions. The four payloads are the Microwave Scatterometer (MS), Radar Altimeter (RA), Scanning Microwave Radiometer (SMR) and Calibration Microwave Radiometer (CMR). As of August 2014, HY-2A has been operating normally for nearly 3 years.
Data Acquisition and Distribution
National Satellite Ocean Application Service (NSOAS), which is a commonweal institution under SOA, is responsible for receiving, processing, archiving, managing and distributing all collected data and products of HY satellites. NSOAS owns four ground stations, located in Beijing, Mudanjiang, Sanya and Hangzhou.
Ocean Color and Sea Surface Temperature
HY-1B/COCTS provides daily, monthly and seasonally averaged ocean color products of chlorophyll-a concentration, suspended materials, yellow substance, etc. The region for regular ocean color products is presented in Figure 1, where the monthly averaged chlorophyll-a concentration product of July 2013 is shown. Regional ocean color products of global oceans are also provided occasionally, depending on the actual data acquisition program of HY-1B/COCTS.
Based on the data of HY-1B/COCTS and HY-2A/SMR, NSOAS provides daily, weekly and monthly Sea Surface Temperature (SST) fusion products of both the Northwestern Pacific and global oceans. The monthly SST fusion product of November 2013 is given in Figure 2 to show the form and region of the Northwestern Pacific products. HY-2A/SMR is capable of observing the SST of more than 90% of global oceans every day; the product of January 22, 2014 is shown in Figure 3 for illustration. NSOAS provides global ocean SST fusion products based on HY-2A/SMR and other available SST data sources. A fusion product of December 3, 2013 is shown in Figure 4. NSOAS also provides daily OWV fusion products based on HY-2A/MS data and other available data sources. The OWV fusion products, with a time resolution of 6 h and a spatial resolution of 0.25°, better satisfy the needs of users. Nowadays, SSH data fusion of all available RD sources is the only practical approach to obtain more complete global SSH products each day. NSOAS provides HY-2A/RD meshing products, and one example is shown in Figure 7. HY-2A/RD meshing products are fused with other available RD data sources to provide more delicate and frequently observed SSH products, which are mainly provided to ocean forecast users.
Product Distribution
HY satellite products are distributed to both domestic and international users via the internet, dedicated communication systems, manual services, etc. As shown in Figure 8, NSOAS distributed 26.72 TB of HY-1B products and 5.79 TB of HY-2A products in 2012, and 27.56 TB of HY-1B products and 20.93 TB of HY-2A products in 2013. The distributed data volume of HY-2A products sharply increased from 5.79 TB in 2012 to 20.93 TB in 2013, which illustrates significant growth in the application of HY-2A. Domestic users of HY satellites include government departments, forecast organizations, research institutions, companies, universities, etc. International users of HY satellites include the European Organization for the Exploitation of Meteorological Satellites (EUMETSAT), the National Oceanic and Atmospheric Administration (NOAA) of the United States, the Centre National d'Etudes Spatiales (CNES) of France, the Australian Bureau of Meteorology, etc.
Application Achievements
As a rapidly developing high technology, HY satellites now play an increasingly important role in both the ocean economy development and national defense construction of China. HY satellite products are widely used in extensive domains, such as ocean environment protection, ocean disaster management (including prevention, mitigation, warning, response, recovery and assessment), marine environment forecast, ocean resource development and management, ocean rights protection and law enforcement, ocean investigations and scientific research, etc. Some typical application achievements of both HY-1B and HY-2A, made in 2012 and 2013, are presented as follows.
Tropical Cyclones
Nowadays, there are only two space-borne data sources for global ocean vector wind observation: HY-2A/MS and the Advanced Scatterometer (ASCAT) carried on the Metop satellites of Europe. The major advantage of HY-2A/MS is that its swath, which is about 1500 km, is much wider than that of Metop/ASCAT. A comparison example is shown in Figure 9, which presents the observation of typhoon No. 15 "Bolaven" in 2012 carried out by HY-2A/MS and Metop/ASCAT, respectively. From Figure 9, it is easy to find that the much wider swath makes the observing data of HY-2A/MS contain more complete information on the "Bolaven" typhoon. With the much wider swath, HY-2A/MS is capable of covering more than 90% of global oceans every day. All 25 typhoons in 2012 and all 31 typhoons in 2013 were observed by HY-2A/MS. Continuous monitoring results of the No. 14 typhoon "Libra" in 2012 are shown in Figure 10 for illustration. From HY-2A/MS data, we are able to clearly observe the position, strength, extent and structure of a tropical cyclone. Moreover, HY-2A/RD also provides the SSH and SWH data of tropical cyclone areas. All this information is very supportive for tropical cyclone track forecast, storm surge forecast, ocean navigation, typhoon disaster prevention and mitigation, typhoon scientific research, etc. HY-1B sea ice observations (Figure 11) were also applied to sea ice disaster reduction and emergency response.
Enteromorpha
Enteromorpha is a seasonal large-area ocean disaster which adversely impacts the local marine environment and economy. In 2013, HY-1B was applied to operational enteromorpha surveillance from May 2 to August 31, and 102 regular reports were distributed by NSOAS. An enteromorpha image obtained by HY-1B/COCTS on June 29, 2013 is shown in Figure 12. The corresponding thematic map is in the lower right of Figure 12, which shows the distribution extent, coverage area, and other necessary information.
Red Tides
Red tides, which are harmful algal blooms, usually occur in the East China Sea. The Second Institute of Oceanography and the East China Sea Environment Monitoring Center provide operational red tide detection and monitoring based on HY-1B and other satellite ocean color sources. From May to September 2013, fifty-four reports were released to the public via the internet. The July 2013 red tide distribution thematic map is shown in Figure 13.
Marine Environment Forecast
Marine environment forecast organizations, such as the National Marine Environment Forecast Center (NMEFC) and the forecast departments of local governments, are among the major users of HY satellites. The data of SST, SSH, SWH and OWV derived from HY satellites support their forecast operations; for example, a Northwestern Pacific SST product released to users by the CCTV-13 TV channel is shown in Figure 14.
Fishery Forecast
Based on the chlorophyll, SST, and SSH data derived by HY-1B and HY-2A, NSOAS has developed fishery forecast models and constructed an operational ocean fishery information forecast system. Fishery forecasts are provided weekly to 9 fishing companies for more than 11 fisheries located in the Pacific, Atlantic and Indian Oceans. Fishery forecast information has effectively increased fishing production. One fishery forecast example is shown in Figure 15.
Future Plans
China has formulated and implemented several medium- and long-term national plans on the development of HY satellites. HY satellites will be developed in three series, i.e., ocean color environment satellites (HY-1), marine dynamic environment satellites (HY-2), and maritime surveillance and monitoring satellites (HY-3). Besides the traditional optical and microwave sensors that have been carried on HY-1A/B and HY-2A, several new types of sensors have been researched and will be included in future HY satellite missions in order to further improve data accuracy and resolution.
HY-2A observes marine dynamic environment parameters every day, including Sea Surface Height (SSH), Significant Wave Height (SWH), Ocean Wind Vectors (OWV), Sea Surface Temperature, Water Vapor Content, etc. The SSH, SWH and OWV products observed by HY-2A on January 22, 2014 are presented in Figure 5 to show their specific forms.
Fig. 1 The monthly averaged chlorophyll-a concentration product of July 2013 derived by HY-1B/COCTS (© NSOAS/SOA, 2013 - All Rights Reserved)
Fig. 11 The sea ice remote sensing image obtained by HY-1B on February 8, 2013 and the corresponding sea ice extent thematic map (© NSOAS/SOA, 2013 - All Rights Reserved)
Fig. 13 The red tide distribution thematic map of July 2013 derived with HY-1B/COCTS and other satellite sources
Fig. 14 A Northwestern Pacific SST product example which is released to users by the CCTV-13 TV channel
Fig. 15 The September 13, 2013 forecast of Northern Pacific squid fishery including the SST and current information
Fig. 16 A thematic map of the ice condition around Snow Dragon in the reports provided by NSOAS (© NSOAS/SOA, 2013 - All Rights Reserved)
Fig. 17 A remote sensing image of sea ice around Snow Dragon provided by NSOAS | 2016-01-29T17:58:53.149Z | 2014-09-30T00:00:00.000 | {
"year": 2014,
"sha1": "7389c4493d5ce20b4f26a4b5b4b40de5e3db5d7a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.11728/cjss2014.05.710",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7389c4493d5ce20b4f26a4b5b4b40de5e3db5d7a",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
209525640 | pes2o/s2orc | v3-fos-license | Investigation on the Impact of Degree of Hybridisation for a Fuel Cell Supercapacitor Hybrid Bus with a Fuel Cell Variation Strategy
: This paper presents the development of a control strategy for a fuel cell and supercapacitor hybrid power system for application in a city driving bus. This aims to utilise a stable fuel cell power output during normal operation whilst allowing variations to the power output based on the supercapacitor state-of-charge. This provides flexibility to the operation of the system, protection against over-charge and under-charge of the supercapacitor and gives flexibility to the sizing of the system components. The proposed control strategy has been evaluated using validated Simulink models against real-world operating data collected from a double-decker bus operating in London. It was demonstrated that the control strategy was capable of meeting the operating power demands of the bus and that a wide range of degrees of hybridisation are viable for achieving this. Comparison between the degree of hybridisation proposed in this study and those in operational fuel cell (FC) hybrid buses was carried out. It was found that the FC size requirement and FC variation can be significantly reduced through the use of the degree of hybridisation identified in this study.
Introduction
The London bus network is the largest road transportation network in the UK and is an essential part of the public transportation network [1]. This, however, results in significant contributions to both Greenhouse Gas (GHG) and local pollutant emissions [2][3][4], with strategies such as the ultralow emission zone implemented as a means of reducing these emissions through deployment of hybrid and zero emissions technologies [5]. One of the more promising potential zero emissions solutions for bus applications is the proton exchange membrane (PEM) fuel cell technology. The PEM fuel cell (which will be referred to as FC in this paper) uses hydrogen as its fuel and produces electricity and water as a waste product through an electrochemical process [6]. Hybridisation of FCs with some form of energy storage is a promising solution to solving the problems of oversizing the FC stack and the FC's poor transient response [6]. Much work has been carried out in the field of PEM FC hybrids for vehicular applications, where hybridisation with battery and/or supercapacitor (SC) technologies has been considered. This covers FC/battery [7][8][9][10], FC/SC, and FC/battery/SC hybrids [11][12][13][14], with some examples given. The literature review that follows focuses on FC/SC hybrids as these are most relevant to the work presented here.
In the work of [15] a comparison between fuel cell hybrid configurations and Energy Storage System (ESS) technologies is presented for use in a vehicle. Of the available configurations it was found that connecting the FC and SC via DC/DC converters provides the best solution in terms of reducing the stress on the fuel cell and achieving a high hydrogen economy because of the optimal fuel cell operation. A number of examples of this configuration have been presented in the literature, such as [16][17][18][19][20][21][22][23][24]. A control strategy based on reducing the transient changes on the FC load has been developed and experimentally tested in [16]. It was shown that the developed system avoids fuel starvation of the FC whilst using the SC to meet transient power changes. In the work of [17] a control strategy based on reducing the transient response of the FC is considered. This was tested against the ECE15 EU drive cycle and performed acceptably. An energy management strategy utilising short-term future energy demand prediction was developed and tested through both simulation and experimentation in [18]. It was found that this strategy offers improved performance, owing partly to the better management of the SC for regenerative braking. Component sizing and the development of a control strategy based on Pontryagin's minimum principle, with cost functions of hydrogen consumption, SC state-of-charge (SoC) and fuel cell durability, are presented in [19]. The control strategy maintained a fairly stable FC output but did exhibit a large range of FC outputs. An equivalent consumption minimisation strategy (ECMS) is employed to assess the sizing of system components against different driving cycles in [20]. The results suggest a significant variation to the FC output is beneficial in terms of the hydrogen consumption. In the work of [21] the energy management is achieved by using only the SC for transient responses and only the FC for stable load conditions. This however necessitates large transient changes to the FC output. In [22], a control system aimed at providing voltage regulation on the busbar, tracking of the SC reference current and asymptotic stability of the closed-loop system was developed. In the work of [23], the control strategy focused on a differential flatness control that offers a simple and effective means of reducing the transient power demand changes on the FC. In the work of [24], an interleaving technique was successfully used to improve the voltage and current control in the FC/SC hybrid system. This focussed primarily on the short-term system performance, as did the work of [25,26], each of which proposed control strategies to mitigate the stress applied on the FC from a step response in the output power demand. In real-world applications, a step response is rarely required for a vehicle, while frequent variation is often required. In the work of [27][28][29], representative duty cycles such as the New European Drive Cycle (NEDC) were used to evaluate the proposed control strategies for the FC output power. The work in the literature highlights that there are numerous methods of controlling the balance of power in a FC/SC hybrid system. Most of the proposed designs have however focussed on the short-term operation of the system and/or have also resulted in significant variations in the FC power output.
The aim of the work presented here is to limit the transient response of the FC power output and to assess the possible sizing solutions for a FC/SC hybrid power plant against real-world load profiles. This approach additionally allows for the assessment of the potential for downsizing the FC stack.
The work detailed in this paper is a continuation of the research presented in [30][31][32] and further considers the evaluation of the degree of hybridisation through improvement of the control strategy. In the previous work, a FC/SC hybrid propulsion system had been developed, constructed and simulated. The FC was used as a fixed output power source to eliminate the dynamic stress applied to the FC. The SC was used to supplement the FC output power and meet the dynamic power demands. A stabilised FC control strategy was designed and demonstrated to be capable of maintaining the FC output constant while enabling the propulsion system to meet the dynamic load demands of a bus. A strategy to identify the FC output power and required SC size was proposed and shown to perform as expected, although a number of limitations of the control strategy were highlighted. These are mainly the required prior knowledge of the required FC output, the lack of flexibility, and the absence of protection against over- and under-charge.
The limitations lead to another question: would it be best practice to maintain the FC and boost converter power output at a predefined and constant setting throughout the entire journey? Hence, this research aims to:
1. Investigate a strategy to facilitate variation of the FC output control operation to eliminate or mitigate the identified limitations.
2. Investigate the impact of the degree of hybridisation for a FC/SC hybrid bus with the proposed control strategy.
Within this paper the outline and development of the updated control strategy is detailed. The performance of the control strategy is compared against the stabilised control strategy previously employed against real-world performance data collected from a city driving bus. Finally, an assessment of the degree of hybridisation is carried out for variations to the control strategy parameters. The novel contributions of this paper are as follows. The development of the control strategy to include protection offers novelty in its application to real-world data and the impact this has on the sizing of the system components. This highlights the viability of using SCs as the energy storage medium even for long drive cycles and for significant downsizing of the FC used. Further to this the wide range of possible sizing solutions shows the flexibility available to the designer.
Data Collection
Operational performance data collected from an ADL Enviro 400H diesel hybrid bus (Alexander Dennis, Larbert, UK) operating in London was used as the basis to test and compare the control strategies. This comprised data for a whole day of operation of the bus on the 388 bus route, comprising roughly 18 hours of operation, as shown in Figure 1. For the purposes of this study, the data collected were used to provide power profiles of the traction motor power demand upon which the control strategies could be tested. For this, the data collected regarding the motor input power were used directly as the power profile, based on the assumption that the traction motor used on the Enviro 400H would be retained in the proposed FC/SC hybrid system. The power profiles used to compare and assess the control strategies implemented are detailed in Table 1. The purpose of these driving cycles was to test the system under a variety of operating conditions which provide high power, low power and long duration performance requirements.
FC/SC Operation Strategy
The FC/SC hybrid configuration is shown in Figure 2. The originally proposed FC/SC operation strategy is to keep the FC at a constant pre-defined output power while using the SC to cover any transient power demand, as detailed in [32]. In this system the balance of power between the FC, SC and load is controlled on the common busbar linking these components. This method has been validated and tested in [31,32] and was shown to perform well under transient conditions whilst maintaining a stable busbar voltage (630 V in this case). Since the voltage is maintained at a constant value, the power balance is directly controlled by controlling the magnitude of the current and can simply be written as:

Iload = Ifc_out + Isc_out (1)

where each of the current values is defined on the 630 V busbar and Iload is the current to/from the load, Ifc_out is the current from the FC and Isc_out is the current to/from the SC. The balance of power provided by Equation (1) remains the default control for the proposed control strategy detailed in this paper. The output power of the FC and boost converter (Pfc_out) is defined as 110% of the average power requirement of the bus duty cycle (Pload), with the additional 10% included to account for the losses in the SC buck/boost converter, and is maintained at a constant value. The SC was sized by considering the cumulative energy change over the course of the drive cycle, with a 20% margin over the magnitude of cumulative energy change chosen as the SC size; further details of this can be found in [32]. This strategy has been proven capable of providing a reasonable estimation of the required degree of hybridisation for a certain duty cycle. A more detailed description of the system can be found in [32]. However, the strategy has been proven to lack the flexibility required to work effectively across a range of different load profiles and offers no protection against undercharge and overcharge of the SC module. To address this, a simple overcharge and undercharge protection strategy is introduced which aims to both provide protection to the energy system and provide greater operational flexibility. Whilst the protection of the system is an important consideration, it is also worth considering how the presence of such a protection system will impact upon the sizing of system components.
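As a minimal sketch of the busbar power balance in Equation (1) and the sizing rules described above, the snippet below assumes a regulated 630 V busbar and uses a hypothetical 1 Hz load profile standing in for the recorded bus data; the SC sizing rule is interpreted here as 120% of the peak-to-peak swing of the cumulative energy mismatch.

```python
import numpy as np

BUSBAR_V = 630.0  # regulated common busbar voltage (V)

def sc_current(p_load_w: float, i_fc_out_a: float) -> float:
    """Equation (1) rearranged: the SC supplies or absorbs whatever current
    the FC and boost converter output does not cover (positive = discharging)."""
    return p_load_w / BUSBAR_V - i_fc_out_a

# Hypothetical 1 Hz traction power profile (W); stands in for the bus data.
rng = np.random.default_rng(0)
load_w = rng.normal(9450.0, 15000.0, 3600).clip(-60e3, 80e3)

p_fc_out = 1.10 * load_w.mean()                   # FC reference: 110% of mean load
energy_mismatch_j = np.cumsum(load_w - p_fc_out)  # cumulative SC energy swing (1 s steps)
sc_size_kwh = 1.2 * (energy_mismatch_j.max() - energy_mismatch_j.min()) / 3.6e6

print(f"FC reference: {p_fc_out / 1e3:.2f} kW, SC size: {sc_size_kwh:.2f} kWh")
```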
Overcharge Protection Design
To prevent the SC overcharging, a higher threshold value (HTV) was assigned. The HTV is the threshold of the SC SoC beyond which the value of Ifc_out begins to ramp down as a means of preventing the SC from overcharging. The intent is to calculate a new Ifc_out reference (and thus a new Pfc_out) based on the SoC of the SC. The calculation for overcharge protection was carried out using the equation:

Ifc_out = Ifc_ref × (100% - SoC) / (100% - HTV) (2)

where Ifc_ref is the user-defined FC and boost converter output current reference. The value of Ifc_out decreases linearly with the SC SoC, such that when the SC SoC reaches 100%, the value of Ifc_out is 0 A. The value of the HTV for overcharge protection was selected to be 90% SoC. Hence, if the SC SoC exceeds 90%, the value of Ifc_out will decrease, reducing the charging rate of the SC during charge operation and also increasing the discharge rate during discharge operation. Limiting power transients on the FC has also been proved to be very important in [33][34][35]. Hence a rate limiter was added to control the rate of change requirement applied to the FC. It takes at least 30 s at a constant rate to increase from no load power (0 kW) to full load power (85 kW), with the same rate of change limit applied when the power output needs to be decreased.
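A minimal sketch of the overcharge logic and the rate limiter is given below. The exact published form of Equation (2) was lost in extraction, so the linear expression here is inferred from the stated boundary conditions (the reference is unchanged at the HTV and reaches 0 A at 100% SoC); the 85 kW / 30 s ramp limit follows the text.

```python
FC_RAMP_W_PER_S = 85_000.0 / 30.0  # no faster than 0 -> 85 kW in 30 s

def overcharge_reference(i_fc_ref_a: float, soc: float, htv: float = 0.90) -> float:
    """Equation (2) as reconstructed: ramp the FC current reference down
    linearly above the HTV, reaching 0 A at 100% SoC."""
    if soc <= htv:
        return i_fc_ref_a
    return i_fc_ref_a * (1.0 - soc) / (1.0 - htv)

def rate_limited(prev_p_w: float, target_p_w: float, dt_s: float) -> float:
    """Constrain the change in the FC power reference to the ramp limit."""
    max_step = FC_RAMP_W_PER_S * dt_s
    return prev_p_w + max(-max_step, min(max_step, target_p_w - prev_p_w))
```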
Undercharge Protection Design
To prevent the SC from becoming fully discharged, a lower threshold value (LTV) was assigned. In this case, the value of Ifc_out will ramp up if the SC SoC falls below the LTV, acting as a means of protecting the SC SoC from becoming depleted. The calculation for undercharge protection was carried out using the equation:

Ifc_out = Ifc_ref + (Ifc_max - Ifc_ref) × (LTV - SoC) / (LTV - LL) (3)

where Ifc_max is the maximum output current that the FC and boost converter can provide, set as 120 A, amounting to a maximum power output of 76 kW (85 kW at the FC). For the initial tests, the value of the LTV is set at 60%. Additionally, a lower limit (LL) is introduced, which acts as the value of the SC SoC at which Ifc_max is reached, and is assigned as 30%. An increased Ifc_out will charge the SC at a higher rate during charge operation and also reduce the power demand placed on the SC during discharge operation. The new Ifc_out will be increased by an amount determined by the SC SoC until Ifc_out reaches the maximum value of 120 A. A rate limiter has also been added to ensure the change in FC output is gradual.
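Correspondingly, a sketch of the undercharge logic, again with the linear form of Equation (3) inferred from the stated anchor points (unchanged at the LTV, clamped at Ifc_max at and below the LL):

```python
def undercharge_reference(i_fc_ref_a: float, soc: float,
                          i_fc_max_a: float = 120.0,
                          ltv: float = 0.60, ll: float = 0.30) -> float:
    """Equation (3) as reconstructed: ramp the FC current reference up
    linearly below the LTV, clamping at Ifc_max for SoC <= LL."""
    if soc >= ltv:
        return i_fc_ref_a
    frac = min(1.0, (ltv - soc) / (ltv - ll))
    return i_fc_ref_a + (i_fc_max_a - i_fc_ref_a) * frac
```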
Control Strategy Overview
The overall control strategy implemented for the hybrid system is based on a defined value of Ifc_out and the SoC of the SC. This can be summarised as follows. The basis of the strategy is to control the value of Ifc_out based on the SoC of the SC. Under normal operation the value of Ifc_out is taken as the user-defined value. If the SoC is less than the LTV, then Ifc_out is increased to prevent the SC from becoming depleted. The increase in Ifc_out is limited in accordance with the maximum output of the fuel cell. If the SoC is greater than the HTV, then Ifc_out decreases to prevent the SC overcharging. This will be referred to as the FC variation strategy, with the Simulink model of the hybrid system and control strategy shown in Figure 3.
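Combining the two protections with the default behaviour, the summarised FC variation strategy can be expressed as a single dispatch function; this is a sketch reusing the protection functions sketched above, with the rate limiter then applied to the resulting power reference.

```python
def fc_current_reference(i_fc_user_a: float, soc: float,
                         htv: float = 0.90, ltv: float = 0.60) -> float:
    """FC variation strategy: user-defined reference in normal operation,
    ramped down above the HTV and up below the LTV."""
    if soc > htv:
        return overcharge_reference(i_fc_user_a, soc, htv)
    if soc < ltv:
        return undercharge_reference(i_fc_user_a, soc, ltv=ltv)
    return i_fc_user_a
```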
Performance with FC Variation Strategy
The degree of hybridisation is identified for each of the profiles detailed in Table 1, where the SC size is defined by calculating the required cumulative energy from the SC based on the duty cycle. The FC variation strategy is applied to the power profiles detailed in Table 1 and compared with the performance of the constant FC output strategy. It should be noted that the FC rated power for each of the simulations is 85 kW; however, the results of the simulations determine the minimum size of FC that would be required based on the maximum output observed for each simulation. For power profile 1 (Figure 4a), the initial FC and boost converter output power was determined to be 17.63 kW and the SC was sized at 2.08 kWh. These settings matched those used for the tests carried out previously without the FC variation strategy. The final SoC at the end of the driving cycle was reasonably close to the initial SoC when using the FC variation strategy at the same degree of hybridisation as that of the baseline comparison tests without the FC variation strategy. It should also be noted that the FC and boost converter output reference was reduced a number of times between 100 s and 500 s when the SoC attained 90%. Additionally, the FC and boost converter output reference was increased multiple times to prevent undercharge, triggered at the 60% threshold, particularly between 1200 s and 1380 s. The peak FC and boost converter output power is 31.52 kW for this 32-minute journey. This requires a FC power output of 35 kW when a 90% average boost converter efficiency is considered. Hence the required degree of hybridisation for profile 1 (a high power journey with the highest average power) would be 35 kW FC/2.08 kWh SC.
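The conversion from the peak power observed at the boost converter output to the required FC stack rating used in these results is simple division by the assumed 90% average converter efficiency; the check below reproduces the 35 kW figure quoted for profile 1.

```python
def required_fc_rating_kw(peak_converter_kw: float, efficiency: float = 0.90) -> float:
    """FC stack rating needed to deliver a given peak at the boost converter output."""
    return peak_converter_kw / efficiency

print(f"{required_fc_rating_kw(31.52):.1f} kW")  # profile 1: 31.52 kW peak -> ~35.0 kW FC
```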
For profile 2, the bus journey with the lowest average power (Figure 4b), the initial FC and boost converter output power was set as 6.22 kW and the SC was sized at 1.41 kWh. In this scenario, the SoC of the SC at the end of the test was higher, with the FC variation strategy in operation. The FC and boost converter output was regulated depending on the SoC of the SC, with two occasions when the undercharge protection was engaged. The peak FC and boost converter output power is 26.2 kW which equates to a required FC maximum output power of 29.1 kW assuming a 90% boost converter efficiency. Hence the required degree of hybridisation for this low power journey would be 29.1 kW FC/1.41 kWh SC.
The driving cycle comprising three completed bus journeys, profile 3 (Figure 4c), used an initial FC and boost converter output power setting of 11.77 kW and a 3.97 kWh SC. As expected, the variations in SoC are identical for both the models with and without the FC variation strategy until the SoC drops to the lower threshold value. Once beyond the FC variation trigger point, the SoC was sustained at an overall higher level, as would now be expected. The FC and boost converter output power setting clearly increased during the second part of this bus journey, where higher power operations occurred. This results in a significant increase in the SoC of the SC at the end of the journey for the FC variation strategy. The peak power output of the FC and boost converter in this driving cycle is 28.7 kW, which requires a FC capable of delivering up to 31.9 kW rated output power. Hence the required degree of hybridisation for this longer driving cycle is 31.9 kW FC/3.97 kWh SC. It can be seen that the calculated degrees of hybridisation for all three driving cycles functioned as expected with the inclusion of the FC variation strategy. The strategy to identify the degree of hybridisation was thus validated against a number of driving cycles with the inclusion of the FC variation strategy. The model was then used to identify the required degree of hybridisation for the entire day of route 388. The average power of the entire day (without driver breaks) has been measured at 9.45 kW based on the operation power measurements. That gives the required initial FC and boost converter output power base reference as 10.39 kW. Based on the load average and FC initial power output setting, the minimum capacity for the SC was determined to be 505 F, which equates to a maximum 16.2 kWh of stored energy. The model has been tested with the entire day's power profile (profile 4). The FC and boost converter output power and the SoC variation have been plotted in Figure 5. It is evident that the SC was generally at low SoC, having delivered large amounts of energy initially to propel the bus during the morning portion of the driving cycle, and largely absorbed excess energy during the afternoon and evening portions of the driving cycle. As a result, the FC and boost converter output was increased significantly by the FC variation strategy in the morning operations and then decreased on two occasions in the afternoon and evening operations. This is because morning (rush hour) driving requires a lot of starts, which are high load events, and rarely will the bus attain appreciable speeds, which would also compromise regenerative energy capture. It was found that the average charge efficiency of the SC throughout the entire day was 82.7%, while the discharge efficiency was 90.3%. The SoC was maintained within the prescribed operational range. The proposed degree of hybridisation proved capable of delivering effective bus operation for the entire day. Since the highest power of the FC and boost converter output is 24.2 kW, this equates to a required FC power of 26.9 kW with a 90% average boost converter efficiency. Therefore, the degree of hybridisation for the route 388 bus for the operating day can be identified as 26.9 kW FC/16.2 kWh SC.
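As a consistency check on the quoted SC figures, the usual capacitor energy relation E = 0.5 × C × V² links the 505 F capacity to the stated 16.2 kWh for a bank voltage of roughly 480 V; the rated bank voltage is not stated in this section, so that value is an inference.

```python
def sc_energy_kwh(capacitance_f: float, v_max: float) -> float:
    """Maximum stored energy of a supercapacitor bank: E = 0.5 * C * V^2."""
    return 0.5 * capacitance_f * v_max ** 2 / 3.6e6

print(f"{sc_energy_kwh(505.0, 480.0):.1f} kWh")  # ~16.2 kWh, matching the text
```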
The variable FC output control strategy was shown to limit the variation of the SC SoC and thus allow the system to provide for the long-term transient power demands of the bus without either depleting or over-charging the SC. The FC variation strategy results in significantly less variation in the SC SoC during operation, which means the calculated SC size can potentially be reduced by utilising the FC variation strategy. This also allows for greater flexibility in the degree of hybridisation of the system and will now be explored.
The response of the system during over- and under-charge is highlighted in Figure 6. The over-charge and under-charge protection responses are taken from profiles 1 and 3, respectively. It can be seen in Figure 6(a) that the SC SoC rises above the HTV (90%) at 346 s as a result of a regenerative braking event. This causes the value of Ifc_out to decrease. This is followed by a period with no load power requirements, during which the FC continues to charge the SC but at a decreasing rate. At 376 s an acceleration event occurs, resulting in the SC discharging before the SC SoC falls below the HTV at 396 s. During the period of over-charge protection the SC is still able to meet all of the transient demands whilst the FC output slowly ramps down. Similarly, for under-charge protection (Figure 6(b)), a period of relatively high power demand occurs at around 4885 s. This causes the SC SoC to fall below the LTV. At this point the FC output begins to ramp up, which limits the rate of discharge of the SC. A regenerative braking event starting at 4962 s acts to recharge the SC, with the FC output ramping down as a result. The SC SoC rises above the LTV at 4978 s, coinciding with the FC output returning to the reference value. Again the SC is able to meet the transient load demands whilst the FC ramps slowly.
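The protection behaviour described above can be summarised as a small piece of control logic. The sketch below is a simplified illustration: the threshold values follow the text (HTV = 90%, LTV = 60% in the base case), but the ramp rate, update scheme and variable names are assumptions rather than the paper's exact controller.

```python
# Simplified sketch of the FC variation strategy's threshold logic (assumed form).

HTV, LTV = 0.90, 0.60   # upper/lower SoC thresholds from the text
RAMP = 0.05             # per-step ramp, as a fraction of the reference (assumed)

def update_fc_output(i_fc: float, i_fc_ref: float, soc: float) -> float:
    """Adjust the FC/boost output set-point from the SC state of charge."""
    if soc > HTV:                       # over-charge protection: ramp output down
        return max(i_fc - RAMP * i_fc_ref, 0.0)
    if soc < LTV:                       # under-charge protection: ramp output up
        return i_fc + RAMP * i_fc_ref
    # inside the band: relax slowly back toward the reference value
    return i_fc + 0.5 * RAMP * (i_fc_ref - i_fc)
```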
Degree of Hybridisation Investigation
This section investigates the impact of the FC variation strategy on the degree of hybridisation, and the impact that changes to the SC sizing and control strategy parameters have on this.
The tests are all carried out on profile 4, the whole-day operating profile of route 388, with the initial FC and boost converter output base reference (10.39 kW) utilised for the tests. The SC size utilised for the previous test on the full-day driving cycle was a 505 F SC (16.2 kWh) with a 60% lower-threshold undercharge protection. The same tests have been carried out with different SC sizes to investigate the impact of the degree of hybridisation applied on the same driving cycle: the SC size has been decreased while running the same duty cycle simulation, and the required FC power has been determined from the highest power demanded of the FC. The tests have also been run for different values of the LTV, with values of 50%, 60%, and 70% used to determine their impact on the performance of the system and the resulting degree of hybridisation. Hence a degree of hybridisation ratio between the required FC size and SC size can be obtained. The obtained results have been plotted in Figure 7. It can be seen that reducing the SC size results in an increase in the FC power required, since a smaller SC will experience quicker variations in SoC. It was also found that reducing the SC size beyond 3.2 kWh causes the system to fail for this particular profile. The failure was caused by the SC SoC dropping too quickly for the FC to be able to respond sufficiently, and is a result of the SC being too small to effectively act as a damper for the transient power demands of the power profile. It is clear from Figure 7 that the SC size can be reduced significantly, but that this comes at the cost of a larger required FC power. Variations in the value of the LTV had a significant impact on the viable values of the degree of hybridisation of the system: reducing the LTV to 50% would increase the required size of both the FC and SC, whereas increasing the LTV to 70% would reduce the required size of both. There is therefore a trade-off relationship between SC size reduction and FC size increase.
Although the lowest required FC and SC sizes occur for a lower threshold setting of 70%, this results in more frequent adjustments to the FC output. Utilising a high value of LTV changes the dynamic of the system: the SC still acts to meet the transient demands, but the FC output varies more readily to adapt to the power profile. In the cases with the smallest SC size, this variation occurs more frequently and rapidly because of the increase in the rate of change of the SC SoC. This increase in variation comes at the cost of using the FC over a wider dynamic power band. To investigate the FC variation frequency, the percentage of time the FC output varied from the initially defined value of Ifc_out has been calculated; the results are plotted for the different lower thresholds in Figure 8. It can be seen from Figure 8 that the 70% lower threshold was subject to the most FC variation for a given SC size, with the FC varying its output for nearly 47% of the day in the worst-case scenario. The variation includes the FC and boost converter output being increased to prevent SoC depletion or decreased to prevent overcharge. It was found that the average power of the FC and boost converter output for each case is nearly the same, with less than 1% variation between results. Since the net power profile of the load is the same for each of the degrees of hybridisation, varying the SC size and LTV will not affect the energy delivered to or from the SC. The minor difference is caused by the charge/discharge efficiency and differing values of the final SC SoC. The same average FC output power also means the total energy delivered by the FC is always the same.
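The variation-frequency metric reported in Figure 8 can be computed directly from a simulated output trace. The sketch below assumes uniformly sampled data and an arbitrary small tolerance for deciding when the output "varies" from the reference; neither detail is specified in the text.

```python
# Sketch of the FC variation-frequency metric (fraction of time off-reference).
import numpy as np

def variation_fraction(i_fc: np.ndarray, i_fc_ref: float,
                       rel_tol: float = 1e-3) -> float:
    """Fraction of samples where the FC output deviates from its reference value."""
    return float(np.mean(np.abs(i_fc - i_fc_ref) > rel_tol * i_fc_ref))

# e.g. variation_fraction(trace, i_fc_ref) -> ~0.47 for the worst case in Fig. 8.
```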
It can be seen that the degree of hybridisation can be optimised with respect to a number of parameters; however, there is always a trade-off for the parameter that is being controlled. A number of factors are involved, and selection is about finding the "best balance" amongst them.
All the degrees of hybridisation in Figure 7 have been shown to be capable of suitably delivering the service for a complete operating day of route 388. The three hybrid options resulting in minimum FC variation, minimum FC size and minimum SC size have been highlighted. One of these parameters can be optimised for each case, but this consequently changes the other parameters. It can be seen that selecting the degree of hybridisation is not simply finding a "best" number; the choice depends on the requirements of the bus designer. Finally, the degrees of hybridisation proposed for route 388 in this research were compared with those of other operating FC buses. All the operating buses used for comparison have been in commercial passenger service for a relatively long period of time and represent the majority of commercially available FC buses. Information for the operating buses in terms of FC power and energy storage system size was obtained from a number of literature sources [36][37][38][39][40][41][42][43][44][45]. The comparison is plotted in Figure 9, where the three options optimised for minimum FC size, minimum SC size and minimum FC variation have been plotted on the FC/SC ratio plot.
From the FC point of view, it can be seen that the FC size proposed in this research is significantly smaller than those of most existing FC buses. However, there is an important point that needs to be addressed for the FC size comparison. The required FC sizes used in the proposed degrees of hybridisation are operating powers, which define the minimum rated power required from the FC; this is not necessarily the same as the rated power of the FC. Additionally, the degrees of hybridisation proposed in this research were based mainly on the driving cycle of one operating day. The driving cycle is subject to change based on a variety of factors such as season, weather and other events. Although the proposed FC variation strategy will provide some flexibility for the model to be operated under different driving cycles, the required FC size could be increased to prepare for possible worst-case scenarios. As a result, the degrees identified in this research are more likely to be appropriate for route 388 on that day rather than for route 388 generally. Further information regarding the operating load profile on different days is needed to assess whether the collected operating profile is representative of normal operation.
From the energy storage point of view, the energy storage size proposed in this research varies over a wider range than those installed in existing buses. Most existing FC hybrid bus models utilise Li-ion batteries as the energy storage technology, with the exception of the WrightBus FC bus (Wrightbus, Ballymena, UK) used on the RV1 bus route in London. The capacities of the batteries used in existing FC buses are generally larger than proposed in this research. The reason for this is the lower power density of the Li-ion batteries [58]: more batteries need to be integrated to provide the high transient power outputs required. The SC used on the WrightBus (0.5 kWh) is significantly smaller than the SC capacity proposed in the degree of hybridisation for route 388. There are several reasons for this. First, route RV1 is a relatively flat route which was specifically selected for the FC bus demonstration; as a result, the power variations on route RV1, in terms of magnitude and frequency, are expected to be significantly smaller than for the same bus on route 388. Second, the FC installed on the WrightBus is significantly more powerful than the FC proposed for route 388, and, as shown earlier, this has the potential to reduce the size of the required energy storage system.
Conclusion
This research evaluated and investigated the control strategy for a FC/SC hybrid power system for city bus applications, building on a previously proposed stabilised FC output control strategy and degree of hybridisation identification strategy. Based on the limitations identified with the stabilised operating strategy, a FC variation strategy was applied that offers the facility to adjust the FC and boost converter output reference by monitoring the SC SoC. It was found that the model with the FC variation strategy not only eliminates the limitations of the initially proposed operating strategy, but also brings potential benefits for further optimising the identified degree of hybridisation. A power profile of a complete day of bus operation was used to test the control strategy and explore the viable range of FC and SC sizing. The system operated as expected in terms of managing the balance of power and the SC SoC throughout the bus journey. It can be concluded that the degree of hybridisation identification strategy can be used to assign an appropriate degree for any FC/SC hybrid bus system, and the inclusion of the FC variation strategy is an important feature that adds flexibility to the power system of the bus.
It was found that a wide range of degrees of hybridisation can fulfil the operating performance requirements. Reducing the size of the SC resulted in the need for a larger required FC power to compensate for the increased rate of change of the SC SoC. Additionally, increasing the value of the LTV resulted in a reduction in both the FC and SC size requirements; however, a greater value of the LTV significantly increased the frequency and magnitude of the variation in the FC power output. All of the parameters in the degree of hybridisation have been found to be interlinked, so the selection of the degree of hybridisation depends on the requirements of the bus designer. Three optimised degrees of hybridisation, namely minimum FC size, minimum SC size and minimum FC variation, have been proposed for route 388. The proposed degrees of hybridisation have been compared with those of existing FC buses, and it has been found that the FC can be significantly downsized from those used in commercial FC buses.
This research further improved the degree of hybridisation identification strategy by implementing a FC variation strategy. Although the degrees of hybridisation proposed here are tailored to the specific profile studied, the most important contribution of this research is the strategy to identify and explore the feasible degree of hybridisation options. The degree of hybridisation identification method can be applied to any other route. | 2020-01-02T03:06:52.778Z | 2019-12-19T00:00:00.000 | {
"year": 2019,
"sha1": "eac14a828287a134b98a22aeb61348dd3042803a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2624-8921/2/1/1/pdf?version=1577343517",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "276da90797286183c8d1013d119eb30acef6755d",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
54592190 | pes2o/s2orc | v3-fos-license | Asymmetric multifractal model for solar wind intermittent turbulence
We consider a nonuniform energy transfer rate for solar wind turbulence depending on the solar cycle activity. To achieve this purpose we determine the generalized dimensions and singularity spectra for experimental data of the solar wind measured in situ by the Advanced Composition Explorer spacecraft during solar maximum (2001) and minimum (2006) at 1 AU. By determining the asymmetric singularity spectra we confirm the multifractal nature of different states of the solar wind. Moreover, to explain this asymmetry we propose a generalization of the usual so-called p-model, which involves eddies of different sizes in the turbulent cascade. Naturally, this generalization takes into account two different scaling parameters for the sizes of eddies and one probability measure parameter describing how the energy is transferred to smaller eddies. We show that the proposed model properly describes the multifractality of the solar wind plasma.
Introduction
The solar wind is an example of a turbulent and intermittent astrophysical plasma (Burlaga, 1991a, 1992a,b; Marsch, 1991; Carbone, 1993; Marsch and Liu, 1993; Marsch and Tu, 1997; Sorriso-Valvo et al., 2001; Biskamp, 2003; Bruno et al., 2003). For this highly nonlinear system the energy at a given scale is not evenly distributed in space, and we can observe how fluctuating parameters affected by intermittency alternate between bursts of activity and quiescence. Therefore, based on Richardson's cascade and Kolmogorov's ideas (Kolmogorov, 1941, 1962), followed by Kraichnan (1965), several classes of models have been developed to describe the nonuniform distribution of energy in the turbulent flow (Lesieur, 1990; Borgas, 1992; Goldstein and Roberts, 1999), in particular the β-model (Frisch et al., 1978) and the random β-model (Benzi et al., 1984). Moreover, using multifractal models, e.g., the p-model (Meneveau and Sreenivasan, 1987) and the She-Leveque model (She and Leveque, 1994), we can look inside the complex nature of intermittent turbulence (Mandelbrot, 1989). Using generalized dimensions and singularity spectra allows a better description of the energy turbulence cascade and the degree of multifractality in the solar wind plasma (Meneveau and Sreenivasan, 1991). It is well known that the multifractal nature of the solar wind has been observed in the inner heliosphere (Marsch et al., 1996; Macek, 1998, 2002, 2003, 2006, 2007; Macek et al., 2005; Macek and Szczepaniak, 2008a) and in the outer heliosphere (Burlaga, 1991a,b,c, 2004; Burlaga et al., 1993, 2003), as well as at various phases of the solar cycle (Burlaga, 2001; Burlaga et al., 1993) and various heliographic latitudes (Horbury and Balogh, 1997). However, the multifractal singularity spectrum obtained for the solar wind data has an asymmetric shape and shows a substantial departure from the standard p-model (Burlaga, 1993; Macek, 2007; Macek and Szczepaniak, 2008a). The nature of this departure is still unexplained; therefore, the main aim of this work is the modeling and explanation of this asymmetry. This paper is organized as follows. In Sect. 2 we introduce the data and the methods used for the analysis. The generalization of the p-model is considered in Sect. 3. Sections 4 and 5 present the results and conclusions of our investigations.
Data and methods
Using Helios 2 data (Schwenn, 1990) we have demonstrated that intermittent pulses are stronger for asymmetric scaling and that a much better agreement with the data is obtained, especially for negative indices of the generalized dimensions (Macek and Szczepaniak, 2008a). In this paper we consider changes of the multifractality of the energy transfer rate in solar wind turbulence with the solar cycle activity. For this purpose we use two yearly samples (2001 and 2006) of the velocity measured in situ by the Advanced Composition Explorer (ACE). These intervals are representative of a broad range of solar wind conditions; in particular, we take into account both slow and fast wind streams and changes during the solar activity cycle. Our data, with a resolution of 64 s, were obtained at about 1 AU in the GSE system, near the Lagrangian point (L1). To these data we apply the multifractal formalism, which is one of the most adequate methods for describing the local scaling properties of the energy transfer rate in nonhomogeneous turbulence.
There are several techniques to evaluate the multifractality and to obtain the generalized dimensions (Hentschel and Procaccia, 1983) or multifractal spectra (Halsey et al., 1986). Some methods are based on the calculation of the scaling exponents of structure functions (Anselmet et al., 1984) and are related to the generalized dimensions D_q (Frisch, 1995; Tsang et al., 2005). It is also possible to obtain the multifractal spectrum directly from data (Chhabra and Jensen, 1989). Here, we construct the transfer rate of the energy flux as a multifractal measure and consider its scaling properties. Namely, to each ith eddy of size l at the nth cascade step (i = 1, ..., N = 2^n) we associate a probability measure p_i(l) = ε_i(l) / Σ_{j=1}^{N} ε_j(l), where ε_i(l) ∼ |u(x+l) − u(x)|^3 / l (Marsch et al., 1996). In Fig. 1 we show the multifractal measure obtained using N = 2^n, with n = 18, data points for (a) solar minimum (2006) and (b) solar maximum (2001), respectively. One can notice that the intermittent pulses are somewhat stronger for the data at solar maximum. This results in fatter tails of the probability distribution functions, as shown in Fig. 2 for solar maximum and minimum, with large deviations from the normal distribution (dashed lines).
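A minimal sketch of this measure construction is given below, assuming a one-dimensional velocity series u of length 2^n sampled at uniform spacing; the function name is illustrative.

```python
# Sketch: probability measure from third-power velocity increments.
import numpy as np

def probability_measure(u: np.ndarray, l: int) -> np.ndarray:
    """p_i(l) = eps_i(l) / sum_j eps_j(l), with eps_i ~ |u(x+l) - u(x)|**3 / l."""
    eps = np.abs(u[l:] - u[:-l]) ** 3 / l
    return eps / eps.sum()
```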
In the next step we identify the inertial range, η ≪ l ≪ L, where η is the dissipation scale and L is the size of the whole system. Calculation of this range is essential, because it provides information as to whether the turbulence is fully developed and the energy cascade is actually present (Sorriso-Valvo et al., 2007). This may indicate that we can in fact have fully developed turbulence during solar maximum. One can therefore expect that in this case the distribution of energy between cascading eddies is more inhomogeneous, and consequently the intermittent pulses are stronger for solar maximum (Fig. 1). We identify the inertial range by considering the scale dependence of the usual third- and fourth-order scaling exponents ξ(3) and ξ(4) (Carbone, 1994; Horbury et al., 1997; Horbury and Balogh, 1997). The results based on the experimental values are presented in Fig. 3. We see that the scaling range is much clearer and wider for solar maximum.
Next, we analyse the log-log plots of [Σ_{i=1}^{N} p_i^q(l)]^{1/(q−1)} versus l for different steps (n) of the cascade. The slopes of these curves correspond to the generalized dimensions D_q (Meneveau and Sreenivasan, 1991). The multifractal measure µ = ε/⟨ε⟩_L on the unit interval for several steps of the construction of the generalized p-model is presented in Fig. 4. As usual, the generalized dimensions are defined by D_q = lim_{l→0} (1/(q−1)) log[Σ_{i=1}^{N} p_i^q(l)] / log l. To obtain the multifractal spectra we use the method described by Chhabra and Jensen (1989). Finally, we verify the multifractal spectra f(α) (Halsey et al., 1986; Stanley and Meakin, 1988) obtained from D_q using the Legendre transform (Ott, 1993; Macek and Szczepaniak, 2008b).
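The slope estimate can be sketched as follows for a normalized measure p defined on 2^n dyadic bins; this is an illustration of the definition above (valid for q ≠ 1), not the authors' code.

```python
# Sketch: D_q from the slope of (q-1)^{-1} log sum_i p_i(l)**q versus log l.
import numpy as np

def generalized_dimension(p: np.ndarray, q: float, n: int) -> float:
    """Estimate D_q (q != 1) for a normalized measure p over 2**n dyadic bins.
    Assumes strictly positive box probabilities when q < 0."""
    log_l, y = [], []
    for k in range(1, n):                          # successive cascade steps
        boxes = p.reshape(2 ** k, -1).sum(axis=1)  # box probabilities at l = 2**-k
        log_l.append(np.log(2.0 ** -k))
        y.append(np.log((boxes ** q).sum()) / (q - 1.0))
    slope, _ = np.polyfit(log_l, y, 1)
    return slope
```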
Asymmetric model
A generalized two-scale Cantor set, which combines an asymmetric and a weighted Cantor set, is the theoretical ground for the cascade model (Halsey et al., 1986; Ott, 1993). In general, at each step of the cascade construction we use two different scales, l_1 and l_2, for the segments generated at each level, and two, in general different, weights, p and 1−p. For l_1 = l_2 = 1/2 one recovers the standard p-model (Meneveau and Sreenivasan, 1987), resulting in a symmetric shape of the multifractal singularity spectrum. The direct relation between q and D_q for the proposed model is obtained from the following transcendental equation: p^q l_1^{−(q−1)D_q} + (1−p)^q l_2^{−(q−1)D_q} = 1. We also consider the degree of multifractality Δ ≡ α_max − α_min, which is given by Halsey et al. (1986) as Δ = |log p / log l_1 − log(1−p) / log l_2|, and the degree of asymmetry A ≡ (α_0 − α_min)/(α_max − α_0), where the singularity spectrum has its maximum, f(α_0) = 1 (Ott, 1993; Macek and Szczepaniak, 2008b).
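For a numerical illustration, the transcendental equation above can be solved for D_q by root finding, and the degree of multifractality follows from the extreme singularity strengths. The sketch below assumes standard parameter ranges (0 < p < 1, 0 < l_1, l_2 < 1, l_1 + l_2 ≤ 1) and a bracketing interval wide enough for typical solar wind fits; it is not the authors' implementation.

```python
# Sketch: D_q of the generalized two-scale Cantor set by root finding.
from math import log
from scipy.optimize import brentq

def dq_two_scale(q: float, p: float, l1: float, l2: float) -> float:
    """Solve p**q * l1**(-(q-1)Dq) + (1-p)**q * l2**(-(q-1)Dq) = 1 for Dq (q != 1)."""
    f = lambda d: (p ** q * l1 ** (-(q - 1.0) * d)
                   + (1 - p) ** q * l2 ** (-(q - 1.0) * d) - 1.0)
    return brentq(f, -10.0, 10.0)  # bracket assumed wide enough for typical fits

def degree_of_multifractality(p: float, l1: float, l2: float) -> float:
    """Delta = alpha_max - alpha_min for the two-scale weighted Cantor measure."""
    a1, a2 = log(p) / log(l1), log(1 - p) / log(l2)
    return abs(a2 - a1)

# For l1 = l2 = 0.5 the solver reproduces the usual one-scale p-model.
```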
Results
The results for the generalized dimensions D_q as a function of q, calculated from Eq. (2) using the ACE data and compared with those obtained from Eq. (4), for solar wind turbulence at 1 AU during solar minimum (2006) and solar maximum (2001), are presented in Fig. 5a and b, correspondingly (cf. Macek and Szczepaniak, 2008a, Fig. 3). The related singularity spectra f(α) as a function of the singularity strength α are depicted in the corresponding Fig. 6a and b (cf. Macek and Szczepaniak, 2008b, Fig. 7). In particular, in agreement with other studies, we confirm the universal shape of the multifractal spectrum as noticed, e.g., by Burlaga (2001).
Since the Cantor set is sensitive to initial conditions, the multifractal spectrum for intermittent turbulence can be naturally related to the Lyapunov spectrum, as discussed by Chian et al. (2006).
We have also calculated the degree of multifractality given in Eq. (5), which is equal to 1.75 for solar maximum and 1.62 for solar minimum. Hence we observe that the solar wind is multifractal during the whole solar cycle. It is worth noting that the shape of the multifractal singularity spectrum is rather asymmetric, which cannot be explained by the usual p-model involving only a one-scale Cantor set. The actual degree of asymmetry A, defined in Eq. (6), is about 1.3 for both solar minimum and maximum, as summarized in Table 1.
Conclusions
We have studied the inhomogeneous rate of transfer of the energy flux, indicating the multifractal and intermittent behaviour of solar wind turbulence in the inner heliosphere. In particular, we have demonstrated that for the model with two different scaling parameters a much better agreement with the real data is obtained, especially for q < 0. By investigating the ACE data we have shown that as the solar activity increases the solar wind becomes somewhat more multifractal and more asymmetric. Admittedly, it seems that the degree of asymmetry of the singularity spectrum for one-year samples is rather weakly correlated with the phase of the solar activity. The dependence for slow and fast streams is thoroughly studied in another paper by Macek and Szczepaniak (2008b).
Basically, the generalized dimensions for the solar wind are consistent with the generalized p-model for both positive and negative q, but with different scaling parameters for the sizes of eddies, while the usual p-model can only reproduce the spectrum for q ≥ 0. We therefore propose this cascade model, describing intermittent energy transfer, for the analysis of turbulence in various environments.
Fig. 4. The multifractal measure µ = ε/⟨ε⟩_L on the unit interval for (a) the first, (b) fifth and (c) tenth steps of the construction of the generalized p-model.
Fig. 5. The generalized dimensions D_q for the energy transfer rate in solar wind turbulence at (a) solar minimum (2006) and (b) solar maximum (2001), respectively.
Table 1. Degree of multifractality Δ and asymmetry A. | 2018-12-04T16:48:05.381Z | 2008-07-30T00:00:00.000 | {
"year": 2008,
"sha1": "4d68edeb402c89748cf5705b364962306be3e1e8",
"oa_license": "CCBY",
"oa_url": "https://npg.copernicus.org/articles/15/615/2008/npg-15-615-2008.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "4d68edeb402c89748cf5705b364962306be3e1e8",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
259472069 | pes2o/s2orc | v3-fos-license | A comparative study of the effectiveness of Pfizer-BioNTech (BNT162b2), Astra Zeneca (ChAdOx1nCoV-19) and Sinopharm (BBIBP-CorV) vaccines in eliciting Humoral immunity in a sample of vaccinated population from Iraq.
Background: To tackle the COVID-19 pandemic and the emerging variants, researchers around the globe have investigated many vaccine candidates from different manufacturers; vaccine development is not an easy task, yet it is a top priority for restoring normalcy and a step toward achieving the desired herd immunity threshold. Patients and methods: In this study we assessed and compared the levels of IgG anti-RBD neutralizing antibodies elicited by each vaccine against SARS-CoV-2 infection in 123 vaccinated subjects, using an isotype- and species-free competitive blocking ELISA. Blood samples were taken from vaccinated individuals 1 and 8 months after the second dose of the vaccines. Results: The findings of the current study revealed that two-dose vaccination can elicit robust humoral neutralizing immunity at 1 month that remains detectable for as long as 8 months, with different sustained levels among the three studied vaccines. Regarding the serum level of neutralizing IgG antibodies, the Pfizer group revealed the highest level compared with the AstraZeneca and Sinopharm groups (P<0.05); the Sinopharm group showed a trend toward higher levels of neutralizing antibodies than AstraZeneca, but without reaching statistical significance (P>0.05). Additionally, the serum level of neutralizing IgG antibodies, which represents the humoral immunity to SARS-CoV-2, was shown to be far higher in the 1-month than in the 8-month post-second-dose vaccination groups (P<0.0001). Conclusion: Altogether, it is concluded that the Pfizer vaccine elicited the highest and most durable levels of neutralizing anti-RBD IgG antibodies, followed by the Sinopharm and AstraZeneca vaccines.
subunit vaccines against SARS-CoV-2 wild type and its variants 4,5 . However, RBD-based subunit vaccines may face some serious challenges, mostly arising from their relatively low immunogenicity; they must therefore be combined with appropriate adjuvants or optimized with respect to protein sequences, fragment lengths, and immunization schedules 6 .
As of Feb 3, 2021, the world had shown an impressive capacity for an accelerated COVID-19 vaccine development process: many COVID-19 vaccine candidates had been authorized or approved for human use and others were in experimental phases of clinical testing, but only five vaccines (those developed by AstraZeneca in partnership with Oxford University, BioNTech in partnership with Pfizer, Gamaleya, Moderna, and Sinopharm in partnership with the Beijing Institute) had been authorized by stringent regulatory agencies or WHO 7 .
Among the approved vaccines, different platforms have been implemented: inactivated virus, viral vectors, and mRNA-based vaccines, which focus the immune response against only the key viral proteins of interest. Generally, all of them are able to stimulate an immune response and are efficacious against SARS-CoV-2, albeit at varying levels 8 . Although vaccine effectiveness against SARS-CoV-2 has been astonishing, the vaccines are far from perfect: immunity wanes with elapsed time and with viral antigenic variation, so booster immunizations are clearly required for maintenance of effectiveness over time 9 .
Vaccines induce both adaptive humoral and cellular immune responses. Most of the currently accepted correlates of protection are based on neutralizing antibody responses; however, if there is no detectable antibody response after vaccination, the vaccines may still offer protection through cellular immunity, since cellular responses and antibody responses often correlate to some extent 10-12 .
Three vaccines were introduced for use in Iraq, namely Pfizer, AstraZeneca, and Sinopharm. These three vaccines were introduced after being tested in controlled, randomized, double-blind clinical trials; however, none of these trials was conducted in Iraq. It is well known that the immune response to vaccines may be affected by race, environment, age, sex, underlying health conditions, and the level of exposure of the population to the virus 13,34 . Hence, it was important to set up a study investigating the neutralizing humoral immune response in a sample of Iraqi individuals vaccinated with these vaccines and to test the longevity of the immune response for 8 months after the second dose of the vaccine.
Study design and subjects
The study is a cross-sectional study of 6 groups of vaccinated healthy volunteers who received full doses of vaccines in Baghdad province; each group consists of 30 individuals. To assess the effect of age on the immunological response to the studied vaccines, each group was equally divided into two halves: 15 individuals younger than 60 years and 15 older than 60 years. Both sexes and different geographical residences were included, without selection bias. The study was conducted between 15 December 2021 and 5 July 2022. The included groups of the study population were as follows: 1 month and 8 months after dual vaccination with Pfizer, 1 month and 8 months after dual vaccination with Sinopharm, and 1 month and 8 months after dual vaccination with AstraZeneca. Accordingly, the target of the current study was to attain a sample size of 180 individuals. The exclusion criteria were: a history of symptomatic infection, being on immunomodulating or immunosuppressive therapy, or having any kind of immunosuppression-related disease.
The following data were recorded for each participant by oral questionnaire: the name of the vaccinated healthy volunteer, age, sex, type of vaccine received, number of vaccine doses received, the time since the second dose of each vaccine (determined from the vaccination card of each individual), comorbidities such as diabetes, hypertension, cardiovascular diseases and others, a negative PCR result if performed, absence of COVID-19 signs and symptoms, no contact with an infected individual (to assure healthy status), and not having an immunosuppressive disease or taking immunosuppressive or immunomodulating drugs.
These data were checked against the selection criteria at the time of sample collection; the volunteers were selected from Baghdad with the help of the Al-Kadhymia regional vaccination center.
Limitations of the study
1- Discontinuity of vaccine supply, particularly of the AstraZeneca vaccine.
6- Uncertainty of health status and the possibility of asymptomatic COVID-19 infection.
Sample collection
Up to 3 ml of whole blood without anticoagulant was drawn into 10 ml serum separator tubes for serum isolation, to determine the level of anti-RBD neutralizing antibodies with an indirect competitive inhibitory ELISA kit. The blood was allowed to clot at room temperature for about two hours. It was then centrifuged for 10 min at 1000 g, and the resulting serum was isolated and stored in aliquots at −20°C for later use in ELISA.
Isotype-free competitive ELISA for the detection and quantification of SARS-CoV-2 neutralizing antibodies in the serum of vaccinated healthy individuals.
This kit uses a competitive ELISA to quantitatively detect anti-SARS-CoV-2 neutralizing antibodies in serum. The micro-ELISA plate provided in the kit (SARS-CoV-2 Neutralization Antibody ELISA Kit, Elabscience, USA, Cat. No. E-EL-E608) is precoated with recombinant human ACE2. During the reaction, the SARS-CoV-2 neutralizing antibodies in the pretreated samples or controls compete with a fixed amount of human ACE2 on the solid phase for binding sites on the horseradish peroxidase (HRP)-conjugated recombinant SARS-CoV-2 RBD fragment (HRP-RBD). After incubation at 37°C, the unbound HRP-RBD, as well as any HRP-RBD bound to non-neutralizing antibody, is captured on the plate, forming the ACE2-RBD-HRP complex, while the neutralizing antibody-HRP-RBD complexes remain in the supernatant and are removed during washing. A TMB substrate solution is then added to each well. The enzyme-substrate reaction is terminated by the addition of stop solution, and the color change is measured spectrophotometrically at a wavelength of 450 ± 2 nm. The resulting inhibition ratio indicates the level of SARS-CoV-2 neutralizing antibodies in the tested samples. The concentration of SARS-CoV-2 neutralizing antibodies in the samples is then determined by comparing the OD of the samples to the kit standard curve.
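The kit's read-out arithmetic can be illustrated as below. The inhibition formula shown, 1 − OD_sample/OD_negative_control, is the form commonly used by competitive SARS-CoV-2 surrogate neutralization assays, and the linear interpolation stands in for the kit's standard-curve model; the exact calculation prescribed by the manufacturer's insert may differ.

```python
# Sketch of the competitive-ELISA read-out (assumed common formulas, not the
# manufacturer's exact protocol).
import numpy as np

def inhibition_percent(od_sample: float, od_negative_control: float) -> float:
    """Signal inhibition: neutralizing antibodies block HRP-RBD, lowering OD."""
    return (1.0 - od_sample / od_negative_control) * 100.0

def concentration_from_standards(od: float, std_od: np.ndarray,
                                 std_conc: np.ndarray) -> float:
    """Interpolate a sample OD on the kit standard curve (OD falls as the
    neutralizing-antibody concentration rises in a competitive assay)."""
    order = np.argsort(std_od)            # np.interp needs ascending x values
    return float(np.interp(od, std_od[order], std_conc[order]))
```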
Ethical clearance:
The study was approved by the Institutional Review Board at al Nahrain University, College of medicine under number 20211047. Informed consent was obtained from all subjects to participate in the study.
Characteristics of the participants in the study
To compare the effectiveness of the humoral immune responses elicited by the COVID-19 vaccines used in Iraq, namely Pfizer, AstraZeneca and Sinopharm, 123 healthy, presumably non-infected vaccinated volunteers were assessed and classified into 6 main groups; each group was subdivided into two subgroups according to vaccine type, time since the second vaccine dose, and age.
Vaccine-induced humoral immunity with age, sex and comorbidity
It was found that there was no association between the age of vaccinated participants and the type of vaccine received (P>0.05), as shown in table 1. In addition, the sex of participants was not associated with the type of vaccine taken (P>0.05), as shown in table 2. Notably, the concentration of neutralizing IgG antibodies (µg/ml) was borderline higher in the younger age group (≤60 years) than in the older age group (>60 years) (P=0.053), as shown in table 3 and figures 1 and 2. Figure 1: A box-plot shows the median, upper and lower quartiles of the neutralizing antibody concentration in age group ≤60 versus >60.
Figure 2: The mean ± 2SE values of neutralizing antibody concentration in age group ≤60 versus >60 years.
Regarding sex, the serum neutralizing antibody concentration was not significantly different between male and female groups (P>0.05), as shown in table 4 and figure 3. Figure 3: A box-plot shows the median, upper and lower quartiles of the neutralizing antibody concentration in males versus females.
As expected, the group of participants with comorbidities had a higher median age than those without comorbidities (P<0.05). The study findings did not show any significant difference in the serum level of neutralizing IgG antibodies between participants with and without comorbidities (P>0.05), as shown in table 5.
Vaccine-induced humoral immunity at different time intervals
Additionally, the serum level of neutralizing IgG antibodies, which represents the humoral immunity to SARS-CoV-2, was far higher in the 1-month than in the 8-month post-second-dose vaccination groups (P<0.0001), as shown in table 6 and figures 4 and 5.
Vaccine-induced humoral neutralizing immunity considering the vaccine type
Regarding the serum level of neutralizing IgG antibodies, the Pfizer group revealed the highest level compared with the AstraZeneca and Sinopharm groups (P<0.05); the Sinopharm group showed a trend toward higher levels of neutralizing antibodies than AstraZeneca, but without reaching statistical significance (P>0.05), as shown in table 7 and figures 6 and 7.
Vaccine-induced humoral immunity considering study group
Using the Kruskal-Wallis test for the IgG anti-RBD neutralizing antibody concentrations (µg/ml) at 1 month and 8 months post-vaccination, the median levels were shown to be significantly different among the study groups (P<0.01). It was found that Pfizer, then AstraZeneca, then Sinopharm induced the highest median levels of neutralizing antibodies at 1 month post-vaccination (P<0.05); by contrast, at 8 months post-vaccination, Sinopharm, then Pfizer, then AstraZeneca induced the highest levels of neutralizing antibodies (P<0.05).
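As an illustration of this analysis, the sketch below runs a Kruskal-Wallis test followed by uncorrected pairwise Mann-Whitney comparisons with SciPy; the group arrays are placeholders for the measured concentrations, and the post-hoc scheme is an assumption, since the paper does not state which pairwise procedure was used.

```python
# Sketch of the nonparametric group comparison (placeholder data, assumed
# uncorrected pairwise follow-up).
from scipy.stats import kruskal, mannwhitneyu

def compare_groups(groups: dict) -> None:
    names, data = list(groups), list(groups.values())
    h, p = kruskal(*data)                       # overall test across all groups
    print(f"Kruskal-Wallis: H = {h:.2f}, P = {p:.4f}")
    if p < 0.05:
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                _, pij = mannwhitneyu(data[i], data[j])
                print(f"{names[i]} vs {names[j]}: P = {pij:.4f}")

# e.g. compare_groups({"Pfizer": pf, "AstraZeneca": az, "Sinopharm": sp})
```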
Altogether, the current findings reveal that Pfizer, then AstraZeneca, then Sinopharm were the best vaccines for inducing high levels of neutralizing antibodies shortly after vaccination; nevertheless, AstraZeneca fell short in preserving a good level of neutralizing antibodies after 8 months, while the vaccines found to preserve the highest levels of neutralizing antibodies by month 8 were Sinopharm, then Pfizer, as shown in table 8 and figures 8 and 9.
Discussion:
In contrast to the disparity in COVID-19 clinical outcomes based on sex as a biological variable, with females tending to experience less severe disease than males 14 , and in agreement with other studies, our findings showed that COVID-19 vaccine responses and efficacy rates were almost comparable between the two sexes 15 .
Age significantly determines the clinical features and prognosis of COVID-19, which are worse in patients older than 60 years, revealing that age is not just a number; hence, the concept of immune senescence is particularly relevant within the context of the declared pandemic (16) . Several studies have provided evidence that antibody level and antibody quality are both diminished in older adults compared with younger adults, but this is not true for all vaccines; vaccines that are more effective in older adults utilize several strategies, including: 1) altering the administration route, 2) increasing the vaccine dose, and 3) using vaccine adjuvants 17 . Consistent with this, our findings showed that the vaccination response was not significantly associated with age.
A study conducted in Italy focused on the tremendous impact of comorbidities, particularly in the elderly, since older adults have higher rates of underlying health conditions (18) , which lead to decreased vaccine immunogenicity and, in particular, a poor antibody response. However, the current study did not show a clear association between comorbidities and the vaccine-induced humoral response; this might be attributed to the vaccines trialed in this study being tailored particularly for the elderly, or the sample size of this study may not have been sufficient to detect a divergence in response to vaccines between older and younger subjects.
Dual vaccination with Pfizer resulted in an observed maximum neutralizing antibody response at one month followed by a sharp decline by month 8; Evangelos et al. found sustained humoral immunity with a statistically significant decline thereafter up to 9 months (19) . For vaccination with AstraZeneca, there was an initially much lower specific nAb response at month 1 than with Pfizer, but this response was more durable and persisted at month 8. Our findings indicated that the Sinopharm vaccine at 1 month post-vaccination elicited moderate antibody levels, compared with the very high levels following two doses of Pfizer, which then decayed gradually with time.
The three vaccines studied behaved quite differently in some aspects and similarly in others. All of them revealed a clear decline in humoral immunity over the 8 months post-vaccination, in harmony with several previous studies [20][21][22] . This is explained by the fact that the Coronaviridae family tends to induce short- to mid-term memory B cells, and SARS-CoV-2 is not an exception. As known, humoral immunity is the only arm considered protective immunity (23) . Nevertheless, the current study found that the Pfizer vaccine elicits nAbs more efficiently than AstraZeneca and Sinopharm do. This can be attributed to the novel platform design of this vaccine, which helps translate the mRNA of the RBD domain in a robust and quick manner 24 . AstraZeneca and Sinopharm performed similarly well in eliciting nAbs and generated adequate levels of nAbs. In fact, Pfizer and AstraZeneca elicited nAbs at quite close levels at both the 1- and 8-month intervals, while Sinopharm lagged behind in eliciting nAbs at the 1-month interval but compensated for that shortage at the 8-month interval, where its nAb level became comparable to those of Pfizer and AstraZeneca. This indicates several notions. First, the Pfizer and AstraZeneca vaccines potently induce humoral immunity within weeks of the second dose, while Sinopharm lags behind in this competition, indicating a slower antibody production process. Second, the rate of decline of the nAb level for Sinopharm was shown to be significantly slower than for the Pfizer and AstraZeneca vaccines. This might be explained by comparing vaccine designs and platforms: a potential advantage of inactivated vaccines over other vaccine types is that they comprise all viral structural proteins, which may induce a broader spectrum of immunity in addition to nAbs against the RBD (25) ; engaging more epitopes, especially conserved epitopes in proteins other than the spike, typically makes the vaccine a more durable trigger. This was seen in other studies as well (26,27) , while other studies contradicted this observation 28,29 . Taken together, we observed that better-sustained levels of the neutralizing response at month 8 might be elicited with Sinopharm than with Pfizer and AstraZeneca. As such, the neutralizing humoral immunity was shown to be significantly different among the study groups.
It is well known that cellular immunity to coronaviruses does not fade easily and may persist for decades (30) ; a question might then be raised as to why the humoral immunity is not as enduring. The answer might lie in the resurgence of variants of concern, which show some changes in the epitopes recognized by nAbs but not quite the same variations in the epitopes recognized by cell-mediated immunity.
Conclusions and Recommendations:
The societal value of safe and effective COVID-19 vaccines is enormous. We can conclude from the current study that the Pfizer, AstraZeneca and Sinopharm vaccines are quite effective in eliciting humoral immunity, which was robustly activated against SARS-CoV-2 by two doses as early as 1 month. The neutralizing humoral immune response induced by the studied vaccines was shown to last up to 8 months after the second dose, but at a significantly reduced level.
The level of immune response to the vaccines studied did not correlate with the age, sex or comorbidities of the vaccinated individuals.
Vaccine design platforms seem to play a crucial role in vaccine effectiveness and in how long this effectiveness can be sustained.
We recommend that COVID-19 vaccines eliciting a high immune response be encouraged in Iraqi vaccination campaigns, and further studies are recommended for longer follow-up of vaccine effectiveness and of protection against the SARS-CoV-2 variants of concern in Iraq.
Further studies are recommended for the detection and quantification of IgA neutralizing antibodies in vaccinated Iraqi subjects. It is also recommended to conduct studies monitoring COVID-19 vaccine effectiveness in individuals younger than 18 years, including children.
There is no conflict of interest | 2023-07-11T00:59:30.892Z | 2023-06-16T00:00:00.000 | {
"year": 2023,
"sha1": "bbee582d437dd6d299eca6534a4a3adfc9816817",
"oa_license": "CCBY",
"oa_url": "https://www.banglajol.info/index.php/BJMS/article/download/65323/44848",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5506132087814e46656c366ad67f7bbc1105985b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
106408601 | pes2o/s2orc | v3-fos-license | Immunotherapy in gliomas: Are we reckoning without the innate immunity?
Innate immunity plays a central role in neoplasms, including those affecting the central nervous system (CNS). Nowadays, tumor classification, especially that of gliomas, is based on molecular features such as mutations in isocitrate dehydrogenase (IDH) genes and the presence of the 1p/19q co-deletion. Therapy, in most cases, is based on surgery, radiotherapy, and pharmacological treatment with chemotherapeutic agents such as temozolomide. However, the results of these treatments, after many decades, are not completely satisfactory. There is a class of drugs, used to treat cancer, which modulates the immune response; in this class, immune checkpoint inhibitors and vaccines play a prominent role. These drugs have been evaluated for the treatment of gliomas, but they exhibited poor outcomes in clinical trials. Those scarce results could be due to the response of tumor-associated macrophages, which creates imbalances between innate and adaptive immunity, and to changes in blood-brain barrier properties. Here, we briefly review the current literature on this topic, focusing on the possible role of innate immunity in the failure of immunotherapies against brain tumors.
Introduction
Gliomas are one of the categories that underwent the most profound changes in the World Health Organization (WHO) classification of brain tumors in 2016. In the past, mutations in many genes were described as related to the development of glial neoplasms. 1,2 However, the authors of the aforementioned classification essentially focused on isocitrate dehydrogenase 1 (IDH-1) or IDH-2 gene mutations, ordering the various entities mainly for prognostic reasons. The practical benefits from a therapeutic point of view have been imperceptible so far, since the prognosis, primarily for high-grade forms such as glioblastoma, remains very poor. Furthermore, the diagnostic procedures applied to individual entities can be tricky or too expensive, and therefore not applicable throughout the world. In recent years, various immunotherapy strategies have been proposed for central nervous system (CNS) tumors, but the results seem to be quite disappointing so far.
The aim of this work is to briefly discuss the role of innate immunity as a possible cause of the failure of immunotherapy against brain tumors.
Methods and results
Pertinent studies published from January 2004 to September 2018 were selected by means of a MEDLINE search, accessed via PubMed database scanning. Search keys like "Innate immunity and brain tumors," "immunotherapy and brain tumors," and "Macrophages and brain tumors" gave an output of 2578 items. The articles cited here were chosen based on their overall significance, according to two criteria: first, recency, for the ability to better represent the state of the art on the subjects involved in the studies; and second, exemplary value with respect to the subjects dealt with, such as innate immunity or brain tumors.
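A search of this kind can be reproduced programmatically. The sketch below uses Biopython's Entrez utilities with one of the stated search keys; the e-mail address is a placeholder required by NCBI, and the exact hit count will differ from the 2578 items reported, since the database content changes over time.

```python
# Sketch: reproducing the PubMed query with Biopython's Entrez module.
from Bio import Entrez

Entrez.email = "your.name@example.org"   # placeholder; required by NCBI
handle = Entrez.esearch(db="pubmed",
                        term='"innate immunity" AND "brain tumors"',
                        datetype="pdat", mindate="2004/01", maxdate="2018/09",
                        retmax=100)
record = Entrez.read(handle)
handle.close()
print(record["Count"], "hits; first IDs:", record["IdList"][:5])
```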
The main messages we can summarize from the literature review are that immunotherapy cannot yet be considered a reliable tool for therapy against brain tumors, and that innate immunity could exert a critical influence on the real efficacy of this kind of treatment.
Discussion
The current classification of diffuse gliomas is based on IDH gene mutations and the 1p/19q co-deletion. Generally, glioblastomas, oligodendrogliomas, and astrocytomas are treated by combining radiotherapy and chemotherapy, making the approach more effective; differentiated protocols are applied according to IDH mutation status. 3 The currently used immunotherapeutic tools against brain cancers are based on immune checkpoint inhibitors (ICIs) and vaccine-mediated immunization. ICIs consist of monoclonal antibodies that neutralize immunosuppressive signaling and enhance immune responses against tumor cells by targeting costimulatory and inhibitory molecules, which can regulate the activation and effector functions of T lymphocytes (Figure 1). Under physiological conditions, those regulatory circuits are essential for self-tolerance, but in many cases they may be coopted in malignancies. ICIs such as pembrolizumab and nivolumab are anti-programmed cell death protein-1 (PD-1); durvalumab, atezolizumab, and avelumab are anti-programmed cell death ligand-1 (PD-L1); and ipilimumab is anti-cytotoxic T lymphocyte-associated protein 4 (CTLA-4). All these preparations have shown moderate efficacy in clinical trials.
However, in a Phase 3 study focusing on recurrent glioblastoma, treatment with nivolumab failed to increase overall survival compared with bevacizumab, which targets vascular endothelial growth factor A (VEGF-A). Furthermore, in patients with recurrent high-grade gliomas, salvage therapy with nivolumab or pembrolizumab did not significantly improve survival. 4 Two explanations can be advanced: the first is that glioblastomas do not always contain a sufficient number of PD-1 receptor-expressing cells 5 , and the second is that the innate immune cells populating the neoplasm could play an active role (Figure 2(a) and (b)).
Ipilimumab and tremelimumab are CTLA-4-targeting monoclonal antibodies currently being tested in glioblastoma immunotherapy. In a clinical trial, the concomitant use of ipilimumab and bevacizumab in patients with malignant glioma culminated in a partial radiographic response in 31% of cases. Tremelimumab, in combination with durvalumab (AstraZeneca), is under investigation as a combined treatment against a variety of solid tumors, including recurrent glioblastoma (NCT02794883). 5 These unfavorable data for ICI-based immunotherapy could be explained by already known mechanisms of resistance. VEGFs can be produced by glioma tumor cells, and also by polymorphonuclear neutrophils, macrophages, and the endothelium during angiogenesis (Figure 2(c)), and this can result in the induction of apoptosis of CD8 T-effector cells that enter the tumor tissue. 5 Interestingly, the number of infiltrating neutrophils correlates with glioma grade and with acquired resistance to anti-VEGF therapy in glioblastoma multiforme (GBM). 6 M2-polarized macrophages have a higher angiogenic potential than M1. Tumor-associated macrophages (TAMs) are, as a rule, alternatively polarized toward an M2 state. Both M1 and M2 macrophages can produce VEGF, and IL-10 secretion and hypoxia, present in the high-grade glioma microenvironment, enhance it. 7 Even mast cells are powerful producers of IL-10 and angiogenic factors 8 , with a conditioning role similar to that of the macrophage phenotype. 9 FasL and PD-L1 are expressed on the surface of both glioma cells and TAMs, mediating T cell inhibition and apoptosis (Figure 2(a)). Both tumor-infiltrating macrophages and microglia are reported to express high levels of PD-L1 in GBMs. This implies that an important fraction of the administered immunotherapeutic antibodies might target the macrophage population rather than tumor cells (Figure 3(a)); microglia account for 50% of the FasL-expressing cells in gliomas and may be considered a major cause of the induced apoptosis of lymphocytes 5 (Figure 2(a)). In astrocytic neoplasms, the percentage of TAMs increases in parallel with tumor grade and tumor mass, up to 70% of the tumor mass, as can be observed in glioblastoma. 5 The state of macrophage polarization can move from M1 to M2 as the tumor grade increases. 7 It is probable that the failure of ICI therapy in higher-grade glial tumors could be due to a direct action of macrophages against the lymphocytic population recruited to kill neoplastic cells.
Immunization strategies with tumor-associated or tumor-specific antigens can increase the immune response against the tumor, and this has also been explored in gliomas. TAMs and microglia can support tumor progression by taking up the injected antigens or those released by glioblastoma tumor cells (Figure 3(b)). Indeed, TAMs, by producing CCL2, are also able to recruit lymphocytes with immunosuppressive activity, such as T reg cells, or they directly inhibit tumoricidal T cells by means of the receptors and ligands usually targeted by ICIs 5 (Figures 2(b) and 3(b)). Figure 2. Direct and indirect action of TAMs and hypoxia-driven immunosuppressive dynamics in the glioma microenvironment: (a) cytotoxic lymphocytes (CTL) can be driven to apoptosis through the interaction of FasL and FasR expressed on TAMs and CTL, respectively; (b) alternatively polarized TAMs, by secreting chemokines such as CCL2, are able to recruit T reg cells into the tumor microenvironment, and these suppressor T cells inhibit CTL; and (c) hypoxia can induce several cell types to secrete vascular endothelial growth factor (VEGF), among which are tumor cells (TC), tumor-associated macrophages (TAM), and mast cells (MC). VEGF, in turn, can induce the expression of membrane-anchored Fas ligand (FasL) on the vascular endothelium during angiogenesis. The interaction between FasL and the Fas receptor (FasR), or CD95, drags CTL toward the apoptotic program. Some of the mechanisms that make ICI treatments ineffective could also be responsible for vaccination failure because, in any case, cytotoxic T lymphocytes are the terminal effectors.
In conclusion, the cells of innate immunity, such as macrophages, mast cells, and neutrophils, can negatively interfere with the action of ICIs or of lymphocytes immunized against tumor antigens. This is the reason why macrophage re-education might be an interesting strategy to support ICI or vaccine therapies. It must also be considered that future therapeutic interventions targeting mast cell activity could hypothetically provide another tool in this scenario. 5,9,10 Generally speaking, a possible future direction for research in the therapy of brain tumors could be the molecular and immunological targeting of the innate immune system. | 2019-04-11T13:03:13.721Z | 2019-04-01T00:00:00.000 | {
"year": 2019,
"sha1": "6f39efea9140e20e4e7bd8faac874a0ad4d381cd",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/2058738419843378",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6f39efea9140e20e4e7bd8faac874a0ad4d381cd",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
10878802 | pes2o/s2orc | v3-fos-license | Molecular Analysis of the Clavulanic Acid Regulatory Gene Isolated from an Iranian Strain of Streptomyces Clavuligerus , PTCC 1709.
Objective: The clavulanic acid regulatory gene (claR) lies within the clavulanic acid biosynthetic gene cluster and encodes ClaR, a putative regulator of the late steps of clavulanic acid biosynthesis. The aim of this research is the molecular cloning of claR isolated from an Iranian strain of Streptomyces clavuligerus (S. clavuligerus). Materials and Methods: In this experimental study, two different strains of S. clavuligerus were used (PTCC 1705 and DSM 738); no claR sequence record exists for strain PTCC 1705 in any of the three main gene banks. The specifically designed primers carried a few base modifications to introduce the recognition sites of BamHI and ClaI. The claR gene was amplified by polymerase chain reaction (PCR) using DNA isolated from S. clavuligerus PTCC 1705. Nested PCR, PCR-restriction fragment length polymorphism (PCR-RFLP), and sequencing were used for molecular analysis of the claR gene. The confirmed claR was subjected to double digestion with BamHI and ClaI, ligated into a pBluescript (pBs) vector, and transformed into E. coli. Results: The entire sequence of the isolated claR (Iranian strain) was identified. The presence of the recombinant vector in the transformed colonies was confirmed by colony PCR. The correct structure of the recombinant vector, isolated from the transformed E. coli, was confirmed using gel electrophoresis, PCR, and double digestion with restriction enzymes. Conclusion: The constructed recombinant cassette, named pZSclaR, can be regarded as an appropriate tool for site-directed mutagenesis and sub-cloning. claR has now been cloned together with its precisely selected promoter, so it can be used in expression vectors. Since ClaR is a putative regulatory protein, the overproduced protein could also be used for other related investigations, such as mobility shift assays.
Introduction
Streptomyces species are mycelial, aerobic gram-positive bacteria readily isolated from soil (1, 2). Streptomyces are unique among prokaryotes due to their complicated morphological differentiation (3). These morphological changes are accompanied by a wide range of physiological events, including the production of secondary metabolites, many of which have potentially important biological activities. They include many useful antibiotics and other products, such as antitumor drugs and herbicides (4)(5)(6)(7)(8). Streptomyces clavuligerus (S. clavuligerus) produces the β-lactam antibiotic cephamycin C and the β-lactamase inhibitor clavulanic acid (9-11). Clavulanic acid is a clinically significant inhibitor of β-lactamases, while the other clavam metabolites produced by S. clavuligerus demonstrate weak antibacterial and antifungal activities (1,9). Several other Streptomyces species have also been determined to be producers of clavulanic acid (12,13). The combined use of clavulanic acid and broad-spectrum β-lactam antibiotics such as amoxicillin is an important therapeutic tactic to combat the rapid increase in β-lactam resistance (14-17). The cluster of genes for clavulanic acid biosynthesis is located downstream from the pcbC gene of the cephamycin C cluster in S. clavuligerus (18,19). Most genes of the cephamycin and clavulanic acid clusters are known (20-24). All essential genes of the clavulanic acid pathway are within a 12 kb EcoRI DNA fragment of the S. clavuligerus genome, because this fragment appears to confer production of clavulanic acid when introduced into Streptomyces lividans (25). Very little is known about the regulation of the genes of the clavulanic acid cluster. The transcriptional activators CcaR and ClaR are known to regulate the expression of clavulanic acid biosynthetic genes (26-28). The ccaR gene lies within the cephamycin biosynthetic gene cluster. This gene is a pathway-specific transcriptional regulator for cephamycin biosynthesis, as well as controlling expression of the claR gene from the clavulanic acid gene cluster (21,(29)(30)(31). Another regulatory gene, claR, is located immediately downstream from orf-7 in the clavulanic acid cluster and encodes a 431 amino acid protein (31, 32). The regulatory nature of the ClaR protein has been deduced from the presence of one helix-turn-helix (HTH) motif and flanking sequences which show significant similarity to LysR transcriptional regulators (33). Finally, the absence of orf-7, orf-9 and orf-10 transcripts in a claR mutant blocked in clavulanic acid production confirmed the regulatory role of ClaR (32-34). To increase the amount of clavulanic acid produced by S. clavuligerus, different tactics have been employed by researchers. Enhancement of clavulanic acid production was seen in S. clavuligerus in the presence of peanut (Arachis hypogaea) seed flour and its fractions (35). Random mutagenesis was performed on S. clavuligerus, and the new mutated strains were able to produce elevated levels of clavulanic acid (36). Since clavulanic acid is produced industrially by fermentation using S. clavuligerus, the regulation of clavulanic acid biosynthesis is a point of great interest. It has been shown that the cloning of the claR gene in S. clavuligerus resulted in a threefold increase in clavulanic acid production (31). In our previous work, an isolated claR gene was ligated to a Streptomyces-specific vector (pMA::hyg). The cloned claR genes had been isolated from two standard strains of Streptomyces.
In this work, a new recombinant construct that carries the claR regulatory gene is presented. This vector not only transfers the claR gene isolated from one Iranian strain of S. clavuligerus, but also contains an inducible promoter.
Materials and Methods
Bacterial strains
S. clavuligerus DSM 41826 (DSM, Germany) and S. clavuligerus PTCC 1705 (Iranian Scientific and Industrial Research Organization, Iran) were used in this study. Escherichia coli (E. coli) XL1-Blue was also used. The Streptomyces strains were grown under defined conditions as described previously (37). A suspension of Streptomyces spores was prepared in 20% (v/v) glycerol and stored at -20°C (38). Cultures for the isolation of chromosomal DNA were prepared by inoculating 100 ml of yeast extract medium (YEM) with 100 μl of spore suspension. The YEM medium was prepared as described previously (37). Luria-Bertani (LB) agar medium (containing per liter: 10 g of tryptone, 5 g of bacto-yeast extract, 10 g of NaCl and 17 g of agar; pH= 7.5) supplemented with ampicillin (100 μg/ml), whenever required, was used for the propagation of E. coli at 37°C. The bacterial pellet was stored in 20% glycerol at -20°C.
Vector
The pBluescript SK (pBs SK) vector (Stratagene) was used as the cloning vector in this study.
Primers
OLIGO® version 5.0 software (39) was used for designing all primers. The entire coding region of the gene was considered for primer selection. Accession number AJ000671.1, GI:2764535 (or U87786.2, GI:9280818) was used for obtaining the claR sequence. These accession numbers are based on S. clavuligerus ATCC 27064, which is the same strain as S. clavuligerus DSM 738, as noted in the NCBI. One set of primers (claR1) was designed for nested PCR (F: 5′GCC TGG AGC AGA TGG AG 3′ and R: 5′AGG TGC TGT CGC TGG TCT 3′). Two primers (claR2) were designed for isolation of the claR gene from genomic DNA of S. clavuligerus (F: 5′CAT GGA TCC GTA TCT GTA CC 3′ and R: 5′TAG GAT CGA TTC CGA AGC 3′). These primers were modified at each 5′ end in order to carry the two recognition sites for BamHI and ClaI (Fig 1).
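As a quick sanity check, the incorporated recognition sites can be verified computationally. The sketch below scans the claR2 primer sequences (taken from the text above, spaces removed) for the canonical BamHI (GGATCC) and ClaI (ATCGAT) recognition sequences; the function and variable names are ours, not part of the original protocol.

```python
# Verify that each claR2 primer carries the intended restriction site.
BAMHI = "GGATCC"   # BamHI recognition sequence
CLAI = "ATCGAT"    # ClaI recognition sequence

primers = {
    "claR2-F": "CATGGATCCGTATCTGTACC",  # forward primer, spaces removed
    "claR2-R": "TAGGATCGATTCCGAAGC",    # reverse primer, spaces removed
}

def find_site(seq: str, site: str) -> int:
    """Return the 0-based position of a recognition site, or -1 if absent."""
    return seq.find(site)

for name, seq in primers.items():
    for enzyme, site in (("BamHI", BAMHI), ("ClaI", CLAI)):
        pos = find_site(seq, site)
        if pos >= 0:
            print(f"{name}: {enzyme} site at position {pos}")
# claR2-F: BamHI site at position 3
# claR2-R: ClaI site at position 4
```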
Separation of total genomic DNA from Streptomyces
Total genomic DNA was isolated from the liquid culture of Streptomyces using the High Pure PCR Template Preparation Kit (Roche; Cat. No.1 796 828). The amount of DNA was quantified by gel electrophoresis and spectrophotometric analysis.
Polymerase chain reaction (PCR)
The reaction mixture for PCR amplification was prepared as follows: forward primer, 20 pM; reverse primer, 20 pM; dimethyl sulfoxide (DMSO), 4 μl; 10×PCR buffer without MgSO4 (200 mM Tris-HCl, 100 mM (NH 4 ) 2 SO 4 , 100 mM KCl, 1% (v/v) Triton X-100, 1 mg/ml bovine serum albumin (BSA)), 5 μl; MgSO 4 , 3 μl; deoxynucleoside triphosphates (dNTPs), 2 μl (10 mM each dNTP); and H 2 O, up to 50 μl. A total of 100 ng of chromosomal DNA was added as the template DNA. The PCR reactions were then carried out using 0.3 μl (2.5 U/μl) of Pfu polymerase enzyme. The amplification steps for the main PCR were as follows: hot start at 95°C for 5 minutes; 33 cycles of denaturation at 94°C for 1 minute, annealing at 60°C for 1 minute, primer extension at 72°C for 4 minutes, and a final extension at 72°C for 15 minutes. These conditions were set up for the modified primers. The amplification procedure was slightly different for the nested primers. The PCR was carried out normally for 30-35 cycles. The products were visualized by a standard electrophoresis procedure using 0.7% (W/V) agarose gels.
Restriction endonuclease (RE) digestion
Two sets of primers were designed not only to amplify the claR region, but also to integrate one unique recognition site (BamHI or ClaI) at each end of the amplified fragments. Digestion was performed following the recommendations of the manufacturer (Fermentas, Germany). Required amounts of DNA samples (0.2-5 μg) were generally digested with 5-10 units of restriction enzymes (BamHI and ClaI) in a 10-20 μl final volume of restriction buffer (10× buffer) for about 1-3 hours in a water bath at the recommended temperature (normally 37°C). A sample was run on an agarose gel after incubation with each enzyme, which ensured that the digestion was complete (40).
DNA ligation
DNA ligation was performed using one unit of T4 DNA ligase (Fermentas, Germany) in the presence of 1× ligation buffer. A 3:1 molar ratio of insert to vector was used in order to optimize transformation. Incubation was done at 16°C overnight (40). The products of the ligase reaction (a 20 ng aliquot from the completed ligase mixture) were analyzed by electrophoresis on a 2 × 50 × 75 mm agarose gel (mini gel). On the other hand, the results of the nested PCR confirmed the approximate similarity between these two genes (claR isolated from the Iranian S. clavuligerus and from S. clavuligerus DSM738). This conclusion follows because the claR sequence of S. clavuligerus DSM738 had been used to design the primers. PCR-RFLP was then carried out using the SalI restriction enzyme. SalI cuts the claR gene at positions 740 and 1331 bp, producing three fragments of 740, 591 and 338 bp. The resultant fragments confirmed the correct structure of the isolated claR gene (Fig 3).
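The expected PCR-RFLP fragment sizes follow directly from the cut positions. Below is a minimal sketch of that arithmetic, assuming a linear amplicon of 1,669 bp (inferred here as the sum of the three reported fragments; the paper elsewhere quotes a 1650 bp insert) and the SalI positions given above.

```python
def rflp_fragments(amplicon_len: int, cut_positions: list[int]) -> list[int]:
    """Fragment lengths from digesting a linear DNA fragment at given positions."""
    boundaries = [0] + sorted(cut_positions) + [amplicon_len]
    return [b - a for a, b in zip(boundaries, boundaries[1:])]

# SalI cut positions reported in the text; amplicon length assumed from
# the sum of the reported fragments (740 + 591 + 338 = 1669 bp).
print(rflp_fragments(1669, [740, 1331]))  # -> [740, 591, 338]
```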
Transformation of E. coli
For making competent cells from E. coli, the calcium chloride method was used (40). An aliquot (200 μl) of frozen competent cells was slowly thawed on ice for about 30 minutes. Cells were gently mixed with DNA and incubated on ice for 30 minutes. The cells were then heat shocked at 42°C for 90 seconds. They were added to 2 ml LB (without antibiotic) and incubated at 37°C for one hour in a shaking incubator. A total of 100 μl of transformed cells was spread on the surface of an LB plate containing the antibiotic. The plates were allowed to dry before overnight incubation at 37°C (40). Plasmid DNA was subsequently isolated by the boiling lysis method (40). The overnight LB culture of E. coli was harvested by centrifugation (13,000 rpm, 30 seconds). The pellet was re-suspended in 350 μl of STET buffer [0.3 M NaCl, 10 mM Tris-HCl (pH= 8.0), 1 mM EDTA (pH= 8.0), 0.5% Triton X-100], and subsequently 25 μl of freshly prepared lysozyme solution (10 mg/ml lysozyme in 10 mM Tris-Cl) was added. The tube containing the bacterial lysate was placed in a boiling water bath for 40 seconds before centrifugation at room temperature (12,000 rpm, 10 minutes). The pellet of bacterial cell debris was removed using a sterile toothpick. Plasmid DNA was precipitated with cold sodium acetate and isopropanol, washed with 70% ethanol, and re-dissolved in 50 μl of TE containing 10 μg/ml RNase (40).
DNA sequencing
DNA sequencing was carried out using the Applied Biosystem (ABI) system (Bioneer, Italy).
Isolation and molecular analysis of claR gene
Total DNA was isolated from Streptomyces and subjected to gel electrophoresis to analyze its concentration and purity. The pure, isolated total DNA was used for PCR reactions. Two different sets of primers were used. The claR gene was successfully amplified using the claR2 primer set (Fig 2). The isolated fragment had to be studied in more detail to further compare it with the original claR of S. clavuligerus DSM738. Two different strategies were conducted not only to confirm the amplified fragment as the claR gene, but also to compare it with the claR gene sequence from S. clavuligerus DSM738. Initially, nested PCR using the claR1 primer set confirmed the existence of the claR gene (data not shown). Sequencing analysis revealed that the claR gene was amplified and sub-cloned free from any mutation, which was also essential for the correct expression of the gene. Bioinformatics analysis determined the complete similarity between the isolated claR of the Iranian strain of S. clavuligerus and S. clavuligerus DSM738 (Fig 4).
Fig 4: Structural analysis of the cloned claR, isolated from S. clavuligerus PTCC 1705. The start codon (ATG) of the claR gene has been shown here along with a few initial sequences related to amino acids E, V, A, and R. Not all the sequences have been shown (This figure has also been printed in full-color at the end of this issue).
The sequence of the claR gene from S. clavuligerus PTCC 1705 was determined for the first time in this study and will be submitted to the DDBJ/EMBL/ GenBank databases in the near future.
Cloning of the claR gene
E. coli XL1-Blue was transformed with the pBs plasmid. The pBs plasmid was then isolated from the transformed E. coli and subjected to double digestion (with BamHI and ClaI), gel electrophoresis, and gel purification. The PCR-amplified fragment was also double digested with BamHI and ClaI, and the resultant cut fragment was purified by gel electrophoresis. A ligation mixture was set up using the double-digested vector and the claR gene. E. coli XL1-Blue competent cells were transformed using 10 μl of the ligation mixture. About 20 colonies were observed on each plate inoculated with 100 μl of the transformed E. coli XL1-Blue cells. The recombinant plasmid was then isolated from a transformed colony. Molecular studies were conducted on the new 4581 bp construct, named pZSclaR (Fig 5). The isolated plasmid was subjected to gel electrophoresis for initial confirmation of the size of the constructed vector (Fig 6). pZSclaR was then cut with BamHI and ClaI for further confirmation of its structure, and the resultant fragments were separated and visualized by gel electrophoresis.
Fig. 5: A physical map of the vector pZSclaR, 4581 bp. This plasmid map was drawn using the computer software Clone Manager 6 (This figure has also been printed in full-color at the end of this issue).
These two enzymes cut the pZSclaR plasmid (4581 bp) and separated the claR gene (1650 bp) from the original vector (pBs; 2931 bp). pZSclaR was then used as the template in a PCR reaction containing the nested primers, which confirmed the existence of the claR gene. Therefore, the correct recombinant plasmid did exist in the recombinant strain of E. coli.
Discussion
The overall aim of this work was to expand our knowledge of the regulation of antibiotic production in Streptomyces (the producer of two-thirds of all known microbial antibiotics). Genetic engineering of clavulanic acid-producing strains could be done afterwards, in order to increase the capacity for clavulanic acid production in S. clavuligerus. It has been reported that cdaR, the regulatory gene for the production of a calcium-dependent antibiotic (CDA), positively regulates its own transcription. As a result, introducing extra copies of cdaR into different strains of Streptomyces coelicolor MT1110, S. coelicolor 2377 and Streptomyces lividans has led to overproduction of this antibiotic (41). Designing novel antibiotics, on the other hand, is greatly dependent on the structural analysis of the gene cluster for each antibiotic. Clavulanic acid is a multi-billion-dollar per annum product, useful for its β-lactamase inhibitory activity. While the biosynthesis of clavulanic acid has been the subject of intense investigation in recent years, the details of its production and regulation are still not completely worked out. Amplification of the ccaR gene, a regulatory gene in the cephamycin gene cluster, resulted in an almost threefold increase in the production of both cephamycin and clavulanic acid in S. clavuligerus (20). The formation of clavulanic acid is controlled by a LysR-type regulatory protein encoded by the claR gene. The claR gene was chosen because it is a putative regulatory gene in the production pathway of clavulanic acid (33). The claR gene, which is located downstream from the gene encoding clavaminate synthase in the clavulanic acid biosynthesis gene cluster, is involved in regulation of the late steps in clavulanic acid biosynthesis (32-34). Amplification of the claR gene using multi-copy plasmids and under its own promoter in S. clavuligerus results in a three-fold increase in clavulanic acid production (31). We precisely amplified the coding sequence of claR together with its promoter by using specifically designed primers and error-proof PCR. In this case, only the promoter sequence of the gene comes with the claR. Since the distance between the vector-borne promoters and the claR transcription start point is not too great, the expression of the cloned gene could also be started by the two individual promoters that exist in the vector. Therefore, the usage of three promoters (one native to the claR gene and two vector-borne) leads to an elevated level of claR gene expression. Prior to this study, and in contrast to other regulatory genes in S. clavuligerus, claR had not been isolated by PCR, but had previously been cloned via restriction enzyme digestion (39), so the subcloning of claR was practically impossible. To overcome these problems, new primers were designed with newly incorporated cut sites for BamHI and ClaI. The amplified claR was then cloned in E. coli by using a newly constructed vector called pZSclaR (Fig 4). This unique vector contains a greatly expanded multiple cloning site (MCS), which makes it suitable for different gene cloning purposes. Furthermore, this new construct is an inducible expression vector. In the same way, increasing the copy number of certain clavulanic acid-specific biosynthetic genes, by the introduction of multiple-copy expression plasmids, resulted in positive effects on the production of clavulanic acid (42).
Conclusion
Characterization of the claR gene isolated from the Iranian strain of S. clavuligerus PTCC 1705 was carried out using molecular studies. This gene was cloned in E. coli via a multiple-copy expression vector. The constructed recombinant cassette (pZSclaR) may also be utilized as an appropriate tool for site-directed mutagenesis and sub-cloning. ClaR is recognized as a putative regulatory protein, so the overproduced protein could also be used for other related investigations, such as an enzyme assay and a mobility shift assay. The claR gene could also be expressed in Streptomyces by sub-cloning it into different varieties of Streptomyces-specific expression vectors. | 2018-04-03T01:23:40.015Z | 2011-09-23T00:00:00.000 | {
"year": 2011,
"sha1": "b2d937bfdc2701ea721989e48320d663e319ee59",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "b2d937bfdc2701ea721989e48320d663e319ee59",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
221713518 | pes2o/s2orc | v3-fos-license | DFraud3 - Multi-Component Fraud Detection Free of Cold-start
Abstract-Fraud review detection has been a hot research topic in recent years. Cold-start is a particularly new but significant problem referring to the failure of a detection system to recognize the authenticity of a new user. State-of-the-art solutions employ a translational knowledge graph embedding approach (TransE) to model the interaction of the components of a review system. However, these approaches suffer from the limitation of TransE in handling N-1 relations and the narrow scope of a single classification task, i.e., detecting fraudsters only. In this paper, we model a review system as a Heterogeneous Information Network (HIN) which enables a unique representation for every component and performs graph inductive learning on the review data through aggregating features of nearby nodes. HIN with graph induction helps to address the camouflage issue (fraudsters with genuine reviews), which has been shown to be more severe when it is coupled with cold-start, i.e., new fraudsters with genuine first reviews. In this research, instead of focusing only on one component, detecting either fraud reviews or fraud users (fraudsters), vector representations are learnt for each component, enabling multi-component classification. In other words, we are able to detect fraud reviews, fraudsters, and fraud-targeted items, hence the name of our approach, DFraud 3 . DFraud 3 demonstrates a significant accuracy increase of 13% over the state of the art on Yelp.
I. INTRODUCTION
Reading through online reviews before making a purchase is increasingly a common practice of consumers. Studies [1] show that a one-star rating increase on Yelp may lead to a 5-9% increase in revenue for a restaurant. The financial implications of online reviews are becoming significant, which incentivises some businesses to pay imposters to write fake comments, i.e., fraud reviews, to either promote one's own business or defame competitors. Experts estimate that between 9% and 40% of reviews on Amazon are fraud [1].
Given the challenging nature of fraud review detection, even humans can only achieve an accuracy close to a random guess. It is, therefore, not surprising to see a surge of research effort in this area. To ensure a clear discussion of the research done in this area, let us model a review platform as a triple ⟨review, user, item⟩, where a review is written by a user for an item. Fraud detection algorithms typically rely on historical data to extract behavioral patterns of users, which have been shown to be more effective than linguistic features [2], [3] for fraud review detection. A key problem resulting from the reliance on historical data in such fraud detection systems is the phenomenon of cold-start. Cold-start refers to the failure of a detection system to recognize the authenticity of a new user u given the first review r on an item i, since there is no historical information about that user. Furthermore, detecting fraud reviews and fraudsters may take time, and even when they are detected, the fraud reviews have already had their negative impacts. The situation is exacerbated when new fraudsters apply the camouflage strategy in their first reviews.
Camouflage [2], [3], [4], [5] refers to the act of writing genuine reviews by fraudsters to hide their true identity and mask their traces. As a result of this act, a fraudster gains the trust of other people before writing his/her first fraud review. Surprisingly, most fraudsters start their activity with genuine reviews in order to cover up their true identity. In fact, statistics on the widely used Yelp dataset show that 62.18% of fraudsters (1319 users out of 2121 camouflaged users) started their activity by writing genuine reviews. Intuitively, information from other components can be used to predict the probability of camouflage behaviors. For example, a review from a new user for an item frequently targeted by fraudsters is more likely to be a fraud [2]. Hence, multi-component classification, to classify reviews into genuine or not, users into fraudsters or not, and items into targeted or not, plays a very important role in handling cold-start, even when camouflage is employed by new fraudsters.
Recent attempts at the cold-start problem [6], [7] adopted a knowledge graph embedding approach to model the relation between three components, namely, review, user, and item. To learn their respective vector representations, Wang et al. [6] and You et al. [7] adopted the TransE [8] embedding model, attempting to jointly learn the salient features representing each of the three components. However, despite TransE's simplicity and effectiveness in capturing multiple relations, its well-known limitation is that it only works for 1-to-1 but not 1-to-N nor N-to-1 relations [9]. This is a significant drawback for the fraud review detection domain, because it is quite common for the same user to write similar reviews about different services. Take the Yelp dataset for instance: 5.56% of users (5,034 out of 90,177) wrote similar reviews (reviews like "Yummy") for different items. In TransE parlance, for these users, one review (same content) is translated through one or more users to describe multiple items, thus exhibiting an N-1 relationship. In addition, 6.62% of items (334 out of 5,044) have similar reviews (e.g., "Great Steaks" or "Awesome") from different users, reflecting the 1-N-1 relation (same review, different users, same item) as illustrated in Fig 1. This limitation causes multiple users modelled as relations in TransE to have identical vector representations, as was also observed by [10]. Accordingly, modeling users as a separate component is fundamental in obtaining a useful representation for each user, which TransE fails to achieve. Moreover, the camouflage problem is neglected, even though it may significantly affect the performance of fraud review detection systems when coupled with cold-start. This calls for a better representation learning model with the ability to represent the intrinsic multi-relations between components, and to help spot the fraudsters whether they start by writing fraud reviews from the beginning or genuine reviews to camouflage themselves.
Heterogeneous Information Networks (HIN) have been demonstrated to be suitable when it comes to gathering information from interconnected components [3]. In this research, to address the limitations of TransE, we choose to use an HIN as a more natural model for social review platform representation. Contrary to the random vector strategy for network component initialization, in this research we argue for the importance of an appropriate component vector representation. Based on the theory of Collective Intelligence [11], [12], aggregations of reviews are used to characterise each component in the network. In other words, a review's vector representation is the Sum of Word Embeddings (SoWE) of all tokens in the review; a user's vector representation is then the SoWE of all reviews written by this user; and an item's is the SoWE of all reviews about this item. The SoWE are further fine-tuned by training three independent Convolutional Neural Networks (CNNs). CNNs are chosen in preference to Recurrent Neural Networks (RNNs) to deal with the potential multiple aspects discussed in each review [13], [14]. Then, to address the TransE limitations using an HIN, we model a reviewer as a separate node, rather than as a connection between a product and a review.
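To make the aggregation concrete, the sketch below computes SoWE-style representations, assuming a pre-trained embedding lookup matrix and tokenized reviews; the names `sowe`, `vocab`, and `E` are our own illustration, not identifiers from the paper.

```python
import numpy as np

D = 100  # word-embedding dimension, as used in the experiments
vocab = {"great": 0, "steaks": 1, "yummy": 2}
E = np.random.randn(len(vocab), D)  # stand-in for a pre-trained lookup matrix

def sowe(tokens: list[str]) -> np.ndarray:
    """Sum of Word Embeddings for one review."""
    return sum(E[vocab[t]] for t in tokens)

# A review is the SoWE of its tokens; a user (or item) is the SoWE of
# all reviews written by (or about) it.
reviews_by_user = [["great", "steaks"], ["yummy"]]
review_vecs = [sowe(r) for r in reviews_by_user]
user_vec = np.sum(review_vecs, axis=0)
print(user_vec.shape)  # (100,)
```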
A graph inductive learning algorithm [15] is used to fine-tune the pre-trained embeddings from the CNNs, which are concatenated with the respective Negative Ratio (NR) value (see Sec. IV-D3). The Negative Ratio, the proportion of a user's negative reviews, is chosen because it has been demonstrated to be one of the most important user behavioral indicators in previous studies [2], [3]. Other features used in [6], [7], such as Maximum Content Similarity (MCS) and Review Length (RL), have been shown to contribute less to performance despite their higher computational cost [3]. The benefit of the graph inductive learning is twofold: first, to facilitate the generation of embeddings for a new node, or a new (sub)graph, in real time; and second, to refine the pre-trained embedding. DFraud 3 leverages component (review, user, and item) features, such as text features, metadata features, and also the graph structure (e.g., node degree), enabling the approach to learn an embedding function that generalizes to unseen components. So every time a component is added, the inductive learning propagates the information to learn the component's representation. Finally, the representation is fed to a softmax layer for the final classification. Softmax is chosen over SVM due to its ability to discriminate between samples with similar representations and different labels, a common case in fraud review detection, where fraud and real reviews can have similar text. In addition to the substantial performance gain (15% for AUC) as compared to the state of the art, our contributions can be summarized as follows:
• We propose a novel three-staged framework to address the cold-start problem using multi-component classification. This approach takes advantage of an HIN which considers item, review, and user as separate components. We employ SoWE to obtain a unique representation for each component, shown to be the first important performance contributor. See Sec. IV-D1, IV-D2.
• For the first time, we propose a graph based inductive learning model for fraud detection that aggregates information from a node's neighborhood into a dense vector embedding, addressing the limitation of TransE for multi-relation representation. Our extensive study demonstrates that graph based inductive learning is the second most important performance contributor, right after the CNN pre-trained component vectors. See Sec. IV-D3.
• We investigate the camouflage problem, as pointed out but not investigated in [2], [5], [4], [3], when it occurs together with the cold-start problem. We devise a new approach to evaluate the performance of the system when facing the camouflage problem. Experimental results demonstrate that DFraud 3 improves the detection of fraudsters who employ camouflage, with an increase in performance of 17% as measured by AUC (see Sec. IV-D5).
The rest of the paper is structured as follows. In Section II, we present the related work. In Section III, we introduce our methodology. In Section IV, we show the experimental evaluation. We conclude the paper with an outlook to future work in Section V.
II. RELATED WORK
A. The cold-start Problem
Despite its significance, since the first work on fraud review detection [16], only a few studies have investigated the cold-start problem. In particular, [6] employed three behavior features, namely, Review Length (RL), Reviewer Deviation (RD), and Maximum Content Similarity (MCS), for fraud review detection.
To mitigate the lack of information about a new user, i.e., the cold-start problem, Wang et al. [6] employed TransE [8] to encode a graph structure between an item, a user and a review, where an item and a review are the head and the tail of a triple respectively, and the user who wrote the review for the item is considered as the relation. To learn vector representations of the three components, a training objective of TransE is to minimize the distance between an item vector after being translated by a user vector in the embedding space and that of the review. An item's and a user's vector are randomly initialized from a random uniform distribution, while the embedding of a review is learned through a CNN, initialized using a pre-trained Word2Vec word embedding (CBOW) [17]. Results on the Yelp dataset show an accuracy of 65%.
AEDA (Attribute Enhanced Domain Adaptive) is an attribute-based framework proposed by You et al. [7] to adapt the TransE model from [6]. AEDA relies on the same concept as [6], namely that users are relations between items and reviews, to solve the cold-start problem. Three types of relationships are therefore defined, attribute-attribute, entity-attribute, and entity-entity, between entities (reviews, items, and users). Different pairwise features (comparing two attributes of each entity), such as the date difference (dateDiff) and rating difference (rateDiff) between two reviews, are calculated for each entity as input for TransE. The proposed framework shows an accuracy of 75.4% on the Yelp restaurant dataset and 80.0% on hotels, an increase of 14% as compared with [6].
B. Network-Based Fraud Detection
As mentioned in Sec. I, HIN, as one of the network-based models, has been shown to be effective in network modeling [18], [19], [20]. There have also been attempts at using network-based approaches for fraud review detection, but they overlooked cold-start.
REV2 [21] formulates fraudster detection as a bipartite network between users and products, and uses a Bayesian Inference Network (BIN) to iteratively learn latent scores for the fairness of reviews, quality of products, and reliability of reviewers. The performance is evaluated on 5 different datasets including Flipkart, Bitcoin OTC, Bitcoin Alpha, Epinions, and Amazon. It uses Laplacian smoothing to handle fraudster detection. Although REV2 provides a theoretical performance guarantee and achieves 64.89% accuracy on fraudster classification, the approach does not perform well on the Yelp datasets. This is because in Yelp each user has only a single or a small number of reviews, resulting in a sparse network.
Netspam [3] modeled fraud review detection as a single-component classification problem. Features are extracted from text and metadata, and a metapath is used to model the connection between every two reviews. Reviews are then labeled based on their similarity, through unsupervised and semi-supervised learning. Camouflage is discussed, and the impact of using the metapath is elaborated based on the metapath weighting concept. However, no analytic explanation is provided to show how the framework performs in the face of camouflage.
SPeagle [2] first extracts a vector of features from both text and metadata, then applies a function on the whole vector to calculate prior knowledge for fraudster group detection. For classification, Loopy Belief Propagation (LBP) is used. The results show significant performance on fraud detection on the Yelp dataset. Similar to Netspam, SPeagle also considers the possibility that a user might be a camouflaged fraudster. However, there is no discussion of how the framework performs in the face of camouflage.
III. PROPOSED METHOD
In this research, we propose to model a social review platform as a heterogeneous network, where each node is either a user, an item, or a review. The connections indicate that a user has written a review for an item. Our proposed methodology follows three main steps, as illustrated in Fig. 2. First, a vector representation for each component of the HIN, including item, review, and user, is obtained. For each item and user, reviews are aggregated and regarded as one document. The vector representation of each (aggregated) document is fine-tuned through a CNN. This text-based representation obtained for each component is then combined with the Negative Ratio (NR) as a behavioral feature. This combination is then fed as an input to the inductive forward propagation of the HIN. Finally, a softmax layer is applied for the final multi-component classification. Fig. 2 shows the overall framework of DFraud 3 . The edge set E reflects two types of relations in the network: the edge between a user and a review, (u_n, r_p, type = "write") ∈ E, and the edge between a review and an item, (r_p, i_m, type = "belong") ∈ E. The goal is to label each component in the graph: for each user, L_U = {fraudster, honest}; for each item, L_I = {targeted, non-targeted}; and for each review, L_R = {fraud, genuine}.
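A minimal sketch of this heterogeneous network using networkx, assuming typed nodes and the two edge types described above; the node and edge attribute names are our own illustration.

```python
import networkx as nx

G = nx.Graph()

# Typed nodes: user, review, item.
G.add_node("u1", kind="user")
G.add_node("r1", kind="review", text="Great steaks")
G.add_node("i1", kind="item")

# Two relation types: a user writes a review; a review belongs to an item.
G.add_edge("u1", "r1", type="write")
G.add_edge("r1", "i1", type="belong")

# Each component type gets its own label space for the downstream classifier.
labels = {"user": {"fraudster", "honest"},
          "item": {"targeted", "non-targeted"},
          "review": {"fraud", "genuine"}}
print(G.edges(data=True))
```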
A. Pre-training
In the pre-training stage, we aim to learn an initial vector representation of each of the three components. Collective Intelligence (CI) [11], [12] states that the knowledge of a group, crowd, or generally of people about a subject matter, when considered together, is a suitable representation of that subject matter. By the same token, we treat the aggregation of written reviews for a specific item as a suitable descriptor for that item. When consumers consider an item, they will look through all reviews and disregard who the reviewers are. So a natural representation for an item would be the collection of its reviews. Similarly, the online footprint of a person can be an important behavior or character indicator for recruiting agencies. In other words, it makes sense to look through a reviewer's comments collectively to gauge his/her reviewing behavior. Given its effectiveness and simplicity in a multitude of semantic-centered tasks [22], [23], SoWE is adopted as the algorithm to obtain vector representations for each component. In other words, the SoWE of all reviews for an item is used as the initial vector representation for the item; the SoWE of all reviews written by a reviewer is used as the representation for the reviewer; and the SoWE of the tokens in a review as the initial vector for the review. These aggregated representations are much more meaningful, covering global characteristics of each item and user, compared with random initialization, as we will demonstrate in our results. DFraud 3 includes three main sub-steps: word representation, sentence representation, and finally node representation. 1) Word Representation: For a sentence containing n words, we denote each word as {w_1, ..., w_n}, where the embedding of word i is represented as e_{w_i} ∈ R^D, with D as the word vector dimension. To obtain the representation, a lookup matrix, say E, is used, where E ∈ R^{D×V}, with V the vocabulary size. Here, E is initialized with a pre-trained word embedding [24].
2) Sentence Representation: After pre-training the word embedding, a sentence model is trained using a shared CNN separately for each component (as shown in Fig. 2). Inspired by [14], the CNN is trained in a supervised setting with the ground-truth data as labels, to give a primary representation of the sentences. The convolutional layer in the CNN performs the role of a language model. The input for this layer is the concatenation of the words comprising the sentence, fed to a linear layer with a fixed-length window size equal to 3, representing a trigram language model for the words. The concatenated word representations are denoted as I_t = e_{w_t} ⊕ e_{w_{t+1}} ⊕ e_{w_{t+2}} ∈ R^{3D}, where D is the dimensionality of the word embeddings. The output of the linear layer is:
l_t = W · I_t + b    (1)
In Eq. 1, W ∈ R^{L×3D} (with L the output size of the linear layer) and b are shared parameters of the layer. Next, the output of the previous part is fed to an average pooling layer:
l̄ = (1/n) Σ_t l_t    (2)
where n is the number of words in a sentence. Finally, a hyperbolic tangent function tanh is applied to incorporate non-linearity and obtain the final sentence representation:
e_s = tanh(l̄)    (3)
where e_s is the final embedding representation of the given input sentence s. During pre-training, e_s is fed to a softmax layer for classification; we use e_s as the output.
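A minimal numpy sketch of this trigram-window encoder, under the reconstruction above (shared linear layer over windows of 3 words, average pooling, then tanh); the shapes and function name are our own choices.

```python
import numpy as np

D, L = 100, 64          # word dim and linear-layer output size
rng = np.random.default_rng(0)
W = rng.normal(size=(L, 3 * D))   # shared weights over all windows (Eq. 1)
b = np.zeros(L)

def encode_sentence(word_vecs: np.ndarray) -> np.ndarray:
    """word_vecs: (n, D) matrix of word embeddings for one sentence."""
    n = len(word_vecs)
    # Trigram windows: concatenate each run of 3 consecutive word vectors.
    windows = [np.concatenate(word_vecs[t:t + 3]) for t in range(n - 2)]
    linear_out = np.stack([W @ I_t + b for I_t in windows])  # (n-2, L)
    return np.tanh(linear_out.mean(axis=0))   # average pool (Eq. 2) + tanh (Eq. 3)

e_s = encode_sentence(rng.normal(size=(7, D)))
print(e_s.shape)  # (64,)
```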
3) Component Representation: In this step, the input is the embedding of each sentence, which must be aggregated for each review, user, and item:
e_c = e_{s_1} ⊕ e_{s_2} ⊕ ... ⊕ e_{s_m}    (4)
In Eq. 4, e_c is the concatenated representation of component c, e_{s_i}, ∀i ∈ {1, ..., m}, indicates the representation of sentence i, and m is the total number of sentences for c. A max pooling layer is applied to this input to obtain the representation of each unique component, as a node in the graph:
x_c = maxpool(e_c)    (5)
In Eq. 5, x_c is the final text-based representation of component c. For an improved representation of each component, the NR (Negative Ratio) for each user and item is first calculated by the following equation:
NR = (N(1) + N(2)) / N    (6)
In Eq. 6, N(r) is the number of reviews with a specific rating r in the range 1-5 (5 is the highest), and N is the total number of reviews for the component. To work out the NR for each type of component, we count the number of negative (low-rating) reviews and divide by the total. The final node feature is the concatenation of the text representation with the NR:
x_v = x_c ⊕ NR    (7)
where x_v in Eq. 7 is the pre-trained feature representation for each component as node v in the graph. Note that a new user is not introduced to the network unless he/she writes a review about an item. Once the review is written, the new user is added to the network alongside the review, the review is connected to an item, and this process continues. Items regularly have connections in real-world datasets, making it easy to gather data from other reviews and users.
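A small sketch of the negative-ratio feature and node-feature concatenation. It assumes, as in our reconstruction of Eq. 6 above, that "negative" means a rating of 1 or 2; the function names are ours.

```python
import numpy as np

def negative_ratio(ratings: list[int]) -> float:
    """Fraction of a component's reviews with a negative (1- or 2-star) rating."""
    return sum(r <= 2 for r in ratings) / len(ratings)

def node_feature(x_c: np.ndarray, ratings: list[int]) -> np.ndarray:
    """Concatenate the text representation with the NR behavioral feature (Eq. 7)."""
    return np.concatenate([x_c, [negative_ratio(ratings)]])

x_v = node_feature(np.zeros(64), [1, 5, 2, 4])
print(x_v.shape, x_v[-1])  # (65,) 0.5
```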
B. Inductive Forward Propagation
1) Objective Function: With the pre-trained vectors as input, for obtaining the final graph-based embeddings, an objective function is required to guarantee the satisfaction of two criteria: (1) neighbor nodes should have a similar representation, and (2) distant nodes should be far apart in the embedding space. To satisfy these two criteria, we developed an unsupervised algorithm to learn the representations. Let z_u, z_v be the final vector representations of vertices u, v ∈ V, respectively, where v is in u's neighbourhood. The objective function below is optimized with Stochastic Gradient Descent (SGD) for training the weights:
J(z_u) = −log(σ(z_u^T z_v)) − Q · E_{v_n ∼ P_n(v)} log(σ(−z_u^T z_{v_n}))    (8)
where v is a node that connects with node u in a specific neighborhood, with a predefined search depth of K, and σ is the sigmoid function. In addition, P_n is a probability function for negative sampling, and Q is the number of negative samples. The first term ensures that two similar nodes are close to each other in the embedding space. The second term ensures that negative samples, i.e., nodes that are not in the neighborhood of each other, are distant from each other in the embedding space.
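A numpy sketch of this unsupervised loss for a single positive pair, following the reconstructed Eq. 8 above; approximating the expectation by the drawn samples, and the sampling of negatives itself, are our simplifications.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def unsup_loss(z_u, z_v, Z_neg):
    """Eq. 8: pull a co-occurring pair together, push Q negatives apart.

    z_u, z_v: embeddings of a node and one neighbor; Z_neg: (Q, d) negatives.
    """
    pos = -np.log(sigmoid(z_u @ z_v))
    # Sum over the Q drawn negatives approximates Q times the expectation.
    neg = -np.sum(np.log(sigmoid(-Z_neg @ z_u)))
    return pos + neg

rng = np.random.default_rng(0)
d, Q = 16, 5
print(unsup_loss(rng.normal(size=d), rng.normal(size=d), rng.normal(size=(Q, d))))
```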
2) Forward Propagation:
We assume that the model is trained based on the objective function in Eq. 8 and with fixed hyper-parameters, namely K, Q, H, where K is the specified maximum search depth, Q is the number of negative samples, and H is the number of randomly selected neighbors. Intuitively, the reason for sampling is to reduce the computational complexity. The fixed-size sampling also keeps the computational cost of each batch fixed. Without sampling, we would not be able to predict the memory used by each batch or the runtime of batch processing, which is O(V) in the worst case, where V is the total number of nodes in the graph. With sampling, on the other hand, the per-batch space and time complexity are fixed by the size of each batch. The testing process is thus: when a review from a new user is added to the system, K aggregator functions (in this case the mean aggregator) are used to aggregate information from neighbors, with K different weighting matrices known as W^k, ∀k ∈ {1, ..., K}. The algorithm for the whole framework is described in Alg. 1.
The key idea of Alg. 1 is that through each iteration k of the outer loop, nodes' representations are gradually combined with their neighbors' representations. As a result, in every iteration k, a node's representation is combined with neighbors one more hop away, where k represents the search depth. Note that h^k_v denotes the representation of node v at depth k and is initialized with the pre-trained features. In other words, for the first loop, k = 0, the representation x_v is the pre-trained feature vector from Sec. III-A, given as input to the forward propagation system:
h^0_v = x_v, ∀v ∈ V    (9)
Each iteration in the inner loop follows three main steps. First, the representations of a set of randomly selected neighbor nodes, {h^{k−1}_u, ∀u ∈ N(v)}, are aggregated using the "mean" function into h^k_{N(v)}, a single vector summarizing the neighborhood. In every iteration, h^k_{N(v)} is determined by the previous neighbor node representations:
h^k_{N(v)} = (1/M) Σ_{u ∈ N(v)} h^{k−1}_u    (10)
where M is the number of randomly selected neighbor nodes of node v. Neighbor nodes are selected from a uniform distribution with probability less than 0.5. Next, this vector is concatenated (⊕) with the node's current representation, h^{k−1}_v, and the resulting vector is fed to a fully connected layer with sigmoid (σ) as its activation function:
h^k_v = σ(W^k · (h^{k−1}_v ⊕ h^k_{N(v)}))    (11)
In Eq. 11, W^k is the weight matrix in the k-th iteration. Finally, the representation is normalized for each node v:
h^k_v = h^k_v / ||h^k_v||_2    (12)
After K steps, the generated representation h^K_v is taken as the final representation z_v of each node. These representations are then fed to a softmax layer for classifying each node. Note that the outputs of the softmax layer are the final classifications for each type of component, which makes the approach capable of multi-component classification. The forward propagation and the training process through back propagation are depicted in Fig. 3.
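A compact numpy sketch of this propagation (mean aggregation, concatenation, sigmoid layer, L2 normalization), following the reconstructed Eqs. 9-12; the fixed-size neighbor sampling and weight initialization are simplified, and all names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(X, neighbors, Ws, M=5):
    """X: (V, d) pre-trained node features (Eq. 9); neighbors: adjacency lists;
    Ws: one (d_out, 2*d_in) weight matrix per depth k (Eq. 11)."""
    H = np.asarray(X, dtype=float)
    for W in Ws:                          # k = 1 .. K
        H_new = []
        for v in range(len(H)):
            nbrs = rng.choice(neighbors[v], size=min(M, len(neighbors[v])),
                              replace=False)
            h_nv = H[nbrs].mean(axis=0)                   # Eq. 10: mean aggregate
            h = 1 / (1 + np.exp(-W @ np.concatenate([H[v], h_nv])))  # Eq. 11
            H_new.append(h / np.linalg.norm(h))           # Eq. 12: normalize
        H = np.stack(H_new)
    return H  # z_v for every node, fed to the softmax classifier

X = rng.normal(size=(4, 8))
neighbors = [[1, 2], [0, 3], [0], [1]]
Ws = [rng.normal(size=(8, 16)) for _ in range(2)]  # K = 2, output dim kept at 8
print(forward(X, neighbors, Ws).shape)  # (4, 8)
```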
IV. EXPERIMENTAL EVALUATION
A. Datasets
To address the cold-start problem we require an activity history provided through timestamps, which helps us identify new users. Thus, for this research, we use the time-stamped Yelp dataset. Yelp is an online platform for people to share their experience of hotel and restaurant services in New York City (NYC). Other datasets such as TripAdvisor and Amazon lack either the ground truth or timestamps, hence they are not suitable for assessing the cold-start problem. Accordingly, similar to [6], [7], the state-of-the-art works on cold-start, which we use as baselines for comparison, we conduct the experiments on the Yelp dataset. We prepared two subsets of data from the Yelp dataset to evaluate the performance of DFraud 3 . The first one is Yelp-partial, with randomly selected reviews from the whole dataset; the other is Yelp-whole, which is the whole dataset containing all the reviews. Reviews in the datasets are labeled by the Yelp filtering system [2]. Table I summarizes the two datasets.
B. Experimental Setup
1) Parameter Settings: Recently, contextualized embedding techniques (e.g., ELMo [25], BERT [26], XLNet [27]) have been developed that refresh the state of the art on many natural language processing tasks. However, to provide a fair comparison with the two baseline systems (Wang et al. [6] and You et al. [7], introduced in Sec. II), we use the same embedding technique, i.e., word embeddings initialized using 100-dimension (D) Continuous Bag of Words (CBOW) [28] trained on the Yelp dataset with a window size of 2. The vocabulary (V) size is 37,257 for Yelp-partial, and 5,354,252 for Yelp-whole. The learning rate is 0.1; the batch size was set to 256, with 10,000 training epochs. For pre-training the CNNs, as mentioned in Sec. III-A2, the filter size is 3, the learning rate was set to 0.1, and the cross-entropy function was used as the objective function, with 30 training epochs. The initial values for W and b are set randomly from a uniform distribution. For training the graph-based representation, the minibatch size is 512, the learning rate was set to 0.01, and the number of training iterations was set to 30, with a search depth (K) of 3. DFraud 3 is implemented in Python using TensorFlow 1.13.
2) Training and Test: To determine the training and test sets for evaluating DFraud 3 performance on the cold-start problem, reviews are split into two sets, with 80% of the first reviews (based on timestamp) as the training set and the remainder as the test set. The statistics of the sets are shown in Table II. 3) Labeling Procedure: The baseline systems [7], [6] provide no information on how the labels of reviews are leveraged to become the ground truth for users (i.e., fraudster or not). Similarly, the near-ground-truth labeling procedure of the Yelp datasets contains labels only for reviews, not for users (fraudster or honest), nor for items (targeted or non-targeted). To address this problem, Rayana et al. [2] considered a user with at least one fraud review to be a fraudster. This type of labeling can lead to inaccurate results, due to the near-ground-truth labeling procedure of reviews. We used a simple probability assignment for each user and item based on the fraud reviews they write and receive, respectively. A user u is a fraudster with probability n_f^u / n_u, where n_f^u is the number of fraud reviews written by user u, and n_u is the total number of reviews by the same user u. If the calculated probability is higher than 0.5, user u is considered a fraudster; otherwise u is labeled honest. Similarly, an item i is targeted with probability n_f^i / n_i, where n_f^i and n_i are the number of fraud reviews written for item i and the total number of reviews written for item i, respectively.
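A short sketch of this probability-based labeling, as described above; the 0.5 threshold is from the text, while the function names are ours.

```python
def fraud_probability(review_labels: list[bool]) -> float:
    """Fraction of a user's (or item's) reviews that are labeled fraud."""
    return sum(review_labels) / len(review_labels)

def label_user(review_labels: list[bool]) -> str:
    return "fraudster" if fraud_probability(review_labels) > 0.5 else "honest"

def label_item(review_labels: list[bool]) -> str:
    return "targeted" if fraud_probability(review_labels) > 0.5 else "non-targeted"

print(label_user([True, True, False]))  # 2/3 > 0.5 -> "fraudster"
```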
Since there is no ground truth for the camouflage problem, we devised a new approach to provide the labels. We used the camouflage definition (Sec. I) to measure the effectiveness of DFraud 3 at uncovering camouflaged users. In other words, we looked for users with both fraud and genuine reviews in the datasets and labeled them as suspicious of camouflage. For Yelp-partial, 137 users have multiple reviews, with only 2 users suspicious of camouflage, while in Yelp-whole there are 90,179 users with multiple reviews, of whom 2,121 users are suspicious of camouflage. Therefore, the approach was evaluated on Yelp-whole for measuring performance on the camouflage task, where 905 (out of 2,121) users are used as camouflaged users for the test set (derived from the original training and test sets in Table II) and the remaining 1,215 users are used for training.
C. Evaluation Metrics
For evaluation, we rank users by their fraudster probability; users with higher values are more likely to be fraudsters. We use three standard metrics to describe the performance: Area Under Curve (AUC), Average Precision (AP), and F-measure.
1) Area Under Curve: For AUC [2], the area under the plot of the True Positive Ratio (TPR) on the y-axis against the False Positive Ratio (FPR) on the x-axis is calculated. Consider A as a list of users sorted in descending order of their probability of being a fraudster. If n_j is the number of fraudster (honest) users sorted before the user at index j, then the TPR (FPR) at index j is n_j / f, where f is the total number of fraudster (honest) users. The AUC is then the area under this curve, computed with the trapezoidal rule:
AUC = Σ_{j=1}^{N} (TPR_j + TPR_{j−1})(FPR_j − FPR_{j−1}) / 2    (13)
where N is the total number of users in the ranked list.
2) Average Precision: For AP [3], [2], we need a list of users sorted by their probability of being a fraudster. If I is the list of sorted user indices and M is the total number of fraudster users, then AP is formalized by:
AP = (1/M) Σ_{j=1}^{M} j / I_j    (14)
where I_j is the rank of the j-th fraudster in the sorted list. 3) F-measure: Also known as F1 [6], this uses two main strategies for measuring performance, Micro and Macro. The former collects all correct estimations across the different classes and then calculates the measure over the collected estimations, while the latter calculates the measure for each class separately and then averages the values. With imbalanced data, the micro measure is the appropriate choice, while for balanced data the macro measure can also be useful. F1 is calculated as follows:
F1 = 2 · Precision · Recall / (Precision + Recall)    (15)
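A compact sketch of these metrics over a ranked list, following the reconstructed Eqs. 13-15; plain numpy is used rather than a metrics library so the ranking-based definitions stay visible, and the function names are ours.

```python
import numpy as np

def auc(scores, labels):
    """Trapezoidal area under the ROC curve of a ranked list (Eq. 13)."""
    order = np.argsort(-np.asarray(scores))
    y = np.asarray(labels)[order]          # 1 = fraudster, 0 = honest
    tpr = np.concatenate([[0.0], np.cumsum(y) / y.sum()])
    fpr = np.concatenate([[0.0], np.cumsum(1 - y) / (len(y) - y.sum())])
    return np.sum((tpr[1:] + tpr[:-1]) / 2 * np.diff(fpr))

def average_precision(scores, labels):
    """Eq. 14: mean of precision at each fraudster's rank."""
    order = np.argsort(-np.asarray(scores))
    y = np.asarray(labels)[order]
    ranks = np.flatnonzero(y) + 1          # 1-based ranks of the fraudsters
    return np.mean(np.arange(1, len(ranks) + 1) / ranks)

def f1(precision, recall):
    """Eq. 15."""
    return 2 * precision * recall / (precision + recall)

print(auc([0.9, 0.8, 0.3], [1, 0, 1]))                # 0.5
print(average_precision([0.9, 0.8, 0.3], [1, 0, 1]))  # (1/1 + 2/3) / 2 = 0.8333
```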
D. Results
1) Ablative Study:
To investigate the effectiveness of our graph-based inductive learning, we used different combinations of graph-based approaches and classifiers. Pre-trained + TransE + SVM (the effectiveness of pre-trained features): This setting is similar to the studies of [6] and [7], and differs only in the pre-trained features (see Sec. III-A). Here, Word Embeddings (WE) and the Negative Ratio (NR) are used as pre-trained features.
Pre-trained + Inductive + SVM (the effectiveness of inductive learning): To show the effectiveness of DFraud 3 , the TransE model is replaced by our proposed inductive learning approach in Sec. III-B2.
Pre-trained + Inductive + softmax (the effectiveness of softmax): To observe the effectiveness of using softmax as the classifier, the SVM is replaced with a softmax classifier. Table III shows that DFraud 3 outperforms the two baseline systems on Yelp-partial. The results suggest that inductive learning yields better performance for all metrics. DFraud 3 performs better in terms of F1-Micro, while the results for F1-Macro are less encouraging; we attribute this to the unbalanced distribution of the different classes over the partial dataset. The AP and AUC for DFraud 3 also demonstrate better results in comparison with the baseline frameworks. Surprisingly, the performance of our approach is significantly improved for AP when using softmax as the classifier. One reason could be that the SVM works better for samples close to the margins: when the margin criterion is satisfied, the SVM outputs its result. In other words, the SVM fails to model samples with high feature similarity and different labels, and consequently works better for distant samples. The objective of the SVM is to maximize the margin, which means that for metrics like AP the SVM works better since anomalies will not affect the classifier. By contrast, the softmax objective is to produce a high probability for the correct class, and small changes in samples can have a large effect on its performance. This can result in a noticeable difference in AP.
By replacing the pre-trained features employed in [7] and [6] while keeping the TransE model and SVM classifier, we observe that the performance is improved for all metrics. This indicates that our proposed pre-trained features are effective in capturing the features of the three components of a review platform. Substituting the TransE model with inductive learning results in a further improvement for all metrics as compared with the two baseline systems, but a drop in F1-Macro as compared to the TransE model. The classifier adjustment to softmax brings an improvement for most metrics, apart from AP.
The results for the Yelp-whole dataset are also displayed in Table III. As expected, the performance improves when using all the data in the training set. In addition, the results on the Yelp-whole dataset show that DFraud 3 outperforms the two previous baseline systems for all four metrics. Similar to Yelp-partial, the performance is boosted for AP. In addition, except for F1-Macro, inductive learning outperforms the TransE model. The reduction in F1-Macro can be explained by the class imbalance: as shown in Table I, 17% of reviews in Yelp-partial and 13% of reviews in Yelp-whole are labeled as fraud reviews, indicating a considerable imbalance between the number of fraud reviews and genuine ones.
2) Multi-Component Classification Analysis: DFraud 3 performs classification on all three components. Figs. 4 and 5 depict the effectiveness of DFraud 3 for multi-component classification. The results demonstrate that our system yields better performance on fraudster/honest user classification compared with the classification of reviews and items. Using the probability of each node as ground truth (explained in Sec. IV-B3), instead of binary labeling, assists the model in detecting fraudsters with higher performance than for the other component types. In addition, the performance improves as the amount of training data increases. Observation of the datasets suggests that the performance on the three components reaches stability on Yelp-whole compared with Yelp-partial. This indicates that with the complete data the performance is improved for users, items, and reviews.
3) Impact of Inductive Learning: Another key difference between DFraud 3 and the baseline systems is the use of forward propagation after the pre-training step, which outputs a refined version of the primary representation of each component. To observe the forward propagation's impact on the performance of the approach, we devised four different feature combinations. Rand + Inductive: A random feature representation is generated and then fed to the inductive learning for the final representation. The final representation is then fed to the softmax layer for final classification.
WE (Word Embedding) + Inductive: The pre-trained representation for this category is based on the word embeddings (WE) only, excluding the NR. This representation is then fed to the inductive learning for the final representation. The final labeling is based on the softmax classification. WE + NR: Inductive learning is omitted for this setting, and the pre-trained features are fed directly to the softmax layer for final classification.
WE + NR + Inductive: This represents the whole system. Figs. 6 and 7 show the impact of inductive learning on both datasets with respect to the aforementioned metrics. There is a noticeable difference between the accuracy of the approach with and without inductive learning (blue vs. green). The results demonstrate that the incremental aggregation of information from neighbors is effective at improving the system for addressing the cold-start problem (inductively based features outperform pre-trained features; yellow vs. blue). In addition, the performance of DFraud 3 on pre-trained features alone, without graph embedding, is already on par with the baseline systems. In other words, even without applying inductive learning, DFraud 3 performs as well as previous studies. Furthermore, adding the NR feature improves the system performance, which confirms previous works' findings [2], [3] regarding the importance of behavioral features (yellow vs. blue). 4) Impact of N-1 Modelling: As mentioned in Sec. I, DFraud 3 handles the N-1 and 1-N-1 relations, which was the limitation of the TransE model. To demonstrate that our performance gain is due to the better handling of N-1 and 1-N-1 relations, an experiment was conducted. In this experiment, the N-1 relations, i.e., the same reviews written by the same users on different items, and the 1-N-1 relations, i.e., the same reviews written by different users on the same item (TransE ends up with the same representations for different users in this case), are removed. Fig. 8 shows the impact of removing the N-1 and 1-N-1 relations on the performance.
As Fig. 8 shows, removing the same reviews by the same users on different items drops the performance on all measures. Intuitively, DFraud³ makes use of a user's representation and its neighbors to calculate the final representation. More importantly, removing the relations with the same reviews on the same items by different users leads to a noticeable reduction in performance as compared with the baseline systems. This strengthens our claim that the performance gain of our system is due to its effectiveness in handling cold-start: as a result of 1-N-1 removal, the system's efficiency in handling cold-start is dramatically reduced.
5) Dealing with Camouflage: As mentioned in Sec. I, genuine reviews are not always written by honest people; they can be written by fraudsters to hide their true identity. Previous approaches have not considered this problem, even though a fraudster can easily manipulate their traces by writing some honest reviews. We address this issue using propagation over nodes: the representation of each node is combined with that of its neighbors to regulate its importance in uncovering fraudsters.
Table IV compares the performance of DFraud³ against the two baseline systems on camouflage detection. We observe that our system outperforms both baselines across all measures. The analysis suggests that using graph-based forward propagation helps the system learn feature representations from neighboring nodes, which helps uncover the true intentions of users who write reviews that contradict each other in terms of authenticity. Similar to Sec. IV-D4, we conducted an experiment to demonstrate that the performance gain is also due to the better handling of camouflaged users. Fig. 9 shows the performance of DFraud³ for two cases: when camouflaged users are included, and when they are excluded from the dataset. As we can see, the performance drops after excluding the camouflaged users. Analytically, camouflaged users first write genuine reviews to hide their true intentions; in the worst-case scenario, the fraud detection system therefore requires information from both the neighbors and the node itself. Previous approaches employed information only from one-hop neighbors, which is not helpful in cases of camouflage, and they missed the opportunity of using the initial information for each user, which can be used to initialize the pre-knowledge of each node. Graph-based inductive learning addresses the first problem by facilitating information propagation over more than one hop, gathering information from distant nodes rather than just immediate neighbors. The second limitation is addressed by the pre-training step (Sec. III-A). Together, these lead to the effective detection of camouflaged users.
V. CONCLUSION
Cold-start is a challenging issue that hinders the effective detection of fraudsters on social review platforms. In this research, we devised a system that takes advantage of textual and rating data (abundant surface data) and aggregates them through a CNN into initially learned features forming a vector representation of each component of a social review platform. The initial vector representation is then refined through a graph inductive learning algorithm we proposed to capture the interplay between a user, an item, and a review, using multi-component classification (reviews as fraud or genuine; users as fraudster or honest; items as targeted or non-targeted) as the downstream task. Two sets of comprehensive ablation studies demonstrate the effectiveness of our approach to learning the representation of each component. Notably, significant performance gains are achieved by WE + NR and by performing inductive learning on the Yelp dataset from two domains, restaurants and hotels.
Fig. 9: Impact of considering camouflaged users on DFraud³ performance on Yelp-whole (purple = camouflaged users included, yellow = camouflaged users excluded).
Defining a new relationship between components, viewed from a different perspective, is left as future work. One way is to consider each link's importance via its metapath weight [3] to calculate the contribution of each link to the final classification. The approach can also be applied to content from other media, such as Twitter, to assist spam detection [29], [30], [31].
"year": 2020,
"sha1": "29da85b815bc8c8c09b610638b30d48c498cf56e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "29da85b815bc8c8c09b610638b30d48c498cf56e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Enhanced Uplink Quantum Communication with Satellites via Downlink Channels
Corresponding author: R. Malaney (email: r.malaney@unsw.edu.au)
In developing the global Quantum Internet, quantum communication with low-Earth-orbit satellites will play a pivotal role. Such communication will need to be two-way: effective not only in the satellite-to-ground (downlink) channel but also in the ground-to-satellite (uplink) channel. Given that losses on this latter channel are significantly larger than on the former, techniques that can exploit the superior downlink to enhance quantum communication in the uplink should be explored. In this work we do just that, exploring how continuous-variable entanglement in the form of two-mode squeezed vacuum (TMSV) states can be used to significantly enhance the fidelity of ground-to-satellite quantum-state transfer relative to direct uplink transfer. More specifically, through detailed phase-screen simulations of beam evolution through turbulent atmospheres in both the downlink and uplink channels, we demonstrate how a TMSV teleportation channel created by the satellite can be used to dramatically improve the fidelity of uplink coherent-state transfer relative to direct transfer. We then show how this, in turn, leads to the uplink transmission of a larger alphabet of coherent states. Additionally, we show how non-Gaussian operations acting on the received component of the TMSV state at the ground station can lead to even further enhancement. Since TMSV states can be readily produced in situ on a satellite platform and form a reliable teleportation channel for most quantum states, our work suggests that future satellites forming part of the emerging Quantum Internet should be designed with uplink communication via TMSV teleportation in mind.
I. INTRODUCTION
QUANTUM COMMUNICATIONS via low-Earth-orbit (LEO) satellites represent a critical component of the so-called Quantum Internet - a new heterogeneous global communication system based on classical and quantum communication techniques whose information security will be underpinned by quantum protocols such as quantum key distribution (QKD).
This new internet will also be used as the backbone communication system inter-connecting future quantum computers via routed quantum information transfer. The Quantum Internet paradigm has taken large steps forward in the past few years, particularly with the spectacular success of Micius, the first quantum-enabled satellite, launched in 2016 [1]-[4]. Building on the pioneering Micius mission, some twenty-plus satellite missions are now under development [5], some at the advanced design phase.
The importance of satellite-based technology to the Quantum Internet paradigm lies in a satellite's ability to transmit quantum signals over much longer distances than terrestrial-only links [1], [2], [6]. Indeed, the Micius experiment has demonstrated quantum communication over a range of 7,600km [4] - a feat put into perspective by the current terrestrial-only quantum communication record of 500km [7].
The Micius experiment deployed quantum communication protocols via discrete-variable (DV) technology, where the quantum information was encoded in the polarization state of single photons [2], [3]. Alternatively, continuous-variable (CV) quantum information, where the information is encoded in the quadratures of the electromagnetic field of optical states, is widely touted as perhaps a more promising candidate for transferring quantum information [8], [9]. This is largely due to the relative technical simplicity (and maturity) of the CV-enabled devices required to send, receive, and measure quantum signals, robustness against background noise, and the potential of the enlarged Hilbert space associated with CV systems to lead to enhanced communication throughput in practical settings.¹ For these reasons, there is great interest in pursuing designs of CV-enabled quantum satellites, with many recent studies focusing on the more feasible satellite-to-ground (downlink) transmission of quantum signals, largely with a view to enabling CV-QKD [11], [12]. As yet, there have been no experimental realizations of satellite-based CV quantum communications. In this work, we turn to a hitherto overlooked type of satellite-based CV quantum communication, namely, the use of CV quantum downlink communications as a means to enhance ground-to-satellite (uplink) quantum communications with a LEO satellite.
The main challenge faced in satellite-based quantum communications is the degradation of the signal as it is transmitted through the turbulent atmosphere of Earth [13]-[15], a degradation that is almost always larger than the noise introduced by the components used [16]. It is well documented that uplink satellite laser communication is considerably more challenging than downlink satellite transmission: the turbulent eddies in the Earth's atmosphere have a more disruptive effect on the uplink channel. This is because the eddies encountered by a laser beam in the downlink at the atmospheric entry point are significantly larger than the laser beam's transverse dimensions (spot size) at that point, whereas in the uplink the opposite is true [17]. The consequence is an asymmetry between the channels, with the uplink beam profile evolving in a more random fashion, especially with regard to beam-wandering effects. Ultimately, this asymmetry manifests itself as higher losses in the uplink channel [17], [18].
¹In theory both DV and CV communication deliver the same throughput, and the reality is both systems have their pros and cons. However, there certainly is a school of thought that in many pragmatic systems, the higher-dimensional encoding space directly available to CV systems will lead to enhanced outcomes. A detailed discussion of the pros and cons of both DV and CV systems is given in [10].
Here, we investigate the use of quantum resources delivered through the satellite downlink channel as a resource for teleportation in the uplink, and the subsequent use of that teleportation resource to enhance quantum communications relative to simple direct uplink transmission. More specifically, we consider the use of a two-mode-squeezed vacuum (TMSV) quantum teleportation channel created via the downlink channel as a resource to teleport a coherent state from the ground station to a LEO satellite. We will see that for uplink communications, the use of the teleportation channel leads to significantly higher fidelities compared to the direct transmission. Moreover, we find that the teleportation channel is capable of transferring coherent states with larger amplitudes, something that is very difficult via direct transmission. This latter attribute is important for many CV-based quantum protocols, such as CV-QKD, since for these protocols the capability to transmit coherent states of different amplitudes is a key requirement.
The main contributions of this work can be summarized as follows. (i) Through a series of detailed phase-screen simulations we quantify the asymmetric losses experienced by the downlink and uplink channels of a LEO satellite in quantum communication with a terrestrial ground station. Moreover, we expand previous analyses of the uplink and downlink channels by quantifying and including the excess noise that arises from each channel; this excess noise limits the accuracy of the quadrature measurements, effectively reducing the amount of transferred quantum information. (ii) Using these same simulations we then determine the fidelity of coherent-state transfer through direct uplink transfer. (iii) We model the creation of a resource CV teleportation channel in the downlink, created by sending from the satellite one mode of an in situ produced TMSV state. (iv) We then use that resource to determine the fidelity of coherent-state transfer to the satellite via teleportation, quantifying the gain achieved over direct transfer. (v) Finally, we investigate a series of non-Gaussian operations that can be invoked on the received TMSV mode at the ground station as a means to further enhance uplink coherent-state transfer via teleportation. Specifically, we investigate photon subtraction, addition, and catalysis as the non-Gaussian operations, identifying the gains in teleportation fidelity achieved for each scheme. Sequences of these non-Gaussian operations are also investigated, and the optimal scheme amongst them identified.
The remainder of this paper is as follows. In Section II we describe CV teleportation through noisy channels. In Section III we detail our phase screen simulations, comparing their predictions with a range of theoretical models, and discussing the implications of our simulations in the context of asymmetric downlink/uplink channel losses. In Section IV we discuss a series of non-Gaussian operations that can be applied to a TMSV state, discussing their roles in potentially enhancing CV teleportation via a noisy TMSV channel. In Section V we discuss the application of our schemes to a wider range of states, and discuss differences with the DV-only scheme of Micius. We draw our conclusions in Section VI.
Notation: Operators are denoted by uppercase letters. The sets of complex numbers and of positive integers are denoted by C and N, respectively. For z ∈ C: |z| and arg(z) denote the absolute value and the phase, respectively; Re(z) and Im(z) denote the real and imaginary parts, respectively; z* is the complex conjugate; and i = √−1. The trace and the adjoint of an operator are denoted by Tr{·} and (·)†, respectively. The annihilation, creation, and identity operators are denoted by A, A†, and I, respectively. The displacement operator with parameter α ∈ C is D(α) = exp(αA† − α*A).
II. CONTINUOUS VARIABLE TELEPORTATION
We consider the teleportation protocol introduced in [19]. Here, the parties involved in the teleportation are a ground station and a satellite in space, with the quantum channel between them corresponding to the free-space atmospheric channel, as exemplified in Fig. 1a). The teleportation protocol starts with the generation of a bipartite entangled resource state, Ξ_AB, on the satellite. Part A of Ξ_AB is sent through the atmosphere to the ground station, where it is combined with the input state using a balanced beamsplitter. Afterwards, a Bell projective measurement (using a pair of homodyne detectors) is performed on part A and the input state. The measurement result is broadcast to the satellite, which, by applying a corrective operation on B, recovers the input state as the final output of the protocol. To describe the teleportation protocol we follow the methodology introduced in [20], in which the output state is computed using the Wigner characteristic functions (CFs) of the input state Ξ_in and the entangled resource state Ξ_AB.
Here, we indicate CFs by χ(ξ), for some complex parameter ξ. In [21] the methodology is further expanded to include imperfect homodyne measurements, yielding the output CF in terms of the gain parameter g and the efficiency η of the homodyne measurements. The CF of a generic n-mode state Ξ is obtained by taking the trace of the product of Ξ with the displacement operators,
χ(ξ_1, ξ_2, ..., ξ_n) = Tr{Ξ D(ξ_1) ⊗ D(ξ_2) ⊗ ··· ⊗ D(ξ_n)},
where {ξ_1, ξ_2, ..., ξ_n} ∈ C are complex arguments, each one representing a mode of Ξ in the CF.
Fig. 1: a) CV teleportation of a coherent state using a bipartite entangled resource between a satellite and a ground station. Homodyne measurement results are transmitted by the ground station after combining the received quantum signal with the coherent state. The satellite uses the measurement results to apply a displacement operator on the remaining mode of the entangled state to obtain the teleported state. b) In satellite communications the downlink channel is considerably less noisy than the uplink channel.
In this work, we consider that the entangled resource used is a TMSV state. The TMSV state can be considered as the application of the two-mode squeezing operator, with squeezing parameter re^{iφ}, to the vacuum; here, we take φ = 0 for simplicity. The CF of a TMSV state is Gaussian, characterized by V = cosh(2r), the variance of the distribution of the quadratures. Throughout this work quadrature variances are in shot-noise units (SNU), in which the variance of the vacuum state is 1 SNU (ħ = 2). Additionally, a coherent state (the state we wish to transfer to the satellite) can be considered as the application of the displacement operator to the vacuum, with the corresponding CF following directly. In general, we can describe the effect a noisy channel with transmissivity T and excess noise ε has on a mode of any quantum state by scaling the corresponding ξ in the relevant CF by √T and adding a CF corresponding to a vacuum state; applying this to a TMSV state where only mode B is transmitted through the noisy channel yields the CF of the distributed resource [22]. At times, it will be convenient to refer to the transmissivity in dB, as given by −10 log₁₀ T. Note that, due to the negative sign in this definition, when the transmissivity is referred to in dB a larger loss has a higher numerical value. Indeed, in this work we take the term "loss" to mean a transmissivity given in dB, the specific transmissivity being referred to being clear from context. If transmissivity is specified without reference to units then it has its normal meaning of a ratio of energies (larger loss corresponding to lower transmissivity).
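The same loss-plus-excess-noise map can be illustrated at the level of the Gaussian covariance matrix, an equivalent description of the TMSV CF above. The sketch below assumes the SNU conventions stated in the text (vacuum variance 1); the covariance-matrix formulation itself is our illustrative choice, not taken from the paper.

```python
import numpy as np

def tmsv_cm(r):
    """Covariance matrix of a TMSV state in SNU (vacuum variance = 1),
    quadrature ordering (x_A, p_A, x_B, p_B)."""
    V = np.cosh(2 * r)                      # quadrature variance, V = cosh(2r)
    c = np.sqrt(V**2 - 1)                   # two-mode correlation strength
    I2, Z = np.eye(2), np.diag([1.0, -1.0])
    return np.block([[V * I2, c * Z], [c * Z, V * I2]])

def send_mode_b(cm, T, eps):
    """Pass mode B through a channel with transmissivity T and excess noise
    eps: quadratures scale by sqrt(T); vacuum plus excess noise is added."""
    S = np.diag([1.0, 1.0, np.sqrt(T), np.sqrt(T)])
    N = np.diag([0.0, 0.0, (1 - T) + eps, (1 - T) + eps])
    return S @ cm @ S.T + N

# TMSV with r = 0.5 after a 10 dB-loss downlink with eps = 0.02
print(send_mode_b(tmsv_cm(r=0.5), T=0.1, eps=0.02))
```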
A. Fidelity of teleportation
We will use the fidelity as the figure of merit to evaluate the effectiveness of quantum teleportation. The fidelity, F, is a measurement of the closeness of two states Ξ_1 and Ξ_2, given by F(Ξ_1, Ξ_2) = (Tr{[√Ξ_1 Ξ_2 √Ξ_1]^{1/2}})². To compute the fidelity of a teleported coherent state, F_T, we first use Eq. (4) and Eq. (7) to write the CF of a TMSV state that has been transmitted through a noisy channel. Thereafter, using Eq. (1) and Eq. (6), we obtain the CF of the teleported state. Finally, F_T is computed as in Eq. (8), resulting in an expression in terms of ḡ = gη. Ultimately, F_T depends on the characteristics of the noisy channel involved in the protocol (T and ε), the parameter V, and the gain g. These last two parameters, V and g, can be controlled to optimize the teleportation fidelity for any given T and ε.
We will compare the resulting fidelity of the teleported states with the fidelity of states directly transmitted through the noisy uplink channel. The fidelity of direct transmission, F_DT, is computed by first writing the CF of a coherent state that has been transmitted through the noisy channel, and then computing the fidelity between the original state and the transmitted one using Eq. (8). To perform a fair assessment, it is not enough to simply consider a single coherent state. Instead, we must consider the mean fidelity over an ensemble of coherent states drawn from a Gaussian distribution with variance σ [21]. We can think of σ as determining the alphabet of states used when transmitting quantum information, or during a protocol such as CV-QKD.
We can now define the mean fidelity as the average of the fidelity over this distribution, F̄ = ∫ F(α) p(α) d²α. To compare the effectiveness of teleportation relative to direct transmission, we present in Fig. 2 the values of F̄ obtained for transmission via a fixed noisy channel, for different values of σ. The excess noise in the channel is fixed at ε = 0.02. Throughout this work the efficiency of the homodyne measurements involved in the teleportation is fixed at η = 0.9. Additionally, the values of g and V involved in the teleportation are optimized for each value of the transmissivity. When the loss is small (1 dB), the optimal value of V is approximately 100; however, as the loss of the channel increases, the optimal value of V rapidly decreases towards unity. Using purely classical communications, a value of F_classical = 0.5 can be achieved, therefore quantum state transfer is only of interest in the regime where F > F_classical [23]. From the results presented in Fig. 2, we make two observations. First, for each value of σ, there exists a threshold in the transmissivity above which teleportation yields a higher mean fidelity. Second, as σ increases this threshold decreases. This second observation is important for numerous quantum communication protocols (e.g. coherent-state CV-QKD) in which the more states that can be transmitted, the better. These two observations indicate that the transmission of quantum states by means of teleportation can be a better alternative to simple direct transmission.
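As a numerical illustration of this ensemble average, the sketch below Monte Carlo averages an assumed closed form for F_DT (a displaced-thermal-state fidelity consistent with the dependence of Eq. (12) on T, ε, and |α|²) over a circular-Gaussian alphabet of variance σ. Both the closed form and the sampling convention are our assumptions, not expressions lifted from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def fidelity_direct(alpha, T, eps):
    """Assumed closed form: through a channel with transmissivity T and
    excess noise eps (SNU, vacuum variance = 1), a coherent state becomes a
    displaced thermal state with mean sqrt(T)*alpha and variance 1 + eps."""
    return (2.0 / (2.0 + eps)) * np.exp(
        -2.0 * (1.0 - np.sqrt(T))**2 * np.abs(alpha)**2 / (2.0 + eps))

def mean_fidelity_direct(T, eps, sigma, n=200_000):
    # Alphabet assumption: alpha drawn from a circular Gaussian of variance
    # sigma (total variance split equally between Re and Im parts).
    a = rng.normal(0, np.sqrt(sigma / 2), n) \
        + 1j * rng.normal(0, np.sqrt(sigma / 2), n)
    return fidelity_direct(a, T, eps).mean()

for sigma in (2, 10, 25):
    print(sigma, mean_fidelity_direct(T=0.5, eps=0.02, sigma=sigma))
```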
In the next section we will explore this result in more detail in the context of uplink satellite communications, where we consider teleportation from the ground station to the satellite via a TMSV state created via the downlink channel.
III. GROUND-TO-SATELLITE QUANTUM STATE COMMUNICATION
We consider a quantum communications setup between a ground station and a satellite, in which both parties have the ability to send and receive quantum optical signals between each other. The ground station is positioned at ground level, h_0 = 0km, and the satellite, when directly overhead, is at an altitude H = 500km. The total propagation length between the satellite and the ground station depends on the zenith angle, ζ, of the satellite relative to the ground station. The quantum signals are in the form of short laser pulses with a time-bin width of τ_0 = 100ps, emitted from a laser with a wavelength of λ = 1550nm. Each laser pulse has a transverse amplitude with a Gaussian profile and a beam waist of radius w_0. Although in some special configurations w_0 can be made as large as the transmitting aperture, without loss of generality we will assume w_0 is always smaller than the radius of the transmitting aperture. As the signal propagates, its beam width increases due to natural diffraction as well as the effects of the atmosphere. The satellite and ground station are both equipped with a telescopic aperture to receive the quantum signals; the radius of the satellite's aperture is r_sat, while that of the ground station is r_gs. Besides the quantum signals, the ground station and the satellite also transmit a strong optical signal, commonly called a "local oscillator" (LO), which is used as a phase reference for performing homodyne measurements. In order to study the transmission of quantum signals through the atmosphere, it is key to have a correct model of the effects of atmospheric turbulence on the propagating beams. Ultimately, this model allows us to estimate the values of T and ε for the uplink and downlink channels.
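For concreteness, the total propagation length L as a function of zenith angle ζ follows from simple spherical-Earth geometry. The sketch below is our own geometric aside, not a formula from the paper.

```python
import numpy as np

R_E, H = 6371e3, 500e3        # Earth radius and satellite altitude, meters

def slant_range(zenith_deg):
    """Ground-to-satellite path length L for zenith angle zeta, assuming a
    spherical Earth; reduces to H at zenith (zeta = 0)."""
    cz = np.cos(np.radians(zenith_deg))
    return np.sqrt((R_E * cz)**2 + 2 * R_E * H + H**2) - R_E * cz

for z in (0, 30, 60):
    print(f"zeta = {z:2d} deg: L = {slant_range(z)/1e3:7.1f} km")
```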
A. Modeling atmospheric channels
The effects of the atmosphere on a propagating beam are modelled using the phase screen model, based on Kolmogorov theory [24]. The phase screen model is constructed by subdividing the atmosphere into regions of length Δh_i. For each region, the random phase changes induced on the beam by the atmosphere are compressed into a single phase screen, which is placed at the start of the propagation length, with the rest of the region taken to have a constant refractive index. The result at the end of the entire propagation length is a beam that has been deformed so as to mimic the effects of the turbulent currents in the atmosphere; this process recreates what a receiver with an intensity detector would observe. Numerically, the beam is represented by a uniform grid of pixels, each assigned a complex number, and the propagation is modelled via a Fourier algorithm [25]. Since the result of each beam propagation is random, the simulations are run 10,000 times in order to obtain a correct estimation of the properties of the channel. A detailed description of the numerical methods used can be found elsewhere, e.g. [26]-[28].
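A minimal building block of such a simulation is the vacuum-propagation step between screens. The sketch below implements one Fresnel (paraxial) angular-spectrum step; the constant on-axis phase e^{ikΔz} is dropped, and all grid parameters are illustrative choices, not the authors' settings.

```python
import numpy as np

def fresnel_step(field, dx, wavelength, dz):
    """One vacuum-propagation step of the split-step method: multiply the
    angular spectrum by the Fresnel transfer function and transform back."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)              # spatial frequencies, 1/m
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: propagate a Gaussian beam (w0 = 15 cm) over 1 km on a 512x512 grid
n, dx, lam = 512, 2e-3, 1550e-9
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
beam = np.exp(-(X**2 + Y**2) / 0.15**2)
beam = fresnel_step(beam, dx, lam, 1e3)
# Between steps, a turbulence phase screen phi would be applied as:
# beam *= np.exp(1j * phi)
```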
In the phase screen model, the first requirement is a model of the refractive-index structure constant of the atmosphere, C_n². We use the widely adopted H−V_{5/7} model [29]:
C_n²(h) = 0.00594 (v/27)² (10⁻⁵ h)¹⁰ exp(−h/1000) + 2.7×10⁻¹⁶ exp(−h/1500) + A exp(−h/100),
where h is the altitude in meters, v = 21m/s is the rms windspeed, and A = 1.7×10⁻¹⁴ m^{−2/3} is the nominal value of C_n² at ground level. In the H−V_{5/7} model, the main effects of the turbulence are confined below an altitude of 20km, since at higher altitudes the effects are minimal. Besides the refractive index, we also need upper and lower bounds on the sizes of the turbulent eddies that make up the turbulent atmosphere: the so-called outer scale and inner scale, L_0 and l_0, respectively. Here, we use the empirical Coulman-Vernin profile to model L_0 as a function of the altitude h [30], and we set the inner scale to be a fixed fraction of the outer scale, specifically l_0 = δL_0, where δ = 0.005.
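Both profiles are straightforward to code. In the sketch below the H−V_{5/7} constants follow the expression above; the Coulman-Vernin constants are our reading of the empirical profile and should be treated as assumptions.

```python
import numpy as np

def cn2_hv57(h, v=21.0, A=1.7e-14):
    """Hufnagel-Valley 5/7 refractive-index structure profile; h in meters."""
    return (0.00594 * (v / 27.0)**2 * (1e-5 * h)**10 * np.exp(-h / 1000.0)
            + 2.7e-16 * np.exp(-h / 1500.0)
            + A * np.exp(-h / 100.0))

def outer_scale_cv(h):
    """Empirical Coulman-Vernin outer-scale profile in meters (peaks near
    8.5 km altitude); the constants here are our assumed parameterization."""
    return 4.0 / (1.0 + ((h - 8500.0) / 2500.0)**2)

for h in (0.0, 1e3, 10e3, 20e3):
    print(f"h = {h:7.0f} m: Cn2 = {cn2_hv57(h):.3e}, "
          f"L0 = {outer_scale_cv(h):.3f} m, l0 = {0.005 * outer_scale_cv(h):.4f} m")
```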
With the atmospheric models specified, we now look into how the phase screens are constructed so as to mimic the effects of the turbulence. Each individual phase screen is created by performing a fast Fourier transform over a uniform square grid of random complex numbers, sampled from a Gaussian distribution with zero mean and variance given by the spectral density function [27]
Φ_φ(κ) = 0.49 r_0^{−5/3} exp(−κ²/κ_m²) / (κ² + κ_0²)^{11/6},
where κ is the radial spatial frequency on a plane orthogonal to the propagation direction, κ_m = 5.92/l_0, κ_0 = 2π/L_0, and r_0 is the coherence length.

Since the main effects induced by the atmosphere occur between ground level and 20km altitude, the uplink and downlink transmissions possess key differences, arising mainly from the interplay between the beam size and the sizes of the turbulent eddies. During downlink transmission the beam first encounters the atmosphere with a large beam size, possessing essentially no curvature at that point. In the uplink channel, by contrast, the beam encounters the atmosphere at the start of its path, where it has a positive curvature and a small beam size. For these reasons, we expect the loss in the downlink to be dominated by diffraction while the (higher) loss in the uplink is dominated by beam wandering. Under the flat-beam assumption, the coherence length for the downlink can be written as
r_0 = [0.423 k² sec ζ ∫_{h−}^{h+} C_n²(h) dh]^{−3/5},
where k = 2π/λ, and h− and h+ correspond to the lower and upper altitudes of the propagation path corresponding to the respective phase screen. For the uplink, the coherence length takes a similar form but involves additional parameters characterizing the beam, namely its radius of curvature R and beam width w at each altitude, both of which depend on the total distance L between satellite and ground station (itself dependent on ζ).

The position of each phase screen is determined using the condition that the Rytov parameter, r_R², is maintained at a constant value b over each length Δh_i [31]. We set b = 0.2, which corresponds to a total of 17 phase screens up to 20km. In Fig. 3, we plot the H−V_{5/7} model, with the positions of the phase screens set by this condition (Eq. (23)). For comparison, we also plot the positions of phase screens placed at uniform distances between ground level and 20km. We can see that, using the condition imposed by Eq. (23), the phase screens are more adequately distributed to account for the altitude variations of the turbulence. Finally, to account for the remaining turbulence between 20km and H, a single phase screen is used.

At the end of every beam-propagation simulation we obtain the transmissivity induced by the atmosphere by integrating the intensity of the beam over the receiver aperture,
T_turb = (1/P_0) ∫_D I_sig dA,
where I_sig is the intensity (power per unit area) of the beam at the plane containing the receiver aperture, P_0 is the initial total power of the beam at the point of emission, and D is the surface area of the receiver aperture. Although the main source of loss arises from atmospheric turbulence, we also need to account for the extinction of the signal caused by absorption and scattering by the particles of the atmosphere, as well as the loss due to imperfect optical devices. To account for the extinction we adopt a transmissivity T_ext = exp(−0.7 sec ζ). For the loss due to the optical devices we consider a transmissivity value T_opt = 0.794 (1dB) [32]. The total transmissivity of the channel is then simply T = T_turb T_ext T_opt.
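The FFT recipe just described can be sketched as follows. Normalization conventions for FFT-based screens vary between references, and the low-frequency subharmonics often added for accuracy are omitted here; treat this as a schematic realization of the spectral density above rather than the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def phase_screen(n, dx, r0, L0, l0):
    """One random phase-screen realization with the modified von Karman
    spectrum. Subharmonic augmentation (important at low spatial
    frequencies) is omitted for brevity."""
    df = 1.0 / (n * dx)                        # frequency grid spacing, 1/m
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    kappa = 2 * np.pi * np.hypot(FX, FY)       # radial spatial frequency, rad/m
    km, k0 = 5.92 / l0, 2 * np.pi / L0
    PSD = 0.49 * r0**(-5.0 / 3.0) * np.exp(-(kappa / km)**2) \
          / (kappa**2 + k0**2)**(11.0 / 6.0)
    PSD[0, 0] = 0.0                            # remove the piston term
    cn = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) \
         * np.sqrt(PSD) * 2 * np.pi * df       # per-mode random amplitudes
    return np.real(np.fft.ifft2(cn)) * n**2    # phase in radians

phi = phase_screen(n=512, dx=2e-3, r0=0.1, L0=4.0, l0=0.02)
print("screen rms phase:", phi.std(), "rad")
```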
B. Excess noise
Since in CV quantum states the information is encoded in the quadratures of the states, we require an LO in order to extract this information via homodyne or heterodyne measurements. With this in mind, we can take the results presented in this work, which adopt a non-zero ε, as the fidelity outcomes expected if we were to actually measure the fidelities experimentally [33]; the ideal theoretical predictions would correspond to the pure-loss channel case, where ε = 0. As discussed in [34], for coherent-state transmission via atmospheric channels the main components of the excess noise arise from turbulence-induced effects on the LO, in addition to time-of-arrival fluctuations caused by delays between the laser pulses and the LO. The variations in the intensity of the LO induce an excess noise ε_ri proportional to V_sig, the statistical variance of the quadratures of the quantum signal, corresponding to V_sig = σ for direct transmission and V_sig = V for the teleportation channel. For a given aperture size, the relevant quantity is the scintillation index of the LO averaged over the aperture, σ²_SI,LO(D), computed from the power P_LO = ∫_D I_LO dA of the LO (with intensity I_LO) over the aperture. Since the uplink channel is more affected by beam wandering, σ²_SI,LO(D) can be expected to be much greater for the uplink relative to the downlink.
Time-of-arrival fluctuations are caused by a broadening of the time-bin width of the signal pulse from τ_0 to a turbulence-broadened width τ_1 [34]. As derived in [35], the variance of the arrival time is σ²_ta = τ_1²/4, which leads to an excess noise ε_ta dependent on ρ_ta, the timing correlation coefficient between the LO and the signal [14]. The value of σ_ta is independent of the direction of propagation of the beam. For a value of τ_0 = 100ps, ε_ta is virtually independent of the atmospheric turbulence, since the pulse broadening only becomes considerable for τ_0 < 0.1ps [34]. Therefore, taking ρ_ta = 1 − 10⁻¹³, the noise contribution due to time-of-arrival fluctuations reduces to a fixed value, essentially independent of the channel. With the two main sources of noise outlined, we now write the total excess noise as ε = ε_ta + ε_ri. The excess noise being directly proportional to V_sig reflects the fact that, due to the fluctuating nature of atmospheric channels, the values of T and ε need to be estimated by repeated measurements of the channel. This means that in an experimental setup one cannot distinguish between variations of the quadratures due to quantum uncertainty and variations induced by the fluctuating value of T; the variations of T therefore effectively translate into additional excess noise. We note that there are additional sources of excess noise, but their contributions are minor compared to those considered here [16].
C. Other channel modeling techniques
Throughout this work we use phase screen simulations to model the channel. Performing phase screen simulations is essentially a numerical approach to solving the stochastic parabolic equation, adopting a versatile technique referred to as the split-step method [27]. Despite its computationally intensive nature, the split-step method has been widely used to study the atmospheric optical propagation of classical light under a variety of conditions (see e.g. [36]-[41]), and due to its quantitative agreement with analytical results it is also believed to be very reliable (see e.g. [42]-[44]).
Other channel-modeling techniques have been proposed to simplify the description of the atmospheric propagation of quantum light in specific situations, and it is worthwhile to compare their predictions with our detailed phase screen simulations. Channel-modeling techniques based on the so-called elliptic-beam approximation [13] are believed to be particularly useful when the phase fluctuations of the output field amplitude can be neglected. This point is discussed further in [45], where it is also highlighted that homodyne measurements can be constructed for which phase fluctuations of the output field can be neglected. Under the elliptic-beam approximation, it is assumed that the atmospheric propagation leads only to beam wandering, beam spreading, and beam deformation (into an elliptical form); extinction losses due to back-scattering and absorption can, however, be added phenomenologically under such an approximation [45]. Although originally proposed under the assumption of a horizontal channel, the elliptic-beam approximation was directly adopted in [46] to study the performance of CV-QKD in the downlink channel. In addition, the authors of [32] proposed a generalized channel-modeling technique based on the elliptic-beam approximation, providing a comprehensive model for the losses suffered by quantum light in both the uplink and downlink channels. All these works [13], [32], [45], [46] assumed an infinite outer scale (i.e. L_0 = ∞) and a zero inner scale (i.e. l_0 = 0), effectively neglecting inner-scale and outer-scale effects.
In Fig. 4, we compare the mean turbulence-induced loss T_turb [dB] predicted by i) the phase screen simulations, and ii) the channel-modeling techniques (based on the elliptic-beam approximation) of [46] and [32]. Although our phase screen simulations take into account the inner-scale and outer-scale effects by adopting the empirical Coulman-Vernin profile (recall Eq. (16)), for comparison we also present the results predicted by the phase screen simulations with L_0 = ∞ and l_0 = 0. From Fig. 4 we clearly observe that the mean transmissivities in the downlink channel predicted by all the considered channel-modeling techniques are similar. This can be explained by the fact that the main source of loss in a downlink channel is diffraction loss. For the uplink channel, we observe that the mean transmissivities predicted by the phase screen simulations with L_0 = ∞ and l_0 = 0 match the mean transmissivities predicted by the generalized channel-modeling technique. Such an observation is reasonable, since [32] indeed assumes L_0 = ∞ and l_0 = 0. A further observation from Fig. 4 is that the mean losses predicted with a finite outer scale and a non-zero inner scale are lower than the mean losses predicted with an infinite outer scale and a zero inner scale. This can be explained mainly by the fact that the presence of a finite outer scale reduces the amount of beam wandering and long-term beam spreading [17]. This observation does not refute the conventional wisdom that the channel loss in the uplink channel is higher than the channel loss in the downlink channel; it does, however, indicate that the disadvantage of an uplink channel may be overestimated in some models. We believe that setting a finite outer scale and a non-zero inner scale (according to the empirical Coulman-Vernin profile) is more relevant than simply setting L_0 = ∞ and l_0 = 0 when studying the atmospheric propagation of light through a satellite-based channel. Therefore, in the rest of this work we will utilize the results from the phase screen simulations that adopted a finite outer scale and a non-zero inner scale.
Fig. 4: Parameters as in Table I, with w_0 = 15cm and r_sat = r_gs = 1m. Recall, a higher T_turb in dB corresponds to higher loss.
D. Ground-to-satellite state transmission
Using our phase screen simulations we model an uplink and a downlink channel with the characteristics presented in Table I. We consider r_sat = r_gs in order to focus our analysis on the turbulence-induced loss. We do note that, in a realistic satellite-communications deployment, the aperture of the ground station is expected to be larger than the satellite's aperture (see later calculations); however, setting the apertures equal in the first instance allows a more direct comparison of the effects of turbulence on the two links. The model returns the probability distribution function (PDF) of the loss for each channel, as seen in Fig. 5. The PDF of the downlink channel is extremely narrow compared to that of the uplink channel, due to the asymmetry of the interaction between the beam and the atmosphere, as explained above. The scintillation index of the LO is computed by simulating the propagation of a strong beam corresponding to the LO; the scintillation index values are several orders of magnitude larger for the uplink relative to the downlink.
Due to the fluctuating nature of the uplink and downlink channels we need to consider ensemble averages when computing the fidelity of the transmitted states [47]. The required analysis can be derived as in the non-fluctuating channel if we define an effective transmissivity T_f and an effective excess noise ε_f in terms of the mean of √T and the fluctuation Var(√T) over the channel ensemble, with the mean values computed as ⟨x⟩ = ∫ x p_ζ(T) dT, where p_ζ(T) is the PDF of T for a given ζ.
Fig. 5: Loss PDFs for the parameters of Table I, with ζ = 0°, w_0 = 15cm, and r_sat = r_gs = 1m.
We present in Fig. 6 the properties of the downlink and uplink channels obtained using the phase screen simulations. Following Eqs. (26) and (30), the value of ε_f is proportional to the variance of the quadratures of the quantum states transmitted through the channel. For this reason we show on the plot the value of ε_f for a fixed V_sig = 1, to give a fair comparison between the two channels, but we emphasize that this parameter will change in our calculations below. We observe that, as expected, losses are higher (i.e. a larger effective transmissivity when stated in dB) for direct transmission. Moreover, the value of ε_f for the direct channel is one order of magnitude greater than the value for the teleportation channel, a direct consequence of the intensity variations of both the quantum signal and the LO. We do not show the results for direct transmission modelled for an uplink with L_0 = ∞ and l_0 = 0, but we find that ε_f ≈ 0.6 for ζ = 0°, meaning such a channel is inadequate for the transmission of quantum states.
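For readers wishing to reproduce this ensemble reduction, the sketch below estimates (T_f, ε_f) from Monte Carlo transmissivity samples. The √T-based formulas are our assumed reconstruction of the definitions above (consistent with the Var(·) dependence noted there), not verbatim from the paper, and the log-normal fading distribution is purely a toy stand-in for the simulated PDFs.

```python
import numpy as np

def effective_params(T_samples, eps, V_sig):
    """Effective transmissivity and excess noise of a fading channel using
    an assumed sqrt(T)-ensemble reduction: fluctuations of sqrt(T) appear
    as extra excess noise proportional to V_sig."""
    s = np.sqrt(T_samples)
    T_f = s.mean()**2
    eps_f = eps + V_sig * s.var() / s.mean()**2
    return T_f, eps_f

rng = np.random.default_rng(2)
# Toy log-normal transmissivity distribution mimicking a fluctuating uplink
T = np.clip(rng.lognormal(mean=np.log(0.01), sigma=0.8, size=100_000), 0, 1)
T_f, eps_f = effective_params(T, eps=0.02, V_sig=1.0)
print(f"T_f = {T_f:.4f} ({-10*np.log10(T_f):.1f} dB), eps_f = {eps_f:.4f}")
```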
Using the values of T_f and ε_f obtained from the numerical simulations, we then compute the fidelity of teleportation and of direct transmission of coherent states. The values of g and V are optimized relative to the loss of the teleportation channel to maximize the mean fidelity. For the loss values anticipated for the teleportation channel, we observe that the optimal value of V is in the range 1 to 1.5, and the optimal value of g is in the range 1 to 1.1. The results, presented in Fig. 7, show that the teleportation channel has a significant advantage over direct transmission. We see that direct transmission is only capable of overcoming the classical limit for a reduced alphabet of σ = 2 and low zenith angles up to 30°. The teleportation channel, on the other hand, exceeds the classical limit for a larger range of alphabets and for a wide range of zenith angles. This shows that one can indeed avoid, to a significant extent, the detrimental effects of the direct uplink channel via a teleportation using an entangled resource distributed through the downlink channel. We note that the values of σ considered here encompass the ranges required to undertake high-throughput CV-QKD [16].
Fig. 6: Ground-to-satellite properties for the direct-transfer channel and for the teleportation channel, shown for V_sig = 1. The parameters of the channels are given in Table I, with w_0 = 15cm and r_sat = r_gs = 1m. For the teleportation channel the entangled resource is distributed via the downlink. The left axis (blue) corresponds to the effective transmissivity; the right axis (red) corresponds to the effective excess noise. Recall, a higher T_f in dB corresponds to higher loss.
Fig. 7: Mean fidelities for the parameters of Table I, with w_0 = 15cm and r_sat = r_gs = 1m. The shaded area in red indicates the region where the fidelity can be achieved by classical communications only. The direct transmissions for σ = 10, 25 result in mean fidelities < 0.35 for all zenith angles.
1) Asymmetric apertures: As stated earlier, in the calculations just described we assumed r_gs = r_sat in order to focus our analysis on the turbulence-induced loss. However, in many satellite deployments it is expected that r_sat < r_gs. In such a case, use of the teleportation channel in the manner we have described would present an even greater advantage over the direct-transmission channel. For example, we find that for a space-communications setting as in Table I, with ζ = 0°, w_0 = 15cm and r_gs = 50cm, the downlink channel incurs a loss of ≈ 11dB; meanwhile, under the same values but with r_sat = 20cm, the uplink channel incurs a higher loss of ≈ 22dB. This means that the fidelity obtained using the teleportation channel in this setting is approximately 0.6, while the fidelity via direct transmission would be well below the classical limit. It is therefore important to emphasize that our detailed calculations most likely represent a lower bound on the actual gain in uplink communications for many future satellite missions.
IV. CV TELEPORTATION WITH NON-GAUSSIAN OPERATIONS
A great deal of recent research has focused on the photonic engineering of highly non-classical, non-Gaussian states of light, aiming to achieve enhanced entanglement and other desirable properties. Indeed, non-Gaussian features are essential for various quantum information tasks, such as entanglement distillation [48]-[55], noiseless linear amplification [56]-[61], and quantum computation [62]-[65]. In entanglement distillation and noiseless linear amplification, non-Gaussian features are a requirement due to the impossibility of distilling (or amplifying) entanglement in a purely Gaussian setting [66]. In universal quantum computation, non-Gaussian features are indispensable if quantum computational advantages are to be obtained [67].
Non-Gaussian operations, which map Gaussian states into non-Gaussian states, are a common approach to delivering non-Gaussian features into a quantum system. At the core of non-Gaussian operations is the application of the annihilation operator A and the creation operator A†. There are two basic types of these operations, namely photon subtraction (PS) and photon addition (PA), which apply A and A† to a state, respectively. Both operations have been shown to enhance the entanglement of TMSV states (e.g., [68]-[70]). Various combinations of PS and PA have also been studied (e.g., [71]-[74]). One specific combination, photon catalysis (PC), is of particular research interest: instead of subtracting or adding photons, PC replaces photons in a state, and is known to significantly enhance the entanglement of TMSV states under certain conditions (e.g., [75], [76]). If TMSV states are in fact shared between a satellite and a ground station, it is natural to ask whether non-Gaussian operations can be adopted at the ground station to further facilitate satellite-based quantum teleportation.
A. Non-Gaussian states and non-Gaussian operations
A simple experimental setup for realizing non-Gaussian operations consists of beam-splitters and photon-number detectors. For example, as depicted in Fig. 8a, an input state interacts with an ancilla Fock state |N⟩ at a beamsplitter with transmissivity T_b. If M photons are detected in the ancilla output, the operation has succeeded. In practice, the probability of success of a non-Gaussian operation is an important parameter to consider. In this regard, single-photon non-Gaussian operations (M, N ∈ {0, 1}) usually have the highest success probability for a given type of non-Gaussian operation [70], making them the best candidates for practical implementation. Therefore, in this work we restrict ourselves to non-Gaussian operations with single-photon ancillae and single-photon detection (i.e., Fig. 8b, c, and d).
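To make the beam-splitter-plus-detector construction concrete, the following numerical sketch builds the effective Kraus operator ⟨M|U_BS|N⟩ in a truncated Fock space and applies it to mode B of a TMSV state. This is an illustrative recipe under stated truncation and beam-splitter-convention assumptions, not the authors' code; the closed forms for O_PS, O_PA, and O_PC are given in the references cited below.

```python
import numpy as np
from scipy.linalg import expm

N_FOCK, N_ANC = 20, 3           # truncations (assumptions; enlarge to check)

def annihilation(n):
    return np.diag(np.sqrt(np.arange(1, n)), 1)

def kraus_bs(T_b, n_in, n_det):
    """Effective operator on the signal mode: beam-splitter of transmissivity
    T_b with ancilla input |n_in> and detection of n_det ancilla photons,
    i.e. <n_det| U_BS |n_in>. Truncation introduces small errors near the
    Fock cutoff."""
    A = np.kron(annihilation(N_FOCK), np.eye(N_ANC))
    B = np.kron(np.eye(N_FOCK), annihilation(N_ANC))
    theta = np.arccos(np.sqrt(T_b))        # convention: cos(theta) = sqrt(T_b)
    U = expm(theta * (A.conj().T @ B - A @ B.conj().T))
    U = U.reshape(N_FOCK, N_ANC, N_FOCK, N_ANC)
    return U[:, n_det, :, n_in]

# TMSV Schmidt decomposition: |TMSV> = sum_n c_n |n,n>, c_n = tanh(r)^n/cosh(r)
r, T_b = 0.5, 0.9
c = np.tanh(r) ** np.arange(N_FOCK) / np.cosh(r)

# Photon subtraction on mode B: vacuum ancilla in, one photon detected
O = kraus_bs(T_b, n_in=0, n_det=1)
psi = np.diag(c) @ O.T                     # amplitudes psi[n, m] = c_n O[m, n]
print("PS success probability:", np.sum(np.abs(psi)**2))
```

Setting n_in=1, n_det=0 gives photon addition, and n_in=1, n_det=1 gives photon catalysis, under the same assumptions.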
In the Schrödinger picture, the transformation of the non-Gaussian operations described above can be represented by an operator built from the beam-splitter operator acting on the incoming mode and the ancilla [77], where : · : denotes simple ordering (i.e., normal ordering of the creation operators to the left without taking into account the commutation relations), and A and B are the annihilation operators of the incoming state and the ancilla, respectively. Using the coherent-state representation of the Fock state, compact forms can be obtained for the operators for PS (N = 0, M = 1), PA (N = 1, M = 0), and PC (N = 1, M = 1) [78]. Suppose a non-Gaussian operation O ∈ {O_PA, O_PS, O_PC} is to be performed on a state with density operator Ξ_in. The resultant state after the operation can be written as Ξ_out = O Ξ_in O† / N, where N = Tr{O Ξ_in O†} is a normalization constant, which is also the probability of success of the non-Gaussian operation.
Fig. 9: CV teleportation with non-Gaussian operations performed at the ground station.
B. CV teleportation protocol with non-Gaussian operations
In this section, we study the use of non-Gaussian operations in the CV quantum teleportation protocol proposed in [19]. The deployment of the protocol over satellite channels has been discussed in previous sections, so here we describe only our modification. Our modified protocol is illustrated in Fig. 9, where we assume the satellite and the ground station already share TMSV states that have been distributed over the noisy channel. Before teleportation begins, the ground station performs non-Gaussian operations on the local mode stored at the station. The resultant non-Gaussian states shared between the satellite and the ground station are used as the entangled resource for teleportation.
Similar to before, we will use the fidelity given by Eq. (8) as the metric to evaluate the effectiveness of our modified CV teleportation protocol. To determine the fidelity we need to derive the CFs of the non-Gaussian states. We begin the derivation with the CF of the entangled state Ξ shared between the ground station and the satellite, which can be written in terms of the channel excess noise ε, the channel transmissivity T, and χ_TMSV(ξ_A, ξ_B), the CF of the initial TMSV state prepared by the satellite (given by Eq. (4)). Performing PS on mode B of Ξ yields an unnormalized CF for the resultant state, in which ξ_B and ξ*_B are treated as independent variables. For PA and PC, the CFs of the states after the non-Gaussian operations are obtained in a similar fashion, with the expression for PC being the most involved. Additionally, we also investigate the sequential use of PS and PA, assuming the two non-Gaussian operations adopt the same beam-splitter transmissivity; unnormalized CFs are obtained both for the scenario of PS followed by PA (PS-PA) and for PA followed by PS (PA-PS). The normalized CFs after the non-Gaussian operations then follow upon dividing by the corresponding success probabilities.
C. Results
We study the teleportation of coherent states using non-Gaussian entangled resource states, the CF of which is chosen from Eq. (45) depending on which non-Gaussian operation is performed on the mode at the ground station. We use the mean fidelity F̄ given by Eq. (14) as our performance metric, and adopt the effective channel loss and effective excess noise obtained from the phase screen simulations (see Fig. 6).
In Fig. 10 we compare the maximized F̄ offered by the various non-Gaussian operations against the effective channel loss T_f [dB]. At each effective channel loss level, the maximization of F̄ is performed over the parameter space consisting of the transmissivity T_b of the beam-splitter in the non-Gaussian operations and the gain parameter g of the teleportation protocol. For comparison, the case without any non-Gaussian operation is also included (the black curve in the figure). In each sub-figure, r is the squeezing parameter of the TMSV state generated by the satellite and σ is the variance of the distribution of the displacement of the input coherent state (defined by Eq. (13)). For r, the conversion from the linear domain to the dB domain is given by r [dB] ≈ 8.67r. We see from Fig. 10 that, among the five non-Gaussian operations we have considered, only PA-PS provides an enhancement in F̄, and PA always provides a larger F̄ than PS. When r is 5 dB, PA-PS provides the largest F̄ over the entire range of effective channel loss we have considered.
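The optimization over {T_b, g} described here is a low-dimensional search and can be done by brute force. The sketch below is generic: it accepts any callable implementing the mean fidelity of Eqs. (14) and (45), which we leave abstract, and the grid ranges are illustrative assumptions.

```python
import numpy as np

def maximize_mean_fidelity(mean_fid, Tb_grid=None, g_grid=None):
    """Brute-force maximization of a user-supplied mean-fidelity function
    over the beam-splitter transmissivity T_b and the teleportation gain g,
    mirroring the per-loss-level optimization described in the text."""
    Tb_grid = np.linspace(0.50, 0.99, 50) if Tb_grid is None else Tb_grid
    g_grid = np.linspace(0.80, 1.20, 41) if g_grid is None else g_grid
    best = (-np.inf, None, None)
    for Tb in Tb_grid:
        for g in g_grid:
            F = mean_fid(Tb, g)
            if F > best[0]:
                best = (F, Tb, g)
    return best                # (F_max, Tb_opt, g_opt)

# Usage with any callable mean_fid(T_b, g):
# F_max, Tb_opt, g_opt = maximize_mean_fidelity(my_mean_fidelity)
```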
We next compare the teleportation scheme using the non-Gaussian operation that provides the most improvement, namely PA-PS, against the direct-transmission scheme, whose mean fidelity is given by Eqs. (12) and (14). The results are illustrated in Fig. 11, where we compare the maximized F̄ against r and σ for different satellite zenith angles ζ; again the maximization of F̄ is performed over the parameter space {T_b, g}. We see that, in comparison to the original teleportation scheme (i.e. the TMSV case), the scheme with PA-PS achieves the highest F̄ over the entire range of σ we have considered. PA-PS can also reduce the requirement on the squeezing r of the TMSV state prepared by the satellite in order to reach a given level of fidelity. We also notice that, when σ is fixed, the F̄ provided by the original teleportation scheme decreases once r exceeds a certain value; the same trend is observed for the PA-PS scheme.
In summary, we have shown in this section how non-Gaussian operations at the ground station can enhance the fidelity of teleporting coherent states by up to 10%. In addition, using such non-Gaussian operations, we have shown how the demand on the squeezing of the TMSV state prepared by the satellite can be reduced.
V. DISCUSSION
The focus of the present work is the use of CV teleportation channels for the teleportation of coherent states, and the use of non-Gaussian operations to enhance the communication outcomes. However, it is worth briefly discussing the flexibility of our system with regard to the transfer of other quantum states in the uplink and the use of additional quantum operations. It is also worth discussing the differences and advantages of our system relative to DV-only systems; after all, the only currently deployed quantum satellite system is one based solely on DV states [3].
A. Other Quantum States and Operations
Our scheme is actually applicable to any type of quantum state, even DV-based systems. Some DV systems, e.g. polarization-based ones, may need to be transformed first into the number basis. In number-basis qubit encoding, vacuum contributions enter directly, similar to what we have discussed earlier. In such schemes, the TMSV entangled teleportation channel (a CV channel) can be utilized as the resource to teleport the DV qubit state [80], and so our proposed scheme operates directly. Our scheme also operates directly on more complex quantum states, such as hybrid DV-CV entangled states, even on both components of such states [81]. This flexibility of CV entanglement channels over DV entanglement channels is another advantage offered by our scheme.
We also note that the non-Gaussian operations we have considered in this work represent a form of CV entanglement distillation [82]. There are, of course, many other forms of CV entanglement distillation we could have considered at the ground receiver (or on board the satellite); we have only investigated the simplest-to-deploy quantum operations. As technology matures (e.g. with the advent of quantum memory), more sophisticated quantum operations (and entangled resources) will become viable as a means of further enhancing teleported uplink quantum communications, most likely outcompeting any advances in uplink-tracking technology that could assist direct communication. In principle, the teleportation fidelity could approach one.
B. DV Polarization -Micius
We now discuss known results from the LEO Micius satellite in the context of teleportation of DV-polarization states from the ground to the satellite [3]. Different from our system model, the teleportation experiment reported in [3] does not use the downlink to create the entanglement, but rather utilizes the uplink as a means of distributing the entanglement; the advantage of using the superior downlink channel is therefore not afforded to that experiment. With the apertures used in [3] - a 6.5cm-radius transmitter and a 15cm-radius receiver telescope - a turbulence-induced loss of 30dB is obtained at a 500km altitude, the zenith distance of Micius. This translates into a beam width of 10m at the receiver plane (a 30m beam width and 40dB of loss at 1400km is also reported). Nonetheless, the experiment still clearly demonstrates a fidelity of 0.8 for the teleportation of single qubits encoded in the polarization of single photons (well above the classical fidelity limit of 2/3 for a qubit), proving the viability of teleportation over the large distances tested.
In the context of the main idea presented in this work, use of the downlink channel (to create the entanglement channel) in an experimental setup similar to [3] would be beneficial mostly in the context of an increased rate of teleportation, rather than an increase in fidelity. Our phase-screen simulations suggest that reversing the aperture sizes for a fair comparison (that is, a 6.5cm-radius transmitter at the satellite and a 15cm-radius receiving aperture) would result in a turbulence-induced loss of 25dB, which would lead to a factor of ∼2-4 enhancement in the teleportation rate relative to direct transmission. Of course, if we increase the ground receiver aperture, larger enhancements could be found. The fact that it is much easier to deploy large telescopes on the ground, compared to in space, is another advantage of our teleportation scheme.
Let us briefly outline the main differences of DV-polarization teleportation relative to CV teleportation. In DV-polarization implementations the vacuum contribution does not enter the teleportation channel in the same manner as it does in a CV entangled channel. In the DV-polarization channel, loss enters our calculations primarily via two avenues. One avenue is simply through the different raw detection rates set by the differential evolution of the beam profiles in the downlink and uplink. As discussed, in the downlink the beam width at the receiver will be smaller than in the uplink; for a given receiver aperture this translates into an increased detection rate in and of itself. We can use the phase screen calculations described earlier (e.g. Fig. 4, for equal transmit and receive apertures of 1m) to determine this rate increase. The second avenue is a manifestation of the vacuum through dark counts in the photodetectors. In real-world deployments of teleportation through long free-space channels [3], [83], [84], a coincidence counter is used to pair up entangled photons, typically with a time-bin width of 3ns [3]. Due to the presence of vacuum in almost all time-bins, only of order 1 in a million events is triggered as a photon-entangled pair. Dark counts in the best photodetectors are currently in the range of 20Hz; in orbit, however, because of stray light, combined background counts are more likely to be of order 150Hz [3]. A background count in a time-bin will lead to the false identification of an entangled pair generated between the satellite and ground station. This is different from the CV scenario, where each time-bin is assumed to contain a pulse, albeit one contaminated with a vacuum contribution.
Another major difference between DV and CV teleportation systems is the contamination caused by higher-order terms in the production of the (single) photons that are to be teleported in DV systems. The optimal probability of single-photon emission (set by the user) decreases with increasing loss [85], because a lower probability reduces the number of double-pair emissions that lead to flawed Bell measurements. This effect is counteracted by the strength of the source that emits the two-photon entangled pairs (also set by the user), the optimal value of which increases with increasing loss. These two parameters can be jointly optimized for the anticipated loss, leading to asymmetric parameter settings for the downlink and uplink teleportation deployments [85]. An additional issue relevant to DV-polarization teleportation is partial photon distinguishability at the Bell-state measurement, which leads to a drop in interference at the beam splitter, and, of course, polarization errors (in production or measurement).
The relative importance of all the above effects for free-space teleportation from ground to satellite is estimated as background counts (4%), higher-order photon emission (6%), polarization errors (3%), and photon indistinguishability (10%) [3]. In a series of experiments over 100 km [84], 143 km [84], and ground-to-satellite [3], teleportation fidelities in the range 0.8-0.9 were obtained in all cases.
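As a naive consistency check (treating the quoted contributions as simply additive, which is our simplifying assumption rather than the analysis of [3]), summing the error terms yields a fidelity estimate near the lower end of the reported range:

```python
# Naive linear error budget (our simplifying assumption): subtract the
# quoted error contributions from unity and compare with the reported
# fidelity range of 0.8-0.9. The contributions are not strictly additive,
# so this slightly over-counts.

errors = {
    "background counts": 0.04,
    "higher-order photon emission": 0.06,
    "polarization errors": 0.03,
    "photon indistinguishability": 0.10,
}
fidelity_estimate = 1.0 - sum(errors.values())
print(f"naive fidelity estimate ~ {fidelity_estimate:.2f}")  # ~0.77
```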
Another issue when comparing DV with CV teleportation is the classical teleportation fidelity of the two systems, that is, the fidelity that can be achieved by purely classical information being communicated across the channel (e.g., the classical information representing the outcome of a particular quantum measurement). This classical information allows the receiver to partially reconstruct the desired quantum state. For the coherent state teleportation discussed earlier this classical fidelity is 1/2, whereas for DV qubits it is 2/3. This translates into a narrower useful range of teleportation fidelity for the DV scenario relative to the CV scenario. Finally, it is worth noting that the Bell state measurements currently used in DV systems are only 50% efficient, a consequence of the fact that Bell state measurements based on linear optics can only discriminate between two of the four Bell states. Although, in principle, full Bell state measurements in the DV basis are possible (e.g., via ancillae and two-qubit interactions), no real-world implementation of the latter exists; all current deployments utilise a linear-optics-only solution [3], [83], [84].
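For reference, the two classical benchmarks quoted above can be stated compactly (both are standard measure-and-prepare limits):

```latex
% Classical (measure-and-prepare) teleportation benchmarks.
F_{\mathrm{class}}^{\mathrm{CV}} = \tfrac{1}{2} \;\; \text{(coherent states, unit gain)},
\qquad
F_{\mathrm{class}}^{\mathrm{DV}} = \tfrac{2}{3} \;\; \text{(arbitrary qubit states)}.
```

Any experimental fidelity must exceed the relevant benchmark before the teleportation can be deemed genuinely quantum, so the higher DV benchmark leaves less headroom.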
C. Future work
We recognise that other input states may lead to an enhanced fidelity in both the direct uplink transmission channel and via the resource CV teleportation channel. It is likely that in these circumstances we would again find some channel parameter settings where teleportation leads to better communication outcomes. However, coherent states and TMSV states are easy to produce and are considered the "workhorses" of CV quantum communications, and are therefore the focus of this work. We also recognise that more sophisticated set-ups could be considered, such as the use of classical feedback on channel conditions to optimise the parameters of the input states (e.g., squeezing levels and amplitudes). However, such improvements come at the cost of a considerable increase in implementation complexity. Again, it is likely that in these circumstances some channel parameter settings will provide communication gains via teleportation relative to direct transfer, and future investigations that properly identify such channel settings would be useful. Our study has also been limited in terms of the aperture settings we have adopted: we have used settings we consider likely to be deployable in next-generation systems that take space-based quantum communication to the production phase. Further study of possible teleportation gains for a wider range of aperture settings would also be useful.
VI. CONCLUSIONS
In this work, we have investigated the use of a CV teleportation channel, created between a LEO satellite and a terrestrial ground station, as a means to enhance quantum communication in uplink satellite communications. Such communications are expected to be very difficult in practice due to the severe turbulence-induced losses anticipated for uplink satellite channels. Our CV teleportation channel was modelled using the superior (lower loss) downlink channel from the satellite as a means to distribute one mode of an in situ satellite TMSV state to the terrestrial station, a form of long-range entanglement distribution that may become mainstream in coming years. Our results showed that use of this teleportation channel for uplink coherent state transfer is likely to be much superior to coherent state transfer directly through the uplink channel. The use of non-Gaussian operations at the ground station was shown to further enhance this superiority. Given the flexibility of CV teleportation as a means to invoke all forms of quantum state transfer beyond just coherent state transfer, the scheme introduced here may well become the de facto choice for all future uplink quantum communication with satellites.
"year": 2021,
"sha1": "a3877caa81aeb2fca69ba7086ebe38e269ae4d87",
"oa_license": "CCBYNCND",
"oa_url": "https://ieeexplore.ieee.org/ielx7/8924785/8961200/09463774.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "a3877caa81aeb2fca69ba7086ebe38e269ae4d87",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.