Speech Spectral Transfer Function
The article discusses the problem of estimating the psychophysiological state of aircraft pilots from their speech. For this purpose, a new concept of an individual speaker's transfer function is proposed. The definition is based on classical results of automatic control theory. The article presents algorithms for calculating the transfer function and examples of using this feature for medical purposes.
Statement of the problem
Today, man-machine interfaces based on various physical principles are being intensively developed in aviation [1]. For example, the main trends in the development of audio interfaces are 3D-audio (the surround sound effect) and automatic speech recognition [2], which are used for controlling onboard systems [3, 4].
It is known that an operator's speech characteristics depend on outer conditions and on his own psychophysical state. Estimating these changes is a key task for improving man-machine interfaces. A number of articles have investigated this problem. For example, the impact of aircraft overload on an operator's speech characteristics is presented in [3]. The connection between the degree of an operator's fatigue and the parameters of his speech is described in [4]; this relation is based on A. M. Lyapunov's theory of stability [5, 6]. The influence of acoustic noise on the speech recognition performance score is analysed in [7, 8]. A speech recognition algorithm resistant to noise correlated with the voice signal is proposed in [8]. The paper [9] discusses the speech characteristics of pilots diagnosed with hearing loss; this problem is particularly relevant to helicopter pilots. All the above-mentioned studies were carried out in order to improve speech recognition algorithms. On the other hand, speech characteristics may also be used to estimate an operator's psychophysical state and the influence of various factors on it.
Evidently, the principal changes in the speech signal occur in the frequency and time domains [2, 3, 4, 8]. Analysing these changes in absolute values raises a problem of representing the information because, for example, the power of the speech signal in different frequency ranges varies by tens of decibels [2, 7]. Therefore, changes in speech characteristics should be analysed not in absolute but in relative terms. This means that the speech signal should be compared with a relevant one, for example, with the speech of another operator considered as a reference, or standard.
To form a reference, it is possible to use the mean value for a group of speakers. Another way is to choose a speaker whose pronunciation is very close to the norms of the literary language. In this paper, the concept of the speech transfer function is proposed in order to analyze changes in an operator's speech characteristics.
The algorithm for calculating the transfer function of the operator
Let us introduce the speech transfer function of the operator, which is similar to the transfer function W(p) well known in the theory of automatic control [6]. This function is defined as the ratio of the Laplace transforms of the output and input signals under zero initial conditions [6].
We arrive at a transfer function W(f), which depends on the frequency f, Hz, by applying to the Laplace variable p [6] the substitution p = j2πf, where j is the imaginary unit, that is, j² = −1.
For an arbitrary value of the argument f, the function W(f) is a complex number containing information about the change of the signal in amplitude and phase at the frequency f. As is known, an operator perceives the amplitude of a speech signal, while the phase information is ignored. So we consider only the absolute value $|W(f)|$; let us call it the speech transfer function of the operator in the frequency domain. According to the Wiener–Khinchin theorem [6], the following relation holds for the absolute value of the speech transfer function:

$$|W(f)| = \sqrt{s_y(f)/s_x(f)}, \qquad (1)$$

where $s_y(f)$, $s_x(f)$ are the spectral densities of the output and input signals. To form a practical algorithm for calculating the estimates (1), let us use a parametrization algorithm widely known in automatic speech recognition [2, 8]. According to this algorithm, the speech fragment, for example a recorded word, is divided into $N_t$ time intervals (frames). The duration of each frame is 20 ... 40 ms. Then the Hann spectral window and the fast Fourier transform are applied to each interval [2].
After that, the absolute value of the Fourier transform is calculated for each frame. Then the whole frequency range, limited by the Nyquist frequency [6], is divided into a predetermined number of bands $N_f$ = 20 ... 40, and the average absolute value (the square root of the spectral density estimate) is calculated for each band, as required by formula (1). As a result, we obtain the matrix of the word's parametric portrait:

$$X = \{x_{ij}\}; \quad i = 1, \dots, N_f,\ j = 1, \dots, N_t, \qquad (2)$$

whose dimension is $N_f \times N_t$; each j-th column describes the spectral content of the speech signal for the j-th frame; each i-th row describes the change in time of the mean absolute value of the signal components belonging to the i-th frequency band.
It is known that the error of the estimates calculated by formula (1) is considerable [2, 6]. To improve the accuracy of estimation we increase the amount of data: we take M = 10 ... 30 realizations of each word and determine the matrix of the mean parametric portrait

$$\bar{X} = \frac{1}{M}\sum_{m=1}^{M} X_m, \qquad (3)$$

where $\bar{X}$ is a matrix of dimension $N_f \times N_t$, the operator's mean parametric portrait. The above-mentioned parametrization algorithm is a standard procedure of speech recognition theory [2, 8]. When analyzing the general change in the spectral properties of speech, quantification in the time domain is not necessary, because we do not need to take into account the features of individual sounds or syllables. So let us average over all time intervals, i.e., over the elements of the rows of the matrix $\bar{X}$, and obtain the vector $a$ of the mean amplitudes of the frequency components belonging to the i-th frequency band:

$$a_i = \frac{1}{N_t}\sum_{j=1}^{N_t} \bar{x}_{ij}, \quad i = 1, \dots, N_f, \qquad (4)$$

where $\bar{x}_{ij}$ are the elements of the mean parametric portrait matrix (3).
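As an illustration, here is a minimal Python/NumPy sketch of the parametrization just described (not from the paper; the function names, the uniform band layout, and the non-overlapping framing are illustrative assumptions):

```python
import numpy as np

def parametric_portrait(signal, fs, n_bands=30, frame_ms=30):
    """Build the word's parametric portrait X (formula (2)):
    frame the signal, apply a Hann window and FFT, and average
    spectral magnitudes within n_bands uniform frequency bands."""
    frame_len = int(fs * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    window = np.hanning(frame_len)
    bins = np.fft.rfftfreq(frame_len, d=1.0 / fs)   # 0 .. Nyquist
    edges = np.linspace(0.0, fs / 2.0, n_bands + 1)
    X = np.empty((n_bands, n_frames))
    for j in range(n_frames):
        frame = signal[j * frame_len:(j + 1) * frame_len] * window
        mag = np.abs(np.fft.rfft(frame))
        for i in range(n_bands):
            sel = (bins >= edges[i]) & (bins < edges[i + 1])
            X[i, j] = mag[sel].mean()   # mean absolute value in band i
    return X

def mean_amplitudes(portraits):
    """Average M portraits of the same word (formula (3)) and then
    average over time frames to obtain the vector a (formula (4))."""
    X_mean = np.mean(portraits, axis=0)   # mean parametric portrait
    return X_mean.mean(axis=1)            # a_i, one value per band
```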
In order to find estimates of the transfer function between two speakers, it is necessary to calculate the matrices $\bar{X}_1$ and $\bar{X}_2$ and the vectors $a_1$ and $a_2$ for each frequency band from each speaker's speech data. Then formula (1) is applied.
This yields the estimate

$$|W(f_i)| = a_{2i}/a_{1i}, \qquad (5)$$

where $f_i$ is the frequency corresponding to the middle of the i-th frequency band; $a_{1i}$, $a_{2i}$ are the elements of the vectors $a_1$ and $a_2$ corresponding to the i-th frequency band; $\bar{x}_{1ij}$, $\bar{x}_{2ij}$ are the elements of the matrices $\bar{X}_1$ and $\bar{X}_2$ of the mean parametric portraits. In (5) the indices 1 and 2 denote the first and second speakers, respectively.
The transfer function between different states of the same speaker is determined in the same way.
The values of the transfer function (5) are expressed in dB as follows [6]:

$$|W(f_i)|_{dB} = 20\,\lg |W(f_i)| = 20\,\lg \frac{a_{2i}}{a_{1i}}. \qquad (6)$$

The estimation results are improved if the word is preliminarily divided into a few parts in accordance with the algorithm of [10].
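Under the same assumptions as the sketch above, the transfer function estimate (5)-(6) between a reference and a test recording reduces to an element-wise ratio of the two amplitude vectors:

```python
import numpy as np

def transfer_function_db(a_ref, a_test):
    """|W(f_i)| in dB between two speakers, or between two states
    of the same speaker, per formulas (5) and (6)."""
    return 20.0 * np.log10(np.asarray(a_test) / np.asarray(a_ref))

# e.g., with portraits of M recordings of the same word per speaker:
# a1 = mean_amplitudes(portraits_reference)
# a2 = mean_amplitudes(portraits_test)
# w_db = transfer_function_db(a1, a2)
```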
Speakers in noisy conditions
Let us consider the application of speech transfer functions to studying the effects of noise on a speaker's speech. For this purpose the following experiment was carried out. Noise recorded in the cockpit during flight was fed only into the speaker's headphones, so as not to interfere with the recording of the speaker's words. The sampling frequency was 22 kHz. During the experiment, audio data were recorded under three conditions: without noise, and with 80 dB and 90 dB noise in the headphones. Figure 1 shows plots of the speech transfer function for three speakers in noisy conditions (80 and 90 dB in the headphones). The transfer functions were calculated for the same speaker, with the record without noise taken as the reference.
The analysis of the results shows that 80 dB noise causes a significant increase in amplitude over the entire frequency range, with the largest rise in the range 1 ... 4 kHz, followed by a decline at high frequencies, 4 ... 11 kHz. The maximum increase in amplitude is: for the first speaker, 9 ... 10 dB with a decline to 3 ... 4 dB at high frequencies (figure 1a); for the second, 5.0 ... 6.5 dB with a decline to 3.5 ... 4.5 dB (figure 1b); for the third, 4.5 ... 5.5 dB with a decline to 2.5 ... 3.5 dB (figure 1c). Increasing the noise from 80 to 90 dB raises the volume of speech by a further 1 ... 2 dB for all speakers. Thus, the proposed function makes it possible to identify both common and individual changes in the speech of speakers in a noisy environment.
Speech transfer function in medical applications
Let us consider the use of the proposed function for investigating the speech of helicopter pilots diagnosed with hearing loss. A speaker without diseases of hearing and speech was chosen as the reference. Figure 2 shows plots of the speech transfer function for three speakers with a diagnosis of hearing loss with respect to the speaker without diseases of hearing and speech. The plots show transfer function estimates calculated from records of the Russian words "пилотаж", "масштаб", "навигация" (corresponding to the English words "pilotage", "scale", "navigation") and the mean ("среднее") over them.
The plots have obvious individual characteristics, but their common feature is a wide variation of values, approximately ±6 dB (figures 2a, b) and ±20 dB (figure 2c).
The final experiment discussed in this paper is related to means for correcting the teeth. The speaker's speech data recorded before installing the corrective means was chosen as the reference, and the speech transfer function was calculated before and after their installation. The results are presented in figure 3, which shows significant changes in speech at frequencies above 6 kHz.
Conclusion
In this paper we introduced the concept of the speaker's speech transfer function in the frequency domain in order to analyze changes in speech characteristics under the influence of outer conditions and the psychophysical state of the speaker. The paper proposes an algorithm for calculating estimates of the speech transfer function from experimental data. Some applications of the proposed function are also presented.
Fig. 1. Plots of the speech transfer function for three speakers (a-c) with noise in the headphones of 80 dB (1) and 90 dB (2).
Fig. 2. Plots of the speech transfer function for three speakers (a-c) with a diagnosis of hearing loss, with respect to the speaker without diagnosed diseases of hearing and speech.
Fig. 3. Plot of the speech transfer function for the same speaker before and after the installation of means for correcting the teeth.
"Computer Science"
] |
A regression system for estimation of errors introduced by confocal imaging into gene expression data in situ
Background: Accuracy of the data extracted from two-dimensional confocal images is limited due to experimental errors that arise in the course of confocal scanning. The common way to reduce the noise in images is sequential scanning of the same specimen several times with subsequent averaging of the multiple frames. Attempts to increase the dynamic range of an image by setting too high values of the microscope PMT parameters may cause clipping of single frames and introduce errors into the data extracted from the averaged images. For the estimation and correction of this kind of error, a method based on the censoring technique (Myasnikova et al., 2009) is used. However, the method requires the availability of all the confocal scans along with the averaged image, which is normally not provided by the standard scanning procedure.
Results: To predict the error size in the data extracted from the averaged image we developed a regression system. The system is trained on a learning sample composed of images obtained from three different microscopes at different combinations of PMT parameters; for each image all the scans are saved. The system demonstrates high prediction accuracy and was applied to correct errors in the data on segmentation gene expression in Drosophila blastoderm stored in the FlyEx database (http://urchin.spbcas.ru/flyex/, http://flyex.uchicago.edu/flyex/). The prediction method is realized as a software tool, CorrectPattern, freely available at http://urchin.spbcas.ru/asp/2011/emm/.
Conclusions: We created a regression system and software to predict the magnitude of errors in the data obtained from a confocal image based on information about the microscope parameters used for the image acquisition. An important advantage of the developed prediction system is the possibility to accurately correct the errors in data obtained from strongly clipped images, thereby making it possible to obtain images of a higher dynamic range and thus to extract more detailed quantitative information from them.
Background
Confocal scanning microscopy is a commonly used method for the acquisition of high-quality digital two- and three-dimensional images of molecular biological objects. The high quality of confocal images makes it possible to extract quantitative data at single-cell resolution, the availability of which is a necessary prerequisite for successful systems biology studies. However, the data accuracy is limited due to errors that arise in the course of confocal scanning. In our recent papers [1,2] we analyzed the sources of errors introduced by two-dimensional confocal imaging into data on gene expression in situ and described algorithms for the estimation and correction of these errors. For example, confocal images are inevitably contaminated by photon shot noise [3], and a common way to reduce the noise is the averaging of multiple separate scans. However, information about the averaged image will be lost if pixels with high and/or low intensities are clipped in single scans. Image clipping is a form of signal distortion related to the limited grayscale range of an image. Pixel values that exceed the upper threshold of the grayscale range (e.g., 255 for an 8-bit format) are cut off at the threshold value, and all pixels with negative intensities are set to zero. Such pixels are referred to as over- and under-saturated, respectively. Averaging of clipped scans results in errors in the data extracted from the averaged image. In our previous work we developed a method [1] based on the censoring technique for the estimation and correction of this kind of error; however, its implementation requires not only the averaged image but also all the confocal scans, which are not provided by the standard procedure of image acquisition.
The degree of image distortion, and hence the size of the data error caused by clipping, depends on the microscope parameters, above all on the values of gain and offset of the photomultiplier tube (PMT), the detection device that measures photons. These parameters are adjusted to control the dynamic range of an image: the PMT gain (voltage) exponentially amplifies a weak signal, while the offset defines the background level of intensities subtracted from the image to increase its brightness. Although the PMT parameters are chosen to ensure that pixels in the averaged image take values inside the grayscale range and do not look clipped, some of the pixels in single scans may be saturated due to photon noise and clipped off. The adjustment of PMT gain affects the signal-to-noise ratio (SNR) in the image, amplifying the noise level exponentially. The severity of clipping increases with the gain and offset values, the distortions being largest when the photomultiplier is adjusted to the limits of its sensitivity. Besides the PMT adjustment, the SNR can be improved by increasing the laser power; however, this approach leads to fluorophore saturation and photobleaching. In practice the laser power is kept at a constant high level and the amount of light admitted to the specimen is reduced through AOTF control, which does not amplify the noise. We have conducted experiments to estimate to what extent other microscope parameters, besides the PMT gain and offset, influence the size of the data error.
In the present work we introduce a regression system for prediction of error magnitude in the data extracted from the averaged image. The learning samples are composed of images obtained at different combinations of gain and offset values of three different microscopes. The experiments were designed in a way that for each learning image all the scans were saved as separate image files. The linear regression model involves the values of gain and offset as independent variables while the error value estimated for the given mean intensity level is a dependent variable.
Obviously the magnitude of error may vary among data obtained with different microscopes and under different experimental conditions, and thus application of the prediction system requires a representative learning sample obtained with the same confocal system and the same scanning setup as the image subject to error correction. To apply the developed regression system to predicting errors in new data, we standardize all our training data obtained with the three microscopes, combine them into one sample, and train the system on the combined sample. The error prediction system was applied to correct errors in the data on expression of segmentation genes in Drosophila that are stored in the FlyEx database (http://urchin.spbcas.ru/flyex/). These data are widely used in research labs. Our aim was to corroborate the high precision of the data that was used for construction of the integrated atlas of segmentation gene expression.
The proposed method has important applications. Usually it is recommended to adjust the microscope parameters so as to almost completely avoid pixel saturation in single frames; this approach limits the brightness and contrast of averaged images. The newly developed system provides an opportunity to obtain images with a higher dynamic range and thereby to extract more detailed quantitative information from microscope experiments.
Estimation of between-scan noise
The photon shot noise is an inevitable consequence of the basic properties of confocal microscopy. Among the main advantages of this imaging technology over conventional optical microscopy is the presence of a confocal pinhole, which lets only light from the focal plane reach the detector. The pinhole removes "out of focus" light from the image, thereby decreasing the number of photons reaching the detectors. The photon noise arises from the discrete nature and small number of detected photons and, in a properly aligned microscope, is the major source of errors [3]. This noise is signal dependent and follows the Poisson distribution.
The noise level in the averaged image may be characterized by the between-scan variance, defined for each pixel in the image as the variance of the values of that pixel across all the scans. To illustrate how the between-scan variance depends on the PMT parameters, the mean variances are plotted in Figure 1 against the mean pixel values for different combinations of gain and offset. Although the offset adjustment does not directly affect the PMT noise, subtraction of the background from an image decreases the mean intensities while leaving the noise unchanged, thereby decreasing the signal-to-noise ratio. For example, the noise in the image obtained at gain 1000V and offset -4% is noticeably higher than in the image from the same microscope obtained at the same gain and zero offset. As predicted by optical theory, the noise increases exponentially with gain and linearly with offset. It is clearly seen from the figure that, in accordance with the properties of the Poisson distribution, the variance depends linearly on the mean pixel value at low and intermediate intensities, while at high intensities the variance values fall dramatically as a result of image clipping.
The degree of distortion of the averaged image due to clipping is characterized by the fraction of clipped pixels as a function of the pixel intensity. For each pixel the fraction is computed as the number of scans in which this specific pixel is clipped, divided by the total number of scans. Obviously, for pixels with the same intensity in two different images the fraction will be higher in the image with higher noise. The fall of the between-scan variances at high intensities is explained by the fact that the fraction of clipped pixels approaches 1, which results in saturation of the pixel value in the averaged image.
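The following Python sketch (an illustration, not the authors' code) computes both quantities from a stack of single scans:

```python
import numpy as np

def between_scan_stats(scans, upper=255):
    """scans: array of shape (n_scans, H, W), single frames of the
    same specimen. Returns the per-pixel between-scan variance and
    the per-pixel fraction of scans clipped at the grayscale limits."""
    scans = np.asarray(scans, dtype=float)
    variance = scans.var(axis=0)
    clipped = (scans >= upper) | (scans <= 0)
    fraction = clipped.mean(axis=0)   # clipped scans / total scans
    return variance, fraction
```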
Estimation of errors due to image clipping
Quantitative data are read off from the averaged image. The quantification procedure includes the detection of object (nucleus or cell) borders and the subsequent averaging of the values of all the pixels assigned to an object. As a result the data are represented by the mean intensity and coordinates of each object in the image.
The error due to image clipping arises in data extracted from confocal images in the event that these images are obtained by means of averaging the clipped single scans. In the presence of all the scans the error magnitude can be estimated using the method based on the censoring technique [1]. Data errors due to clipping are defined as the absolute difference between the true (unknown) value of the mean intensity and the mean intensity corrupted by clipping that is obtained from the observed averaged image.
To estimate data errors we first introduce the pixel error as the value of the distortion of a pixel's value in the averaged image caused by clipping. Due to clipping at the upper grayscale threshold (over-saturation), the pixel intensity is reduced by the value

$$U = \int_{c_a}^{\infty} (x - c_a)\, f_a(x)\, dx, \qquad (1)$$

where $c_a$ is the upper threshold and $f_a(x) = \frac{1}{\sqrt{2\pi}\, s} \exp\left[-\frac{(x - \mu)^2}{2s^2}\right]$ is the Gaussian distribution density. The parameters μ and s are estimated for each pixel by the method of moments as described in [1]. After that the quantities (1) are averaged over all pixels with equal intensities. Thus for any intensity k from the grayscale range [0..c_a] the averaged error, $U_k$, is defined. We will call this type of error the upper error. The error in a data object is given by $\frac{1}{N}\sum U_k$, where the averaging is performed over all the pixels belonging to the object and N is the number of such pixels.
As a result of the offset adjustment a certain portion of intensities is subtracted from the image, and any pixel value smaller than the subtraction threshold is clipped and set to zero. This type of distortion of single scans, under-saturation, yields overestimated values of pixel intensities in the averaged image. In this case the pixel error is given by

$$L = \int_{-\infty}^{c_b} (c_b - x)\, f_b(x)\, dx, \qquad (2)$$

where $c_b$ is the threshold defined by the value of offset and $f_b(x)$ is the corresponding distribution density. The distribution parameters are estimated analogously to those of the Gaussian model (1). The error estimates are averaged over all the image pixels with equal intensities; the averaged pixel error, the lower error, is denoted as $L_k$ for any $k \in [0..c_a]$. The data error is also defined in this case as the average of the pixel errors computed over all the pixels assigned to an object, $\frac{1}{N}\sum L_k$. Note that the magnitude of the pixel error is uniquely defined by the level of image noise for a given mean intensity.
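For the Gaussian model, the integrals (1) and (2) have standard closed forms (the expected exceedance or deficit of a normal variable). The sketch below uses them; note that the integral limits are our reconstruction of the garbled source, and the paper itself estimates μ and s by the method of moments:

```python
from scipy.stats import norm

def upper_pixel_error(mu, s, c_a=255.0):
    """U = E[(X - c_a)^+] for X ~ N(mu, s^2): expected intensity
    lost to clipping at the upper threshold (formula (1))."""
    d = (mu - c_a) / s
    return (mu - c_a) * norm.cdf(d) + s * norm.pdf(d)

def lower_pixel_error(mu, s, c_b=0.0):
    """L = E[(c_b - X)^+]: expected intensity added by clipping
    below the lower threshold c_b (formula (2))."""
    d = (c_b - mu) / s
    return (c_b - mu) * norm.cdf(d) + s * norm.pdf(d)
```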
Theoretically the method works at any degree of clipping but in practice its application is limited: for example the error estimation is infeasible if the true mean values of a pixel are clipped in all individual scans.
Construction of the regression model
The method described in the previous section makes it possible to precisely estimate and correct errors in images and data, but its application requires the availability of all the confocal scans. In this section we construct a linear regression model for predicting the magnitude of error in the data extracted from the averaged image, based solely on information about the microscope parameters. Information about image acquisition is normally contained in the scanning protocols saved by the microscope software. Among the microscope parameters, the adjustment of PMT gain and offset exerts the greatest influence on the error magnitude, and these parameters are incorporated into the regression system as independent variables. As a learning sample we use confocal images scanned at different combinations of gain and offset, for which all the scans are saved along with the averaged images.
The regression algorithm is implemented in several steps. First, for all the elements of the learning sample, the pixel errors $U_k$ and $L_k$ are estimated for all the intensities that are present in the images. Then the regression functions are constructed for each intensity value from the grayscale range. As the signal, and hence the degree of image distortion, depends linearly on offset and exponentially on gain, the independent variables are chosen as the value of offset and the exponent of gain. The regression function involves the total estimated distortion caused by under- and over-saturation, $E_k = U_k + L_k$, as the dependent variable. For each intensity level $k \in [0..c_a]$ the linear regression function is defined as

$$E_k = b_{0,k} + b_{offset,k} \cdot \text{offset} + b_{gain,k} \cdot e^{\text{gain}}. \qquad (3)$$

The regression coefficients $b_{offset,k}$, $b_{gain,k}$ and $b_{0,k}$ are estimated by the least squares method, minimizing

$$\sum \left( E_k - b_{0,k} - b_{offset,k} \cdot \text{offset} - b_{gain,k} \cdot e^{\text{gain}} \right)^2, \qquad (4)$$

where the summation is done over all the images, the elements of the learning sample. Each term includes the values of offset and gain that were applied for the acquisition of the corresponding image.
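A minimal per-intensity fit might look as follows (a sketch with hypothetical array inputs; the gain values are assumed pre-scaled so that the exponent stays in a numerically safe range, which the paper does not specify):

```python
import numpy as np

def fit_intensity_regression(offsets, gains, errors):
    """Least-squares estimate of (b_0, b_offset, b_gain) in
    E_k = b_0 + b_offset*offset + b_gain*exp(gain) for a single
    intensity level k (formulas (3)-(4)). One entry per learning image."""
    A = np.column_stack([np.ones(len(offsets)),
                         offsets,
                         np.exp(gains)])
    coef, *_ = np.linalg.lstsq(A, errors, rcond=None)
    return coef   # b_0, b_offset, b_gain
```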
Normally a pixel can be noticeably corrupted either by under-or over-saturation and hence only one kind of error, either U k or L k , can take considerable values (see Figure 2).
The results of the regression estimation are applied to predict the size of errors in data extracted from an averaged image not belonging to the learning sample, for which single scans are not saved. As a first step, the regression equation (3) is used to predict the error size for each pixel in the averaged image. Next, the predicted values are averaged over all the pixels within the area of each object detected in the image. The errors estimated in this way are used to correct the data distortions that arise due to clipping of single scans, both at the highest and at the lowest mean intensities.
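Prediction and object averaging might then look as follows (a sketch; the per-intensity coefficient table and the integer label mask are assumptions about the data layout, not the paper's interfaces):

```python
import numpy as np

def predict_object_errors(image, labels, coefs, offset, gain):
    """Predict the clipping error for each detected object.
    image: 8-bit integer array; labels: integer object mask
    (0 = background); coefs: (256, 3) array of per-intensity
    coefficients (b_0, b_offset, b_gain) from the fitted model."""
    coefs = np.asarray(coefs)
    per_pixel = (coefs[image, 0]
                 + coefs[image, 1] * offset
                 + coefs[image, 2] * np.exp(gain))
    return np.array([per_pixel[labels == m].mean()
                     for m in range(1, labels.max() + 1)])
```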
Image and data acquisition
Three learning samples were obtained from three confocal microscopes. All the images are two-dimensional, of size 1024 × 1024 pixels, with 8-bit grayscale resolution.
S1 sample
The images were obtained with a Leica TCS SP5 confocal system (Institute of Cytology RAS, St. Petersburg, Russia). In the scanning experiments we used a specimen prepared from lily of the valley (Convallaria) root, which is highly autofluorescent over a wide spectrum. We also scanned three expression patterns of Drosophila embryos that were also scanned on a different microscope system when the S2 sample was constructed. The specimens were scanned 8 times using an HCX PL APO 20.0x/0.70 IMM Lbd.BL objective and three lasers (Argon 488 nm, HeNe 543 nm, HeNe 633 nm) with different values of PMT gain and offset listed in Table 1. The power of the Argon laser was normally set to 30% of its maximal value. To check the effect of laser power variation on the noise, one specimen was scanned at 10, 20, 30, 40, 50, and 60 percent laser power at the same values of gain and offset.
S2 sample
The experiments were performed at the JUC "Chromas" of St. Petersburg State University, Russia. Eight wild-type (OregonR) Drosophila melanogaster blastoderm embryos were immunostained for the expression of the hb, gt and eve segmentation genes as described in [4][5][6]. We used the fluorescent labels Alexa Fluor 488 (Invitrogen) for detection of Hb and Cad and Alexa Fluor 555 for detection of the Eve and Gt proteins. The embryos were imaged with an HCX PL APO lambda blue 20.0x/0.70 IMM Lbd.BL objective of a Leica TCS SP5 confocal system using Argon 488 and HeNe 543 lasers. Each embryo was stained for the expression of 2 genes, each staining was scanned several times with different values of PMT gain and offset, and for each experiment a series of 8 individual scans was saved together with the averaged image.
In total 59 averaged images (see Table 2) were obtained. To test whether the properties of lasers change with time six stainings were stored and scanned anew with the same values of PMT parameters several months after all the other series of experiments were performed.
S3 sample
12 embryos were immunostained for expression of one of four segmentation genes gt, eve, hb and bcd applying the same method as described above for construction of S2. Each embryo was scanned several times with different combinations of gain and offset settings (see Table 3). Fluorescent labels used were Alexa Fluor 488 (bcd ), Alexa Fluor 555 (eve, gt), and Alexa Fluor 647 (hb). Embryo images were taken with the 20X Plan Apo dry objective (numerical aperture 0.7) of a Leica TCS SP2 confocal system at Stony Brook University, NY, USA.
The quantitative gene expression levels in nuclei are extracted from the images belonging to S2 and S3 with the use of a nuclear mask, as described in [6,7]. The mask is a binary image in which all the pixels located within a nucleus are white and the remaining pixels are black. The mask is superposed on the image and the values of the pixels belonging to a nucleus are averaged. As a result, each nucleus in the expression pattern is characterized by its x and y coordinates and its mean intensity level.
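A sketch of this quantification with SciPy (illustrative only; the paper's own pipeline [6,7] may differ in details such as how the mask is produced):

```python
import numpy as np
from scipy import ndimage

def quantify_with_mask(image, mask):
    """Superpose a binary nuclear mask on the image and return,
    for each connected nucleus, its centroid and mean intensity."""
    labels, n = ndimage.label(mask)            # one label per nucleus
    index = np.arange(1, n + 1)
    means = ndimage.mean(image, labels=labels, index=index)
    centroids = ndimage.center_of_mass(image, labels, index)
    return centroids, means
```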
Training of the system
The regression system is trained on images acquired from three different confocal microscopes. The learning samples S1 and S2 contain images scanned by two different Leica TCS SP5 microscopes; the third sample, S3, was obtained with a Leica TCS SP2 microscope. Details of image acquisition are given in the Methods section. The values of gain and offset used for acquisition of all the samples are presented in Tables 1, 2 and 3. To bring the offset values of different microscopes to a common scale, we measure offset in intensity units subtracted from an image; in this way the intensity equivalent of 1% offset is calculated for each of the microscopes used to acquire S1, S2 and S3. First of all, we analyze the photon noise as a function of the PMT parameters in all the samples. The noise is estimated as the between-scan variance computed for each element of all the learning samples. The typical behavior of the between-scan variance is shown in Figure 1 and discussed in detail in the Algorithm section. The measured variances are given in Figure 1 for selected values of pixel intensity and PMT parameters, which makes it possible to compare the noise level in images obtained with different microscopes and under different conditions. As expected, due to the different properties of the electronic devices included in the microscope configuration, the noise is not equally defined by the PMT voltage in different microscopes. For example, images obtained at zero offset and equal gain, 750V, from samples S1 and S2 have different levels of noise (labeled as errors in the figure). For all our experiments the noise level coincides in images obtained with the same microscope in different channels using different lasers. The power of the laser used for excitation of the specimen is another factor that influences the image noise. Although this parameter is normally kept unchanged from experiment to experiment, the output laser power may slowly decrease with time as the laser tube ages. We compared the noise in images scanned on different days, even separated by long intervals (up to a year), and established that the noise did not noticeably change with time. The results of these tests (data not shown) allowed us to assemble all the images obtained by the same microscope into one learning sample. However, a system trained on such a learning sample can only be used to predict the error magnitude in data acquired with the same microscope. To be able to predict errors in any data we need to standardize all our training data obtained with the three different microscopes and combine them into one sample.
The regression system uses the values of PMT gain and offset as independent variables, which means that these parameters uniquely define the predicted error magnitude. To bring the values of these parameters to a common scale, we represent offset as measured in intensity levels subtracted from an image (see Tables 1, 2 and 3), and we further need to find a way to standardize the values of gain for the different PMTs used in the different microscopes. As already mentioned above, the image noise is unequally defined by the PMT voltage in different microscopes, and even when using different lasers in the same microscope, while the noise level completely defines the value of pixel error. Hence it is sufficient to associate the gain values with the level of between-scan noise in images from the different learning samples. As the noise is known to increase exponentially with gain, to bring the gain values of two samples into correspondence we used an additive correction for the gain value in one of the samples. The correction in sample S2 with respect to sample S1 is found to be 80V, such that, for example, the gain value 800V in sample S1 corresponds to 880V in sample S2, which means that these values of gain generate the same level of between-scan noise in images. The correction shift between the gain values in samples S1 and S3 is 280V, as the PMT of the microscope used to acquire S3 produces much higher noise. For example, the level of between-scan noise almost coincides in images from S1 and S3 obtained at zero offset and gain 380V and 650V, respectively (see Figure 1).
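One way such a shift search could be automated, assuming noise has been measured on a common grid of gain values at zero offset (this helper and its inputs are hypothetical, not the authors' procedure):

```python
import numpy as np

def gain_correction_shift(ref_noise, new_noise, candidate_shifts):
    """Pick the additive gain shift (in volts) that best aligns the
    between-scan noise of a new microscope with the reference sample.
    ref_noise, new_noise: dicts mapping gain (V) -> measured noise."""
    def mismatch(shift):
        common = [g for g in new_noise if g + shift in ref_noise]
        if not common:
            return np.inf
        return np.mean([(np.log(new_noise[g])
                         - np.log(ref_noise[g + shift])) ** 2
                        for g in common])
    return min(candidate_shifts, key=mismatch)
```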
Taking into account these corrections we create a common sample consisting of all the learning data obtained from all the microscopes.
Regression estimation
The combined learning sample is used to fit the regression model (3) introduced in the Methods section. The regression estimation is performed separately for each intensity level $k \in [0..c_a]$. The value of the dependent variable for each element of the learning sample is computed as the sum of the upper and lower pixel errors $U_k$ and $L_k$, if intensity level k is present in the image. All the images are 8-bit files, and hence the upper level $c_a$ is equal to 255. However, the highest intensities are present in just a few images, and the upper error values can be estimated only for intensities not exceeding ~250. Besides, the highest pixel values are usually very strongly clipped, which may lead to unreliable estimates. Examples of error estimates for different combinations of gain and offset are shown in Figure 2.
The regression coefficients $b_{offset,k}$, $b_{gain,k}$ and $b_{0,k}$ are estimated by the least squares method (4). The results of the regression estimation are summarized in Table 4; for the sake of space, the estimated values of the coefficients are given only for selected values of intensity k. The regression results are visualized for low and high mean intensities in Figure 3. A value of the determination coefficient R² close to 1 is evidence of the adequacy of the regression model and of its good prediction properties.
To cross-validate the accuracy of prediction we performed a so-called leave-one-out test. The test uses a single observation from the original sample as validation data and the remaining observations as training data. This procedure is repeated so that each observation in the sample is used once as validation data.
The test was slightly modified since some images were obtained at the same values of PMT parameters; all such images were together excluded from the training dataset to form the validation sample. At each step of the test procedure we apply the regression system to predict pixel errors for an image from the validation sample. For each pixel value the accuracy of prediction is characterized by the absolute difference between the computed and predicted error values. The cross-validation results are presented in Figure 4. The test confirms high accuracy of error estimation for samples S1 and S2, while for the images from S3 the deviation in error estimation attains 5 units in absolute value. The lower accuracy of error estimation for sample S3 is explained by much higher noise in the images from this sample.
The method was used to predict the sizes of errors in the data on expression of 14 segmentation genes in the Drosophila embryo generated in our previous work and stored in the FlyEx database (http://urchin.spbcas.ru/flyex/; http://flyex.uchicago.edu/flyex/). This dataset consists of about 5000 confocal images, of which 1263 were acquired with the microscope and lasers used to generate the S3 sample (see the Methods section), while the remaining images were obtained with the microscope and lasers applied to scan the S2 sample [4][5][6]. The images were used to extract quantitative data on segmentation gene expression by the method presented in [6]; however, these data are corrupted by clipping because all the images were obtained under the standard image acquisition procedure, which precludes saving single scans along with the averaged image.
The data error is usually computed as the average of the errors of all the pixels in a data object (in our case, an embryo nucleus). However, at high mean intensities this approach is likely to produce unreliable estimates. Due to photon noise, the intensities of some pixels in a nucleus with high mean intensity may reach values exceeding 250, while the estimates of pixel errors are inconsistent at such intensities. In this case it is instead recommended to estimate the data error as the error of a pixel with intensity equal to the mean intensity of the data object. We have tested this simplified approach on the available data and observed that the error estimates computed by the two methods did not differ noticeably.
In general, to apply the regression system to predicting errors caused by clipping in data extracted from a series of 8-bit images scanned by any confocal microscope, it is sufficient to bring the values of the PMT parameters used for image acquisition to the common scale. For this purpose there is no need to create a full representative learning sample; it suffices to run a specially designed experiment on the same microscope, in the same channel and under the same conditions. To measure the value of offset subtracted from images, the same staining is scanned twice using the same gain and two different values of offset. The mean difference between the images divided by the difference between the offset values gives the standard measure of offset. To standardize the gain, all the confocal scans are saved for an image scanned at zero offset and any given value of gain. Then the between-scan noise is computed as described above, and its values are put into correspondence with those computed for our combined sample and presented in Additional file 1, Table S1. The difference between the gain voltage that generates the same level of noise in an image from the combined sample and in the new experiment gives the correction shift for the gain. Finally we come to the following scheme of the data error prediction algorithm:
1. Bring the values of the PMT parameters used for image acquisition to the standard scale.
2. For any obtained image, apply the regression system to predict pixel errors, using the standardized values of gain and offset as input parameters.
3. Compute the sizes of the errors caused by clipping by averaging the predicted values over all the pixels within the area of each object detected in the image, or simply take the pixel error corresponding to the mean intensity in the object.
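Step 1 (offset standardization) reduces to simple arithmetic on the two calibration images; a minimal sketch under the assumptions above (the function name and argument layout are illustrative):

```python
import numpy as np

def offset_intensity_units(img_a, img_b, offset_a, offset_b):
    """Intensity units subtracted per offset unit: the same staining
    is scanned twice with equal gain and two offsets offset_a < offset_b;
    the mean image difference over the offset difference gives the
    standard measure of offset (step 1 of the scheme above)."""
    diff = np.mean(img_a.astype(float)) - np.mean(img_b.astype(float))
    return diff / (offset_b - offset_a)
```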
Software tool
The algorithm for prediction of data errors in gene expression patterns is implemented as a software tool, CorrectPattern, freely available at http://urchin.spbcas.ru/asp/2011/emm/. The main function of the program is to predict and correct errors due to pixel saturation in an input gene expression pattern. The input parameters of the program are the values of gain and offset used for image acquisition. The program provides a tool for automated parameter standardization: a user should provide a series of confocal scans obtained at zero offset for the gain standardization, and two averaged images of the same specimen obtained with the same gain and different values of offset for the offset standardization. The program computes and saves the corrections for gain and offset, which are further used for standardization of the input parameters for any image obtained using the same microscope and laser. The output data file is saved in the same format as the input file, with the mean intensity values replaced by the corrected ones.
CorrectPattern is implemented as a suite of programs in C, Java and JavaScript using a three-tier architecture. The program modules are installed on a Linux server and use image processing libraries. The user interface is implemented in the web browser in JavaScript on the basis of AJAX technology. A web server is used as an intermediary between the user interface and the functional program modules.
Examples of the method application
Correction of gene expression patterns
The method application is illustrated on two gene expression patterns. An example of error correction in the data extracted from an image belonging to the S3 sample is shown in Figure 5b. The predicted error values reached 15-20 units at the highest mean intensities and 12 units at the minimal mean intensities present in the pattern.
An advantage of the proposed method is the opportunity to increase the dynamic range of low-signal images and acquire accurate quantitative data from them. The second example illustrates the correction of an expression pattern of the fushi tarazu (ftz) segmentation gene in the Drosophila blastoderm obtained using a poor-quality antibody. The specimen was scanned with the microscope and laser used to generate the S2 sample.
According to the standard procedure for quantitative data acquisition [6], the gain and offset of the microscope photomultiplier should be adjusted so that the maximum level of gene expression corresponds to the maximum level of fluorescence intensity on an 8-bit scale. For immunochemical detection of the gene product we used a rat antibody against Ftz [4] and the commercial secondary antibody anti-rat Alexa Fluor 488. Due to long-term storage, the activity of the primary antibody had decreased, and even using very high antibody concentrations we had to raise the gain to almost the maximum possible value to obtain images of intensity calibrated against the images of FlyEx embryos stained for ftz expression. The noise level in such images is very high, which gives rise to large errors in the data. The regression system was applied to correct these errors, which reached considerable values of about 17 units at intensity 200, as shown in Figure 5c.
Another source of errors in the data obtained with the antibody at our disposal is a high non-specific background signal. We applied the method published in [8] to remove the background and normalize the data. Upon application of this procedure the error magnitudes increased significantly, up to 35 units at intensity 150. This example shows that in cases where there is a need to use high values of the PMT parameters, the error correction method is very important for making the data suitable for analysis.
Estimation of error sizes in the FlyEx dataset
The FlyEx dataset is a valuable source of information about the mechanisms of pattern formation in early development. Besides confocal images of gene expression patterns, it contains quantitative data extracted from these images, as well as a set of reference images and data representing the most typical expression pattern for a given developmental time. These data attract the attention of many scientific groups, which widely use FlyEx to study the mechanisms of pattern formation, infer regulatory interactions in the segmentation genetic network and develop new mathematical models (http://urchin.spbcas.ru/flyex/refs.jsp).
In our recent publication [1] we have shown that this data is corrupted by clipping. Notwithstanding the fact that the sizes of the data errors are small due to proper choice of microscope parameters, the errors should be removed as the quality of conclusions drawn critically depends on the data quality. The general-purpose method for correction of pixel saturation requires all the confocal scans to be saved without averaging. The regression system which we have developed allows us to circumvent this limitation, predict the error magnitude in the data extracted from the average image and apply the error correction procedure to the whole dataset.
The important corollary of extending the method to predict the error sizes over the whole dataset is the opportunity to accurately estimate the sizes of these errors. Almost all the images from the FlyEx dataset were acquired with gain values within the range from 450V to 680V and offset values under 45%. The predicted error values do not exceed 7-8% of the highest mean intensity present in an expression pattern. The lowest value of mean intensity in a nucleus is never zero due to the inevitable presence of non-specific background staining [8] in the embryo. The background level is determined by the quality of the antibodies used for staining, and in our data it varies between 2 and 100 units. The predicted errors take values not higher than 2-3 units in expression patterns with low background and are negligibly small in the case of high background. The detailed results of the error estimation are given in Additional file 2, Figure S1; Additional file 3, Figure S2; and Additional file 4, Figure S3. Errors at low intensities may slightly affect the estimation of the background level but are small enough not to noticeably corrupt the pattern after background subtraction.
Figure 5. Correction of errors in quantitative data. a) Confocal image of a Drosophila embryo stained for the expression of the hb gene and scanned with offset -40% and gain 705V. The image belongs to S3. Quantitative data are extracted from the area outlined in the image. b) 1D expression pattern of the hb gene extracted from the image (black circles) and corrected to eliminate the saturation effect (white circles). Differences between the corrected and observed quantitative gene expression data are shown by crosses. c) Quantitative data on ftz gene expression extracted from an image taken with the same microscope as sample S2, with offset -2% and gain 1100V (the image is not shown).
Discussion
In our recent publication [1] we described a new method for the estimation and correction of errors in quantitative data extracted from clipped confocal images. The method was applied to the data on segmentation gene expression in Drosophila. A necessary requirement for the method's application is the availability of all the individual scans, which usually are not saved but are directly averaged by the microscope software to reduce the photon noise in images. Due to this requirement the method could not be used to correct errors in data obtained by the standard scanning procedure; it only makes it possible to determine the range of settings that provides an acceptable level of errors for a specific microscope.
To extend the applicability of the method we have created a linear regression system and software to predict the magnitude of errors in the data obtained from a confocal image, based on information about the microscope parameters used for the image acquisition. The system was trained on three samples of images obtained from different microscopes with different combinations of the PMT gain and offset adjustments. As adjusting the PMT gain and offset to the same values in different microscopes produces different levels of noise, the scales of these parameters were calibrated to achieve standardization. The standardized parameter values were used by the regression system as independent variables.
To estimate the regression function, each image in a training sample was saved together with all its individual scans. The computed errors were included as dependent variables in the regression model, taking into account the known fact [3] that the error size depends linearly on the offset value and exponentially on the gain value. The system predicts the magnitude of errors in data extracted from an 8-bit image obtained by a confocal microscope, using the values of the standardized PMT parameters as input. The cross-validation tests demonstrated high accuracy of the predictions.
It should be stressed that the standardization of the microscope parameters is very important, as it puts into correspondence the properties of images obtained with different microscopes, lasers and under different experimental conditions. All that is needed is to perform a simple experiment to measure the photon noise via the estimation of the between-scan variance and to standardize the parameter values used for image acquisition against the scale utilized in the training sample. The regression system can then be used on the data extracted from a series of images obtained under the same conditions.
We envisage one additional application of the regression system developed in this work. This system allows a user to extract more detailed quantitative information from images, thereby increasing the accuracy of gene expression data. The confocal scanning experiment directed at the acquisition of quantitative gene expression data possesses certain specific features. For example, images used to acquire data on segmentation gene expression in the Drosophila embryo are standardized against the image of an embryo exhibiting the pattern characteristic of maximal expression, which is normally observed at the late stages of development of a wild-type embryo. The gain and offset values of the microscope photomultiplier are adjusted for this embryo and kept constant in all the series of scanning experiments. Because of this arrangement it may happen that images of embryos at early stages of development, and especially of mutants, are of very low contrast, since the level of gene expression in these embryos is low. This is especially typical for images of expression patterns in embryos stained with antibodies of poor quality, which give rise to a high non-specific background. To be able to extract more detailed information from such images it is necessary to increase their intensity range by setting high values of the PMT parameters; however, this may lead to pixel saturation and errors in the quantitative data extracted from the images. Our regression system provides the means to estimate and correct errors in data obtained with an extended range of microscope parameters and thereby makes it possible to obtain more accurate quantitative information on gene expression.
Conclusions
• A regression system is created for error magnitude prediction in data obtained from an 8-bit confocal image. The prediction is based on information about microscope parameters used for image acquisition.
• The method demonstrates high prediction accuracy and was applied for correction of errors in the data on segmentation gene expression in Drosophila blastoderm stored in the FlyEx database (http://urchin.spbcas.ru/flyex/, http://flyex.uchicago.edu/flyex/).
• An important advantage of the developed prediction system is the possibility of error correction in data obtained from strongly clipped images, thereby permitting acquisition of higher dynamic range images, which would aid extraction of more detailed quantitative information.
"Biology",
"Computer Science",
"Engineering"
] |
COSPAR Sample Safety Assessment Framework (SSAF)
The Committee on Space Research (COSPAR) Sample Safety Assessment Framework (SSAF) has been developed by a COSPAR-appointed Working Group. The objective of the sample safety assessment would be to evaluate whether samples returned from Mars could be harmful for Earth's systems (e.g., environment, biosphere, geochemical cycles). During the Working Group's deliberations, it became clear that a comprehensive assessment to predict the effects of introducing life in new environments or ecologies is difficult and practically impossible, even for terrestrial life and certainly more so for unknown extraterrestrial life. To manage expectations, the scope of the SSAF was adjusted to evaluate only whether the presence of martian life can be excluded in samples returned from Mars. If the presence of martian life cannot be excluded, a Hold & Critical Review must be established to evaluate the risk management measures and decide on the next steps.
1. Introduction
Analyzing martian samples in terrestrial laboratories would advance our understanding of Mars in multiple ways that are impossible when using in situ missions or martian meteorites alone. Most recently, the Mars Sample Return (MSR) Science Planning Group 2 (MSPG2) produced an up-to-date status of MSR science planning (Meyer et al., 2022).
With the expected benefits of MSR, however, come responsibilities. If life is present on Mars, then samples from Mars could be a source of extraterrestrial biological contamination for Earth. In line with Article IX of the United Nations Space Treaty (UN Space Treaty, 1966), a range of measures described by the Committee on Space Research (COSPAR) Policy on Planetary Protection would have to be employed to prevent undesirable consequences for Earth's systems (DeVincenzi et al., 1998; COSPAR, 2021). One of these measures is to conduct a timely safety assessment of any unsterilized material from Mars. The first step to develop such a safety assessment began in 2000, under the leadership of the National Aeronautics and Space Administration (NASA) and with contributions from the Centre National d'Études Spatiales (CNES), with a series of five workshops that led to a Draft Test Protocol (Rummel et al., 2002). An important recommendation of this earlier work was to periodically review and update the Draft Test Protocol, taking into account new scientific findings and advances in instrumentation. As an intermediate step and response to this recommendation, NASA and the European Space Agency (ESA), in coordination with COSPAR, organized a life detection conference and workshop in 2012 to discuss the latest concepts and methods to search for life and to identify relevant elements for a safety assessment (Allwood et al., 2013; Kminek et al., 2014).
With an increased interest in a joint NASA-ESA MSR Campaign and associated planning activities underway, the need to produce an updated version of the safety assessment became evident. This is reflected in one of the recommendations of the International Mars Architecture for the Return of Samples (iMars) Phase II Working Group (Haltigin et al., 2016): "A Planetary Protection Protocol should be produced as soon as it is feasible by an international working group under the authority of COSPAR or another international body." This need is also described, with additional contextual information, in the work of Rummel and Kminek (2018).
COSPAR swiftly reacted and established a Sample Safety Assessment Protocol (SSAP) Working Group in 2018. This Working Group had the mandate to review the existing literature and the planned MSR Campaign architecture in order to produce a sample safety assessment protocol. The mandate for the SSAP Working Group specifically excluded biosafety control and management aspects, that is, sterilization of material from Mars, environmental and health monitoring, containment elements, and contingency planning. The Working Group had members covering the relevant expertise in life detection, public health, infectious diseases, physical and chemical composition of expected material from Mars, extraterrestrial sample analysis, sample curation, and statistical analysis. Additional experts were invited to participate in specific meetings. In particular, a team from the US Centers for Disease Control and Prevention (CDC) was invited to comment on the draft and final versions of this report. Collectively, this external input added substantial benefit to the SSAP Working Group's deliberations.
Toward the end of the Working Group's term, the name for our product was reconsidered. It was felt that it would be more appropriate to call this a framework rather than a protocol (or a draft protocol) to better represent the content. A detailed (or even draft) protocol will need to be developed once a number of open issues addressed throughout this report and summarized in Section 5 are resolved and more information about the samples is available.
Some general remarks to better understand this Sample Safety Assessment Framework (SSAF) include the following: For the purpose of formulating the SSAF, we considered the NASA-ESA MSR Campaign. We use the term sterilization in a generic way to include both overkill (i.e., a process with substantial margin that does not require viability testing after application but would typically render samples useless for further biochemical investigations) and inactivation (i.e., a process with less margin that requires viability testing after application and would likely allow certain biochemical investigations to be conducted after it has been applied).
The following sections describe the scope, structure, and content of the SSAF. Section 5 describes the key elements of the SSAF. These key elements are not independent and must be taken together with the remainder of the report for context and for additional, essential information. For elements of the SSAF that the Working Group considered mandatory or very important, we use the term "must", while for elements that only have an indirect effect (e.g., making the assessment faster or using less material) we use a conditional form.
2. Objective and Scope of the SSAF
The objective of the safety assessment is to evaluate whether martian life that would pose a risk to Earth's systems (e.g., environment, biosphere, geochemical cycles) is present in samples intentionally returned from Mars. Traditionally, risk is defined in terms of probability of occurrence and consequences. In our case, we do not know and could only speculate about the consequences of releasing potential martian biology on Earth. For the purpose of the SSAF, we therefore use the term risk not in relation to consequences but exclusively in relation to the release of active martian biology. The associated risk mitigation is based on two pillars: performing a safety assessment and/or sterilizing the material from Mars.
One of the assumptions we make regarding potential martian biology is that it is based on carbon chemistry. The likelihood that extraterrestrial life is carbon based has been suggested and discussed in various publications, with arguments focused on the versatility and abundance of carbon in our Solar System and beyond (e.g., NRC, 2002; Allwood et al., 2012; Craven et al., 2021). It is worth noting that even theoretical concepts of silicon-based life still employ organic moieties (Petkowski et al., 2020). Another assumption is that any potential life on Mars utilizes soluble organic compounds. Organic molecules used by terrestrial life are soluble in either polar or nonpolar solvents. Any potential life based on insoluble organic molecules would be unlikely to cause harm to terrestrial systems (Schulze-Makuch and Irwin, 2005). In addition, solid-solid reactions are very slow compared to those in solution. An intractable solid may be hazardous, though without the capability of interacting in fluids it would not be able to replicate on timescales that compete with those of terrestrial biological systems.
We agree with the National Research Council (NRC) Committee on the Review of Planetary Protection Requirements for Mars Sample Return Missions that "the potential risks of large-scale effects arising from the intentional return of martian materials to Earth are primarily those associated with replicating biological entities, rather than toxic effects attributed to microbes, their cellular structures, or extracellular products" (NRC, 2009). In addition to replicating biological entities, we consider it prudent to include biologically active molecules in the sample safety assessment (ESF, 2012; Craven et al., 2021). This expansion of the SSAF covers potential martian non-self-replicating biological agents that could lead to a redirection of life processes on Earth (i.e., virus-like, stray RNA- or DNA-like, and prion-like entities) and even theoretical concepts of propagating catalytic reactions that may directly precede de novo life. Although it might be easier to find life that is the producer or host of such agents, the Mars returned sample safety assessment must have the capability to detect biologically active molecules independently as well. Throughout the SSAF, we use the term martian life to include both de facto martian life and biologically active molecules produced by martian life. There is also a very real possibility that toxic compounds are present in the samples, for example, inorganic species such as perchlorates. Toxic effects that originate from the samples are not covered in the SSAF because they are limited to an occupational hazard and can be managed accordingly.
If martian life is found in samples returned from Mars, large-scale negative effects on Earth's systems are not expected (NRC, 1997; NRC, 2009; ESF, 2012). However, it is impossible to exclude such consequences absolutely. Thus, a prudent and conservative approach is the most appropriate response: be ready for the unexpected (NRC, 1997; NRC, 2009; ESF, 2012).
There are many ways an alien life form could be harmful to Earth's systems. The possible interactions could include not only direct effects on humans, animals, and plants or their associated beneficial microbes, but also indirect effects of competitive interactions with various terrestrial species. Examples abound of the detrimental effects that result from terrestrial invasive species being introduced into new environments (e.g., van der Putten et al., 2007; Litchman, 2010; Randolph and Rogers, 2010). More subtle effects can also be imagined. Some microorganisms, while not obviously beneficial to humans, are actually keystone species, the loss of which could cause irrevocable harm to an ecosystem (e.g., Mills et al., 1993) or disrupt essential biochemical cycles (e.g., Jardillier et al., 2010). Unfortunately, we have only a limited ability to predict the effects of terrestrial invasive species, emerging pathogens, and uncultivated microbes on Earth's ecosystems and environments. This is true even for cultured and fully genome-sequenced terrestrial organisms, and more so for potential extraterrestrial life. Thus, conducting a comprehensive sample safety assessment with the rigor required to predict harmful or harmless consequences of potential martian life for Earth is currently not feasible. This situation is not likely to change substantially within the next decade. On the contrary, the knowledge accumulated over the last decades has revealed many more unexpected effects and dependencies in the various ecosystems of Earth (e.g., Pejchar and Mooney, 2009). Therefore, the scope of the SSAF is limited to evaluating whether the presence of martian life can be excluded in the samples, without pretending to assess the potentially hazardous nature of the samples, except in the sense that if there is no life, there is no biological hazard. This position is in line with the NRC Committee on Mars Sample Return Issues and Recommendations: "Evaluation of the sample for potential hazards should focus exclusively, then, on searching for evidence of living organisms, their resting states (e.g., spores or cysts), or their remains in the sample" (NRC, 1997).
Although this approach might give the impression that the SSAF is essentially a life detection framework, that impression would be incorrect. There are very important and clear distinctions between the general search for martian life in returned samples for purely scientific purposes and the assessment to exclude the presence of martian life in them. The SSAF starts from the positive hypothesis: "there is martian life in the samples." Testing this hypothesis, that is, excluding the presence of martian life, is complementary to the scientific objective to search for martian life. Science investigations and the sample safety assessment use the same scientific methodologies, though the purpose differs and the associated burden of proof is reversed. Disproving either the positive (safety) or the null (science) hypothesis to a certain level of confidence can only be accomplished by collecting sufficient statistical data. Meeting the objective to disprove the null scientific hypothesis is typically constrained by the available resources in terms of budget and time. The constraints on disproving the positive, safety-relevant hypothesis are much less dependent on resources and more dependent on the acceptable risk, or the acceptable level of assurance that a risk will be avoided. As a consequence, the science objective to search for martian life will benefit from the increased rigor required by the safety assessment, given that the assessment will utilize the same scientific methods and tests required to address the search-for-life objectives. Thus, all samples used for the safety assessment and all tests done on these samples will have scientific value.
To emphasize again, the SSAF is not a life detection framework. There are life detection frameworks in discussion and under development (e.g., Green et al., 2021; Graham et al., 2021). These are timely efforts to assess the validity of and confidence in evidence of extraterrestrial life and ways to communicate this information effectively. Finding evidence for life typically follows an incremental path until definitive evidence is reached by consensus in the scientific community (Green et al., 2021). Due to the reversed burden of proof for the safety assessment, any ambiguous result (e.g., maybe abiotic, maybe terrestrial contamination, maybe masking martian life) would not disprove the positive hypothesis until a clear root cause is identified and confirmed. Any step toward an agreed-upon framework for life detection established by the science community would certainly help to reduce some uncertainties in the safety assessment and is therefore encouraged.
The following principles, derived from a Life Detection Conference & Workshop (Allwood et al., 2013; Kminek et al., 2014), reflect the interplay of science and sample safety assessments and provide the basis for the SSAF:

1. Use of a hypothesis-driven approach in the development of life detection investigation strategies and measurements for science (null hypothesis) and sample safety assessment (positive hypothesis).
2. The same types of scientific measurements inform the scientific understanding of the samples and their safety assessment.
3. A sample safety assessment must be data-driven, i.e., responsive to the results of individual or combined investigations.
4. The distinction between the scientific objective to search for martian life and the sample safety assessment is mainly the degree of rigor and supervision applied, which is described in this framework.
Unlike a scientific objective to search for life on Mars, the scope of the SSAF is limited to excluding the presence of martian life in the samples from Mars. Taking into account the diversity of samples and the microscopic distribution of potential life in macroscopic samples, every sample tube is considered a separate sample. A negative result (i.e., no martian life) for the samples from a sample tube would provide a certain pre-defined level of assurance that there is no life, and therefore no hazard for Earth, in that sample tube. Such a determination cannot be extrapolated to other sample tubes, nor can it be extrapolated to the planet Mars. A positive result for one or more samples would not necessarily mean they are hazardous for Earth. Any positive result would lead to a Hold & Critical Review (see Section 3.4). A deeper understanding of how any newly discovered biology works, and what kind of capabilities it has, would require detailed understanding of the metabolism, informational macromolecules, and replication of this extraterrestrial life. As on Earth, it is unlikely that life would be represented by only one of its members; that is, if we discover a single martian life form, we would possibly discover more than one member of a martian biology. This, together with the fact that we do not even know how to cultivate most terrestrial microorganisms, makes it essential to manage expectations regarding the possibility of conducting a proper hazard assessment. This aspect is further detailed in the implementation part of the SSAF (Section 4).
There is one open parameter that must be introduced to the sample safety assessment: the level of assurance required to declare a sample safe. This parameter describes the stopping threshold, that is, the level of confidence in the statement "the presence of martian life is excluded in this sample." Setting such a level is important to avoid open-ended discussions and to better estimate the efforts and resources necessary to conduct the sample safety assessment. For the purpose of running simulations and test cases, we have taken a value of "1 in a million chance of failing to detect life if it is there." For details on the background of this canonical value, the reader is advised to consult the ESF Study Group Report on MSR Planetary Protection Requirements (ESF, 2012).
3. Elements of the SSAF
There are four elements in the SSAF. Each is necessary, though on its own not sufficient to qualify for a safety assessment. The four elements are (Fig. 1): Bayesian statistics (Section 3.1), the subsampling strategy (Section 3.2), the test sequence (Section 3.3), and the Hold & Critical Review (Section 3.4). In addition, a list of techniques or instruments that could provide the information required by the SSAF has been compiled. This list of candidate instruments is not a set of required or endorsed instruments but has been established for planning purposes.
3.1. Bayesian statistics
Bayesian reasoning and methods of statistical analysis are standard approaches with which to address complex statistical issues (Greenland, 2021) and are widely used in medical decision making (Hunink et al., 2014). Bayesian statistics can accommodate various forms of information and help to optimize limited resources, like sample material or time. Therefore, Bayesian statistics is considered an appropriate tool for the SSAF.
When little prior information is available, Bayesian and frequentist statistics will generally yield very similar results (Rothman and Lash, 2021). When prior knowledge can be incorporated, whether for decision making in medicine or assessments of Mars samples, and a series of tests are to be used with the results being updated after each test, Bayesian statistics is more applicable and appropriate (Hunink et al., 2014; Greenland, 2021). In our case, it is necessary to specify an a priori probability that there is martian life in a sample tube. The information acquired by the NASA Mars 2020 mission (Farley et al., 2020) can be used to make an informed judgement about the a priori probability of finding martian life in a sample tube before any testing actually starts. This informed judgement must reflect the conservative posture of a positive hypothesis. The results of applying tests on one sample tube, together with the other Mars 2020 information, can also inform the a priori probability of finding martian life in subsequent sample tubes. Recall, however, that a sample safety determination cannot be directly extrapolated from one sample tube to another.
3.1.1. Sensitivity and specificity. In addition to establishing a pre-test probability (a priori), the other quantities that need to be estimated before Bayesian statistics can be applied are the sensitivity (Sn) and specificity (Sp) of the test. One complication is that terrestrial biological contamination would impact the specificity of the test, that is, it could lead to a false positive. There is also another complication: even if there is life somewhere in the sample tube, there is no guarantee that there will be life in the subsamples that are examined. Thus, the effective sensitivity of the test for the sample of a specific sample tube depends on both the sensitivity of the test itself and the capture rate, that is, the probability of finding martian life in a subsample if there is in fact martian life somewhere in the sample material inside the sample tube; the capture rate is certainly less than 1.0 (i.e., less than 100%). The effective sensitivity (ESn) is the product of the sensitivity and the capture rate.
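Stated compactly (a restatement of the definitions above, not notation from the original report, with c denoting the capture rate and odds = P/(1 − P)), these quantities combine in the standard likelihood-ratio form used in the worked example below:

$$\mathrm{ESn} = \mathrm{Sn}\cdot c, \qquad \mathrm{NLR} = \frac{1-\mathrm{ESn}}{\mathrm{Sp}}, \qquad \mathrm{odds}_{\mathrm{post}} = \mathrm{odds}_{\mathrm{pre}}\cdot \mathrm{NLR}, \qquad P = \frac{\mathrm{odds}}{1+\mathrm{odds}}.$$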
3.1.2. Driving factors for the safety assessment. If the presence of martian life in one of the subsamples cannot be excluded, then in terms of the safety assessment, we need to assume that there is a high probability that life is present.
FIG. 1. The four elements of the SSAF. There are multiple interdependencies between the various elements. The major external input parameter required is the level of assurance that something is safe. Some of the parameters need informed judgements based on Mars 2020 in situ data and tailored analogue test programs.
Note that this conservative approach, taking the precautionary principle into account (Pearce, 2004), assumes that the specificity of the test is 1 (i.e., 100%), that is, that a false positive cannot occur. Although the tests would certainly be chosen to minimize the chance of a false positive, there is still the issue of terrestrial biological contamination, which could at least bias the results (see Section 4.3 for more details). With this background information, how many negative test results are actually required before it can be concluded that the positive hypothesis (i.e., that there is martian life in the sample tube) has been "refuted" (i.e., that the probability of life being present is less than a pre-defined level of assurance)? A theoretical example can illustrate this, including the dependency of the required number of negative tests on the various parameters. More information about the relationship between samples and subsamples is given in Section 3.2.
Using the following assumptions:

Pre-test probability = 0.50
Sensitivity = 0.99
Capture rate = 0.75
Specificity = 0.99

the following can be derived:

Effective sensitivity = 0.99 × 0.75 = 0.7425
Pre-test odds = 0.50/(1 − 0.50) = 1.00
Positive Likelihood Ratio (PLR) = 0.7425/0.01 = 74.25
Negative Likelihood Ratio (NLR) = (1 − 0.7425)/0.99 = 0.26

If the test is then applied to the first subsample and the result is negative, the post-test probability can be calculated as follows:

Post-negative-test odds = pre-test odds × NLR = 1.00 × 0.26 = 0.26
Post-negative-test probability = 0.26/(1 + 0.26) = 0.206

Having one negative test reduces the probability that there is life in the sample from 0.50 (pre-test) to 0.206 (post-test); this value is now used as the pre-test probability for a second test; a second negative test reduces the probability further to 0.063, and so on. Table 1 shows the number of sequential negative tests required for the post-test probability to become less than 1 × 10⁻⁶ (i.e., 1 in a million), under a variety of assumptions.
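The arithmetic above is easy to script. The following minimal sketch (an illustration, not part of the framework) reproduces the worked example with the same parameter values:

```python
# Minimal sketch reproducing the worked example above:
# sequential Bayesian updating after negative test results.
def post_negative_probability(p_pre, sensitivity, capture_rate, specificity):
    esn = sensitivity * capture_rate        # effective sensitivity
    nlr = (1.0 - esn) / specificity         # negative likelihood ratio
    odds = p_pre / (1.0 - p_pre)            # pre-test odds
    odds *= nlr                             # update on a negative result
    return odds / (1.0 + odds)              # back to a probability

p = 0.50
for test in range(1, 3):
    p = post_negative_probability(p, sensitivity=0.99, capture_rate=0.75,
                                  specificity=0.99)
    print(test, round(p, 3))   # 1 -> 0.206, 2 -> 0.063, as in the text
```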
This exercise shows two important results (Table 1). First, the capture rate is crucial to this process. With a capture rate of only 0.25 (25%), a pre-test probability of 0.95 (95%), and both sensitivity and specificity at 0.95 (95%), it would require at least 77 negative tests (and no positive test) before one could conclude that the post-test probability is less than 1 × 10⁻⁶. On the other hand, if the capture rate is 0.75 (75%), then only 15 negative tests (and no positive test) are required. For an unrealistic capture rate of 1 (100%), only 6 tests would be required. Second, the sensitivity and specificity of the overall test sequence are important as well, but to a lesser extent than the capture rate. Only when their values drop well below 0.9 (e.g., to 0.7) would the number of negative tests required to conclude that the post-test probability is less than 1 × 10⁻⁶ increase markedly.
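The same update rule, iterated until the post-test probability falls below the pre-defined level of assurance, reproduces the counts quoted here from Table 1. Again a sketch for illustration, assuming the 1 × 10⁻⁶ threshold discussed in Section 2:

```python
# Minimal sketch reproducing the Table 1 counts quoted above:
# number of consecutive negative test sequences needed before the
# post-test probability of life drops below the assurance threshold.
def tests_required(p_pre, sensitivity, capture_rate, specificity,
                   threshold=1e-6):
    esn = sensitivity * capture_rate
    nlr = (1.0 - esn) / specificity
    odds = p_pre / (1.0 - p_pre)
    n = 0
    while odds / (1.0 + odds) >= threshold:
        odds *= nlr
        n += 1
    return n

for capture in (0.25, 0.75, 1.00):
    print(capture, tests_required(0.95, 0.95, capture, 0.95))
# -> 0.25: 77, 0.75: 15, 1.0: 6, matching the text
```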
The capture rate for a natural sample and the sensitivity and specificity of a real test sequence can only be estimated by an informed judgement. The elements that are necessary to enable such an informed judgement are described in Sections 3.2 and 4.2.
3.2. Subsampling strategy
As described in the previous section, the capture rate has a major impact on the number of negative tests required before the samples from a sample tube can be declared safe with a pre-defined level of assurance. The number of negative tests (as defined in Section 3.3) required is equivalent to the number of subsamples of a sample in a sample tube that need to be tested. As some parts of natural samples are more likely to contain life than the rest (e.g., Onstott et al., 2019), it is typically not appropriate to apply random sampling. To maximize the probability that a subsample will contain martian life (i.e., to increase the capture rate) if there is martian life somewhere in the sample of a sample tube, informed, targeted sampling is required.

Random sampling, or poorly informed targeted sampling, will reduce the effective sensitivity and thus lead to a substantially higher number of negative tests (i.e., number of subsamples) required before a sample from a sample tube could be declared safe with a pre-defined level of assurance. This is illustrated in Table 1: a poor capture rate of 0.25 (25%), compared to a good one of 0.75 (75%), would require more than 60 additional subsamples to be processed (each with a negative result) before reaching the same level of assurance. Thus, an informed, targeted sampling strategy needs to be applied to reach capture rates, ideally, above 0.5 (50%). Such a strategy requires a focus on the areas, characteristics, and features of the samples that are likely to contain martian life, taking into account the type of sample and the expected distribution and patchiness of life in the samples associated with fractures, veins, and general interconnected spaces as well as chemical interfaces and boundaries (e.g., Gorbushina, 2007; Cockell et al., 2019; Onstott et al., 2019; Brady et al., 2020; Suzuki et al., 2020). Many sample tubes will undoubtedly contain samples with diverse features that, depending on their categorization, could number from a few to a large number of distinct sections of each sample. However, such targeted sampling needs to be balanced, for example, by containing an appropriate mix of high-probability and medium-probability sites rather than solely sampling from those sites with features that are considered to have the highest probabilities of containing life (a minimal selection sketch is given below).

The first step in this process is to obtain information about the 3-dimensional (3-D) morphological characteristics of the external and internal structures of the sample at micrometer-scale (i.e., 10⁻⁶ m) resolution. Though this spatial resolution is not necessarily sufficient to find morphological evidence of life, it is sufficient to image physical features that could contain such evidence (e.g., fractures, veins, and general interconnected spaces). To select optimal targets and establish priorities for subsampling, spatially correlated chemical and mineralogical information is required as well (e.g., Onstott et al., 2019).
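A balanced targeted selection of this kind can be illustrated with a short sketch. The feature identifiers and scores below are hypothetical placeholders, standing in for the risk levels that would actually be assigned from 3-D structural, chemical, and mineralogical data:

```python
# Minimal sketch of a balanced targeted subsampling draw, assuming
# hypothetical feature scores (probability that a feature hosts life).
import random

features = {            # feature id -> assumed score (hypothetical)
    "fracture_A": 0.8, "vein_B": 0.7, "interface_C": 0.5,
    "matrix_D": 0.3, "vein_E": 0.6, "pore_F": 0.4,
}

def balanced_targets(scores, n_high, n_medium, high_cut=0.6, seed=1):
    rng = random.Random(seed)
    high = [f for f, s in scores.items() if s >= high_cut]
    medium = [f for f, s in scores.items() if s < high_cut]
    # Mix of high- and medium-probability sites, as recommended above,
    # rather than sampling only from the top-scoring features.
    return rng.sample(high, n_high) + rng.sample(medium, n_medium)

print(balanced_targets(features, n_high=2, n_medium=1))
```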
Airfall or windblown dust is a special case in this sampling context. Although dust might be sorted to a certain degree during sample acquisition and transport, random sampling for dust samples is an adequate approach as long as the dust samples are homogenized (i.e., mixed) before subsamples are taken. It is worth noting that the serendipitous dust on the sample tubes is likely not of sufficient quantity to perform a sample safety assessment. This would also be the case for any dust components inside the various sample tubes; hence, it is questionable whether small quantities of dust can be declared safe (i.e., free of martian life) on the basis of a sample safety assessment. Clays are another special case. There are clays formed by local aqueous alteration (e.g., smectite coatings on weathered feldspars), and there may be clay-rich mudrocks that are typically homogeneous in terms of the distribution of the clays. In determining the right subsampling approach, informed targeted subsampling is appropriate for clays formed by local alteration associated with distinct features (e.g., fractures) within lithified clay rocks. By contrast, clay-rich "muds" would be more suited to random subsampling. This informed approach can be generalized for various types of fine-grained minerals, that is, targeted subsampling for localized fine-grained alteration products or localized features, such as fractures in lithified fine-grained rocks, and random subsampling for unconsolidated fine-grained sediments. These approaches must be tested and confirmed using terrestrial analogue material (see Section 4.2).
A further consideration in selecting subsamples is that each subsample must be independent (conditional on the targeted sampling strategy), that is, each subsample needs to be taken from a different part of the sample. If this is not done, for example, if all subsamples are selected from the same section of the same crack, the subsamples would not be independent, and the assumptions of the Bayesian analysis would no longer be valid (and neither would the assumptions of standard "frequentist" statistics). The independence of sampling is not an issue for dust samples, since these are assumed to be homogeneous.
The information about the sample, however, is only one element in developing a credible and robust targeted sampling strategy. The specific martian sample information must be linked with a knowledge base, that is, experience with similar terrestrial sample types, including dust samples. To establish such a knowledge base requires an analogue test program tailored to the expected types of samples from Mars (i.e., information from Mars 2020) and the kind of measurements that will be used to establish the information for deriving the capture rate (i.e., 3-D structural information and the spatially associated chemistry and mineralogy). For more information see Section 4.2.
Bayesian statistics provide an estimate of the number of subsamples necessary to reach a pre-defined level of assurance that the sample in a sample tube is safe. This is a very important aspect of the sample safety assessment because it facilitates planning with regard to the resources (e.g., time, number of subsamples) required for individual sample tubes. It also helps to establish a strategy that optimizes the sequence of investigations required for analyzing the available sample tubes. The amount of material for each subsample depends on the sensitivity of the test in relation to the resolution required for the sample safety assessment. Thus, any available technique that has been properly vetted and meets or exceeds the measurement requirements should be considered for use.
3.3. Test sequence
In the previous sections about Bayesian statistics and the subsampling strategy, the term "test" is used in a very generic form. Unfortunately, there is no single "test" that can be applied to acquire all of the information necessary to perform a sample safety assessment. What is actually required is a set of investigations in a specific logical order that will inform the sample safety assessment. This set of investigations, referred to specifically as the "test sequence," is focused on the type of information necessary for the sample safety assessment rather than the specific techniques and instruments required to gather it. Any statements made about a "test" in the previous sections (e.g., number of negative tests, dependencies of tests, etc.) are actually applicable to the test sequence. For example, the number of negative tests required to reach a pre-defined level of assurance that there is no martian life in a sample tube is actually the number of negative test sequences applied to the equivalent number of subsamples; that is, 3 negative tests means that the test sequence was applied to 3 different subsamples, each time with a negative overall result. It should be emphasized that all of these tests will produce scientifically important data.
3.3.1. Test sequence steps. The first investigations in the test sequence are derived directly from the need to support the targeted subsampling (see Section 3.2).
Three-dimensional structural information-Step 1. This investigation is focused on the 3-dimensional (3-D) morphological characteristics of the external and internal structural features of interest at micrometer-level spatial resolution while the samples are still in the sealed sample tubes. The importance and usefulness of this kind of analysis has been previously described by McLennan et al. (2011) and Tait et al. (2021). Together with the information provided by the Mars 2020 mission (Farley et al., 2020), this information will enable the assignment of a risk level (i.e., chance to find martian life, or pre-test probability) to each sample tube. The use of Bayesian statistics and consideration of the dependencies of the various parameters (pre-test probability, capture rate, sensitivity, specificity) make it obvious that the samples with the highest risk index (i.e., highest chance to find martian life) should be opened and analyzed first, because these samples will provide the highest confidence in the results, positive or negative. They would also produce the most robust pre-test probability for any subsequent samples and altogether optimize the use of sample material. After a specific sample tube has been selected for further analyses, the 3-D structural information forms the basis of a risk-informed subsampling strategy.
Gas analysis-Step 2. There are many scientific reasons for analyzing the headspace gas inside a sample tube (Swindle et al., 2022; Tosca et al., 2022; Velbel et al., 2022). In fact, analysis of the sample tube headspace gas is likely to be one of the first direct measurements of martian material, beyond the indirect investigation described in Step 1. At the most basic level, analysis of the headspace gas might indicate whether a tube has leaked (Parai et al., 2021). Having been sealed on Mars, the tubes will be at negative pressure relative to ambient Earth pressure, so immediate leakage will be of terrestrial atmosphere into the sample. If terrestrial atmosphere has leaked into the tubes, then it would have occurred during atmospheric entry or while the capsule was on the ground awaiting recovery. In either case, bacteria, dust, or other airborne particulate matter may have been carried into the sample tubes as well, depending on the nature of the leak. Deposition of such matter on the martian samples has the potential to create false positives (assumed-to-be-martian species) or overprint a true positive signal of martian life (Milam et al., 2021). The gas analysis would be important for planning the sequence of operations for opening the individual tubes, as well as for interpreting the data, so as to know early on which tubes might be compromised by terrestrial contamination. Knowing which tubes are compromised will also be a key factor in determining the extent to which contamination knowledge samples will be required to deconvolve any terrestrial life signals in a sample from any potential martian signals that are also present (refer to details in Section 4.3).
Chemistry and mineralogy associated with the 3-D structural information-Step 3. These investigations focus on the acquisition of information about the chemistry and mineralogy associated with 3-D structural features of interest in a sample (e.g., fractures, veins, and general interconnected spaces). This could be done at the same time that 3-D structural information is acquired on samples while still in their respective sealed sample tubes (Step 1) or subsequently, once sample material is removed from the sample tubes. The benefit of the latter approach is that the quality and spatial resolution of the chemical information acquired on sample material removed from the sample tubes might be better. Such chemical and mineralogical information is essential to refine the subsampling strategy (Tait et al., 2022; Carrier et al., 2022), which is based on the 3-D structural information, and is of particular importance for optimizing the subsequent use of sample material.
To put these first investigations in the proper context, it is useful to briefly describe the expected initial sample characterization steps within the framework of the sample curation activities (Tait et al., 2022). The initial sample characterization covers three distinct phases: Pre-Basic Characterization (Pre-BC), Basic Characterization (BC), and Preliminary Examination (PE) (Fig. 2). The first investigation required in the SSAF, 3-D morphological characterization of the external and internal structures at micrometer-level spatial resolution (Step 1) while the samples are still in sealed sample tubes, overlaps with the Pre-BC investigations.
Step 3, the acquisition of chemical and mineralogical information associated with features of interest in the sample structure, overlaps with the BC and PE investigations (Fig. 2). These overlaps are beneficial because the same set of investigations serves three functions: curation, science, and sample safety assessment.
Steps 1 and 3 of the test sequence provide information about how many and which subsamples to take from the sample of a sample tube. Products and effects of life in a host rock are generally volumetrically more significant than life itself (Onstott et al., 2019). Therefore, it is possible that the results of these first steps would provide initial indications of life, in addition to refining the targeted subsampling. Morphological indications consistent with life are a special case in this context. Independent of the analytical process used, morphology alone can be misleading; there is a long history of incorrect interpretations of cell-like morphologies as evidence of fossilized life (see Section 3.3.2). What actions follow, in particular for Step 4, depends upon the associated chemistry and on whether any morphological feature of interest is a unique, isolated observation or a common constituent of a sample. Unlike the scientifically relevant null hypothesis, the sample safety assessment is focused on the positive hypothesis. Therefore, targeted investigations for Step 4 require morphological and chemical information that excludes a martian biological origin of common and unique features in the samples, rather than merely attempting to confirm their potential biological origin.
Organic molecules-Step 4. This step initiates the search for molecular evidence of martian life in targeted subsamples. The search strategy is based on the assumption that potential martian life is based on carbon chemistry. Therefore, the subsequent investigations (Steps 4, 5, and 6) must include a focus on locating, identifying, and characterizing organic compounds in the subsamples. Steps 1 and 3 of the test sequence are critical in the search for any organic molecules that might be associated with life because, as on Earth, it is expected that life is spatially clustered and not homogeneously distributed in the host rock, and that the bulk organic content of the host rock is not necessarily correlated with the presence or absence of life (e.g., Onstott et al., 2019; Suzuki et al., 2020).
For the purpose of the SSAF, organic molecules are defined as a group of covalently bonded molecules that contain carbon and at least one other element. We exclude CO, CO₂, CO₃²⁻, carbides, graphite, and steel from this functional definition of organic. Insoluble organic matter (IOM), as delivered by meteorites, and kerogen that originated from extinct life are also excluded, because such substances consist of molecular compounds that are not soluble in polar or non-polar solvents. Examples of included organic compounds are mellitic acid, urea, CS₂, CCl₄, methane, carbon suboxide, Prussian blue, polycyclic aromatic hydrocarbons, and obviously organic species like lipids, amino acids, aldehydes, etc.

To properly characterize any specific organic compounds in returned samples from Mars, it is necessary to use destructive techniques. The decision about whether to apply in situ techniques or bulk extraction-based techniques will require information from the previous investigations (i.e., Steps 1 and 3). In situ techniques are less likely to conclusively identify any specific organic molecule because they rely on only one type of information: either mass (e.g., Matrix-Assisted Laser Desorption/Ionization Mass Spectrometry (MALDI-MS), Time-of-Flight Secondary Ion Mass Spectrometry (ToF-SIMS)) or functional group (e.g., Raman spectroscopy, infrared spectroscopy, deep ultraviolet fluorescence). For some techniques, substantial interference from the mineral matrix is expected. The advantage of in situ techniques is that a result can be spatially associated with observed features. In situ analysis is the preferred approach in those cases where compelling morphological or chemical evidence of life is detected. Bulk extraction-based techniques (e.g., Liquid Chromatography Mass Spectrometry (LC-MS), Gas Chromatography Mass Spectrometry (GC-MS)) provide two types of information for identification of organic molecules: the time it takes for a compound to pass through a chromatographic column (retention time), and the mass and fragmentation behavior of the molecule as measured by the mass spectrometer. Although these two types of information improve the reliability of identifying specific organic molecules, extraction-based techniques eliminate the direct spatial association with structural features of the sample and could dilute a localized low-biomass signal (i.e., reduce the sensitivity). However, in those cases where the evidence for possible life is more widespread in a sample, extraction-based techniques could also increase the sensitivity because they typically sample a larger volume. It is acknowledged that organic molecules occupy a wide range of polarity space, and thus no single solvent will extract all compounds. This affects the amount of sample material that needs to be used for bulk analysis of each subsample. Further, any binding of life and organic compounds to mineral surfaces will require additional steps (such as hydrolysis) to release them (Mitra, 2004). Sample extracts from a specific subsample could be split for analyses by multiple complementary techniques. It is very important that all blanks be processed in the same manner as a sample of interest.
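As an illustration only, the functional definition above can be expressed as a simple screen over molecular formulas. The following is a minimal sketch using toy formula strings; it cannot capture structure-dependent exclusions such as carbides, steel, IOM, or kerogen, which require structural and solubility information rather than a formula:

```python
# Minimal sketch applying the functional definition of "organic"
# given above to simple formula strings (a toy parser; real work
# would use cheminformatics tooling and structures, not formulas).
import re

EXCLUDED = {"CO", "CO2", "CO3", "C"}   # CO, CO2, carbonate, graphite

def elements(formula):
    return set(re.findall(r"[A-Z][a-z]?", formula))

def is_organic(formula):
    if formula in EXCLUDED:
        return False
    elems = elements(formula)
    # Covalently bonded, contains carbon plus at least one other element.
    return "C" in elems and len(elems) > 1

for f in ["CO2", "CH4", "CS2", "CCl4", "CO(NH2)2", "C"]:
    print(f, is_organic(f))
# CH4, CS2, CCl4, and urea pass; CO2 and graphite do not.
```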
Molecular patterns-Step 5. If regions of organic-rich material are identified in a subsample, it is necessary to characterize the molecules present. Compound-specific measurements are required to search for molecular patterns. The targets of interest are small organic compounds, such as those found in biological monomers or biochemical intermediates, while the characterization of larger molecules is covered in Step 6. Molecular patterns are defined as a limited suite of organic compound abundances distinct from what would be produced abiotically with respect to structural diversity, chirality, and stable isotopes. For example, abiotic reactions tend to produce organic compounds at decreasing abundance with increasing molecular weight and show a lack of chemical specificity (e.g., biological vs. meteoritic amino acid abundances or Fischer-Tropsch hydrocarbons vs. even-numbered biological fatty acids). With the exception of certain meteoritic compounds, molecules produced from abiotic reactions show no chiral preference (Glavin et al., 2019). Glavin et al. (2019) provided a framework for using structural diversity, chirality, and stable isotopes together to evaluate possible biological origins of a compound, and they cautioned that any one of these indicators would be insufficient to indicate biology. Though this framework is science driven, it should be acknowledged that, for sample safety assessment purposes, the aim is to exclude biological origin. It is difficult to generate a predetermined life detection or life exclusion test from molecular patterns because of the likely co-existence of mixtures of several end member organic compounds that include those from active and prolific biology, degraded biological compounds, degraded abiotic organic compounds, and abiotic chemistry. As in the previous step, sample extracts could be split for analysis by multiple complementary techniques, and blanks must also be analyzed in parallel. Molecules most likely to be detected in this step are amino acids, nucleobases, sugars, lipids, and pigments.
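Because "molecular pattern" covers several quantitative criteria (structural diversity, chirality, stable isotopes), a couple of the simplest screens can be written down directly. The sketch below is illustrative only, with hypothetical abundance values; it is not a test prescribed by the Working Group:

```python
# Minimal sketch of two simple "molecular pattern" screens implied
# above (illustrative metrics only):
# (1) enantiomeric excess of a chiral compound: abiotic synthesis
#     tends toward racemic mixtures (excess near 0), while terrestrial
#     biology strongly prefers one enantiomer;
# (2) whether abundance declines monotonically with molecular weight,
#     as expected for many abiotic product distributions.
def enantiomeric_excess(d_abundance, l_abundance):
    total = d_abundance + l_abundance
    return abs(d_abundance - l_abundance) / total if total else 0.0

def declines_with_weight(series):
    # series: list of (molecular_weight, abundance); hypothetical values
    ordered = sorted(series)
    return all(a >= b for (_, a), (_, b) in zip(ordered, ordered[1:]))

print(enantiomeric_excess(5.1, 4.9))   # ~0.02: consistent with abiotic
print(declines_with_weight([(75, 9.0), (89, 4.0), (117, 1.5)]))  # True
```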
Macromolecules-Step 6. The next step in the SSAF is designed to target polymeric or other large molecules and to search for patterns in order to differentiate abiotic macromolecules, such as meteoritic insoluble organic material, from biological molecules, including, but not limited to, deoxyribonucleic acid (DNA), ribonucleic acid (RNA), proteins, and polysaccharides. For the purpose of the SSAF, a macromolecule is defined as an organic compound with a molecular weight greater than 2500 Da (Dalton). This limit is derived by taking one half of the mass of the smallest known functional macromolecules in terrestrial biology. For example, the smallest prion is 300 kDa (Silveira et al., 2005), the smallest enzyme is 66 residues, or 6811 Da (Chen et al., 1992), and the smallest ribozyme is 16 nucleotides, or 5233 Da (Scott et al., 1995). This limit is also smaller than the smallest of the well-studied RNA in vitro replicating systems, with a mass of about 15 kDa (Oehlschläger and Eigen, 1997). It should be noted that the smallest amyloid is an 8-residue domain, or 800 Da (Gazit, 2007; Sabate et al., 2015), and that amyloids, transmissible epigenetic regions in a larger protein, need to be in high enough concentrations to form fibrils (Sabate et al., 2015). Such a concentration of peptides would be strong evidence for life, but defining a macromolecule so broadly is likely to generate more false-positive detections than is useful. Metabolism-only, hypercycle-like life (Eigen and Schuster, 1997) would lack informational macromolecules but is unlikely to be able to outcompete terrestrial biology and pose a threat. Nevertheless, such a biological system would show a strong positive signal in the previous investigations but fail the current investigation step, and it must be investigated further for the potential presence of life. As in the previous steps, sample extracts could be split for analysis by multiple complementary techniques, and blanks must also be analyzed in parallel. Molecules likely to be detected in this step include proteins.
Life as we know it-Step 7. Hallmarks of terrestrial life include ATGC-based DNA, AUGC-based RNA, proteins comprising 20 L-amino acids, lipids (i.e., fatty acids, phospholipids, etc.), and glycopeptides, such as peptidoglycan and polysaccharides (e.g., cellulose). Detecting life as we know it assumes that, if there is a living organism, it relies on the same chemical processes as terrestrial organisms; this differs from the agnostic approach described in Step 8. Improving the sensitivity in what is expected to be a low-biomass scenario requires the use of amplification steps (see Section 3.4). There are two types of life detection techniques that amplify specific targets of interest, cultivation-dependent and cultivation-independent, both with varying degrees of sensitivity and specificity (Table 2). The most useful techniques would be highly sensitive and have low specificity. Cultivation techniques theoretically have extremely high sensitivity, in that one can grow a culture from a single cell, but the narrow bandwidth of any one combination of culture medium and growth conditions makes culture-based approaches impractical (see also Section 3.3.2). Alternatively, one can apply cultivation-independent techniques with amplification steps, like the polymerase chain reaction (PCR) for the amplification of nucleic acids. PCR sensitivity is high because billions of copies of a gene of interest can be derived from as little as one template copy. The usual target gene encodes small-subunit ribosomal RNA (SSU rRNA), which is a component of all terrestrial cells. PCR can also be very non-specific, in that primers for SSU rRNA genes have been designed to have homology to all, or nearly all, members of each of the three evolutionary domains of life. In fact, these universal primers are routinely used to characterize microbial communities on Earth, including those in extreme environments. Sequencing of PCR-amplified SSU rRNA genes has revealed many new phyla (i.e., a taxonomic rank in biology) of previously unknown life as we know it (e.g., Lloyd et al., 2018).
If life as we know it is detected in samples from Mars, the most likely explanation would be contamination from a terrestrial source. Contamination can occur anywhere from the assembly of the spacecraft all the way through to the analyses of returned material (McCubbin et al., 2019; Chan et al., 2020). Sequences of any PCR-amplified SSU rRNA genes derived from such samples could easily be compared to those from known spacecraft and spacecraft assembly facility contaminants (e.g., La Duc et al., 2014; Moissl-Eichinger et al., 2015; Koskinen et al., 2017; Regberg et al., 2020). Slim as the possibility is, there could exist life as we know it that is otherwise very different from known life on Earth, e.g., life with nucleic acid and protein sequences so distinct from those in existing databases that one might conclude they represent a life form that evolved on Mars rather than on Earth. Such a conclusion would have to be made with the utmost care and with the awareness that we are continually discovering novel terrestrial life forms and an increasing body of unannotated sequences in metagenomic datasets (i.e., microbial dark matter; Rinke et al., 2013). This has occurred partly through the development of new tools. Sequencing approaches revealed the existence of a third branch of life, the Archaea, only about 40 years ago; high-throughput DNA sequencing continues to unearth new microbial phyla. Also, the exploration of new, extreme habitats such as the deep ocean and the continental subsurface has greatly expanded our datasets. In other words, life as we know it is much more diverse than we knew just a few decades ago and may encompass even more forms by the time extraterrestrial samples are examined on Earth. Beyond self-replicating life, new viruses are discovered on a monthly basis. These include some very different classes of viruses, such as the giant viruses found across widespread habitats and ecological systems (e.g., Brandes and Linial, 2019) and newly confirmed bacteriophages that employ an alternate nucleobase (2-aminoadenine) in the genome (Zhou et al., 2021; Sleiman et al., 2021).
Another consideration if life as we know it is detected will be to ask whether it is alive. This is especially important for the sample safety assessment but also impacts the science. A range of analytical methods is available for determining microbial viability, each with its own sensitivity and specificity (e.g., Emerson et al., 2017). Each method uses a single criterion for determining life vs. death along what is actually a continuum, given that cells proceed from active to inactive and subsequently senesce and eventually disintegrate. Besides cultivation, currently available viability assessments are made on the basis of metabolic activity, positive energy status, and the detection and abundance of ribosomes, RNA transcripts, or intact membranes. Viruses and other infectious nucleic acids do not have any universal genes, and hence, there are no non-specific PCR primers or genetic probes for them. PCR of the main functional motifs (e.g., polymerases, helicases, receptor binding domains) that are most conserved among virus families could be used to look for viral signatures. Virus-like particles can also be stained with general nucleic acid stains and viewed by epifluorescence microscopy (Suttle and Fuhrman, 2010), though results can be ambiguous. Prions would likely not be distinguished by mass spectrometric analyses, since they are misfolded versions of naturally occurring host proteins, yet a sensitive protein misfolding cyclic amplification assay that tests for misfolding of common proteins (e.g., Saborio et al., 2001) could be applied.

Note to Table 2: The search for life as we know it is facilitated by a vast knowledge of terrestrial life and the development of powerful tools for life detection and characterization. All techniques require an extraction step (e.g., solvents and/or physical agitation) to release the target of interest (i.e., the life-form) from the mineral matrix and are destructive for the potential life-form under investigation (except for successful cultivation). Compatibility of using aliquots of one extract for more than one technique might only be possible in a few cases.
Life as we know it is quite varied, and the full range of possible life-forms and their structures, as well as the range of conditions within which they can survive, remains unknown. In assessing the possibility of life on another planet, it is necessary to take into account the possibility that alternative nucleic acids, amino acids, electron transfer systems, and high-energy bonds for driving metabolic activity could exist. Investigating such additional considerations is described in the next step.
Agnostic life detection (Step 8). Analytical methods that do not presuppose knowledge of the chemistry of a target life form (agnostic approaches) are especially useful for analyzing samples that contain unanticipated complexity. There are different metrics for complexity in chemistry, each typically associated with specific analytical techniques. Detection of complexity that is neither seen in controls nor anticipated by the statistical models developed for agnostic analytical methods is interpreted as an indication of potential life that requires further study.
Earlier steps in the test sequence, especially Step 5, include analytical techniques that may inform an agnostic approach with regard to such features as particular classes of molecules, patterns within the molecular weights, or even intrinsic molecular complexity. There is a distinct need for novel techniques specialized for biochemical systems that do not share a chemical heritage with life on Earth. An expanded agnostic search for life could include molecules that are sufficiently complex but not associated with life on Earth (e.g., Marshall et al., 2021), discrete metastable accumulations of elements or isotopes that are not typical of abiotic geological or mineralogical processes (e.g., Kempes et al., 2021), and disequilibrium redox chemistries that are not consistent with abiotic redox reactions (e.g., Frank et al., 2013).
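As a toy illustration of flagging unexpected complexity against controls, consider the sketch below. It uses compressibility as a crude stand-in for a molecular complexity metric; it is explicitly not the assembly index of Marshall et al. (2021) or any published method, and the margin value is an arbitrary assumption.

```python
# Toy stand-in for an agnostic complexity screen: use compressibility as a
# rough proxy for complexity, and flag measurements whose complexity exceeds
# the range seen in abiotic controls. NOT the method of Marshall et al. (2021).
import zlib

def complexity(signal: bytes) -> float:
    """Compression ratio as a crude complexity proxy (1.0 = incompressible)."""
    return len(zlib.compress(signal)) / len(signal)

def flag_anomalies(samples: dict[str, bytes], controls: list[bytes],
                   margin: float = 0.05) -> list[str]:
    """Flag samples whose complexity exceeds the abiotic-control maximum."""
    ceiling = max(complexity(c) for c in controls) + margin
    return [name for name, s in samples.items() if complexity(s) > ceiling]
```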
To cast the widest possible net for life detection, the range of allowable interpretations for life must broaden. In addition to the expanded interpretive frameworks for typical methods, we present two concepts for agnostic life detection (see Section 3.3.3). Both require amplification and sequencing and explore the possibility of novel metabolisms that would not be detectable by typical biological methods (i.e., Step 7), yet they also identify particles with surface chemistry characteristics typical of living organisms. These concepts could be used to recognize organic or inorganic evidence of life. Any concepts to be used, like those presented here, must address different forms of complexity (e.g., molecular vs. surface-binding complexity) and use orthogonal techniques, in the sense that they exploit different interactions between analytical technique and sample. It follows that no single agnostic life detection methodology is sufficient on its own.
Diagnostic elements not explicitly used in the test sequence
Carbon. Life on Earth is based on carbon, which is present as a mixture of simple and complex organic molecules. As a guide to the search for life on Mars, it is assumed that carbon plays a similarly significant role, so the search for life (extinct or extant) on Mars can be cast as a search for carbon. The rationale for searching for organic molecules is described in the work of Neveu et al. (2018). This search should be performed at the detection limits of available instrumentation, though it is acknowledged that the organic compounds released from a single cell in a given sample tube would be below the limit of quantitation (i.e., as required by Neveu et al., 2018) of the most likely instrumentation and that, in many cases, detecting compounds of interest also means destroying them and disrupting any life present. A cell contains about 40 fg (femtograms, 10⁻¹⁵ g) of organic molecules (Braun et al., 2016), and even the most sensitive techniques likely require at least hundreds to thousands of cells in the sampled volume for a detection (e.g., Summons et al., 2014; Bhartia et al., 2010; Braun et al., 1999). Thus, the corollary, that if no carbon is detected there is no life, does not hold true. Hence, no lower limit for carbon detection is set for the test sequence in the SSAF.
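A back-of-envelope calculation with the figures quoted above illustrates why no lower carbon-detection limit is set:

```python
# Back-of-envelope check using the figures quoted above: ~40 fg of organic
# molecules per cell (Braun et al., 2016), with the most sensitive current
# techniques requiring roughly hundreds to thousands of cells for detection.
ORGANICS_PER_CELL_FG = 40.0

for n_cells in (1, 100, 1000):
    total_fg = n_cells * ORGANICS_PER_CELL_FG
    print(f"{n_cells:>5} cells -> {total_fg:.0f} fg = {total_fg / 1e3:.2f} pg")
# A single cell (~40 fg) sits far below typical limits of quantitation,
# so a negative carbon result cannot rule out life.
```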
Stable isotopes. The stable carbon isotopic compositions of living organisms on Earth are determined by the metabolic pathways that operate in the organisms. However, there is such a wide diversity of carbon isotopic compositions, and no single diagnostic composition or defined fractionation between nutrients and organisms, that the use of carbon isotopic composition as a diagnostic tool for life is substantially compromised. Combinations of isotopic compositions, for example, carbon, nitrogen, and sulfur, might mitigate these limitations, though without knowing all the abiotic sources, sinks, and fractionation processes possible, this approach is still considered a weak diagnostic tool. Given that there are distinct isotopic differences between martian geological materials and geological materials from Earth (e.g., Barnes et al., 2020; Franchi et al., 1999; Franz et al., 2017; Füri and Marty, 2015; Shaheen et al., 2015), it is logical to presume that similar isotopic differences might persist between possible martian organisms and terrestrial organisms. It may therefore be tempting to use isotopic composition as a means to differentiate between terrestrial and martian organisms. However, organisms often acquire the isotopic composition of their primary energy sources (i.e., their food) (e.g., Berry et al., 2015; Boschker and Middelburg, 2002; Jennings et al., 2017; Tykot, 2003), so terrestrial organisms that have subsisted on the elements in martian rock would likely inherit an isotopic composition like that of their environmental sources. For these reasons, it is not advisable to rely primarily on the isotopic composition of potential biological material to identify martian life or to differentiate whether any given life form discovered had a martian vs. a terrestrial origin.
Solubility. Solubility is an important aspect in evaluating the potential harmful consequence of martian life on Earth. Terrestrial biology is solution based. Biochemical intermediates and macromolecules are soluble (or can be dispersed) in water or lipids. Cells and viruses can also be dispersed in water. Exceptions are various types of (naked) viruses or biological systems that could have minerals that cover the outside of a cell. With regard to the sample safety assessment, therefore, the concern is whether martian life is, and martian organic molecules are, soluble under physiological conditions. What is important for the test sequence is whether there are soluble organic molecules in extracts that can be detected and characterized by the analytical tools to discern what they are (i.e., via Steps 4-7). A separate investigation to assess the solubility of (organic) material in a sample is not considered required and would also unnecessarily consume sample material. For these reasons, solubility is not considered a standalone diagnostic tool but is indirectly addressed by the extraction processes for some chemical and biological analyses.
Metals. Living systems on Earth interact with a range of metals, including those that act as cofactors with enzymes. Roughly one-third to one-half of all known enzymes depend upon metal ions (e.g., Mounicou et al., 2009; Banci and Bertini, 2013). The most common metallic cofactors are Mg, Ca, Zn, Mn, and Ni; Fe and Cu are commonly redox-active; and Co and Mo interact with coenzymes (Banci and Bertini, 2013; Madigan et al., 2019). The concentration of metals in cells and their association with cellular organic compounds suggest that metal profiles might be useful biosignatures. Indeed, the systematic biological study of these metal profiles has been termed ''metallomics,'' and the suite of metals associated with a cell is known as the ''metalome.'' Problems with relying on metal data for a sample safety assessment include:

1. their occurrence in concentrated form due to purely abiotic processes;
2. the variation of cellular metal proportions among phylogenetically diverse microbial cells and in response to environmental parameters;
3. the evolution of life on Earth to use particular metal ions based, in part, on their availability, suggesting that life on Mars could potentially select for an entirely different set of metal ions than its counterparts on Earth.
For these reasons, metallomics is not considered a strong diagnostic tool for the SSAF.
Morphology. Morphological evidence of life can compound the challenges of life detection, as cell-like forms can easily be produced by non-biological processes. The self-arrangement of lipid molecules with hydrophilic heads and hydrophobic tails in water is an example of how molecules with cell-like morphologies can be formed abiotically (e.g., Dworkin et al., 2001; Jordan et al., 2019). The chemical behavior and relative size of the hydrophilic heads of lipid molecules cause them to pack into a cell-like arrangement called a micelle, in which the hydrophilic heads face outward toward the water and the hydrophobic tails are positioned toward the center of the 3-D micellar structure. Similar structures can be generated by polymers in water, where a dense phase forms droplets within a more dilute phase and the droplets represent cell-like compartments. These entities, known as coacervates, were implicated in early origin-of-life models proposed by Alexander Oparin, who hypothesized that coacervates could have operated as protocells. Spontaneously formed cell-like structures can also leave residues that can be misinterpreted as life, what J.D. Bernal called ''jokes of nature'' (Urey, 1962). The early 1960s saw reports of ''organized elements'' in carbonaceous meteorites derived from asteroids. Claus and Nagy (1961) believed that these entities could be microfossils indigenous to the meteorite. Subsequent studies revealed that these entities were either exogenous materials, such as pollen and fungal spores that had contaminated the sample, or endogenous materials such as olivine crystals (Fitch and Anders, 1963). Observations of cell-like morphologies have also been used to suggest evidence of life in meteorites from Mars. Scanning electron microscope (SEM) images of ALH84001 revealed segmented tubular structures that were interpreted as fossil nanobacteria (McKay et al., 1996), though later work implied such features were related to crystalline pyroxene and carbonate growth steps (Bradley et al., 1997). Cell-like morphologies have also led to misinterpretations of evidence for early life on Earth. The 3.5 Ga Apex Chert in Western Australia contains filament structures that were once interpreted as oxygen-producing cyanobacteria (Schopf, 1993), yet modern interpretations of the host rocks suggest that the structures originated in a hydrothermal vent rather than the originally proposed shallow sea floor setting (Brasier et al., 2002). The Apex Chert filament morphologies that were assigned a biological origin have also been reinterpreted as carbon that may be organic compounds generated by Fischer-Tropsch-type reactions during hydrothermal serpentinization of ultramafic rocks (Brasier et al., 2002), as organic molecules that adsorbed onto self-organized crystal aggregate biomorphs (Garcia-Ruiz et al., 2003) or exfoliated phyllosilicates (Wacey et al., 2015), and as aggregates of hematite microcrystals (Marshall et al., 2011). It is worth noting that a biological origin of the filamentous microstructures has not been demonstrably excluded, since they could represent remnant chemolithoautotrophs that lived in a hydrothermal setting (Schopf et al., 2018).
In general, cell-like morphologies remain controversial because there are many processes in nature that generate life-like microscale objects that include tubular, filamentous, framboidal, and dendritic structures (e.g., Cosmidis and Templeton, 2016;Garcia-Ruiz et al., 2009;Kotopoulou et al., 2020;Muscente et al., 2018;Rouillard et al., 2018;McMahon et al., 2021). Given the extensive history of incorrect interpretations for life based on morphological evidence alone, morphology is not considered a reliable stand-alone criterion for or against life, though it may be useful when associated with chemical information or to inform subsequent steps in the test sequence (e.g., Step 4).
Cultivation. The SSAF is in agreement with the position of the NRC Committee on Mars Sample Return Issues and Recommendations that ''Attempts to cultivate putative organisms, or to challenge plant and animal species or tissues, are not likely to be productive'' (NRC, 1997). The major limitations of this approach are that cultivation is not even possible for most terrestrial organisms and that challenge tests are typically tailored to one or a few targets of interest. In addition, it is not considered advisable to multiply viable organisms that could have unknown and potentially harmful consequences. Therefore, cultivation is not considered a diagnostic tool used by the SSAF. As an indirect consequence, and because of their limited diagnostic scope relative to the potential avenues of causing harm, animal and plant inoculation are ruled out as well.
3.3.3. Integrated test sequence and candidate instruments. Figure 3 describes the test sequence, and Fig. 4 explains the nomenclature used in the context of the test sequence. Rather than applying a scattergun approach (i.e., using all techniques available) or a piecemeal approach (i.e., focusing on individual steps or using a particular technique), it is critical to establish an ensemble of techniques and instruments capable of producing the information required for the safety assessment. Table 3 includes a number of techniques and instruments that could provide this information. The list of analytical instrumentation draws heavily on the list prepared by the MSPG2. Some techniques are complementary and overlap with other techniques, which, from a science point of view, is advantageous. From a safety assessment point of view, complementary or overlapping information acquired with different levels of sensitivity and specificity could lead to challenges in its interpretation if this is not considered in advance. In this context it is considered essential that, regardless of the instrumentation or techniques that are ultimately selected, their limitations are well-understood and their performance is known and dependable (see Section 4.2 for a way to address these issues).

FIG. 3. Overview of the integrated test sequence. The test sequence is a set of sequential investigations (i.e., steps), each one responsive to the previous steps. There is only one real gate (Step 8) in terms of stopping any further investigations and declaring a sample tube safe within the pre-defined level of assurance. Step 9 establishes a Hold & Critical Review for any sample investigations and executes a set of activities to evaluate all relevant data and the risk management measures before deciding on the next steps.

FIG. 4. Nomenclature used in the context of a test sequence. The elements of the test sequence (i.e., investigations) address individual questions. Each investigation typically includes more than one measurement technique or instrument. The measurements provide the data that are discussed at the level of investigations. The safety assessment for one sample tube is based on the scientific assessments of the individual investigations carried out on the subsamples.
Steps 1 to 3 of the test sequence are concerned with identification of features that are not associated specifically with living entities, although they may have been formed by life. Measurements focused on analyses of textures, mineralogy, chemistry, and gases are the same types of analyses that are currently used to identify possible biological and biogenic features in geological materials. These first 3 steps employ routinely tested and well-understood analytical techniques, such as microscopy, spectroscopy, and chromatography. For some steps, only one technique might be applicable, though it is inevitable that, for many of the steps, several different instruments could deliver the required results. For example, given appropriate calibrations, the mineralogy of a sample could be determined by optical or electron microscopy, IR spectroscopy, Raman spectroscopy, or X-ray diffraction. It is also the case that the same instrument could deliver required information for several steps. For example, Raman spectroscopy can identify the mineralogy of a specimen (Step 3) and the types of organic molecules (Step 4) that it contains.
Steps 4 to 6 of the test sequence cover analysis of organic material, including organics that are not necessarily of biological origin. There is a wide variety of techniques and instrumentation available for the required analyses.
Step 4 is a measurement of the presence or absence of organic compounds. Moving from Step 4 to Step 6 employs techniques of increasing specificity to enable acquisition of the required information: if organic material is present, what are its characteristics? The information includes recognition of molecular patterns and isomeric variations associated with individual species (e.g., amino acids, lipids) as well as the presence of macromolecules (which may, or may not, be polymeric). With the techniques available at the time of this writing, progression to Step 6 requires increasingly invasive treatment of the selected subsample with a sequence of solvents (i.e., polar, non-polar, acidic, alkaline) to produce solutions for introduction into appropriate analytical instrumentation. The main technique for analysis of organic species is mass spectrometry, though the differing chemistries and molecular masses of the components require specific methods to introduce samples into the analyzer. Examples in current use include Gas Chromatography (GC), Capillary Electrophoresis (CE), and High Performance Liquid Chromatography (HPLC). Alternatively, high molecular mass compounds can be analyzed by imaging mass spectrometry techniques (e.g., MALDI-MS, Laser Desorption/Ionization Mass Spectrometry (LDI-MS), Desorption Electrospray Ionization Mass Spectrometry (DESI-MS), nano-DESI-MS, and ToF-SIMS). These techniques enable in situ molecular analysis at high spatial resolution when coupled to optical and electron microscopy and constitute an area of research that is rapidly developing. Depending on the ionization method, the techniques can analyze a wide mass range (1-100,000 Da) with a spatial resolution down to less than 1 micrometer and with minimal sample preparation (Watrous et al., 2011; Heeren, 2015; Bodzon-Kulakowska, 2016). At Step 7, the question changes from how best to identify the characteristics of organic material to whether the material has come from a living (or dormant) biological form of life. The equipment proposed for Step 7 assumes that any organisms present have characteristics that produce signals analogous to those that we observe on Earth, and hence, they can be detected by the same instruments used for determination of terrestrial evidence of life.
Step 7, then, is looking to sequencing techniques for amplification of genetic material. Nucleic acids are relatively easy to detect, and moreover their genetic sequences can reveal a vast amount of information about the life forms that synthesized them. Variations of PCR can provide further information. For example, qPCR can quantify gene copy number (and thus estimate cell number), and reverse transcriptase PCR (RT-PCR) can bias the assay in favor of active, rRNA-rich cells. High throughput metagenomic and transcriptomic sequencing is increasingly being used to more fully characterize microbes and their activities, which requires greater amounts of nucleic acids for analysis since there is no amplification step. The technology has now advanced such that as few as 50 cells may be fully characterized (Minich et al., 2018). Additional analytical methods include single cell genomics (Woyke et al., 2017) and a mini-metagenomics approach, which can characterize the genomic features of 5-10 cells (Yu et al., 2017). Both single-cell genomics and mini-metagenomics require the amplification of DNA from cell(s) and are designed for samples with low cell abundance. These detection and characterization techniques are relatively mature, such that while we anticipate incremental improvements in the coming decade, the fundamental principles will likely still apply.
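As a simple illustration of the qPCR-based cell estimate mentioned above, the sketch below converts measured gene copies into an order-of-magnitude cell count. The copies-per-cell value is an assumption (terrestrial bacteria carry roughly 1 to 15 rRNA operon copies), and any martian organism's value would of course be unknown.

```python
# Minimal sketch: convert a qPCR SSU rRNA gene-copy estimate into a rough
# cell-number estimate. The copies-per-cell value is an assumption based on
# terrestrial bacteria; treat the output as order-of-magnitude only.

def estimate_cells(gene_copies: float, copies_per_cell: float = 4.0) -> float:
    """Rough cell count implied by a qPCR gene-copy measurement."""
    if copies_per_cell <= 0:
        raise ValueError("copies_per_cell must be positive")
    return gene_copies / copies_per_cell

print(estimate_cells(2.0e3))  # ~500 cells implied by 2000 measured copies
```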
Step 8 goes beyond the familiar, that is, agnostic life detection with an amplification step and minimal assumptions. From a sample safety assessment framework point of view, this is the most important element in the test sequence and the only one with a clear gate. At the same time, it is the least defined step in terms of techniques and robustness, which can only be addressed by targeted developments. Two concepts are described that could benefit from such targeted developments. The first concept is focused on the identification of non-canonical information polymers. Current nanopore-based sequencing technology is well suited to expanding the search for informational molecule patterns beyond the specific amino acids and nucleotides conserved in contemporary extant life on Earth. This technique is amenable to analysis of the diversity of informational polymers that might have been common in a pre-RNA or RNA world, before the diversification of life and the dominance of DNA- and protein-based life (Joyce, 2002). Polymerase evolution and design experiments have found six additional possible RNA alternatives and precursors, such as threose nucleic acid (TNA), hexose nucleic acid (HNA), and other xenonucleic acids (XNAs, which are nucleic acids not found in nature), all of which can store and transmit genetic information (Pinheiro et al., 2012). Strands of XNAs can also bind to target ligands with high affinity and specificity, which demonstrates the capacity for preferential folding that is associated with Darwinian evolution. While this study is speculative about the nature of RNA alternatives and precursors, there are many examples of DNA and RNA alternatives used in nature, including methylated forms of DNA (Moore et al., 2013), a 2,6-diaminopurine found in the DNA of bacteriophages (Sleiman et al., 2021), and over 120 modified forms of RNA found in ribosomal and transfer RNAs (Schaefer et al., 2017). These exceptions to the highly conserved structures of DNA and RNA only strengthen the need for a capability that extends beyond characterization of the standard forms of DNA, RNA, and proteins when searching for unfamiliar life. Life as we know it is generally based on multiple classes of polymers with conserved sets of monomer units. The differences in the number and sequence order of these monomers, i.e., their informational content, are what distinguish the structure and function of these types of polymers. Repetitive polymers are not necessarily informational or biological molecules, however, and abiotic polymers of carboxylic acids and amines (e.g., nylon or polyester) represent a case where neither is true. Biology with a unique origin may capitalize on the informational capability of unique semi-repetitive polymers based on alternative genetic alphabets (monomer chemical structures), which would require analysis of any polymer that contains a set of semi-repeating monomers. This type of sequencing is possible with nanopore (electrochemical) devices that can detect a broad range of water-soluble, charged molecules (nucleic acids, proteins, polyions, etc.) with simpler and faster sample preparation than required by other commercially available sequencing platforms. Nanopore sensing is ''agnostic'' in that it analyzes any linear polymer that enters the pore. Nanopore devices can distinguish between monomers with slight differences in shape, volume, or polarity and only require a template to tune the voltage-driven translocation rate for identification (Branton et al., 2008).
Nanopore analyses have been used to sequence RNA (Garalde et al., 2018), inosine-bearing oligonucleotides (Carr et al., 2017), methylated nucleobases (Rand et al., 2017; Simpson et al., 2017), and even proteins (Ouldali et al., 2020). The proposed concept can be used to interrogate returned samples for non-canonical polymers that could indicate a novel informational or catalytic polymer distinct from those used by biology on Earth.
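The ''semi-repeating monomers'' criterion can be illustrated with a toy check on already-called monomer symbols. This is not a nanopore signal-processing pipeline, and the coverage and length thresholds are arbitrary assumptions:

```python
# Toy illustration of the "semi-repeating monomers" criterion: an informational
# polymer should decompose into a small recurring monomer alphabet, whereas a
# random heteropolymer should not. Operates on already-called monomer symbols.
from collections import Counter

def alphabet_fraction(monomers: str, top_k: int = 6) -> float:
    """Fraction of the chain covered by the top_k most frequent monomers."""
    counts = Counter(monomers)
    return sum(n for _, n in counts.most_common(top_k)) / len(monomers)

def looks_informational(monomers: str, coverage: float = 0.95) -> bool:
    """Heuristic: a small alphabet covering most of the chain is suggestive."""
    return len(monomers) > 50 and alphabet_fraction(monomers) >= coverage

print(looks_informational("ACGTTNA" * 20))  # True: small recurring alphabet
```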
The second concept is focused on randomly generated oligonucleotides used to build an informatics fingerprint that represents the binding complexity of a particle surface. The patterns of nucleic acid binding to surfaces, independent of their biological function, can be used to probe and report on any chemical environment, which opens up a new way to detect evidence of life. This concept (Johnson et al., 2018) targets the secondary and tertiary structures that oligonucleotides naturally form, which can have affinity and specificity for a variety of molecules, from specialized biomolecules such as peptides and proteins (e.g., Jayasena, 1999), to non-linear polymers, and even to inorganic substrates such as mineral and metal surfaces (Cleaves et al., 2011; Ye et al., 2012). Short DNA sequences (~15 nucleotides), or ''aptamers,'' will bind to all types of chemical structures in complex samples, similarly to how antibodies bind to analytes. Unlike antibodies, however, aptamers are agnostic in that they comprise a nearly unlimited variety of binding specificities, whereas antibodies have been selected for recognition of limited types of biomolecules. Aptamer binding is driven by the surface chemistry of the analyte and limited only by chemical characteristics that discourage DNA binding, such as regions of strong negative charge or a deficit of aromatic or hydrophilic moieties. By accumulating large numbers of binding sequences that reflect different compounds in a mixture, statistical analyses of aptamer motifs and sequence counts generate patterns associated with increasing levels of complexity that distinguish the biological surfaces to be analyzed. This pattern recognition, known as ''chemometrics,'' represents a set of protocols that can be applied to find patterns in chemical data sets (Nie et al., 2015), which in turn can be used to fingerprint agnostic evidence of life. The statistically derived level of complexity in aptamer sequences can be analyzed to generate high-dimensionality chemometric score plots that reflect the complexity and assumed biogenicity of the resulting pattern.
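The chemometric scoring idea can be sketched as a projection of aptamer-motif count profiles into a low-dimensional score space, with distance from the abiotic-control cloud as a crude complexity indicator. The count matrix below is random stand-in data; real inputs would be sequence counts of bound aptamers per motif.

```python
# Sketch of the chemometric idea: project aptamer-motif count profiles into a
# low-dimensional score space (PCA via SVD) and compare a sample against
# abiotic controls. The counts are random placeholders for illustration.
import numpy as np

rng = np.random.default_rng(0)
controls = rng.poisson(5.0, size=(10, 50))   # 10 abiotic controls x 50 motifs
sample = rng.poisson(5.0, size=50)           # one unknown sample profile

# Center on the control mean, then PCA via SVD of the control matrix.
mean = controls.mean(axis=0)
u, s, vt = np.linalg.svd(controls - mean, full_matrices=False)
scores_ctrl = (controls - mean) @ vt[:2].T   # control scores, first 2 PCs
score_sample = (sample - mean) @ vt[:2].T    # sample score in the same space

# Distance of the sample from the control cloud as a crude indicator that its
# binding complexity differs from the abiotic background.
d = np.linalg.norm(score_sample - scores_ctrl.mean(axis=0))
print(f"sample offset in PC space: {d:.2f}")
```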
To optimize the use of sample material, it may be possible to use a sample for more than one investigation or analysis. In the context of the SSAF, this approach would only be acceptable if it is shown that multiple uses of sample material cannot lead to an increased false-negative rate in the overall assessment. Figure 3 shows the entire test sequence. Investigations in Steps 1 and 3 inform two kinds of decisions:

1. The sequence of opening and investigating the individual sample tubes from Mars.
2. The number, type, and locations for subsampling the sample in each sample tube.

Decision criteria
There are no yes/no criteria or specific threshold levels to reach a decision for these two steps. The decisions will need to be based on informed judgements.

A positive test for organic compounds in Step 4 is suggestive of the potential for biology, although abiotic chemistry (e.g., that found in carbonaceous chondrite meteorites) or terrestrial contamination can result in the presence of organic compounds as well. A negative test for organic compounds in Step 4 does not necessarily indicate the complete absence of organic molecules. Rather, it would indicate that, if any molecular evidence of biology is present in the sample, it is in very small concentrations that are below the level of detection or strongly bound to the substrate.

A positive test for molecular patterns (Step 5) should be viewed as highly suggestive of the potential for active or recent biology. The abundance and the signal-to-noise of the patterns (for example, homochirality in all species vs. 20% enantiomeric excess in some species) must be compared to plausible abiotic formation and preservation processes for such compounds and the best current knowledge of the samples and the martian environment. A negative test for molecular patterns with a positive test for organic compounds suggests that, if biology is present, it is overwhelmed by other organics or degraded organic material, or that biology is absent.

A positive test for macromolecular patterns should be viewed as highly suggestive of the potential for active or recent biology or terrestrial contamination. The nature of the macromolecules would need to be assessed in the next steps to determine whether they arise from terrestrial contamination or martian biology and whether these macromolecules are suggestive of extant or preserved extinct biology. A negative test for macromolecular patterns with a positive test for organic patterns suggests that, if biology is present, it is a metabolic hypercycle (Eigen and Schuster, 1997) or uses macromolecules that are resistant to analysis, or that the life died and its macromolecules degraded before analysis. Best current knowledge of the samples, the martian environment, and the environments the samples have experienced from collection to analysis must be used collectively to assess whether the molecular patterns observed could have originated from degraded biological macromolecules.
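The outcome interpretations above can be read as a small decision table. The sketch below encodes them directly; the strings paraphrase the text, the three booleans stand for positive detections in Steps 4, 5, and 6 respectively, and the simplified branching is an illustration rather than an official protocol.

```python
# Schematic decision table for the Step 4-6 outcomes described above.
# Strings paraphrase the text; this is an illustration, not the protocol.

def interpret(organics: bool, mol_patterns: bool, macro_patterns: bool) -> str:
    if not organics:
        return ("no organics above detection limits (or strongly bound); "
                "follow up with amplification (Steps 7-8)")
    if macro_patterns:
        return ("highly suggestive of biology or terrestrial contamination; "
                "assess origin in subsequent steps")
    if mol_patterns:
        return ("highly suggestive of active or recent biology; "
                "compare against plausible abiotic baselines")
    return ("organics without patterns: abiotic chemistry, contamination, "
            "or degraded/overwhelmed biology")
```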
Failure to detect organic compounds, molecular patterns, or macromolecules is not considered sufficient to declare a sample safe. Among other reasons for a negative detection (e.g., strong binding to the mineral matrix), the sensitivity of the available techniques could miss organic molecules equivalent to hundreds or thousands of terrestrial cells (see Section 3.3.2). As a consequence, a negative detection in Steps 4-6 must be followed up with an amplification step (i.e., Steps 7 and 8).
Step 7 is important for two reasons-to detect any remnant terrestrial biological contamination in the samples and to detect evidence of martian life that is similar to terrestrial biology. It is expected that this step could lead to a number of positive events that are likely associated with terrestrial contamination. However, until any evidence for life can be clearly associated with terrestrial contamination, the conservative assumption (positive hypothesis) is that it could be martian biology. A negative detection in Step 7 would demonstrate that the samples are free from terrestrial biological contamination, within the detection limits of the analytical techniques. Even so, the potential for martian life to be present still cannot be excluded because this step is highly biased toward life as we know it. The only definitive gate is actually Step 8. If there is no evidence for the presence of martian life in the samples and there are no open, uncertain, or ambiguous issues remaining that could associate sample characteristics to martian biology, then the sample of a sample tube would be deemed safe within the pre-defined level of assurance.
In the case that potential evidence of extant martian life is detected, a Hold and Critical Review (HCR) must be initiated to evaluate the status quo before proceeding. This approach is similar to having a spacecraft enter a safe mode: until it is understood what triggered the safe mode and it has been concluded that it is safe to proceed, normal spacecraft operations would be suspended. Details of the HCR must be described in the Sample Safety Assessment Protocol (SSAP). The Critical Review must include a comprehensive and holistic evaluation of all relevant data acquired, the analytical techniques and specific instruments and equipment used, the methods and procedures used to control the safety of Earth (e.g., containment design and operations, sterilization procedures and criteria), and the overall risk assessment. Only then could it be decided as to whether the Hold would apply to investigations on subsamples from the one sample tube being analyzed, on samples in other sample tubes, and/or on samples already released from containment. Further investigations that are responsive to the data and the understanding at that time would likely be required to assess whether and how a hazard analysis could be executed. While not directly a concern for the safety of a specific sample, finding evidence of extinct martian life must also lead to an HCR. In such a case, the overall risk posture reflected in the level of assurance must be reviewed. Establishing the initial level of assurance typically follows a conservative approach. However, there is a significant difference between the a priori assumption that there is life on Mars and having evidence that life emerged on Mars. The need for this is further illustrated by samples from Earth that simultaneously contain evidence of both extinct and extant life (e.g., surface exposed rock on Earth that contains evidence of ancient fossils and viable microbial inhabitants).
The HCR approach would have to be reflected in agreements that cover the release of samples from the SRF and their subsequent use. An important aspect in terms of managing expectations is to acknowledge that an HCR might be a recurring event due to possible terrestrial biological contamination. Comprehensive contamination knowledge (CK) could expedite the HCR. The HCR and any decision associated with it must be performed by an independent team that has decision authority for executing the SSAP (see additional details in Section 4.1). Figure 5 describes some of the possible outcomes of going through the test sequence. The element of terrestrial biological contamination is specifically highlighted in several cases.
Implementation of the SSAF
The implementation of the SSAF is focused on the safety assessment of each individual sample tube. As already pointed out in Section 3.3.1, the most effective approach is to start with the sample tube(s) that have the highest pre-test probability with respect to finding martian life. In the case in which a dedicated dust sample is returned to Earth, this might be a good starting point. The result of such an assessment can inform the pre-test probability for other sample tubes.
In estimating the amount of sample material needed to inform the safety assessment, it must be taken into consideration that the sample safety assessment and many mission science objectives are complementary, overlapping, and apply similar methods. None of the samples used to inform the sample safety assessment should be considered wasted, as scientifically useful data will be generated and will inform each step. The amount of sample material required is naturally inversely proportional to the amount of biological material present and strongly depends upon the extraction processes and analytical techniques available at the time. Considering collectively the current capabilities for measurements on natural samples, the number of subsamples per sample tube required to reach a certain level of assurance that no martian life is in a sample (Section 3.2), and the quality control necessary to achieve confident results (Section 4.1), it is estimated that hundreds of milligrams to a few grams of sample mass per sample tube are required to inform the safety assessment. It is expected that targeted developments in extraction processes and advances in analytical techniques would further reduce the required amount of sample material to be processed to inform the sample safety assessment.

FIG. 5. Eight generic cases for one sample tube are shown in this figure. This example is based on a situation where 14 negative results from running the test sequence on every single subsample are required to achieve a pre-defined level of assurance. Cases A and C are straightforward. Case B represents a situation where we have either terrestrial contamination or evidence of abiotic martian organics. Cases D and E represent a situation where we have evidence of martian life that is quite different from terrestrial life. Cases F, G, and H represent a situation where we have either terrestrial contamination, evidence of martian life that is quite similar to terrestrial life, or a combination of both.
In addition to the four elements of the SSAF (Fig. 1), there are a number of implementation constraints that are part of the SSAF and are described in the following section.
Quality control
The consequence of an incorrect safety assessment could range from reduced sample access to harmful impacts on Earth's systems. To increase confidence in the results of the assessment, it is essential to have decision-critical investigations in the test sequence performed independently by more than one team. The detailed implementation of this approach will depend on the nature of the investigation and the associated measurements. There are three possible implementation approaches:

1. Investigations where a single measurement (e.g., an XCT scan of the sample) is deemed to be determinative, and two different teams independently analyze and interpret the data.
2. Investigations where two independent measurements utilize a single technique for which significant expertise and experience are required to obtain reliable results (e.g., two GC teams analyze aliquots of an extract).
3. Investigations where two independent measurements made with complementary techniques are utilized to increase the predictive value of the results (two different techniques, e.g., spectroscopy and spectrometry, are used to analyze aliquots of an extract).
The use of complementary techniques increases the probability that a given result is true if the datasets agree and that they will trigger further investigations if disparate. Although use of the same technique more than once compensates for intra-technique variability, it is more critical to address the measurement accuracy in the safety assessment context (since precision is already accounted for in the selection and validation of the chosen techniques). The execution of the test sequence must follow an approach typically used in science and engineering when assessing public safety or environmental impacts, namely deploy two independent teams to perform the measurements or data analysis (see three cases above), with a third independent team responsible for decision making. This approach must be considered in the planning of opportunities for science teams that will cover the objective-driven science investigations on the samples, some of which will inform the safety assessment, and in the planning and operation of the associated infrastructure (e.g., SRF).
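A toy probability calculation shows why agreement between two independent techniques increases confidence. The false-positive rates used here are illustrative, not validated figures, and true independence is an assumption that complementary techniques only approximate:

```python
# Toy calculation behind the two-independent-measurements requirement: if each
# technique alone has a given false-positive rate, the chance that BOTH flag a
# truly negative sample drops multiplicatively (assuming independence, which
# complementary techniques approximate better than repeated runs of one
# technique). Rates below are illustrative only.

fp_a, fp_b = 0.05, 0.08                     # per-technique false-positive rates
print(f"one technique : {fp_a:.3f}")
print(f"both agree    : {fp_a * fp_b:.4f}")  # 0.0040 under independence
```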
All analytical methods used for the sample safety assessment must be documented and independently reviewed in advance. Any variations that occur as a single incident or result in a change of the test must be assessed and their consequences recorded at the time. ISO 17025 (ISO 17025, 2017) and equivalent standards are the mark of a laboratory with good quality systems, record keeping, and general operation, including appropriate staff training. For laboratories that test samples from Mars, these standards are a sound foundation on which to build to ensure reliable results. To allow scientific inquiry to follow a thread that is informed by successive findings that may not have been foreseen, while at the same time maintaining a high standard of quality control, record keeping, data integrity, and data security, it is essential to apply methods of Good Laboratory Practice (GLP) (e.g., OECD, 1998) and Hazard Analysis and Critical Control Points (HACCP). These are routine methods used in the clinical setting and in industrial process quality control.

HACCP was developed for the food industry, including to assure the safety of food products for the U.S. space program, but can be adapted for almost any complex operation in which safety risks and potential risks to a product are concerned (e.g., Hulebak and Schlosser, 2002). An HACCP-like process is a dynamic way to predict problems in advance and put in place risk-mitigating steps before any experiment or process is performed. In a general application, these risks may be anything from instrument failure and external contamination of a key sample or product to a human mistake, and can include factors that affect safety, quality or scientific output, and integrity. In any process there are steps where something could go wrong, and HACCP-like analysis concentrates on these points. Inevitably ''stuff happens,'' and the lessons from these events are used to update the HACCP-like assessment and mitigations in a continuous fashion. Traceable records that maintain a log of any alteration in procedures or risk assessment performed, and by whom, are kept throughout. The HACCP-like process can be applied theoretically while mapping the process. For the SSAF, this must be supplemented through full-scale sample safety assessment simulations with analogue materials and implementation of the final test sequence. HAZOP (Hazard and Operability) analysis is an example implementation of the basic HACCP process for general industrial use and is explained in detail in the IEC 61882 standard (IEC 61882, 2016).

Figure 6 shows the basic steps involved in setting up an HACCP-like system that can be adapted to fit the workflows around handling samples from Mars. It shares some common features (e.g., documentation) with the ISO 17025 standard, and the two processes can be combined. Aspects of the HACCP-like process can be developed alongside the setup and calibration of the instrumentation that, under ISO 17025, must be performed at the actual site of use before any genuine part of the test sequence is undertaken.
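As a minimal sketch of what a traceable HACCP-like record might look like in software, the structure below captures a critical control point together with an attributable change log. All field names are hypothetical; a real SSAP implementation would follow the documented ISO 17025 and IEC 61882 forms.

```python
# Minimal sketch of an HACCP-like critical control point (CCP) record with the
# traceable, attributable change log called for above. Field names are
# hypothetical illustrations, not a prescribed schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CriticalControlPoint:
    step: str          # e.g., "solvent extraction for Step 5"
    hazard: str        # what could go wrong at this step (process hazard)
    mitigation: str    # pre-planned risk-mitigating measure
    monitoring: str    # how deviations are detected
    log: list[str] = field(default_factory=list)

    def record_change(self, who: str, what: str) -> None:
        """Append a timestamped, attributable entry to the audit trail."""
        stamp = datetime.now(timezone.utc).isoformat()
        self.log.append(f"{stamp} {who}: {what}")
```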
Humans are a key factor in creating errors, most of which arise from a lack of training, a lack of experience in a particular situation, poor management practices that place workers in situations they cannot control (which can create fear of raising concerns), or a simple lack of coordination within the team. A related cause is ergonomic: poorly laid out displays, inaccessible controls, poor seating, or repetitive manual tasks predispose operators to errors and disasters. Aircraft flight crews and space crews are trained to work together, especially if something goes wrong, and are usually deployed in teams where such skills and leadership are essential. Human factors are a critical part of the SSAF and must take into account deliberate wrongdoing as part of a framework for detecting and mitigating an adverse event.
The sample safety assessment must always consider what will ultimately be done with a sample. Information gathered even after the samples are sent for further analysis (i.e., after they are declared safe) could indicate the need to update the safety assessment for all or specific samples. Therefore, if the sample handling and analysis protocols are modified from those in the submitted program of work, the sample safety assessment protocol will also need to be reviewed to ensure it is still valid for the new circumstances.
Analogue test program
To make an informed judgement for subsampling and to optimize the capture rate requires high-resolution physical and chemical information about the samples, which is covered by Steps 1 and 3 of the test sequence. This information is a necessary, though not sufficient, part of the process of rendering a robust and credible informed subsampling decision. The second and essential part of the process requires correlation of this information with knowledge-base information obtained from subsampling terrestrial samples. Establishing such a knowledge base requires an analogue test program tailored to include the types of materials expected from Mars, analyzed with the types of instruments and kinds of measurements that are planned to be used to establish the physical and chemical sample information from returned martian samples.
Steps 1 and 3 are only part of the test sequence. The overall test sequence is a series of individual tests of different types, each with its respective sensitivities and specificities. In terms of the Bayesian analysis, what matters is the overall sensitivity and specificity of the test sequence (a numerical sketch of this logic follows below). This overall sensitivity and specificity might be derived from combining the sensitivity and specificity of the individual measurements of the test sequence. Although it is important to establish the sensitivity and specificity of the individual measurements to optimize the use of samples and maximize the incremental flow of information, it is not likely that a pure mathematical combination of these values would correctly reflect the sensitivity and specificity of the overall test sequence. By exercising the test sequence on terrestrial samples that represent the expected material from Mars using an analogue test program, it is possible to optimize details of the test sequence (i.e., by guiding selection of appropriate measurement techniques and instrument ensembles as well as the order of applying them on samples). General considerations for optimization of the test sequence are:

1. the selection of instruments that will generate the highest true-to-false positive ratios;
2. ordering of the sequence of analyses to start with instruments with a higher sensitivity before moving to instruments with higher specificity.
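The numerical sketch referenced above: given a pre-test probability and per-subsample sensitivity and specificity, Bayes' rule gives the posterior probability of life after n consecutive negative results. This echoes the 14-negative-results example in Fig. 5, but all values are illustrative, and, as noted, multiplying per-measurement values assumes an independence that the real test sequence may not satisfy; actual values must come from the analogue test program.

```python
# Numerical sketch of the Bayesian logic: posterior probability that a sample
# harbors life after n consecutive negative subsample results. All numbers are
# illustrative; independence across subsamples is an assumption.

def posterior_after_negatives(prior: float, sensitivity: float,
                              specificity: float, n: int) -> float:
    """P(life | n independent negative results), by Bayes' rule."""
    miss = (1.0 - sensitivity) ** n      # P(n negatives | life present)
    true_neg = specificity ** n          # P(n negatives | no life)
    return prior * miss / (prior * miss + (1.0 - prior) * true_neg)

prior, sens, spec = 0.01, 0.3, 0.99      # illustrative placeholder values
for n in (1, 7, 14):
    p = posterior_after_negatives(prior, sens, spec, n)
    print(f"n={n:>2}: P(life | all negative) = {p:.2e}")
```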
To illustrate the utility of this approach to optimize the test sequence, consider a common issue for almost all investigations that target the organic content of samples: the separation of organic molecules, cell debris, or whole life forms from a mineral matrix. The best sensitivity and specificity of the analytical techniques used in the test sequence can only be realized if the targets of interest (i.e., molecules, cell debris, cells) are presented in a useful form. A poor extraction efficiency increases the chance of false negatives, no matter how good the analytical techniques are. Some mineral matrices are well known for their ability to retain organic compounds owing to their surface properties and structure, for example, clay minerals. Other matrices are known for their propensity to attract organic compounds as a function of their chemical properties and structure, for example, macromolecular organic matter and carbonaceous materials. Mineral matrices that retain organic compounds are actually used on Earth in analytical and industrial processes as fluid filters to remove organic compounds, for example, clay and activated carbon gas filters for analytical chemistry. For organic compound extraction chemistry, the maxim is that ''like dissolves like,'' that is, to isolate a compound it must be matched with a solvent of similar properties. However, because organic molecules span a range of polarities, for example, amino acids are relatively polar while hydrocarbons are relatively non-polar, no single solvent system is able to extract all organic molecules in a sample (Mitra, 2004). The type of matrix (rock) that the organic molecules are trapped in also affects the solubility of the analyte (Mitra, 2004). It is possible to use mixtures of less and more polar solvents to extract organic compounds of different polarity, to change the solubility of the analyte using ultrasonication of the solvent, or to use supercritical fluids (Mitra, 2004). In general, extraction protocols need to consider the full range of polarities presented by the potential target materials. Streamlining the extraction protocols, in particular consideration of the solvent strength when sharing an extract for multiple different analyses, would be beneficial to limit the use of sample material and support the independent analysis approach required in the frame of the test sequence. In this context, it is important to be aware that some solvents might interfere with other types of analysis (e.g., phenols used for certain omics investigations might interfere with other organic analyses) or exhibit inhibitory effects (e.g., denaturing). The complete extraction of organic compounds from highly retentive matrices may be unachievable, though the highest achievable efficiency, and knowledge of the extraction efficiencies, are essential for the sample safety assessment.

It is also necessary to exercise all four elements of the safety assessment (not only the test sequence) in end-to-end tests that utilize analogue samples. These end-to-end tests must include blind testing as an integral and essential part of the quality control measures. The added value of blind testing, however, can only be realized if the blind tests are well prepared and properly executed (e.g., Ginsburg, 1997; Casertano et al., 2008; Evans, 2014; van Driel et al., 2019). Such end-to-end tests can be used to optimize the sample flow and help to estimate the resources needed to perform the safety assessment. In addition, end-to-end tests serve to educate and train the personnel and test the various elements of the infrastructure, equipment, and instrumentation necessary to conduct the sample safety assessment.

FIG. 6. The example described here aligns with the ISO 17025 process in many areas, so that the same documentation can be shared between the two systems. The difference is that HACCP-like analysis covers risk in addition to defining a set of documented processes and allows that risk to be mitigated by forward planning and regular review. This approach is described for application on the detailed SSAP, derived from the SSAF. The term hazard in the context of the proposed HACCP-like quality control measure describes hazards to the SSAP process and not the potential biological hazard of material from Mars.
In summary, there is a need for a tailored analogue test program that covers the following components to transition from the SSAF to an SSAP:

1. Assess and improve the capture rate and the associated subsampling strategy.
2. Optimize the selection of the instrument ensemble to be used for the test sequence and estimate the overall sensitivity and specificity, including the efficiency required to extract evidence of life from the host materials.
3. Exercise all four elements of the safety assessment, including blind testing, to optimize processes, test equipment and infrastructure, and train personnel and science teams.
The selection of the analogue samples to be tested needs to be based on the specific environment (e.g., Cockell et al., 2019) and the information obtained during sample collection on Mars. The analogue materials could include synthetically made samples, natural terrestrial analogues, and meteorites. Due to the special role of clays (see Section 3.2), the assumptions about random and targeted subsampling must be verified as part of the analogue test program. The analogue samples must include both negative and positive controls. These could include sterilized and/or organic-free analogue samples (negative controls) and samples doped with microbes and/or organic molecules or well-characterized natural terrestrial analogues known to contain life and/or organic molecules (positive controls).
Terrestrial biological contamination
Martian meteorites have been shown to be colonized by terrestrial organisms (Toporski and Steele, 2007). In the same way, terrestrial biological contamination of martian samples returned to Earth by the MSR Campaign would reduce the specificity of the overall safety assessment test sequence (see Section 3.1.1). It might also lead to a recurring Hold and Critical Review (HCR) of activities on the samples until the root cause of a detection can be clearly identified as terrestrial biological contamination (see Fig. 5). The contamination baseline for returned martian samples must be established from the CK obtained during the assembly of the various spacecraft that will fly as part of the MSR Campaign, along with blanks and witness samples returned with the martian samples. Of particular importance in this regard are the M2020 Witness Tube Assemblies (WTA), which are opened and sealed during different mission phases, including pre-launch, launch, cruise, Mars Entry, Descent, and Landing (EDL), and M2020 surface operations, as well as the M2020 drillable blank, which can provide CK of the M2020 drilling operation. CK samples should also be collected during the construction of the SRF to establish a complete archive of potential contaminants, including biological contaminants that may come into contact with martian samples during sample analysis. Minimizing terrestrial biological contamination in the samples and improving CK would reduce uncertainty in the scientific interpretation of the data and ease handling and treatment of the samples.
To differentiate between martian and terrestrial origin, the field of omics will play an essential role. The use of transcriptomics, proteomics, and metagenomics can provide a predictive comparison of material at the mRNA, protein, and DNA levels, respectively. The biological CK samples (e.g., fallout coupons, spare hardware, microbial DNA and isolates collected from the assembly and test phases during pre-launch, etc.) can be used as a reference library of pre-flight conditions that can be directly compared to any signals from potential biological material. The direct comparison may allow for assessments of expression profiles, unique or modified proteins, and changes in the DNA that occur during spaceflight. These advanced molecular techniques are commonly used to study complex microbial communities and to evaluate environmental stressors, such as the space environment and the catabolism of pollutants in bioremediation (e.g., Biljani et al., 2021; Chandran et al., 2020; Kumar, 2020).
Life detection and machine learning
The sample safety assessment defined by the SSAF depends upon the simultaneous interpretation of numerous variables and criteria as well as proper statistical treatment of large datasets that excludes false negatives and false positives. In several ways, this challenge is similar to that of biogenicity tests for putative traces of fossil life in the Earth's rock record. Classical tests of biogenicity in deep time involve the evaluation of multiple biosignature characteristics and context- and contamination-related criteria that need to be satisfied to substantiate a claim (e.g., Buick, 1990; Schopf et al., 2010; Brasier and Wacey, 2012; Neveu et al., 2018). The number and combinations of these characteristics and criteria, however, are subject to debate, since it is easy to include false positives or exclude false negatives. In reality, there are no clear yes/no answers in biogenicity tests, since all biosignatures have a certain probability that life created them and a certain improbability that an abiotic process created them (Des Marais et al., 2008). The qualitative nature of many individual biosignatures (e.g., morphological characteristics) adds further ambiguity, as they are often not standardized and depend on the interpretation and experience of individual observers. To overcome this inherent uncertainty in life detection and during efforts to exclude a biological origin, several recent studies have expressed the specific need for standardized criteria and a more quantitative approach to data treatment (Chan et al., 2019; Neveu et al., 2018; Rouillard et al., 2020, 2021).

The use of multiple well-defined and quantifiable variables for life detection could greatly benefit from recent advances in statistical methods and machine learning to find commonalities in large datasets. While standard statistical methods passively draw inferences from a dataset, machine learning methods create mathematical models based on training data and use this ''experience'' to find predictive patterns in new datasets (Jordan and Mitchell, 2015; Hastie et al., 2017). Typical tasks carried out by machine learning include classification, regression, ranking, clustering, and pattern recognition. A so-called supervised machine learning algorithm builds a mathematical model based on a training dataset that contains both input and desired output; the program thus responds to feedback. In contrast, an unsupervised machine learning algorithm builds a mathematical model without any desired output: it finds patterns in a dataset and then attempts to find similar ones in newly supplied datasets. With the ongoing increase in computing power and the development of artificial neural networks, these collective methods are rapidly improving. Machine learning is currently transforming the field of medical diagnostics (e.g., Aggarwal et al., 2021) and is widely applied for face recognition in forensic applications (e.g., Phillips et al., 2018), general image recognition (e.g., Krizhevsky et al., 2017), the use of large datasets from space missions (e.g., Kronberg et al., 2021), and speech recognition (e.g., Hinton et al., 2012). Some machine learning methods have already found applications in the detection and classification of life. For instance, convolutional neural network classification models were developed and trained to perform visual palynological identification and taxonomic classification of fossil pollen (Romero et al., 2020).
For the purpose of the SSAF, there are three categories to be considered: category 1 ''no life'', category 2 ''life as we know it'', and category 3 ''life as we don't know it''. Training datasets for the first two categories can build on established biogenicity criteria (Buick, 1990; Neveu et al., 2018; Schopf et al., 2010; Brasier and Wacey, 2012; Rouillard et al., 2021a). There is, however, a growing literature on ''false biosignatures'', i.e., physicochemical processes that lead to the formation of minerals or molecules with life-like features (Cosmidis and Templeton, 2016; Garcia-Ruiz et al., 2003; Jordan et al., 2019; Kotopoulou et al., 2021; Rouillard et al., 2018, 2021b; McMahon et al., 2021). It may thus prove difficult to define clearly the exact difference between category 1 ''no life'' and category 2 ''life as we know it''. An important future scientific challenge lies in properly defining this difference, and subsequently creating widely accepted training datasets for categories 1 and 2 that can be used for developing a machine learning protocol. This supervised machine learning protocol, however, would not work for category 3 ''life as we don't know it,'' since training datasets, as well as a desired output, are fundamentally missing from Earth-based analogue samples. It may be possible, though, to identify this third category by exclusion of the first two categories. This effectively involves searching for levels of complexity that are incompatible with ''life as we know it'' (category 2) and with the absence of life (category 1). Thus, a complete machine learning protocol would start with a supervised algorithm 1 to find ''no life.'' If it fails to find ''no life,'' then it is possible that there is some form of life there. Supervised algorithm 2 would then be applied to find ''life as we know it.'' If both algorithms fail, then the sample must fall into category 3 ''life as we don't know it.'' An unsupervised machine learning algorithm can potentially be applied to find as yet unidentified patterns that may be assigned as provisional biosignatures, which can then be searched for in other samples.
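A minimal sketch of this decision cascade is given below, assuming hypothetical scikit-learn-style classifier objects and placeholder probability thresholds; the actual thresholds would follow from the level of assurance set by the regulatory authority.

```python
# Sketch of the two-stage supervised cascade with category 3 by exclusion.
# algo_no_life and algo_known_life stand for pre-trained, scikit-learn-style
# classifiers; the 0.99 thresholds are placeholders, not SSAF values.
def classify_sample(features, algo_no_life, algo_known_life):
    """Return one of the three SSAF categories for a sample feature vector."""
    # Supervised algorithm 1: probability that the sample contains no life.
    if algo_no_life.predict_proba([features])[0][1] > 0.99:
        return "category 1: no life"
    # Supervised algorithm 2: probability of life as we know it.
    if algo_known_life.predict_proba([features])[0][1] > 0.99:
        return "category 2: life as we know it"
    # Both supervised algorithms failed -> assign by exclusion and flag the
    # sample for an unsupervised search for provisional biosignatures.
    return "category 3: life as we don't know it"
```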
In general, it is not envisioned at this point that such work will be entirely dependent on these forms of artificial intelligence; at this time, an experienced human observer is superior to a set of algorithms. However, the use of machine learning methods in the treatment of large datasets could assist in finding patterns and in focusing attention on specific features of astrobiological interest. For instance, in a large set of close-up images of martian sediments, it may be useful to have a supervised machine learning program with a category 1 algorithm for ''no life'' that has been trained in grouping crystal types and grain sizes and in distinguishing common sedimentary patterns, followed by a category 2 algorithm for ''life as we know it'' that checks against a list of known biosignatures. A human observer can then discard any ''non-life'' data and focus entirely on samples identified with ''life as we know it'', or define subsets of data that can be studied for ''life as we don't know it.'' The most important application of machine learning for the sample safety assessment may be data reduction and the generation of data subsets for subsequent study by the science and safety assessment teams.
Conclusions
The SSAF would be incomplete without pointing out the importance of transparent and professional risk communication (e.g., ESF, 2012). The information presented and the risk perception of the various stakeholders will evolve over time and might be influenced by events that have nothing to do with space exploration. It is therefore crucial to reevaluate assumptions and strategies described in this SSAF on a regular basis, and to communicate the results of this reevaluation process in a timely manner in order to build trust and preserve the sovereignty of information. A robust quality control program (see Section 4.1) is a fundamental prerequisite to achieve this aim.
The following is a summary that captures the major elements of the COSPAR Sample Safety Assessment Framework (SSAF):

Safety approach

(a) Organic molecules are defined as a group of covalently bonded molecules that contain carbon and at least one other element. (b) Macromolecules are defined as organic compounds greater than 2500 Daltons.
6. The investigations that are part of the SSAF must be able to detect evidence of self-replicating biological entities (e.g., cell-like), biological entities that are replicated by other life (e.g., virus-like), and biologically active molecules (e.g., prion-like, gene transfer agent (GTA)-like molecules).
(a) The SSAF must include two or more orthogonal agnostic life detection investigations, with amplification steps. (b) Investigations that lead to safety-critical decisions must be carried out by two independent teams, after which decisions are made by a third independent decision-making group. (c) The conduct of tests that are part of the safety assessment must comply with ISO 17025, or equivalent quality standards, and apply GLP and HACCP methods to demonstrate the required competence and quality control.
7. The test sequence, using a stepwise approach from more chemistry-based investigations (e.g., organic molecules, molecular patterns and macromolecules) to more biologically based investigations (e.g., life as we know it, life as we don't know it), must cover both common and unique features of the samples. 8. The level of assurance needed to declare a sample safe must be specified by the appropriate regulatory authority and incorporated into the SSAF.
(a) If evidence of extinct or extant martian life is detected, a Hold and Critical Review (HCR) must be established to evaluate the relevant data and the risk management measures before deciding on the next steps. (b) No samples can be released from containment during the HCR and a procedure must be developed for samples already released from containment.
There are a number of activities that need to feed into the SSAF and some consequences of the SSAF that would need to be reflected in future sample science plans (see Table 4).
The most important near-term Research and Development (R&D) activities to enable the preparation and execution of the SSAF are: 1. Establishing an analogue test program to inform and improve the capture rate, extraction efficiency, sensitivity and specificity of the overall test sequence, and exercise the entire sample safety assessment before it is used on samples returned from Mars. 2. Maturing agnostic life detection techniques.
(c) Failure to detect organic compounds, molecular patterns, and macromolecules is not sufficient to declare a sample safe (i.e., devoid of martian life). (d) The positive hypothesis (i.e., there is martian life in the samples) can be rejected if there is no evidence for the presence of martian life in the samples and there are no open, uncertain, or ambiguous issues remaining that could associate sample characteristics with martian biology. In such cases, the tested sample of a sample tube would be safe within the predefined level of assurance.
9. The sample safety assessment is not a one-time exercise but rather a dynamic process that must respond to the results of various investigations. It must be updated if a subsequent investigation on any of the martian samples invalidates the original sample safety assessment or any of the assumptions used.
Execution

10. Every sample tube is considered a separate sample. 11. Bayesian statistics, together with the subsampling strategy and the sensitivity and specificity of the overall test sequence, allow estimating the number of subsamples necessary to reach a predefined level of assurance that a sample tube is safe (a minimal numerical sketch follows this list). 12. The sample with the highest pre-test probability of containing martian life provides the best and most economical (in time and material) starting point for executing the sample safety assessment. 13. A targeted subsampling strategy must be used to optimize the number of subsamples from one sample tube that need to be tested in the safety assessment. Three elements are required to develop such a strategy: (a) Information about the 3-dimensional morphological characteristics of the external and internal structures of each sample at micrometer-level spatial resolution, while still in the sealed sample tube; this is the required basis for planning and executing the sample safety assessment in general, and the subsampling strategy in particular. (b) Information about the chemistry and mineralogy associated with the 3-D structure, to refine the targeted subsampling strategy. (c) An analogue test program to correlate the specific martian sample information to a relevant terrestrial sample knowledge base.
14. Depending on the type of fine-grained minerals, targeted subsampling (e.g., for localized fine-grained alteration products or localized features, such as fractures in lithified fine-grained rocks) and random subsampling (for unconsolidated fine-grained sediments) are appropriate approaches.
15. Random subsampling can be applied to dust samples, though any serendipitous dust is unlikely to be present in sufficient quantity to be declared safe on the basis of a safety assessment; a dedicated dust sample is the exception.
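As referenced in point 11 above, the Bayesian estimate of the required number of subsamples can be sketched numerically. The prior, sensitivity, specificity, and target assurance level below are placeholders, and the tests are assumed independent, which is a simplification.

```python
# Illustrative Bayesian update for the number of subsample tests, assuming
# independent tests and hypothetical sensitivity/specificity/prior values.
def posterior_life(prior, sensitivity, specificity, n_negative):
    """P(life present | n consecutive negative tests)."""
    p_neg_if_life = (1.0 - sensitivity) ** n_negative
    p_neg_if_no_life = specificity ** n_negative
    num = prior * p_neg_if_life
    return num / (num + (1.0 - prior) * p_neg_if_no_life)

prior, se, sp, target = 0.01, 0.9, 0.99, 1e-6   # placeholder values
n = 0
while posterior_life(prior, se, sp, n) > target:
    n += 1
print(f"{n} negative subsample tests reach the assumed assurance level")
```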
Development needs
16. An analogue test program is necessary to: (a) Assess and improve the capture rate and the associated subsampling strategy. (b) Optimize the selection of the instrument ensemble to be used for the test sequence and estimate the overall sensitivity and specificity of the test sequence, including the efficiency of extracting evidence of life from host materials. (c) Exercise all elements of the sample safety assessment, including blind testing, to optimize processes, equipment and infrastructure, train personnel and science teams, and build confidence.
17. Contamination Knowledge (CK) covering all flight (Mars 2020, MSR program) and ground (SRF) elements is critically important to reduce uncertainty in the interpretation of the data and, as a consequence, avoid unnecessary rigor in handling and treating the samples. 18. The use of machine learning to support the sample safety assessment, in particular for data reduction and pattern recognition in large and diverse datasets, has the potential to improve the quality of the results and accelerate the process. 19. Once the MSR science investigations are selected, the appropriate regulatory authorities are in place, and any open development needs with respect to the overall sample safety assessment are addressed, this SSAF must be critically reviewed by the relevant stakeholders. The latest applicable version of the SSAF would be the basis for developing a detailed Sample Safety Assessment Protocol (SSAP). 20. Transparent risk communication throughout the development and execution of the SSAF and the subsequent SSAP is essential to preserve the sovereignty of information.
Targeted investment in developing tailored machine learning capabilities to support data reduction and data cross-correlation is considered beneficial for optimizing time and resources once the samples enter the curation and science analysis stage. The development of such machine learning tools would need to be integrated into the analogue test program.
The only impact identified for the Mars 2020 mission and the MSR program is the need to provide Contamination Knowledge (CK) from all relevant mission phases (ground and flight) and mission elements with a potential to introduce terrestrial contamination to the samples during nominal and off-nominal events. This is considered critical for the interpretation of the data used for the safety assessment and any subsequent decisions. The CK is directly linked to the achievable specificity of the test sequence and to the ability to rectify events that would otherwise lead to a Hold and Critical Review. The CK is therefore an important element and driver in the schedule of sample analysis; incomplete CK could lead to unnecessary rigor in handling and treating the samples.
To optimize the use of precious martian samples and remain aligned with the stated goal of using the scientific investigations of competitively selected science teams to inform the sample safety assessment, a number of elements need to be considered when planning the future selection of science teams to cover the objective-driven science for MSR: (a) the investigations described in the test sequence; (b) the need for independent analyses for certain investigations (i.e., more than one science team working on certain investigations); and (c) optimizing the overall sensitivity and specificity of the test sequence (i.e., consideration of a complementary instrument ensemble with known sensitivities and specificities).
If all elements cannot be satisfied in the course of the science team selection, then directed investigations to fill in the gaps would need to be considered by the MSR Campaign Partners.
For all practical purposes, the sample properties that need to be measured to inform the SSAF fall under the sterilization-sensitive and time-critical categories as defined by MSPG2 (Velbel et al., 2021;Tosca et al., 2021). This means that most of these investigations would need to be conducted within biological containment, i.e., a Sample Receiving Facility (SRF).
The Sample Safety Assessment Framework (SSAF) has been established with sufficient detail to allow for proper planning of a Sample Receiving Facility (SRF) and preparations for the scientific analysis of the samples. At the same time, the SSAF avoids being overly prescriptive. The SSAF uses an iterative approach to risk, combining multiple types of data and analyses to derive an evidence-based safety assessment. As long as martian life is based on carbon chemistry, the SSAF and the subsequent SSAP would be able to identify it. The one parameter that must be set by the appropriate regulatory authorities is the level of assurance required to exclude the presence of martian life. This is the stopping threshold, i.e., the level of confidence in the statement ''there is no martian life in the sample''. Setting such a level is important to avoid open-ended discussions and to better estimate the efforts and resources necessary to conduct the sample safety assessment.
Once the MSR science investigations are selected, the appropriate regulatory authorities are in place, and any open development needs with respect to the overall safety assessment are addressed, this SSAF must be critically reviewed by the relevant stakeholders. The resulting updated version of the SSAF would be the basis for developing a detailed Sample Safety Assessment Protocol (SSAP). COSPAR would provide an appropriate international forum to review the SSAF and develop the SSAP.
The SSAF is developed specifically for assessing samples from Mars in the context of the currently planned NASA-ESA MSR Campaign (Meyer et al., 2021), though it can be used for any Mars Sample Return mission concept with only minor tailoring. This minor tailoring would be required for the following aspects of the SSAF: representing the specificity of sample type, acquisition, and packaging, reflected in point 10 of the SSAF; and representing the necessary CK of the applicable flight and ground elements, reflected in point 17 of the SSAF.
In addition, the SSAF is considered a sound basis for other COSPAR Planetary Protection Category V (restricted Earth return) mission concepts beyond Mars.

| 27,566 | 2022-06-01T00:00:00.000 | ["Environmental Science", "Physics"] |
Conceptual Framework: Determinant Factors for Paying Zakat Fitrah Via Fintech
Historically, Muslims have used various forms of payment to contribute their Zakat Fitrah, from rice to fiat money and, more recently, digital payment. Paying Zakat Fitrah in Malaysia using FinTech gained attention in 2020 due to the COVID-19 pandemic. A rigorous study must be conducted to understand users' behavioural intention to adopt the technology. The behavioural model of intention from the Theory of Planned Behaviour and its developments that incorporate technology, such as the UTAUT model, can be used as the underpinning theory. Insights from past studies form the literature review used to develop the hypotheses. As a result, a conceptual framework is proposed using the modified UTAUT model to study the acceptance of Muslims towards paying Zakat Fitrah via FinTech.
INTRODUCTION
Islam encourages peace and brotherhood, and one of the Islamic pillars is the payment of Zakat, which helps the needy by redistributing wealth from the fortunate (Avazbek & Sherzodjon, 2020). Zakat contributors are obliged to pay a small portion of their income, thereby purifying it (Mohd Faisol & Aman, 2020). Ideally, Zakat should reduce the income gap in the economy between the haves and the have-nots. The Al-Quran explains who is eligible to receive Zakat (alms).
Alms are for the poor and the needy, and those employed to administer the (funds); for those whose hearts have been (recently) reconciled (to Truth); for those in bondage and debt; in the cause of Allah. And for the wayfarer: (thus it is) ordained by Allah, and Allah is full of knowledge and wisdom (Al-Qur'an, Surat Al-Tawbah, 9:60). Based on several Hadith in Sahih Al-Bukhari, Zakat Fitrah is an obligatory donation, at a rate of one sa' of the main crop, payable to the needy by everyone who lives during Ramadhan and has sufficient food to eat for Eid Fitr (Ronny et al., 2020; Tafsiruddin, 2020).
Malaysians once contributed rice for Zakat Fitrah, then paid with fiat money, and are now transitioning to digital payment (Aman Shah et al., 2020; Ab Rahman et al., 2017). Financial technology (FinTech) enables digital payment that keeps fiat money safe without the need to carry it everywhere. Moreover, it helps users perform online activities such as shopping (Aastha, 2021), make contactless payments (Martell, 2018), and access money anywhere via mobile devices (Walden, 2020).
As a result, Zakat institutions are migrating to and integrating FinTech into their management to ease the collection and distribution of Zakat Fitrah (Siti Nabihah et al., 2018). FinTech can therefore become the technology that helps Muslims pay Zakat Fitrah quickly using mobile devices (Mohd Faisol, 2020a).
A study by Ahmad (2018) disclosed that Zakat institutions in Malaysia use FinTech: Zakat can be paid digitally via a payment gateway, online banking, or mobile banking (Ahmad, 2018). Another study by Mohd Faisol (2020b) in Malaysia revealed that an excellent strategic alliance with the banks enables some Zakat institutions to leverage online and mobile banking services (Mohd Faisol, 2020b).
The Malaysian government encourages Malaysians to adopt FinTech, inspired by the Shared Prosperity Vision 2030. The Vision established Islamic Finance Hub 2.0 (Islamic FinTech) as one of the 15 Key Economic Growth Activities (KEGA) (Ministry of Economic Affairs, 2019). Moving forward, the Malaysia Digital Economy Blueprint is the policy driving Malaysia to become a digital economy champion (Economic Planning Unit Department, 2021). This policy drives digitalisation to generate wealth and catalyse the Malaysian economy (Othman, 2021). Therefore, nurturing Malaysians to adopt FinTech in daily activities, such as paying Zakat Fitrah, is crucial for the digital economy leap.
Interestingly, a domestic study by Imran Mehboob Shaikh et al. (2020) revealed that Malaysians are ready to adapt to FinTech as a modern product. Many influencing factors can be considered. In the context of paying Zakat Fitrah digitally, do performance expectancy, effort expectancy, social influence, and facilitating conditions drive the behavioural intention of Muslims to pay Zakat Fitrah via FinTech? If so, it is in line with the words of the Messenger of Allah that all actions are according to intentions, and everyone will get what was intended. Furthermore, understanding user feedback is essential and can contribute to Malaysia's Digital Economy Blueprint by indicating the level of acceptance.
Therefore, various studies on the perception of Muslims towards using and adopting FinTech are important. This paper will assist future scholars by proposing a conceptual framework for determining the acceptance level of digitally paying Zakat Fitrah using the UTAUT model, which can chart the path of the research. It was developed by referring to several past articles retrieved using the keywords Zakat, the UTAUT model, and FinTech. The sources range from seminar and proceedings papers to journal articles and books.
LITERATURE REVIEW
The Hadith Sahih al-Bukhari #1503 explains the Muslims' obligation to pay Zakat Fitrah (Ronny et al., 2020). Zakat Fitrah must be paid by everyone who lives in Ramadhan and has sufficient food for Eid Fitr (Tafsiruddin, 2020); in Malaysia, it is paid in Ringgit Malaysia at a value equivalent to a gantang of rice (2.6-2.7 kg) (Ab Rahman et al., 2015). The responsible party must distribute this contribution to the asnaf and the needy so that they can celebrate Eid Fitr (Ronny et al., 2020).
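As a rough numerical illustration of the gantang-based rate, assuming a hypothetical rice price (the actual rate is set annually by the state Zakat authorities):

```python
# Hypothetical conversion of the gantang-of-rice rate into Ringgit Malaysia;
# the rice price used here is a placeholder, not an official figure.
gantang_kg = 2.7          # upper bound of the cited 2.6-2.7 kg range
rice_price_per_kg = 2.60  # assumed price in RM/kg
zakat_fitrah_rate = gantang_kg * rice_price_per_kg
print(f"Zakat Fitrah per person: RM {zakat_fitrah_rate:.2f}")
```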
Furthermore, Ronny et al. (2020) explained the importance of paying the Zakat Fitrah as early as possible, based on the Hadith Sahih al-Bukhari #1511. It is necessary to have a systematic mechanism for collecting and distributing the Zakat Fitrah to expedite the process (Tafsiruddin, 2020) and avoid leakages (Mohd Yahya et al., 2017). Hence, a smooth Zakat system with integrity will improve people's confidence in Zakat institutions (Mohamad Zulkurnai et al., 2016) and help the vulnerable, especially during a crisis like the COVID-19 pandemic (Tafsiruddin, 2020). FinTech may become the solution (Hasnan Baber, 2019), and several scholars, such as Mohd Nor et al. (2021) and Nordin et al. (2021), have studied how Malaysians adopt the technology.
METHODOLOGY
The literature review is one of the vital parts of writing research. It reviews existing knowledge of the study area and identifies similarities and differences (Mweetwa, 2020). Researchers widely use several techniques, such as the systematic literature review (SLR), forward snowballing, and backward snowballing. Forward snowballing discovers relevant articles by following citations, while backward snowballing tracks through the reference list (Badampudi et al., 2015). Both techniques can help improve the study's understanding by following the trail of the discussion and argument.
For example, some related articles on Malaysia's Zakat Fitrah collection explain the mechanism supported by several Hadith; applying forward and backward snowballing helps to discover this historical trail. As a result, relevant articles discussing the matter, together with the divine revelation that became the foundation of the study, can be cited. Although this method may help identify past articles related to Zakat Fitrah, more comprehensive search techniques are vital to discover improved and up-to-date references (Papaioannou et al., 2010).
Next, using the SLR in this study gives a better chance to retrieve valuable information from the existing body of knowledge, since the method has clear, specific, and structured procedures (Mohamed Shaffril et al., 2020). Under the SLR, a proper research protocol and report are prepared.
Step 1 of SLR Methodology: Protocol
The scholars need a proper plan for the research review, known as a research protocol (Mohamed Shaffril et al., 2020). Firstly, the scope of research needs to be determined by developing research questions and identifying suitable methods. This study applies the PICO (Population, Intervention, Comparison and Outcome) model to construct the research questions, since it applies to both qualitative and quantitative studies (Mohamed Shaffril et al., 2020). The questions are: 1. What are the factors that contribute to the use of FinTech for the payment of Zakat Fitrah by Muslims? 2. Which model is suitable to evaluate the acceptance of FinTech for the payment of Zakat Fitrah by Muslims? 3. Is the UTAUT model suitable as the test tool to evaluate the use of FinTech for the payment of Zakat Fitrah by Muslims? These three questions will be answered by applying the SALSA framework as the research boundary (Fernández et al., 2018). SALSA consists of four important steps: search, appraisal, synthesis and analysis (Booth et al., 2021).
Step 2 of SLR Methodology: Search

Several keywords were developed to help answer the research questions. The chosen keywords are "UTAUT", "Zakat" or "Zakah", "Islamic", "FinTech" or "Financial Technology", and "Malaysia", representing the study's theory, theme, and geographical location (Kuhzady et al., 2021). Furthermore, these keywords should be neither too general nor too specific, to avoid extraneous articles without losing related writings (Mohamed Shaffril et al., 2020).
Four research databases, EBSCOhost, Wiley, Scopus and Science Direct, were used to retrieve the articles, based on criteria such as full-text indexing and Boolean functionality. These research databases are recommended by Gusenbauer and Haddaway (2020) and cover the study of social science.
The search string focuses on the keywords by using Boolean operators (Hayrol Azril et al., 2020). Choosing a search string with a general keyword like "UTAUT" retrieves many articles (2,196), including vague write-ups. On the other hand, using specific keywords in the search string, like the combination of "UTAUT" and "Islamic" and "FinTech" or "Financial Technology" and "Malaysia", may filter out most of the writings, including relevant articles. Based on the chosen keywords, seven search strings covering the PICO model were considered, as shown in Table 1.
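A sketch of how such Boolean search strings can be composed is shown below; the exact operator syntax differs between databases, so this is only a schematic illustration.

```python
# Schematic composition of the Boolean search strings used in Table 1;
# real databases differ in operator syntax, so this is illustrative only.
theory = '"UTAUT"'
theme = '("FinTech" OR "Financial Technology")'
context = '"Islamic"'
location = '"Malaysia"'

search_strings = [
    theory,                                                # most general
    f"{theory} AND {theme} AND {location}",
    f"{theory} AND {context} AND {theme} AND {location}",  # most specific
]
for s in search_strings:
    print(s)
```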
The last process under this step was to cut off some articles by focusing only on the search strings that returned fewer articles (Nuradli Ridzwan Shah et al., 2022). Table 1 shows that fewer articles were retrieved as the search string became more specific (from the general search term "UTAUT" to the more specific "UTAUT" and "Islamic" and "FinTech" or "Financial Technology", and finally "UTAUT" and "Islamic" and "FinTech" or "Financial Technology" and "Malaysia"). This indicates that the topic of "UTAUT" and "Islamic" and "FinTech" or "Financial Technology" and "Malaysia" appears under-researched and needs more attention from future scholars.
Therefore, only two search strings were chosen: the combination of "UTAUT" and "Islamic" and "FinTech" or "Financial Technology" and "Malaysia" (178 articles), and the search string consisting of "UTAUT" and "Zakat" or "Zakah" and "Malaysia" (14 articles). As a result, the total of retrieved write-ups was 192 articles. The article counts per database for three of the search strings were: "UTAUT" and "FinTech" or "Financial Technology" and "Malaysia": 100, 218, 16, 1; "UTAUT" and "Islamic" and "FinTech" or "Financial Technology" and "Malaysia": 17, 141, 3, 17; and "UTAUT" and "Zakat" or "Zakah" and "Malaysia": 0, 11, 0, 3.

Step 3 of SLR Methodology: Appraisal

Next, all 192 articles were evaluated to select only those relevant to the scope of research (Fernández et al., 2018). In the beginning, inclusion and exclusion criteria were determined through the screen filter and advanced search on all research databases. The inclusion and exclusion criteria are listed in Table 2. Then, all duplicated or inaccessible articles were filtered out (Mengist et al., 2020a). After that, each selected article was screened again by reviewing the title, abstract, introduction and conclusion. In the end, only 20 articles matched the main topic: user adoption or behavioural intention towards FinTech, especially Islamic FinTech such as Zakat. Of the 20 selected articles, only seven studies were conducted in Malaysia, while the remainder are either worldwide meta-analyses or research done in other countries.
Most of the remaining 172 articles discussed either technology or financial sectors in various contexts, but neither FinTech nor the perspective of FinTech users. This arose because the chosen search keywords covered broad conditions.
Therefore, these 20 articles were reviewed and segregated by theme in the following stage.
Step 4 of SLR Methodology: Synthesis
Under this phase, all data related to the scope of research were extracted from the 20 articles into an Excel spreadsheet for data processing, as in Table 3 (Mengist et al., 2020a). Reviewing the articles uncovered similarities and differences according to specific themes. Table 4 shows that seven themes were identified: 1) acceptance of FinTech, 2) UTAUT, 3) behavioural intention (BI), 4) performance expectancy (PE), 5) effort expectancy (EE), 6) social influence (SI) and 7) facilitating condition (FC). These themes are associated with UTAUT. Grouping the articles into thematic groups creates knowledge mapping and ideas.
The review table records, for each study, the number, author and year, theory or model, variables/themes/instruments, and findings. Representative entries include the following.

Daniel & Shahriar Mohammadi (2017), Modified TAM. Hypotheses: H1. Perceived usefulness (PU) positively affects users' attitudes (Att). H2. PU has a positive effect on the intent to continue using mobile banking. H3. Perceived ease of use (PEOU) has a positive effect on PU. H4. PEOU has a positive effect on users' Att. H5. Social norm (SN) has a positive effect on PEOU. H6. SN has a positive effect on PU. H7. Trust (Tr) has a positive effect on PEOU. H8. Tr has a positive effect on PU. H9. Att has a positive effect on the intention to continue using mobile banking.

Own model. Hypotheses: H1a. Economic benefit will have a positive impact on perceived benefit. H1b. Convenience will have a positive impact on perceived benefit. H1c. Smooth transactions will have a positive impact on perceived benefit. H2a. Financial risk will have a positive impact on perceived risk. H2b. Legal risk will have a positive impact on perceived risk. H2c. Security risk will have a positive impact on perceived risk. H2d. Operational risk will have a positive impact on perceived risk. H3. The perceived benefit will have a positive impact on trust. H4. Perceived risk will harm trust. Findings: 1. Economic benefit, convenience and smooth transactions had a positive and significant influence on perceived benefit. 2. Financial, legal, security, and operational risk significantly impacted perceived risk. 3. Trust positively affected the intention to adopt Islamic FinTech.

Osman & Leng (2020), Modified UTAUT 2. Hypotheses: H1. PE has a significant and positive influence on the BI to adopt mobile banking among the students of UPM. H2. EE has a significant and positive influence on the BI to adopt mobile banking among the students of UPM. H3. SI has a significant and positive influence on the BI to adopt mobile banking among the students of UPM. H4. Hedonic motivation has a significant and positive influence on the BI to adopt mobile banking among the students of UPM. H5. Habit has a significant and positive influence on the behavioural intention to adopt mobile banking among the students of UPM. H6. Perceived credibility has a significant and positive influence on the behavioural intention to adopt mobile banking among the students of UPM.

Mohamed Asmy et al. (2019), Modified TAM. Hypotheses: H1. The lower the perceived risk associated with Islamic mobile banking transactions, the higher the intention to use and adopt it. H2. The higher the PEOU of Islamic mobile banking services, the higher the intention to adopt them. H3. The higher the PU of using Islamic mobile banking services, the higher the intention to adopt them. H4. The higher the relative advantage of using Islamic mobile banking services, the higher the intention to use and adopt them. H5. Social norms positively and directly affect the adoption of Islamic mobile banking services. Findings: 1. Perceived risk and PU were positively significant towards adopting Islamic mobile banking services in Malaysia. 2. PEOU, relative advantage and social norms were insignificant.

19. Alkhaldi & Qasem (2019), Modified UTAUT. Hypotheses: H1a. The positive relationship between mobile phone experience and PE is stronger for the younger age group. H1b. The positive relationship between mobile phone experience and EE is stronger for the younger age group. H1c. The negative relationship between mobile phone experience and perceived risk is stronger for the older age group. H2a. The positive relationship between awareness of services and EE is stronger for users with higher education. H2b. The positive relationship between awareness of services and PE is stronger for users with higher education. H2c. The negative relationship between perceived risk and BI to use m-banking is stronger for users with lower education. H3a. The negative relationship between perceived cost of use and BI to use m-banking is stronger for female users. H3b. The positive relationship between EE and BI to use m-banking is stronger for female users. H3c. The negative relationship between perceived risk and BI to use m-banking is stronger for female users. H4a. The negative relationship between perceived risk and BI to use m-banking is stronger for users earning low incomes. H4b. The positive relationship between EE and BI to use m-banking is stronger for users earning low incomes. H4c. The negative relationship between perceived cost of use and BI to use m-banking is stronger for users earning low incomes. Findings: 1. Extending the UTAUT is valid for studying demographic factors in the acceptance and adoption of m-banking. 2. User experience with mobile devices, user awareness of m-banking services, PE, and EE influence the BI to adopt m-banking services. 3. Age, educational level, and income are demographic factors influencing the adoption of m-banking.

20. Raza et al. (2019), UTAUT 2. Hypotheses: H1. PE has a significant positive effect on an individual's intention. H2. EE has a significant positive effect on an individual's intention. H3. SI has a significant positive effect on an individual's intention. H4. FC has a significant positive effect on an individual's intention. H5. Hedonic motivation (HM) has a significant positive effect on an individual's intention. H6. Price value (PV) has a significant positive effect on an individual's intention. H7. Habit has a significant positive effect on an individual's intention. H8. The behavioural intention has a significant effect on the actual usage of m-banking. Findings: 1. All the variables of UTAUT2, except social influence, significantly affect the individual's acceptance of Islamic mobile banking. 2. H8 is also supported, showing that BI has a significant positive effect on the actual usage of the technology.

Step 5 of SLR Methodology: Analysis

Analyzing the data helped to discover valuable information that answered the research questions.
A suitable model with related determinant factors for this study was identified. Developing the hypotheses produced a proposed conceptual framework for the study. The findings of the analysis of the 20 articles answer the research questions as follows.
Acceptance of FinTech
Hazra and Priyo (2020) highlighted arguments among scholars for more research into the phenomena between humans and FinTech. Rabbani et al. (2021) revealed that Muslims would adopt FinTech if it follows Shariah principles; therefore, FinTech must be free from Riba (Mohamed Asmy et al., 2019). Several developing Islamic countries, like Malaysia (Mohd Nor et al., 2021), Indonesia (Hazra & Priyo, 2020), Saudi Arabia (Nashwan, 2021) and Pakistan (Rabbani et al., 2021), have adopted FinTech in Zakat management, especially during the COVID-19 pandemic. Nashwan (2021) revealed that the government of Saudi Arabia had introduced ZAKATY, a digital payment system that allows Saudis to pay their Zakat through a portal or a mobile app and that, notably, generated high collections during the pandemic. Rahmatina and Adela (2021) also discovered that Indonesians accept paying Zakat via FinTech.
Unified Theory of Acceptance and Use of Technology (UTAUT)
Source: Venkatesh et al. (2003)
Behavioural Intention (BI)
Mohd Nor et al. (2021) define BI as an association of effort, motivation, planning, and actual behaviour towards doing something. For instance, Mohd Nor et al. (2021) and Nashwan (2021) discovered that BI attracts people to use new technology like FinTech, which may lead them to adopt it. At the same time, Alkhaldi and Qasem (2019) revealed that, driven by BI, people decide to adopt FinTech. Several factors influence people's BI to accept the technology, and these are well defined by the UTAUT model.
Performance Expectancy (PE)

Compared with traditional payment through an amil, paying Zakat Fitrah via FinTech helps the payer save time and resources (Nashwan, 2021) and increases the efficiency and effectiveness of the system (Rahmatina & Adela, 2021). However, will Malaysians continue to use FinTech to pay Zakat Fitrah, or did they only use it temporarily during the pandemic to avoid direct contact? Therefore, testing PE as an attractive factor in paying Zakat Fitrah via FinTech is vital to understanding users' behavioural intention.
H1: The performance expectancy positively affects the behavioural intention to use FinTech to pay the Zakat Fitrah.

Effort Expectancy (EE)

Venkatesh et al. (2003) found that effortless technology encourages people to adopt it (Samsudeen et al., 2020). Rahmatina and Adela (2021) and Samsudeen et al. (2020) agreed with this finding, as their research showed that the ease and effortlessness of using FinTech attracted people to use and adapt to it.
However, Nashwan (2021) and Mohd Nor et al. (2021) found otherwise. Nashwan (2021) revealed that Arabs digitally paid their Zakat Fitrah to comply with their religious commandment during the pandemic (Nashwan, 2021). At the same time, ease of use is not the main reason for Malaysians to accept paying Zakat using blockchain, as the technology is still new in Malaysia (Mohd Nor et al., 2021).
Perhaps Malaysians are ready to adopt FinTech, including blockchain, to pay any transaction related to Islamic finance. Nevertheless, are ease of use and effortlessness the primary factors in their adoption of FinTech?

H2: The effort expectancy positively affects the behavioural intention to use FinTech to pay the Zakat Fitrah.

Social Influence (SI)

Venkatesh et al. (2003) defined social influence (SI) as groups, like relatives and friends, giving opinions that affect personal beliefs about a particular technology (Samsudeen et al., 2020). A convinced technology user may recommend it to others (Johar & Suhartanto, 2019). Nashwan (2021) revealed that family and friends who gave positive feedback created positive perceptions and influenced Arabs to adapt to ZAKATY. Nordin et al. (2021) discovered that SI affected respondents in Pengkalan Chepa, Kelantan, Malaysia, in adopting blockchain to pay Zakat. Are Muslims in Malaysia, beyond Kelantan, also encouraging their community to pay Zakat Fitrah via FinTech?

H3: The social influence positively affects the behavioural intention to use FinTech to pay the Zakat Fitrah.

Facilitating Conditions (FC)

Venkatesh et al. (2003) defined facilitating conditions (FC) as how the accessibility of organisational and technological resources facilitates users in adapting to the technology (Nashwan, 2021). Yassine et al. (2021) discovered that banking institutions that applied FinTech provide the facilities, including a helpdesk and technical support.
Both studies by Rahmatina and Adela (2021) and Nashwan (2021) revealed that facilitating conditions are essential in encouraging Muslims to pay Zakat via FinTech. In addition, those scholars foresee the importance of improving the quality of the organisational and technical infrastructures by upgrading the online infrastructure, improving the portal's content, and developing technical services. A decent-quality support system will encourage Muslims to pay Zakat digitally.

H4: The facilitating conditions positively affect the behavioural intention to use FinTech to pay the Zakat Fitrah.

Step 6 of SLR Methodology: Report

All the protocols and results of research under the SLR need to be properly explained and presented in a report, such as a published journal article (Mengist et al., 2020a). This study not only prepares a report on the SLR but also proposes a conceptual framework, as the objective of the paper.
The Conceptual Framework
Source: (Sulaeman & Ninglasari, 2020)

Li (2020) supports using the UTAUT model without moderating variables, arguing that such variables are needless and only exaggeratedly increase the value of the coefficient of determination (R²). He believed a simple model could also provide excellent predictive accuracy by applying appropriate initial screening procedures. Adopting a similar conceptual framework creates generalisability for the reference of future scholars.
The conceptual framework helps demonstrate the relationship of factors influencing Muslims' behavioural intention to pay Zakat Fitrah digitally (Sekaran & Bougie, 2017). Based on the conceptual framework in Figure 2, effort expectancy, performance expectancy, facilitating conditions and social influence are the determinants that become the independent variables (IV) for the behavioural intention, which becomes the dependent variable (DV). Therefore, the research would test each IV's correlation with the DV, forming Ha1, Ha2, Ha3 and Ha4. On the other hand, the strength of the links between the IVs and the DV would predict the acceptance level of the sample.
Based on Figure 3, the modified UTAUT model was developed by linking the constructs with their respective instruments using SmartPLS. The conceptual framework is known as a structural model. The coefficient paths connect all the independent variables (left side) to the dependent variable (right side). Each latent variable has its own group of items.
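The study specifies SmartPLS (a PLS-SEM tool) for the structural model. As a simplified stand-in, the sketch below tests the four IV-DV paths with an ordinary regression on synthetic construct scores; the sample size, score distributions, and path coefficients are placeholders, not survey results.

```python
# Simplified analogue of testing Ha1-Ha4, assuming survey items have already
# been averaged into construct scores; the authors' actual tool is SmartPLS
# (PLS-SEM), so this OLS regression is only an illustrative stand-in.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 150                                     # hypothetical number of respondents
X = rng.normal(3.5, 0.8, size=(n, 4))       # PE, EE, SI, FC construct scores
beta = np.array([0.4, 0.2, 0.15, 0.25])     # placeholder path coefficients
bi = X @ beta + rng.normal(0, 0.5, n)       # behavioural intention (DV)

model = sm.OLS(bi, sm.add_constant(X)).fit()
print(model.summary(xname=["const", "PE", "EE", "SI", "FC"]))
```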
CONCLUSION
This study will help clarify the acceptance level of Malaysians in using FinTech for a religious obligation, namely paying Zakat Fitrah, and the determinant factors involved. Many theories or models can be evaluated, but finding the most suitable one is crucial to understanding users' perception.
In the past, scholars widely used the Unified Theory of Acceptance and Use of Technology (UTAUT) to measure users' acceptance of new technology. Some scholars applied this underpinning theory to study behavioural intention towards FinTech and Islamic finance, such as Zakat. Therefore, applying a similar model to test the acceptance of FinTech for the payment of Zakat Fitrah will create generalisability.
Under the UTAUT, the effects of performance expectancy, effort expectancy, social influence and facilitating conditions on the user's behavioural intention to pay Zakat Fitrah digitally via FinTech will be tested. The findings will help related parties, such as Zakat institutions and FinTech providers, gain feedback from users to improve the system. Finally, this research can confirm whether the UTAUT model is a good predictor for such studies in the future.
Further entries from the review table include the following. One study on Mobile Financing Service (MFS) found: 1. Users not capable of navigating due to lack of knowledge may be trapped in ambivalence and rely upon another party to help; however, those who can establish control with minimum assistance will feel empowered. 2. MFS helps users self-control their account, such as keeping it secret from family members. 3. As MFS can be accessed everywhere, it helps users reduce the transaction costs of accessing financial services. 4. Users keep up their networking by transferring money to their family but expecting a ... A meta-analysis used the dependent variables (DV) usage intention (UI) and usage behaviour (UB); the independent variables (IV) performance expectancy (PE), effort expectancy (EE), social influence (SI) and facilitating condition (FC); and the moderators sample size, economic level, innovation level and culture. Findings: 1. All constructs are positively significant, with PE the most prominent towards UI. 2. UI is the most robust antecedent of UB. 3. Sample size and culture are moderators that affect FC on UI, EE on UI, and UI on UB.
Another entry lists the hypotheses: H1. PE positively affects the BI of customers to adopt m-banking services. H2. EE positively affects the BI of customers to use m-banking services. H3. SI positively affects the BI of customers to use m-banking services. H4. FC positively affects the BI of customers to use m-banking services. H5. Hedonic motivation positively affects the BI of customers to use m-banking services. H6. Habit positively affects the BI of customers to use m-banking services.
Figure 1: The UTAUT Model

Fred (1989) introduced the TAM, derived from the Theory of Planned Behaviour, to determine the intention and acceptance of technology (Mohd Nor et al., 2021). Venkatesh et al. (2003) formulated the UTAUT as an extension of the TAM for explaining user intentions and usage behaviour towards information technology, based on four core determinants and four moderators (Yassine et al., 2021). The four variables are effort expectancy, performance expectancy, social influence and facilitating conditions, and the moderators are gender, age, experience and voluntariness of use (Engku Mohamad et al., 2018; Alkhaldi & Qasem, 2019). According to the studies of Mohd Nor et al. (2021), Nashwan (2021) and Alkhaldi and Qasem (2019), the UTAUT is a good predictor of behavioural intention (BI).
Figure 2: The Conceptual Framework of the Modified UTAUT Model
Figure 3: The Modified UTAUT Research Model
Table 1: Total Articles Based on Search String and Online Database
Table 2: Inclusion and Exclusion Criteria
Table 4: Number of Articles Based on Themes

Table 4: List of Instruments (sample item: "I think the procedures of using FinTech in paying the Zakat Fitrah are easy to learn.")

| 6,351.8 | 2022-12-01T00:00:00.000 | ["Business", "Economics", "Computer Science"] |
Transmission electron microscopy reveals clusters of Au–Ag nanoparticles formed in TiO2 thin film, with enhanced plasmonic response
This work reports on the influence of nanoparticle (NP) size distribution and the chemical nature of gold (Au) and/or silver (Ag) NPs on the localized surface plasmon resonance (LSPR) responses. The NPs were produced embedded in a titanium dioxide (TiO2) thin film, deposited by the reactive magnetron sputtering technique followed by in-vacuum thermal treatment at 400 °C. High-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) gave quantitative key information in terms of both the size and the distribution of the noble metal NPs. The average Feret diameter was 17 nm (σ = 8) and 55 nm (σ = 28) for the Au/TiO2 and Ag/TiO2 films, respectively, while the Au–Ag/TiO2 film showed intermediate values, with an average size of 22 nm (σ = 9). HAADF-STEM, complemented by EDX chemical mapping, revealed an unusual formation of cluster structures containing local distributions of bimetallic (alloyed) Au–Ag NPs. The synergetic characteristics and properties of such bimetallic Au–Ag NPs resulted in an outstanding LSPR sensitivity compared to the monometallic counterparts. Furthermore, the analysis of the average nearest neighbor distances (about one order of magnitude lower than in the monometallic counterparts) suggests the existence of plasmonic hotspots relevant for exploration in sensing and surface-enhanced spectroscopies.
Introduction
Localized surface plasmon resonance (LSPR)-based optical transducers have gained significant importance in various fields due to their unique optical properties and versatile applications [1, 2]. LSPR is an optical phenomenon in which localized surface plasmons are excited on the surface of a metallic nanostructure (e.g. a nanoparticle, NP) by an incident electromagnetic (EM) field [3, 4]. Such excitation results in a collective and coherent oscillation of the conduction band electrons when the NP size is one order of magnitude smaller than the incident radiation wavelength [5, 6]. The LSPR band characteristics are highly dependent on the chemical and physical properties of the NPs. They can be tuned by changing the NPs' size, interparticle distance, shape, chemical composition, and surrounding dielectric [7, 8].
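The dependence of the LSPR band on the surrounding dielectric can be illustrated with the quasi-static (dipole) approximation for a small sphere, where the extinction cross-section peaks near the Fröhlich condition Re(ε) = −2ε_m. The Drude parameters below are rough placeholders, not values fitted to the films studied in this work.

```python
# Quasi-static sketch of the LSPR extinction peak shift with the surrounding
# dielectric; Drude parameters (eps_inf, omega_p, gamma) are placeholders.
import numpy as np

c = 3e8                                                 # speed of light, m/s
wavelength = np.linspace(400e-9, 800e-9, 400)           # m
omega = 2 * np.pi * c / wavelength                      # rad/s
eps_inf, omega_p, gamma = 5.0, 1.3e16, 1.0e14
eps_metal = eps_inf - omega_p**2 / (omega**2 + 1j * gamma * omega)

radius = 10e-9                                          # quasi-static NP radius
for eps_m in (1.77, 2.25):                              # two surrounding media
    k = 2 * np.pi * np.sqrt(eps_m) / wavelength
    # Dipole polarizability and extinction cross-section (Bohren & Huffman form)
    alpha = 4 * np.pi * radius**3 * (eps_metal - eps_m) / (eps_metal + 2 * eps_m)
    c_ext = k * alpha.imag
    peak_nm = wavelength[np.argmax(c_ext)] * 1e9
    print(f"eps_m = {eps_m}: LSPR extinction peak near {peak_nm:.0f} nm")
```

Running this shows the extinction peak red-shifting as the matrix permittivity increases, which is the basis of the refractive index sensing discussed later.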
Gold (Au) and silver (Ag) NPs are the most extensively studied materials in plasmonics, since their resonance conditions are met in the visible range of the EM spectrum [9]. As a result, Au and Ag are the main focus of plasmonic research for the numerous applications that can originate from LSPR, such as bio- [10-13] and chemical sensing [14, 15], optical imaging [16], phototherapies [17, 18], catalysis [19, 20], and surface-enhanced spectroscopies [8, 21]. Au NPs are chemically inert and highly biocompatible [4], while Ag NPs display sharper LSPR bands than other metals [7]. Thus, research on alloyed Au-Ag NPs is gaining momentum due to their improved chemical stability and optical properties [22]. As both Ag and Au display face-centered cubic (fcc) features when crystallized, it is theoretically possible to produce Au-Ag bimetallic NPs over a wide compositional range [23].
One method to prepare NPs is to synthesize them from metals dispersed in a dielectric matrix, a topic that has gained attention in recent years [24, 25]. Physical vapor deposition techniques, such as magnetron sputtering, allow the production of thin films with randomly dispersed noble metal atoms in a dielectric matrix. By adjusting the deposition parameters, such as current, deposition rate [26], target material [27], and noble metal content, it is possible to optimize the thin films to produce reliable LSPR-based optical transducers. However, magnetron sputtering alone does not ensure the formation of NPs with LSPR behavior in the thin film. As the deposition temperature normally stays below 200 °C, only small (less than 10 nm) and non-crystallized NPs are formed by the nucleation of dispersed atoms [28].
The growth of NPs follows well-established mechanisms, and to obtain such nanostructural domains from dispersed noble metal atoms and a few clusters, it is necessary to supply energy to favor atom diffusion. In this case, a thermal post-treatment ensures NP growth and crystallization, resulting in a nanocomposite thin film [29-31].
In the present work, thin films of Au, Ag, and Au-Ag NPs hosted by a TiO2 dielectric matrix were prepared to study their nanostructure and correlate it with the optical and plasmonic responses. The thin film deposition parameters were adjusted to obtain films with similar total noble metal content. The resulting thin films were evaluated using scanning transmission electron microscopy (STEM) and energy-dispersive X-ray spectroscopy (EDX), and the resulting NP size and interparticle distance distributions were correlated with optical transmittance measurements and the refractive index sensitivity (RIS), using T-LSPR spectroscopy.
Experimental details
Nanoplasmonic thin films composed of Au, Ag, or Au-Ag NPs dispersed in a TiO2 dielectric matrix were prepared using a custom-made reactive DC magnetron sputtering system. The sputtering cathode was a rectangular pure titanium (99.99%) target with noble metal pellets placed in its erosion track, as summarized in table 1. The deposition parameters were optimized in previous work, suggesting that a total noble metal content of approximately 20 at.% and a thickness below 50 nm [32] are the optimal conditions to enhance the LSPR sensitivity. The in-depth chemical composition was obtained by Rutherford backscattering spectrometry [33], and the thickness of the thin films was obtained by cross-section scanning electron microscopy (NanoSEM-FEI Nova 200 (FEG/SEM) scanning electron microscope). Both the chemical composition and the thickness of the films are displayed in table 1.
The target was sputtered with the current densities described in table 1, in a plasma composed of Ar and O2 (3.8 × 10−1 Pa and 3 × 10−2 Pa, respectively), with a base pressure below 6.0 × 10−4 Pa, resulting in TiO2 thin films with embedded noble metal atom aggregates, as illustrated in figure 1.
The thin films were deposited onto fused silica (SiO2) and NaCl substrates for optical measurements and TEM analysis, respectively. All substrates were plasma cleaned and activated. This procedure was performed using a Zepto Plasma System (Diener Electronic) with a 13.56 MHz generator at a power of 50 W and 80 Pa of working pressure, first for 5 min in O2 and then for 15 min in Ar for SiO2; to avoid damaging the NaCl substrates, a mixture of Ar and O2 at the same working pressure was applied for 1 min. The objective of this plasma treatment is to remove contaminants from the substrate and increase the thin film adhesion. To induce NP growth in the TiO2 matrix and achieve LSPR responses, the thin films were annealed at 400 °C. This temperature was selected because higher temperatures trigger the diffusion of noble metals to the thin film surface, especially Ag, forming larger particles and degrading the LSPR signal, in accordance with previous optimization studies [32]. Additionally, with the objective of producing low-cost LSPR sensors, the annealing temperature should be kept as low as possible to allow the use of inexpensive substrates (such as glass). This thermal treatment was performed in a vacuum furnace with a base pressure of approximately 8 × 10−6 Pa, with a heating ramp of 5 °C min−1 up to 400 °C, an isothermal period of 5 h, and free cooling to room temperature. The annealing treatment promotes the diffusion of the dispersed noble metal atoms inside the TiO2 matrix, crystallizing both the matrix, in the anatase phase, and the formed noble metal NPs.
For TEM analysis, the NaCl substrates containing annealed samples of Au, Ag, and Au-Ag/TiO 2 were dissolved in deionized water, and the floating layer was transferred to a copper grid.
The analysis was performed under high vacuum at 300 kV, using an FEI-TITAN ETEM in high-angle annular dark-field (HAADF)-STEM mode, obtaining images suitable for NP analysis. These images were processed into thresholded black-and-white images. The resulting measurements were plotted in histograms and fitted to obtain the values of the average Feret diameter (MATLAB environment) and the nearest neighbor distance (ImageJ environment). Furthermore, the noble metals present in the thin films were mapped using STEM-EDX in the same microscope with an X-MAX EDX detector (Oxford Instruments).
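A sketch of this thresholding and particle-metric workflow, written here in Python (scikit-image) rather than the MATLAB/ImageJ environments actually used, could read as follows; the file name and pixel calibration are hypothetical.

```python
# Sketch of the NP analysis pipeline: threshold a HAADF-STEM image, label
# particles, and compute Feret diameters; file name and calibration assumed.
import numpy as np
from skimage import io, filters, measure

img = io.imread("haadf_stem.tif", as_gray=True)   # hypothetical image file
binary = img > filters.threshold_otsu(img)        # NPs appear bright on matrix

labels = measure.label(binary)                    # connected-component labeling
props = measure.regionprops(labels)

nm_per_px = 0.5                                   # assumed pixel calibration
feret_nm = np.array([p.feret_diameter_max for p in props]) * nm_per_px
feret_nm = feret_nm[feret_nm >= 10]               # ignore sub-10 nm NPs

print(f"mean Feret diameter: {feret_nm.mean():.1f} nm "
      f"(sigma = {feret_nm.std():.1f})")
```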
The bulk RIS was determined by monitoring the LSPR band shifts in the presence of media with different refractive indexes (figure S1). The measurement cycles were performed with deionized water (η = 1.3325 RIU) and a 20% (w/w) sucrose solution (η = 1.3639 RIU), with the transmittance spectra monitored for 2 min in each half-cycle. A custom-made optical system was used, consisting of an LED light source (LS-LED, SARSPEC, Lda), an enclosed thin film holder, and a modular spectrometer (SPEC RES + UV/Vis, SARSPEC, Lda) composed of a diffraction grating adjusted to the wavelength range of 420-720 nm and a CCD detector. Spectra were acquired with a 3 ms integration time and an average of 200 scans. The NANOPTICS software allowed the processing of the acquired spectra and of the changes in the LSPR band due to the presence of the different surrounding media [34].
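A simplified stand-in for the RIS extraction performed by the NANOPTICS software is sketched below; the spectra are synthetic placeholders, and only the band-minimum tracking step is reproduced.

```python
# RIS sketch: track the LSPR band minimum in two media and divide the band
# shift by the refractive-index difference; spectra here are synthetic.
import numpy as np

def lambda_min(wavelength, transmittance):
    """Wavelength (nm) of the transmittance minimum (LSPR band position)."""
    return wavelength[np.argmin(transmittance)]

wl = np.linspace(420, 720, 601)                        # nm, spectrometer range
t_water = 60 - 20 * np.exp(-((wl - 612) / 80) ** 2)    # synthetic stand-ins
t_sucrose = 60 - 20 * np.exp(-((wl - 615) / 80) ** 2)

ris = (lambda_min(wl, t_sucrose) - lambda_min(wl, t_water)) / (1.3639 - 1.3325)
print(f"bulk RIS ~= {ris:.0f} nm/RIU")
```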
Optical response of the thin films
The thin film's optical response and LSPR bands were evaluated by spectrophotometry in transmittance mode, and the resulting spectra are presented in figure 2.
Considering the as-deposited thin films (figure 2(a)), as expected, no LSPR bands were found, since the non-crystallized NPs formed by nucleation during the deposition process must first reach sizes above the quantum limit (>10 nm) to contribute to the appearance of LSPR bands [28]. After thermal annealing at 400 °C, all three nanoplasmonic systems display LSPR bands, each with its minimum positioned at a different wavelength of the visible range (figure 2(b)). For the analysis of the LSPR bands, an approach similar to previously published studies was used [35]. Analyzing the LSPR band with the NANOPTICS software, both the wavelength of the transmittance minimum (λmin) and the transmittance minimum itself (Tmin) were obtained. This also allowed the characterization of the band's full width at half height (FWHH) and the LSPR band's full height (BFH), which results from the difference between Tmin and the transmittance maximum at the band's left tail (table 2).
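These band descriptors can be computed along the following lines, here on a synthetic single-band spectrum; the NANOPTICS implementation may differ in detail.

```python
# Sketch of LSPR band-descriptor extraction on a synthetic spectrum;
# assumes a single, well-defined band within the measured range.
import numpy as np

def lspr_band_metrics(wl, t):
    i_min = np.argmin(t)
    lam_min, t_min = wl[i_min], t[i_min]
    t_max_left = t[:i_min].max()            # transmittance maximum at left tail
    bfh = t_max_left - t_min                # band full height, percentage points
    half_level = t_min + bfh / 2.0
    below = np.where(t <= half_level)[0]    # points below the half-height level
    fwhh = wl[below[-1]] - wl[below[0]]     # full width at half height, nm
    return lam_min, t_min, fwhh, bfh

wl = np.linspace(420, 720, 601)                      # nm
t = 45 - 20 * np.exp(-((wl - 612) / 120) ** 2)       # synthetic band
print(lspr_band_metrics(wl, t))
```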
For Au/TiO2 thin films annealed at 400 °C, the LSPR band has its minimum positioned at λmin = 638.0 nm with Tmin = 26.0%. As seen in previous work, the LSPR band for Au/TiO2 thin films has a flatter right tail, causing a high FWHH of 339.8 nm and a shorter BFH of 13.3 pp (percentage points). For the Ag/TiO2 thin film annealed at 400 °C, changing the noble metal to Ag causes the LSPR excitation to appear at lower wavelengths. The LSPR band derived from Ag NPs is usually sharper, due to a higher extinction efficiency, and lies at lower wavelengths compared to Au NPs (or other metals) [7]. The LSPR peak is found at λmin = 560.6 nm with Tmin = 29.3%. As expected, the FWHH for the Ag/TiO2 nanoplasmonic system is substantially lower, about 267.0 nm, while the BFH is slightly higher (17.4 pp), showing a narrower LSPR response. Finally, depositing both Au and Ag in the TiO2 matrix at a 1:1 ratio led to an LSPR band positioned between the bands observed for the monometallic Au and Ag systems. For Au-Ag/TiO2 annealed at 400 °C, the LSPR band minimum is positioned at λmin = 612.0 nm with Tmin = 22.1%. The LSPR band is broader than that of Au/TiO2, with an FWHH of 358.1 nm but a higher BFH (22.3 pp), showing a less flat right tail.
Nanoparticles' distribution
A transmission electron microscopy investigation was performed to correlate the LSPR response of the different films with the size distribution of the NPs after annealing. For this, the HAADF-STEM mode was mostly used, and the obtained images were processed in the MATLAB software environment to produce thresholded black and white regions corresponding to the matrix and NPs, respectively. The resulting Feret's diameters were plotted in histograms.
Figure 3 presents micrographs of the Au/TiO2 thin film annealed at 400 °C (figures 3(a)-(c)) at different magnifications. In a first analysis, most of the Au NPs (nearly 90%) showed sizes below 10 nm, which contribute negligibly to the LSPR band, as they are below the quantum limit.
To statistically analyze the NPs, the threshold parameters were adjusted to ignore sizes below 10 nm. The resulting Feret's diameters were plotted in a histogram (figure 3(d)).
Considering only the NPs that directly impact the LSPR response (approximately 10% of the total), around 70% have sizes between 10 and 20 nm, while the remaining 30% show sizes between 20 and 50 nm. The average Feret's diameter is 17 nm, with a broad distribution of sizes (σ = 8 nm). The aspect ratio (AR) was also analyzed using the same method in the MATLAB environment. For Au/TiO2 thin films annealed at 400 °C, the NPs were found with an average AR of 1.7 (σ = 0.7) (figure S2(a), supplementary material), indicating spheroid-like NPs. However, this analysis is limited due to the 2D nature of the HAADF-STEM images.
Changing the noble metal to Ag (Ag/TiO2 thin films) caused a drastic change in the NPs' size distribution and in the overall aspect of the thin film (figures 4(a)-(c)). For Ag/TiO2 thin films annealed at 400 °C, the same threshold conditions were used in the MATLAB environment. In comparison with Au/TiO2 thin films, more than 95% of the NPs have sizes above 10 nm and, hence, contribute to the LSPR response (figure 4(d)). As such, there was no need to disregard the NPs with sizes below 10 nm, as they constitute less than 5% of the total analyzed NPs.
Unlike Au/TiO2 thin films annealed at 400 °C, the differences between the NPs and the matrix can be clearly distinguished. Ag NPs have an average Feret's diameter of 55 nm, with a much broader size distribution (σ = 28 nm); the largest recorded Feret's diameter for Ag/TiO2 thin films is 141 nm.

Finally, for the bimetallic Au-Ag nanoplasmonic system, the STEM analysis revealed some distinct nanostructural features compared to the monometallic counterparts, as can be observed in the micrographs of figures 5(a)-(c). As in Au/TiO2, a few bigger NPs were formed during the thermal annealing step, surrounded by many small NPs. Size distribution analysis also revealed that most of them (80%) have sizes below 10 nm. Thus, a similar threshold was applied when analyzing the Au-Ag NPs in the MATLAB environment, disregarding sizes below 10 nm. The resulting Feret's diameters were plotted in a histogram (figure 5(d)) and represent 20% of the NPs. These NPs have sizes typically ranging from 10 to 40 nm, with an average Feret's diameter of 22 nm, in a broad distribution of sizes (σ = 9 nm). The average AR is about 1.4 (σ = 0.4), also indicating spheroid-shaped NPs (figure S2(c)).
Besides the reported size distribution values, the most relevant characteristic is the unusual morphology. The Au-Ag NPs are organized into isolated clusters, each containing a considerable number of NPs. This type of nanostructural arrangement differs from what would be expected and promises different optical properties and plasmonic responses. Therefore, the micrographs revealing the clustering nanostructures obtained in the Au-Ag/TiO2 thin films annealed at 400 °C (figure 6(a)) were analyzed in depth in the ImageJ software environment, with black and white thresholding similar to the analysis done in MATLAB; this time, however, the nearest-neighbor (N.N.) distance was evaluated, first between adjacent NP clusters (figure 6(b)) and then within each NP cluster (figure 6(c)).
When evaluating the N.N. distance between the NP clusters, each cluster's center was considered, and the N.N. was found at an average distance of 115 nm, with a broad distribution of distances (σ = 24 nm), meaning that the clusters are relatively distant from each other.
Considering the N.N. distances within a cluster, each one was evaluated independently in the same software environment. The N.N. was found at an average distance of 32 nm (σ = 24 nm). Since the N.N. measurement considers the centroid of each NP, and the average Feret's diameter is 22 nm, the average distance between the surfaces of two adjacent NPs can be as low as 10 nm. This means that Au-Ag NP hotspots could have formed during the annealing at 400 °C. Plasmonic hotspots can occur when the distance between two adjacent NP surfaces is lower than the quantum limit (10 nm), resulting in a significant enhancement of the EM field near the NPs [36-38]. This strong enhancement of the EM field could prove useful for producing LSPR platforms for surface-enhanced Raman spectroscopy (SERS) [39, 40], plasmon-enhanced photocatalysis [41], etc.
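A back-of-the-envelope check of this hotspot argument, using only the averages quoted above (N.N. distances are centroid-to-centroid, so the surface-to-surface gap is roughly the centroid distance minus one average diameter):

mean_nn_within_cluster = 32.0   # nm, average N.N. distance within a cluster
mean_feret = 22.0               # nm, average Au-Ag NP Feret diameter
quantum_limit = 10.0            # nm, hotspot criterion used in the text

gap = mean_nn_within_cluster - mean_feret
print(f"estimated surface-to-surface gap ~ {gap:.0f} nm")
if gap <= quantum_limit:        # borderline case counted here
    print("gap at or below 10 nm -> plasmonic hotspots plausible")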
Elemental composition of NPs for Au-Ag/TiO2 thin films
To determine the chemical nature of the NPs in the bimetallic system, the Au-Ag/TiO2 thin film was also probed using STEM-EDX (figure 7), mapping the noble metals present in the thin film.
The chemical nature of the NPs was established by mapping Au (figure 7(b)) and Ag (figure 7(c)) separately using EDX. In previous work, a 1:2 ratio of Au to Ag content pointed to the formation of Ag-enriched Au-Ag alloyed NPs, clearly visible by the higher accumulation of both noble metals in the bigger NPs, while Ag was also found throughout the matrix in smaller NPs [35]. With the 1:1 ratio of Au to Ag prepared in this study, both Au and Ag seem to concentrate in every single NP, with an even distribution, shown by the overlap of both noble metal maps on the STEM image (figure 7(d)). This could indicate that, under these deposition conditions, the resulting NPs are Au-Ag alloys, comprising both the smaller Au-Ag NPs that contribute negligibly to the LSPR and the larger Au-Ag NPs (between 10 and 40 nm) that contribute to the LSPR band.
Refractive index sensitivity for LSPR thin films
Intending to apply these nanoplasmonic systems as LSPR-based optical transducers for biosensing with enhanced performance, RIS is the first parameter to evaluate, and it goes beyond the simple evaluation of the LSPR band spectra. For this reason, RIS was evaluated by measuring the LSPR bands of the thin films over time while immersed in media with different refractive indexes. Cycles alternating deionized water (η = 1.3325 RIU) and a 20% (w/w) sucrose solution (η = 1.3639 RIU) yielded several spectra that were processed using the NANOPTICS software.
Firstly, in the case of Ag/TiO2 thin films annealed at 400 °C, an increased instability of the LSPR band signal was observed when the thin film was immersed in deionized water, with the transmittance spectrum revealing higher noise than expected.
When this happens, the NANOPTICS software is unable to produce reliable results from the cycle measurements; hence, the Ag/TiO2 film was disregarded for liquid environments.
As for Au/TiO2 and Au-Ag/TiO2 annealed at 400 °C, the resulting cycles from the RIS experiments are presented in figure 8.
The cycles concerning the RIS evaluation for Au/TiO2 thin films annealed at 400 °C are presented in figure 8(a). At first glance, some noise is visible in the cycle measurements. After analyzing each cycle, the average LSPR band shift was 2.5 ± 0.1 nm, with a signal-to-noise ratio (SNR) of 8.4. Using the refractive indexes of the two media, a RIS of 80 ± 4 nm RIU−1 was calculated.
For the RIS evaluation of Au-Ag/TiO2 thin films annealed at 400 °C, the measured cycles are displayed in figure 8(b). In contrast with the analysis made for Au/TiO2, the measurements show much less visible noise. Analysis of each cycle revealed an average LSPR shift of 4.51 ± 0.03 nm and an SNR of 57.9. The calculated RIS for Au-Ag/TiO2 thin films was 147.7 ± 0.9 nm RIU−1.
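The RIS and SNR figures above follow from simple ratios; a minimal Python sketch, with hypothetical per-cycle shifts standing in for the measured data of figure 8:

import numpy as np

n_water, n_sucrose = 1.3325, 1.3639            # RIU
delta_n = n_sucrose - n_water                  # 0.0314 RIU

shifts = np.array([2.4, 2.5, 2.6, 2.5])        # nm, hypothetical cycle shifts
mean_shift = shifts.mean()
noise = shifts.std(ddof=1)

ris = mean_shift / delta_n                     # nm per RIU
snr = mean_shift / noise
print(f"RIS ~ {ris:.0f} nm/RIU, SNR ~ {snr:.1f}")
# With the reported 2.5 nm average shift this gives ~80 nm/RIU for Au/TiO2,
# consistent with the value quoted above.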
As evidenced by the results displayed in figure 8, adding Ag to the Au/TiO2 nanoplasmonic system in equal parts almost doubled the LSPR response under the same experimental conditions. Firstly, this improvement could be due to the presence of alloyed Au-Ag NPs. From Mie theory, it is known that Ag NPs have higher absorption and scattering efficiencies than Au NPs [42]. As a result of scattering light more efficiently at the LSPR resonance conditions, Ag NPs have a higher responsiveness to environmental changes [43]. As such, the stable mixing of Au and Ag into bimetallic NPs benefits from the Ag properties, achieving higher extinction efficiencies [44]. On the other hand, the increase in NP size from Au/TiO2 to Au-Ag/TiO2 can also contribute to the higher sensitivity of Au-Ag NPs, since a higher average NP diameter is also associated with higher extinction efficiencies and thus improved sensitivity to the surrounding media [45]. Therefore, these optimized properties seem to provide a better signal, with a much higher SNR and increased sensitivity to different refractive indexes, which could allow the production of a more reliable and stable system for developing LSPR-based optical transducers for (bio)sensing.
Finally, based on the results obtained by STEM, it can also be argued that the formation of clusters of NPs, with the NPs inside each cluster close to each other (originating hotspots), and the consequent near-field enhancement might also have contributed to the enhanced sensitivity of Au-Ag NPs. In the literature, the effect of plasmonic hotspots on LSPR sensitivity is a controversial subject. For instance, Feuz et al published a study comparing the sensing capabilities of nanodiscs and of hotspots between two adjacent structures, showing that, while the hotspot presented an approximately 20% lower RIS, the SNR increased approximately 20 times [46]. Another work, published by Yockell-Lelièvre et al, dwells on the formation of self-assembled Au NP-based sensors. It shows that when the interparticle distance is small (3 nm gap) and plasmonic hotspots are formed, no RIS is measurable; still, the SERS enhancement was reported to increase by up to 3 orders of magnitude [47]. Lastly, Lee et al described the influence of nanogaps in highly advanced plasmonic structures, with the highest RIS found where the highest EM confinement is expected, suggesting that hotspots improve the sensitivity of plasmonic nanostructures [48]. As such, continued study of this topic is still needed to determine the effects of hotspots on the sensitivity of LSPR-based optical transducers.
Conclusions
This work reports a comparison between nanoplasmonic systems composed of Au, Ag, or Au-Ag alloyed NPs dispersed in a TiO2 dielectric matrix, in terms of NP distribution and of the optical response of the LSPR band.
The nanostructural analysis using transmission electron microscopy showed the formation of NPs after annealing at 400 °C for all nanoplasmonic systems. For Au/TiO2 thin films, while most NPs have sizes below the quantum limit and did not contribute to the LSPR phenomenon, approximately 10% of the NPs were found with an average size of 17 nm, contributing to a RIS of 80 nm RIU−1. Using silver instead of gold led to Ag/TiO2 thin films with embedded Ag NPs. The average size of the NPs was 55 nm (a threefold increase compared to Au/TiO2 thin films), yet it was not possible to calculate the RIS for these plasmonic films, due to the instability of the LSPR band signal when the thin film was immersed in a liquid environment. Finally, the formation of Au-Ag NPs in the bimetallic nanoplasmonic system can be highlighted. The NPs can be divided into two groups: the large majority, dispersed in the TiO2 matrix with sizes below 10 nm, and the remainder, organized into clusters of larger NPs (sizes above 10 nm) that contribute to the LSPR phenomenon, with an improved sensitivity to changes in the refractive index, giving rise to an outstanding RIS of 147.7 nm RIU−1.
In conclusion, electron microscopy techniques made it possible to discern nanoscale differences between the nanoplasmonic systems under study and to correlate the improved LSPR optical sensing capabilities of Au-Ag/TiO2 thin films with their morphological features. The nanoscale analysis also showed the possibility of applying these thin films as SERS platforms or in photocatalysis, due to the formation of plasmonic hotspots in the NP clusters.
Data availability statement
The data that support the findings of this study are available upon reasonable request from the authors.
Figure 6. (a) Au-Ag NP clusters obtained by STEM; (b) histogram of the nearest-neighbor distance distribution, considering the distance between clusters; (c) histogram of the nearest-neighbor distance distribution, considering the distance between NPs within a cluster.
Figure 7. HAADF-STEM and STEM-EDX analysis of nanoparticles present in the Au-Ag/TiO2 thin film annealed at 400 °C: (a) TEM image of the selected area; (b) Au and (c) Ag EDX maps in the selected area; and (d) Au and Ag maps overlapping the HAADF-STEM image.
Figure 8. LSPR band minimum wavelength shift, processed by NANOPTICS software, for thin films annealed at 400 °C, showing the cycles due to the change of refractive index of the surrounding media (deionized water vs. a model solution of 20% (w/w) sucrose). The last cycle is the reference cycle (water vs. water).
Table 1. Deposition parameters for nanoplasmonic thin films concerning noble metal content and plasma current density.
Figure 1. Schematic representation of the two-step procedure to achieve TiO2 thin films with embedded noble metal NPs, starting from (a) a transparent substrate; (b) the Ti-noble metal composite target is sputtered onto the substrate, resulting in a TiO2 thin film with dispersed noble metal; and (c) the annealing procedure is conducted to promote noble metal crystallization and NP growth inside the TiO2 matrix.
Table 2. Summary of the LSPR band minimum position (λmin and Tmin), FWHH, and BFH for nanoplasmonic thin films, determined by NANOPTICS.
"Materials Science",
"Physics",
"Chemistry"
] |
The oncometabolite R-2-hydroxyglutarate produced by mutant IDH dysregulates the differentiation of human mesenchymal stromal cells and induces DNA hypermethylation
Background: Isocitrate dehydrogenase (IDH1/2) gene mutations are the most frequently observed mutations in cartilaginous tumors. Mutant IDH causes elevated levels of the R-enantiomer of 2-hydroxyglutarate (R-2HG). Mesenchymal stromal cells (MSCs) are reasonable precursor cell candidates for cartilaginous tumors. This study aimed to investigate the effect of the oncometabolite R-2HG on MSCs. Methods: Human bone marrow MSCs treated with or without R-2HG at concentrations of 0.1 to 1.5 mM were used for the experiments. Cell Counting Kit-8 was used to measure the proliferation of MSCs. To determine the effects of R-2HG on MSC differentiation, cells were cultured in osteogenic, chondrogenic and adipogenic medium; specific staining was performed and differentiation-related genes were quantified. Furthermore, DNA methylation status was explored by Infinium 450K arrays. Real-time PCR was applied to examine the mRNAs of the signaling components involved. Results: R-2HG showed no influence on the proliferation of human MSCs. R-2HG blocked osteogenic differentiation, whereas it promoted adipogenic differentiation of MSCs in a dose-dependent manner. In addition, R-2HG inhibited chondrogenic differentiation of MSCs, but increased the expression of genes related to chondrocyte hypertrophy at a lower concentration (1.0 mM). Moreover, R-2HG induced a pronounced DNA hypermethylation state in MSCs. Ingenuity pathway analysis showed Sonic Hedgehog (Shh) signaling as the most enriched signaling pathway. Further data indicated that R-2HG decreased the mRNA levels of Shh and Gli1, indicating Shh signaling inhibition. Conclusions: The oncometabolite R-2HG dysregulated the chondrogenic and osteogenic differentiation of MSCs, possibly via induction of DNA hypermethylation, supporting a role for R-2HG in cartilaginous tumor development.
Genomic DNA was prepared from MSCs treated in the absence or presence of 1.0 mM R-2HG for 6 days using the EZ DNA Methylation Kit (Zymo Research, D5002, USA). A total of 500 ng of DNA was bisulfite converted and subsequently processed for hybridization onto an Infinium Human Methylation 450 BeadArray (Illumina, San Diego, CA, USA) according to the manufacturer's instructions. This array can interrogate 27,578 CpG dinucleotides encompassing 14,495 genes. In brief, the DNA was treated with bisulfite, whereby non-methylated C nucleotides were converted to U (read as T), whereas methylated C nucleotides remained unaffected. Subsequently, the bisulfite-treated DNA was amplified, fragmented, and hybridized to locus-specific oligonucleotides on the BeadArray. C or T nucleotides were detected by fluorescence signaling from single-nucleotide extension of the DNA fragments. The results were interpreted as a ratio (β value) of the methylated signal (C) to the sum of the methylated and unmethylated signals (C + T) for each locus, where 0 represents fully unmethylated DNA and 1 fully methylated DNA.
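A minimal sketch of the β-value computation just described; the simple ratio follows the text, while the small offset commonly added by Illumina pipelines to stabilize low-intensity loci is included as an assumption:

import numpy as np

def beta_values(meth, unmeth, offset=100.0):
    # meth, unmeth: fluorescence intensities per CpG locus
    return meth / (meth + unmeth + offset)

meth = np.array([900.0, 120.0, 4000.0])      # hypothetical intensities
unmeth = np.array([300.0, 2400.0, 150.0])
print(beta_values(meth, unmeth))             # 0 = unmethylated, 1 = fully methylated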
Heat maps
The heat maps were generated with the MeV software. The Euclidean distance between the two groups of samples was calculated using the average linkage measure [the mean of all pair-wise distances (linkages) between the members of the two groups concerned]. Gene annotation and enrichment analyses were performed against the KEGG database using the DAVID Bioinformatics Resources (http://david.abcc.ncifcrf.gov/) and WebGestalt (http://bioinfo.vanderbilt.edu/webgestalt/) interfaces, respectively.
Gene pathway analysis
To determine the biological processes enriched within the differentially methylated genes in the comparisons, we uploaded the gene lists into Ingenuity Pathway Analysis (IPA; Ingenuity Systems, Redwood City, CA, USA). Each gene symbol was mapped to its corresponding gene object in the Ingenuity Pathways Knowledge Base. IPA then groups genes and molecules that share biological functions or regulatory networks. The over-represented cellular and molecular functions were ranked according to the calculated P-value.
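IPA's ranking is based on an enrichment test of the overlap between the uploaded list and each pathway's gene set; a hypergeometric version of that calculation is sketched below with made-up counts (SciPy assumed):

from scipy.stats import hypergeom

N_background = 14495   # genes interrogated on the array
K_pathway = 120        # genes annotated to a pathway (hypothetical)
n_list = 154           # differentially methylated genes submitted
k_overlap = 8          # genes shared by the two sets (hypothetical)

# P(overlap >= k) under random sampling without replacement
p_value = hypergeom.sf(k_overlap - 1, N_background, K_pathway, n_list)
print(f"enrichment p-value = {p_value:.3g}")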
Statistical analysis
The results are expressed as mean ± standard error (SE), with each experiment performed in duplicate. Statistical analysis was performed by analysis of variance (ANOVA). All analyses used SPSS software (Paris, France). A p-value of < 0.05 was considered significant.
R-2HG did not influence the proliferation and phenotype of human MSCs
The effect of R-2HG on the proliferation of MSCs was examined by CCK-8 assay. As shown in Figure 1A, R-2HG had no effect on the proliferation of MSCs at concentrations of 0.1 mM, 0.5 mM, 1 mM or 1.5 mM.
The expression of MSC surface antigens was analyzed using flow cytometry. As shown in Figure 1B, R-2HG had no influence on the immunophenotype of MSCs, which were positive for CD105, CD90 and CD73 and negative for CD34, CD45 and HLA-DR.
R-2HG inhibits osteogenic differentiation of MSCs
Osteogenic differentiation of MSCs in the presence of R-2HG (1, 1.5 mM) showed dose-dependently impaired calcification when compared to MSCs in the absence of R-2HG. Alizarin red staining revealed a low extent of mineralization, with fewer detectable bone nodules in R-2HG-treated MSCs compared to controls (Figure 2A). To further investigate the effects of R-2HG on MSC differentiation, we analyzed the relative mRNA expression levels of osteoblast-specific transcription factors (LPL and Osterix) and osteoblastic markers (IBSP and BGLAP). The results showed that R-2HG significantly reduced the expression of both early (LPL and IBSP) and late (Osterix and BGLAP) osteoblast differentiation-related genes, consistent with the results of the functional assays (Figure 2B).
R-2HG inhibits chondrogenic differentiation of MSCs, but promotes the expression of genes related to chondrocyte hypertrophy
To evaluate the effect of R-2HG on the chondrogenic differentiation capacity of MSCs, the cells were formed into high-density cell pellets and induced toward chondrogenesis for 21 days. As shown in Figure 3A, the physical dimensions of the pellets in the presence of 1.5 mM R-2HG were markedly decreased compared to those in the absence of R-2HG. Morphologically, matrix deposition as well as collagen 2a (COL2a) staining in the cell pellets was decreased in the presence of 1.0 mM R-2HG (Figure 3B). The pellets of MSCs in the presence of 1.5 mM R-2HG could not be processed for immunohistochemistry. The expression of chondrogenic markers, including Sox9 and Col2a, was down-regulated in MSCs treated with R-2HG at 1.0 mM and 1.5 mM. However, hypertrophic markers, including Runx2 and Col10a1, were up-regulated in the 1.0 mM R-2HG-treated group, while down-regulated in the 1.5 mM-treated group (Figure 3C). These data confirm that R-2HG suppresses the chondrogenic differentiation of human MSCs, but might promote the onset of chondrocyte hypertrophy at lower concentrations.
R-2HG promotes the adipogenic differentiation of MSCs
Next, the effect of R-2HG on adipocytic differentiation was evaluated. MSCs at 100% confluence were induced in adipogenic medium. As shown in Figure 4A, R-2HG (at 1 and 1.5 mM) promoted the adipogenic differentiation of MSCs, as measured by increased lipid vacuoles (oil red O staining).
Furthermore, R-2HG enhanced the relative mRNA expression of adipocyte-specific transcription factors (C/EBPα and Pparg2) and the marker genes (adiponectin and aP2), supporting the above functional results ( Figure 4B).
R-2HG induced a pronounced DNA hypermethylation state in MSCs
R-2HG affects histone modification and DNA methylation. DNA methylation is considered a critical epigenetic modification that regulates the differentiation of stem cells, and so the changes in DNA methylation of MSCs exposed to R-2HG were explored.
As shown in Figure 5A, R-2HG-treated MSCs showed profound DNA hypermethylation at CpG islands when compared with the control. After analyzing the data, 154 differentially methylated CpGs between the two groups were identified. A more detailed analysis of the distribution pattern of differential DNA methylation revealed widespread global changes, affecting all chromosomes equally (Figure 5B). In the R-2HG group, hypermethylation was found in 117 genes and hypomethylation in 37 genes (Supplemental Table S1). In addition, the most significantly hypermethylated genes in R-2HG-treated samples included stem cell differentiation regulators such as GFI1, GEFT and RUNX1. To gain deeper insight into the mechanism of aberrant DNA methylation, IPA was performed. The data implicated several signaling pathways involved in MSC differentiation, including the Sonic Hedgehog (Shh), insulin/insulin-like growth factor and Wnt signaling pathways (Figure 5C). Taken together, the oncometabolite R-2HG induced a pronounced DNA hypermethylation state in MSCs.
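The paper does not state the exact cutoff used to call the 154 differentially methylated CpGs; the sketch below shows one common convention, flagging CpGs whose β value changes by more than a fixed threshold between groups (arrays and threshold are illustrative):

import numpy as np

def call_dm_cpgs(beta_r2hg, beta_ctrl, delta=0.2):
    d = beta_r2hg - beta_ctrl
    hyper = np.where(d >= delta)[0]      # hypermethylated in R-2HG
    hypo = np.where(d <= -delta)[0]      # hypomethylated in R-2HG
    return hyper, hypo

rng = np.random.default_rng(0)
beta_ctrl = rng.uniform(0.0, 1.0, 1000)
beta_r2hg = np.clip(beta_ctrl + rng.normal(0.05, 0.1, 1000), 0.0, 1.0)
hyper, hypo = call_dm_cpgs(beta_r2hg, beta_ctrl)
print(len(hyper), "hypermethylated,", len(hypo), "hypomethylated")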
R-2HG decreased the expression of Sonic Hedgehog signal components
As shown in Figure 5C, Shh signaling was the most enriched pathway identified by the IPA analysis, and it regulates cell differentiation. Shh signaling is initiated through the binding of Shh to its receptor Patched (Ptch1). To confirm the results of the IPA and further elucidate the mechanism, the expression of components of Shh signaling was investigated by RT-PCR. Our data showed that Shh ligand, a secreted glycoprotein that activates the Shh pathway, decreased in a dose-dependent manner in MSCs treated with R-2HG (Figure 6). In addition, GLI1, a key marker of Shh signaling, was significantly down-regulated in R-2HG-treated MSCs. These results revealed that the oncometabolite R-2HG produced by mutant IDH blocked the osteogenic and chondrogenic differentiation of MSCs, while promoting their adipogenic differentiation.
The underlying mechanisms might be associated with hypermethylation of genes related to stem cell differentiation, such as those of Shh signaling.
Discussion
Some metabolites play a critical role as regulators of important enzymes in various biological pathways. According to recent studies, metabolic alterations promote the initiation and development of malignant cells. R-2HG, which is produced by mutant IDH proteins, is regarded as a prototype of these oncometabolites, and a series of studies have demonstrated the role of R-2HG in malignant transformation [13,25]. Elevated levels of R-2HG caused by mutations in IDH1 and IDH2 are frequently found (in up to 87%) in enchondromas [4]. Impaired differentiation by R-2HG has been reported in the central nervous system and during hematopoietic differentiation [13,25]. We therefore examined the effects of R-2HG on the characteristics, and especially the differentiation properties, of human MSCs, which presumably act as precursors of cartilaginous tumors.
The results of the present study showed that R-2HG impaired the calcification of MSCs and reduced the expression of both early and late osteoblast differentiation-related genes in a dose-dependent manner. Recently, the mutant IDH2 inhibitor enasidenib (AG-221) has been approved by the FDA for patients with relapsed or refractory IDH2-mutated AML [41]. The development of IDH inhibitors is an emerging treatment option for patients with chondrosarcoma.
Conclusion
In conclusion, the present study showed that R-2HG impaired the osteogenic and chondrogenic differentiation of MSCs while promoting their adipogenic differentiation, possibly via induction of DNA hypermethylation.

Figure 6. The expression of Sonic Hedgehog signaling components in MSCs exposed to R-2HG. After 6 days of treatment, quantitative RT-PCR was performed to evaluate Gli1, Gli2, Gli3, Shh, Smo and Ptch1. Data are presented as mean ± S.D. from triplicates in an experiment representative of three independent experiments. * P < 0.05 vs. MSCs in the absence of R-2HG. ** P < 0.01 vs. MSCs in the absence of R-2HG.
Supplementary Files
This is a list of supplementary files associated with this preprint. Click to download.
"Medicine",
"Biology"
] |
An Iterative Method for Solving a Class of Fractional Functional Differential Equations with “Maxima”
In the present work, we deal with nonlinear fractional differential equations with “maxima” and deviating arguments. The nonlinear part of the problem under consideration depends on the maximum values of the unknown function taken over time-dependent intervals. Proceeding by an iterative approach, we obtain the existence and uniqueness of the solution, in a context that does not fit within the framework of the fixed point theory methods for self-mappings frequently used in the study of such problems. An example illustrating our main result is also given.
Introduction
One of the most interesting kinds of nonlinear functional differential equations is the case where the nonlinear part depends on the maximum values of the unknown function. These equations, called functional differential equations with “maxima”, arise in many technological processes. For instance, in the automatic control theory of various technical systems, it occurs that the law of regulation depends on the maximal deviation of the regulated quantity (see [1,2]). Such problems are often modeled by differential equations that contain the maximum values of the unknown function (see [3-5]). Recently, ordinary differential equations with “maxima” have received wide attention and have been investigated in diverse directions (see, for example, [4,6-11] and the references therein). As far as we know, in the fractional case, these equations are not yet sufficiently discussed in the existing literature, and thus form a natural subject for further investigation. Motivated by this fact and inspired by [11], in this work, we focus on the existence and uniqueness of the solution for similar systems in a fractional context, and in more general terms. We consider the following nonlinear fractional differential equation with “maxima” and deviating arguments:

C D^α u(t) = f(t, max_{σ ∈ [a(t), b(t)]} u(σ), u(t − τ_1(t)), ..., u(t − τ_N(t))), t > 0, (1)

with the initial condition

u(t) = φ(t), t ≤ 0, (2)

where C D^α denotes the Caputo fractional derivative operator of order α ∈ [0, 1], N is a positive integer, a, b and τ_i (with 1 ≤ i ≤ N) are real continuous functions defined on R_+ = [0, +∞) subject to conditions that will be specified later, φ : (−∞, 0] → R is a continuous function such that φ(0) = φ_0 > 0, and f : R_+ × R^{1+N} → R is a nonlinear continuous function.
Our aim is to give sufficient assumptions leading to an iterative process that converges to the unique continuous solution of Equations (1) and (2). This is achieved under weaker conditions compared to the usual contractions (see Remark 3), and in a setting for which the standard process of Picard iterations fails to be well defined.
It should be pointed out here that the maxima in Equation (1) are taken over time-dependent intervals and not over a fixed one, as is the case in the example given in [11].
Moreover, Equation (1) is supposed to be of mixed type, namely with both retarded and advanced deviations τ_i, while in [11] only delays are considered. It is also important to note that, in the Lipschitz condition on the nonlinear function f, we take into account the direction of the maxima too, which is not the case for the corresponding assumption in [11].
Due to all of these generalizations, our work attempts to extend the application of [11] (Theorem 3) to the fractional case by a constructive approach.
To our knowledge, the studies devoted to the question of the existence and uniqueness of solutions of fractional differential equations are based on different variants of fixed point theory for self-mappings, or on the method of upper and lower solutions (see, e.g., [12-18] and the references therein). We emphasize here that our result answers this question for a class of problems of the form of Equations (1) and (2), even when the previous versions of the theory fail to apply directly, that is, when the integral operator associated with Equations (1) and (2) is allowed to be a non-self-mapping (see Remark 1).
The rest of the paper is organized as follows. In the next section, we introduce some basic definitions from fractional calculus as well as preliminary lemmas. In Section 3, under sufficient conditions allowing the integral operator associated with Equations (1) and (2) to be non-self, we prove an existence-uniqueness result by means of an iterative process. The applicability of our theoretical result is illustrated in Section 4.
Preliminaries
We start by recalling the definitions of the Riemann-Liouville fractional integral and the Caputo fractional derivative on the half real axis. For further details on the historical account and the essential properties of fractional calculus, we refer to [19-22].
Definition 1. The Riemann-Liouville fractional integral of a function u : R_+ → R of order α ∈ R_+ is defined by

I^α u(t) = (1/Γ(α)) ∫_0^t (t − s)^{α−1} u(s) ds,

where Γ(·) is the Gamma function, provided that the right-hand side is pointwise defined on [0, ∞).
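For intuition, the Riemann-Liouville integral can be evaluated numerically; the Python sketch below (a product rectangle rule on a uniform grid, with the weakly singular kernel integrated exactly on each subinterval) is only an illustration, not part of the paper:

import math
import numpy as np

def rl_integral(u, t, alpha):
    # Riemann-Liouville integral I^alpha u evaluated at t[-1]
    T = t[-1]
    total = 0.0
    for k in range(len(t) - 1):
        # exact integral of (T - s)^(alpha - 1) over [t_k, t_{k+1}]
        w = ((T - t[k]) ** alpha - (T - t[k + 1]) ** alpha) / alpha
        total += w * u[k]
    return total / math.gamma(alpha)

t = np.linspace(0.0, 1.0, 2001)
print(rl_integral(t, t, 0.5))            # I^{1/2} of u(s) = s at t = 1
print(math.gamma(2) / math.gamma(2.5))   # exact value: Gamma(2)/Gamma(2.5) ~ 0.752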
In the following definition, n denotes the positive integer such that n − 1 < α ≤ n, and d^n/dt^n is the classical derivative operator of order n. For simplicity, we set du/dt = u′(t).
Definition 2. The Caputo fractional derivative of a function u : R_+ → R of order α ∈ R_+ is defined by

C D^α u(t) = (1/Γ(n − α)) ∫_0^t (t − s)^{n−α−1} u^{(n)}(s) ds,

provided that the right-hand side exists pointwise on [0, ∞).
In particular, when 0 < α < 1, n = 1. Let us denote by C(R) the set of all real continuous functions on R. Applying the Riemann-Liouville fractional integral operator I^α of order α to both sides of Equation (1) and using its properties (see [19,21]), together with the initial condition (2), we easily get the following lemma.

Lemma 1. If f, a, b and τ_i (with 1 ≤ i ≤ N) are continuous functions, then u ∈ {v ∈ C(R) s.t. v(t) = φ(t) for t ≤ 0} is a solution of Equations (1) and (2) if and only if u(t) = Fu(t), where

Fu(t) = φ_0 + (1/Γ(α)) ∫_0^t (t − s)^{α−1} f(s, max_{σ∈[a(s),b(s)]} u(σ), u(s − τ_1(s)), ..., u(s − τ_N(s))) ds for t > 0, (4)

and Fu(t) = φ(t) for t ≤ 0. (5)

Proof. Let u ∈ C(R). The functions a and b are continuous, so, according to the Remark in [7] (page 8) (see also [4]), the function t ↦ max_{σ∈[a(t),b(t)]} u(σ) is continuous; hence the function t ↦ f(t, max_{σ∈[a(t),b(t)]} u(σ), u(t − τ_1(t)), ..., u(t − τ_N(t))), as a composition of continuous functions, is also continuous. Now we are able to follow the usual approach to show this type of result (see [15,19,21,23,24]). Note first that the Caputo fractional derivative of order α ∈ [0, 1] can be expressed by means of the Riemann-Liouville fractional derivative, denoted by D^α, as follows (see [21] (2.4.4) or [19] (Definition 3.2)):

C D^α u(t) = D^α [u(t) − u(0)], with D^α = (d/dt) ∘ I^{1−α}. (6)

Let now u ∈ {v ∈ C(R) s.t. v(t) = φ(t) for t ≤ 0} be a solution of Equations (1) and (2). Thus, in view of the first equality in Equation (6), Equation (1) can be rewritten as

D^α [u(t) − φ_0] = f(t, max_{σ∈[a(t),b(t)]} u(σ), u(t − τ_1(t)), ..., u(t − τ_N(t))). (7)

Since the right-hand side of Equation (7) is continuous, then, according to the definition of the Riemann-Liouville fractional derivative given by the second equality in Equation (6), we obtain the representation (8). Thus, using [21] (Lemma 2.9, (d) with γ = 0), we get the identity (9), where U denotes the function defined in (10). Since U is continuous, for every T > 0 there exists L > 0 such that |U(t)| ≤ L for all t ∈ [0, T]; thus, a corresponding bound holds for every sufficiently small t > 0. Hence, the fact that 1 − α > 0, together with the continuity of I^{1−α}U resulting from Equation (8), implies that I^{1−α}U(0) = 0. Consequently, Equation (9) becomes (11). Now, returning to Equation (7), applying the Riemann-Liouville fractional integral to both sides, and then using Equation (11) together with Equation (2), we obtain Equation (4).
Suppose now that u ∈ {v ∈ C(R) s.t. v(t) = φ(t) for t ≤ 0} is a solution of Equations (4) and (5). Then, in view of Definition 1, Equation (4) can be rewritten as φ_0 plus the Riemann-Liouville integral I^α of a continuous function. Since u is continuous, the right-hand side is continuous too. Applying the Caputo fractional derivative operator C D^α to both sides, using its linearity (see [19] (Theorem 3.16)), the fact that the Caputo derivative of a constant is zero ([21] (Property 2.16)), and [21] (Lemma 2.21), we get Equation (1).
In the present work, the state space will be regarded as a complete Hausdorff locally convex space. For further details on these spaces, we refer to [25]. In the sequel of this paper, we make use of the following lemma, which can be found in [26] (Lemma 2). Lemma 2. Let X be a complete Hausdorff locally convex space, E a closed subset of X, and u, v elements as specified therein.
The Main Results
In this section, we not only prove the existence-uniqueness result for Equations (1) and (2), but also obtain the solution as the limit of an iterative process.
First, let us set the following hypotheses. (H_4): there exist positive constants l_1 and l_2 such that f satisfies the corresponding Lipschitz condition. (H_6): f is a nonnegative function and, moreover, there exists h with the stated property. Let X = C(R) be the locally convex sequentially complete Hausdorff space of all real-valued continuous functions defined on R, and let {P_K : K ∈ K} be the saturated family of semi-norms generating the topology of X, where K runs over the set of all compact subsets of R, denoted by K, and λ is a positive real number to be specified later. We denote by E_{φ,M} the subset of X defined in terms of the constants a*, b* and M given by (H_1) and (H_5). It can easily be seen that E_{φ,M} is a closed subset of X; its boundary will be denoted by ∂E_{φ,M}. Throughout the remainder of this paper, F denotes the operator defined on E_{φ,M} by Equations (4) and (5). Thus, according to Lemma 1, F maps E_{φ,M} into X, and the fixed points of F are continuous solutions of problems (1) and (2).
Remark 1. It should be pointed out that, under hypotheses (H_1)-(H_3) and (H_6), with the additional condition max_{1≤i≤N} t_i < b*, F is a non-self-mapping on E_{φ,M}. Indeed, as noted in the proof of [11] (Theorem 3), for the function u ∈ E_{φ,M} defined by u(t) = φ_0 + (h − φ_0) t/b*, where t ∈ [0, b*] and h is the constant given by (H_6), it can easily be seen that Fu ∉ E_{φ,M}. This will be checked in the example of the last section.
The introduction of a self-mapping of the index set in uniform spaces is motivated by applications in the theory of neutral functional differential equations [11,27,28]. Following this idea, let us define a map j : K → K, where K_+ := K ∩ [0, +∞), K_m = sup K, and τ and b* are the positive constants given in (H_1)-(H_2). For n ∈ N*, j^n(K) is the compact set defined inductively by j^n(K) = j(j^{n−1}(K)) and j^0(K) = K.
Remark 2. Note that, for every K ∈ K and every integer n greater than 1, we have j n (K) = j(K).
In the next proposition, we show that F satisfies Equation (14), which is a weakened version of the usual contraction when L_λ < 1 (see Remark 3).

Proposition 1. Under hypotheses (H_1)-(H_4), the operator F : E_{φ,M} → X satisfies

P_K(Fu − Fv) ≤ L_λ P_{j(K)}(u − v) (14)

for each u, v ∈ E_{φ,M} and every K ∈ K, with L_λ given by (15).

Proof. Note that it suffices to consider K_+ ≠ ∅, since otherwise P_K(Fu − Fv) = 0. Letting t ∈ K_+, we obtain, by means of hypotheses (H_3) and (H_4), a bound in terms of the integrals of e^{λ r_i(s)}, where r_i(s) = s − τ_i(s). Note that, due to the definition in Equation (13) and under hypothesis (H_1), it is clear that, for every K ∈ K with K_+ ≠ ∅, we have [a*, b*] ⊂ j(K), and further (H_2)-(H_3) lead to r_i(s) ∈ j(K) when t_i ≤ s ≤ t. Now, multiplying both sides of the above inequality by e^{−λt} and performing the change of variable u = λ(t − s), we get the desired estimate. Let µ := 1 + α and ν := 1 + 1/α. Taking into account (H_3), Hölder's inequality gives the bound defining L_λ. Thus, the result is obtained by taking the supremum over K.
Remark 3. Since K_+ ⊂ j(K), if P_K(Fu − Fv) ≤ L_λ P_K(u − v) is satisfied, then Equation (14) holds true. Therefore, due to the choice of j, in the present context the usual contraction is a particular case of Equation (14) when L_λ < 1.
To reach our aim, we proceed by adapting the proof of [11] (Theorem 1), with some additions, to construct an iterative process converging to the unique continuous solution of Equations (1) and (2).
According to Remark 1, the standard process of Picard iterations fails to be well defined. To overcome this, we make use of Lemma 2 to construct a sequence of elements of E_{φ,M} as follows: starting from an arbitrary point u_0 ∈ E_{φ,M}, we define the terms of a sequence {u_n}_{n∈N*} in E_{φ,M} iteratively through the scheme (16). Note that the terms of the sequence {u_n}_{n∈N*} belong to A ∪ B ⊂ E_{φ,M}, with B ⊂ ∂E_{φ,M}, where A and B are the subsets associated with the scheme (16). Furthermore, if u_n ∈ B, a straightforward computation leads to the identity (17).

Proposition 2. Let u_0 ∈ E_{φ,M}, and let {u_n}_{n∈N*} be the sequence defined iteratively by Equation (16). Then, under hypotheses (H_1)-(H_5), for each K ∈ K and every integer m ≥ 1, the estimate (18) holds true, where L_λ is given by Equation (15).

Proof. A preliminary computation shows that two consecutive terms of the sequence {u_n}_{n∈N*} cannot both belong to B (recall that B ⊂ ∂E_{φ,M}). Thus, it suffices to consider the three cases below.
Case 1: u_n, u_{n+1} ∈ A. The required bound follows directly from Equation (14). Case 2: from condition (14) together with Equation (17) (applied to u_{n+1} instead of u_n), we get the corresponding bound. Case 3: by Equation (14), for every integer n ≥ 2, we obtain one of the two remaining bounds. In summary, the inequality (19) is true in all cases. We now prove Equation (18) by induction. Using Equation (19), we obtain the two possible bounds for m = 1, and similarly in the symmetric case; consequently, Equation (18) is satisfied for m = 1. Assume now that Equation (18) holds true for some m > 1. Using Equation (19) and the fact that C_{j(K)} = C_K, which follows from Remark 2, we deduce in the same way that Equation (18) holds for m + 1, and this completes the proof.
We are now ready to prove our main result.
Theorem 1. Under the above hypotheses, the sequence {u_n}_{n∈N*} defined iteratively by Equation (16) converges in E_{φ,M} to the unique continuous solution of Equations (1) and (2), provided that conditions (20) and (21) hold.

Proof. Let us put λ = 1/max{τ, b*} in Equation (12). Thus, according to Proposition 1, for every K ∈ K and u, v ∈ E_{φ,M}, Equation (14) holds true with L_λ < 1/4. Therefore, for an arbitrary fixed K ∈ K and each ε > 0, there exists a positive integer s satisfying (22). Hence, for n ≥ 2s, q ≥ 1 and sufficiently large l, we get, by means of Equations (18) and (22), that {u_n}_{n∈N*} is a Cauchy sequence in the closed subset E_{φ,M} of the complete locally convex space X, and so it converges to a point u ∈ E_{φ,M}. Let {u_{n_k}}_{k≥1} be a subsequence of {u_n}_{n≥1} in A, that is, u_{n_k+1} = F u_{n_k} for every positive integer k. Then, for each compact K ∈ K, passing to the limit gives u = Fu, and so, according to Lemma 1, u is a solution of Equations (1) and (2). For uniqueness, assume that there exists another solution v ∈ E_{φ,M} with u ≠ v. Since X is Hausdorff, P_{K_0}(u − v) ≠ 0 for some compact K_0 ∈ K. Using Equation (14) and Remark 2, we get, for every positive integer n, a bound that contradicts the fact that L_λ < 1/4. This completes the proof.
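To make the iterative process concrete, the toy Python sketch below applies a discretized version of the operator F of Lemma 1, with the "maxima" term evaluated over the time-dependent window [a(s), b(s)]. It deliberately omits the boundary-projection step of Equation (16) and uses made-up data (alpha, phi_0, f, a, b, no deviating arguments), so it only illustrates the fixed-point iteration u_{n+1} = F u_n:

import math
import numpy as np

alpha, phi0 = 0.5, 1.0
t = np.linspace(0.0, 1.0, 201)
a = lambda s: 0.0           # window lower end (assumed)
b = lambda s: s             # window upper end (assumed)
f = lambda s, m: 0.1 * m    # toy right-hand side, Lipschitz in the max argument

def apply_F(u):
    # precompute f(s_k, max of u over [a(s_k), b(s_k)]) on the grid
    vals = np.empty(len(t))
    for k, s in enumerate(t):
        win = (t >= a(s)) & (t <= b(s))
        vals[k] = f(s, u[win].max() if win.any() else u[k])
    Fu = np.empty_like(u)
    Fu[0] = phi0
    for i in range(1, len(t)):
        # product rectangle rule for the fractional integral of order alpha
        w = ((t[i] - t[:i]) ** alpha - (t[i] - t[1:i + 1]) ** alpha) / alpha
        Fu[i] = phi0 + (w * vals[:i]).sum() / math.gamma(alpha)
    return Fu

u = np.full_like(t, phi0)
for _ in range(10):         # iterate u_{n+1} = F u_n
    u = apply_F(u)
print(u[-1])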
Note that, for 0 < t − τ_1(t) < τ, we have the corresponding expression; then hypothesis (H_6) yields a lower bound which means that Fu_0 ∉ E_{φ,1.9}. Thus, in this framework, the iterative processes usually used in the self-mapping context cannot be applied, while, according to Theorem 1, the process defined by Equation (16) converges in E_{φ,1.9} to the unique continuous solution of Equations (23) and (24). The first term u_1(t) is given approximately by the corresponding formula; for t > 0, the other terms can be computed using analogous formulas, such that the right-hand side belongs to ∂E_{φ,1.9}.
Conclusions
In this contribution, the investigated question concerns the existence and uniqueness of the solution for a class of nonlinear functional differential equations of fractional order. The problems (1) and (2) considered here are distinguished by the fact that the nonlinear part depends on maximum values of the unknown function, which is not frequently discussed in the existing literature. These maxima are taken over time-dependent intervals and, moreover, the equation is of mixed type, i.e., with both retarded and advanced deviations. It should be noted that, if hypothesis (H_6) and Equation (20) are omitted, the operator F can be a self-mapping, and thus, by the usual contraction methods, it can be shown that the result of Theorem 1 remains valid with the bound in Equation (21) weakened to 1. When additional conditions are necessary to meet the physical or mechanical requirements of the phenomenon governed by Equations (1) and (2), we leave this usual framework of study. In this case, our main result, Theorem 1, shows that the condition in Equation (21) is sufficient for the existence and uniqueness of the solution.
Impulse radio ultrawideband pulse shaper based on a programmable photonic chip frequency discriminator
We report and experimentally demonstrate the generation of impulse radio ultrawideband (UWB) pulses using a photonic chip frequency discriminator. The discriminator consists of three add-drop optical ring resonators (ORRs) which are fully programmable using thermo-optical tuning. This discriminator chip in combination with a phase modulator forms a temporal differentiator where phase modulation is converted to intensity modulation (PM-IM conversion). By means of tailoring the discriminator response using either the individual or the cascade of drop and through responses of the ORRs, first-order or second-order temporal differentiations are obtained. Using this principle, the generation of UWB monocycle, doublet and modified doublet pulses is demonstrated. The use of this CMOS-compatible discriminator is promising for the realization of a compact and low-cost UWB transmitter. ©2011 Optical Society of America OCIS codes: (060.2360) Fiber optics links and subsystems; (060.5060) Phase modulation; (060.5625) Radio frequency photonics; (070.6020) Continuous optical signal processing; (130.3120) Integrated optics devices; (350.4010) Microwaves. References and links 1. J. Capmany and D. Novak, “Microwave photonics combines two worlds,” Nat. Photonics 1(6), 319–330 (2007). 2. M. H. Khan, H. Shen, Y. Xuan, L. Zhao, S. Xiao, D. E. Leaird, A. M. Weiner, and M. Qi, “Ultrabroad-bandwidth arbitrary radiofrequency waveform generation with a silicon photonic chip-based spectral shaper,” Nat. Photonics 4(2), 117–122 (2010). 3. J. Yao, F. Zheng, and Q. Wang, “Photonic generation of ultrawideband signals,” J. Lightwave Technol. 25(11), 3219–3235 (2007). 4. J. Azaña, “Ultrafast analog all-optical signal processors based on fiber-grating devices,” IEEE Photonics J. 2(3), 359–386 (2010). 5. M. Ferrera, Y. Park, L. Razzari, B. E. Little, S. T. Chu, R. Morandotti, D. J. Moss, and J. Azaña, “On-chip CMOS-compatible all-optical integrator,” Nat. Commun. 1(3), 29 (2010). 6. F. Liu, T. Wang, L. Qiang, T. Ye, Z. Zhang, M. Qiu, and Y. Su, “Compact optical temporal differentiator based on silicon microring resonator,” Opt. Express 16(20), 15880–15886 (2008). 7. Y. Park, M. H. Asghari, R. Helsten, and J. Azaña, “Implementation of broadband microwave arbitrary-order time differential operators using a reconfigurable incoherent photonic processor,” IEEE Photonics J. 2(6), 1040–1050 (2010). 8. C. Wang, F. Zeng, and J. Yao, “All-fiber ultrawideband pulse generation based on spectral shaping and dispersion-induced frequency-to-time conversion,” IEEE Photon. Technol. Lett. 19(3), 137–139 (2007). 9. M. Abtahi, J. Magné, M. Mirshafiei, L. A. Rusch, and S. LaRochelle, “Generation of power efficient FCC-compliant UWB waveforms using FBGs: analysis and experiment,” J. Lightwave Technol. 26(5), 628–635 (2008). 10. Q. Wang and J. Yao, “UWB doublet generation using nonlinearly-biased electro-optic intensity modulator,” Electron. Lett. 42(22), 1304–1306 (2006). 11. S. T. Abraha, C. M. Okonkwo, E. Tangdiongga, and A. M. J. Koonen, “Power-efficient impulse radio ultrawideband pulse generator based on the linear sum of modified doublet pulses,” Opt. Lett. 36(12), 2363–2365 (2011). 12. V. Torres-Company, K. Prince, and I. T.
Monroy, “Fiber transmission and generation of ultrawideband pulses by direct current modulation of semiconductor lasers and chirp-to-intensity conversion,” Opt. Lett. 33(3), 222–224 (2008). 13. X. Yu, T. Braidwood Gibbon, M. Pawlik, S. Blaaberg, and I. Tafur Monroy, “A photonic ultra-wideband pulse generator based on relaxation oscillations of a semiconductor laser,” Opt. Express 17(12), 9680–9687 (2009). 14. Q. Wang, F. Zeng, S. Blais, and J. Yao, “Optical ultrawideband monocycle pulse generation based on cross-gain modulation in a semiconductor optical amplifier,” Opt. Lett. 31(21), 3083–3085 (2006). 15. Q. Wang and J. P. Yao, “Switchable optical UWB monocycle and doublet generation using a reconfigurable photonic microwave delay-line filter,” Opt. Express 15(22), 14667–14672 (2007). 16. J. Li, S. Fu, K. Xu, J. Wu, J. Lin, M. Tang, and P. Shum, “Photonic ultrawideband monocycle pulse generation using a single electro-optic modulator,” Opt. Lett. 33(3), 288–290 (2008). 17. M. Bolea, J. Mora, B. Ortega, and J. Capmany, “Optical UWB pulse generator using an N tap microwave photonic filter and phase inversion adaptable to different pulse modulation formats,” Opt. Express 17(7), 5023– 5032 (2009). 18. F. Zeng and J. Yao, “An approach to ultrawideband pulse generation and distribution over optical fiber,” IEEE Photon. Technol. Lett. 18(7), 823–825 (2006). 19. F. Zeng and J. P. Yao, “Ultrawideband impulse radio signal generation using a high-speed electro-optic phase modulator and a fiber-Bragg-grating-based frequency discriminator,” IEEE Photon. Technol. Lett. 18(19), 2062– 2064 (2006). 20. J. Li, K. Xu, S. Fu, M. Tang, P. Shum, J. Wu, and J. Lin, “Photonic polarity-switchable ultra wideband pulse generation using a tunable Sagnac interferometer comb filter,” IEEE Photon. Technol. Lett. 20(15), 1320–1322 (2008). 21. S. Pan and J. Yao, “Switchable UWB pulse generation using a phase modulator and a reconfigurable asymmetric Mach-Zehnder interferometer,” Opt. Lett. 34(2), 160–162 (2009). 22. Y. Dai, J. Du, X. Fu, G. K. P. Lei, and C. Shu, “Ultrawideband monocycle pulse generation based on delayed interference of π/2 phase-shift keying signal,” Opt. Lett. 36(14), 2695–2697 (2011). 23. F. Liu, T. Wang, Z. Zhang, M. Qiu, and Y. Su, “On-chip photonic generation of ultra-wideband monocycle pulses,” Electron. Lett. 45(24), 1247–1248 (2009). 24. J. Dong, X. Zhang, J. Xu, D. Huang, S. Fu, and P. Shum, “Ultrawideband monocycle generation using crossphase modulation in a semiconductor optical amplifier,” Opt. Lett. 32(10), 1223–1225 (2007). 25. E. Zhou, X. Xu, K. S. Lui, and K. K. Y. Wong, “A power-efficient ultra-wideband pulse generator based on multiple PM-IM conversions,” IEEE Photon. Technol. Lett. 22(14), 1063–1065 (2010). 26. I. Gasulla, J. Lloret, J. Sancho, S. Sales, and J. Capmany, “Recent breakthrough in microwave photonics,” IEEE Photonics J. 3(2), 311–315 (2011). 27. P. Samadi, L. R. Chen, C. Callender, P. Dumais, S. Jacob, and D. Celo, “RF arbitrary waveform generation using tunable planar lightwave circuit,” Opt. Commun. 284(15), 3737–3741 (2011). 28. L. Zhuang, C. G. H. Roeloffzen, A. Meijerink, M. Burla, D. A. I. Marpaung, A. Leinse, M. Hoekman, R. G. Heideman, and W. van Etten, “Novel ring resonator-based integrated photonic beamformer for broadband phased array receive antennas—Part II: Experimental prototype,” J. Lightwave Technol. 28(1), 19–31 (2010). 29. N. N. Feng, P. Dong, D. Feng, W. Qian, H. Liang, D. C. Lee, J. B. Luff, A. Agarwal, T. Banwell, R. Menendez, P. Toliver, T. K. 
Woodward, and M. Asghari, “Thermally-efficient reconfigurable narrowband RF-photonic filter,” Opt. Express 18(24), 24648–24653 (2010). 30. S. Ibrahim, N. K. Fontaine, S. S. Djordjevic, B. Guan, T. Su, S. Cheung, R. P. Scott, A. T. Pomerene, L. L. Seaford, C. M. Hill, S. Danziger, Z. Ding, K. Okamoto, and S. J. B. Yoo, “Demonstration of a fastreconfigurable silicon CMOS optical lattice filter,” Opt. Express 19(14), 13245–13256 (2011). 31. D. Marpaung, C. Roeloffzen, A. Leinse, and M. Hoekman, “A photonic chip based frequency discriminator for a high performance microwave photonic link,” Opt. Express 18(26), 27359–27370 (2010). 32. H. Nikokaar and M. Prasad, “Introduction to ultra wideband for wireless communications,” in Springer Science and Business Media (Springer-Verlag, New York, 2009). 33. L. Zhuang, D. Marpaung, M. Burla, W. Beeker, A. Leinse, and C. G. H. Roeloffzen, “Low-loss, high-indexcontrast Si3N4/SiO2 optical waveguides for optical delay lines in microwave photonics signal processing,” Opt. Express 19(23), 23162–23170 (2011). 34. J. F. Bauters, M. J. R. Heck, D. John, M.-C. Tien, W. Li, J. S. Barton, D. J. Blumenthal, A. Leinse, and R. G. Heideman, “Ultra-low loss single mode silicon nitride waveguides with 0.7 dB/m propagation loss,” in 37th European Conference and Exposition on Optical Communications, OSA Technical Digest (CD) (Optical Society of America, 2011), paper Th.12.LeSaleve.3.
Introduction
Microwave photonics (MWP) techniques for the generation and processing of RF signals have enjoyed a surge of interest in the last few years. Generation of arbitrary microwave and RF waveforms [1-3] and fundamental RF signal processing techniques such as differentiation and integration [4-7] using photonic devices and systems have recently been reported. These functionalities exploit the unprecedented bandwidth advantage of photonics. One application that benefits from these functionalities is the ultrawideband (UWB) over fiber technology. In this approach, UWB signals are generated and then distributed in the optical domain to increase the reach of the UWB transmission, similar to the more general concept of radio over fiber. In the last five years, numerous techniques have been proposed for the so-called photonic generation of impulse-radio UWB (IR-UWB) pulses [3,8-25]. The generated pulses are usually variants of Gaussian monocycles, doublets or, on occasion, higher-order derivatives of the basic Gaussian pulse. These techniques usually aim at generating pulses whose power spectral densities (PSD) satisfy the regulation (i.e. spectral mask) specified by the U.S. Federal Communications Commission (FCC) for indoor UWB systems.
Different techniques that have been proposed for IR-UWB pulse generation include spectral shaping combined with frequency-to-time mapping [8,9], nonlinear biasing of a Mach-Zehnder modulator (MZM) [10,11], and direct modulation of a semiconductor laser exploiting either the frequency chirp [12] or the relaxation oscillation [13]. The most popular techniques to generate UWB pulses, however, are based on two principles. The first approach is to use an MWP delay-line filter to achieve either spectral filtering of the Gaussian input in the frequency domain or an approximation of the nth-order derivatives of the Gaussian input pulse with nth-order differences [7,14-18]. In the second approach, phase modulation-to-intensity modulation (PM-IM) conversion is used to achieve temporal differentiation (in most cases first-order derivatives, i.e. monocycles) of the input Gaussian pulse [19-25].
On the other hand, there is nowadays a growing trend to demonstrate various MWP functionalities with an on-chip integrated photonic approach [26]. Functionalities like integration [5], differentiation [6], arbitrary waveform generation [27], beamforming [28], RF filtering [29,30] and MWP links [31] have been demonstrated in integrated photonic form very recently. Keeping up with these approaches, we recently reported a CMOS-compatible photonic chip frequency discriminator [31] consisting of optical ring resonators (ORRs) in an add-drop configuration. These ORRs are fully programmable using a thermo-optical tuning mechanism. The discriminator chip was initially implemented in a phase-modulated MWP link [31] to perform PM-IM conversion, leading to a high dynamic range analog photonic link with a simple direct detection scheme instead of complex coherent detection.
In this paper we use the photonic chip discriminator reported in [31] to perform the PM-IM conversion required in a photonic IR-UWB generator. We experimentally demonstrate the generation of Gaussian monocycles, doublets and modified doublets [11] that comply with the FCC regulation for indoor UWB transmission [32]. The approach of using a ring resonator as a frequency discriminator for UWB pulse generation has previously been reported by Liu et al [6,23]. In those works, the PM-IM conversion was achieved using the linear region of the power response of an all-pass microring resonator, and the generated pulses were limited to Gaussian monocycles. In this paper we use the through or drop responses of an ORR to generate monocycle pulses and the cascade of different ORR responses to generate the doublets and the modified doublets. To the best of our knowledge, this is the first demonstration of IR-UWB pulse generation beyond simple monocycles using an integrated photonic chip. As explicitly stated in several previously reported works [11,20-22], the approach of IR-UWB pulse generation using an integrated photonic chip instead of discrete components is highly desirable.
Theoretical analysis of UWB generation
A typical signal commonly considered as the basis function in IR-UWB transmission is the Gaussian pulse, expressed as [32]

g(t) = A exp(−t² / (2σ²)), (1)

where A and σ are the amplitude and the spread factor of the Gaussian pulse, respectively. As mentioned earlier, most of the approaches reported for photonic generation of IR-UWB pulses aim at creating higher-order derivatives of the pulse in Eq. (1) in order to reduce the low-frequency components of the pulse and comply with the FCC regulation. The normalized power spectral density (PSD) of the nth-order derivative of the pulse in Eq. (1) can be expressed as [32]

S_n(ω) ∝ ω^{2n} exp(−σ²ω²), (2)

where ω = 2πf is the angular frequency and n is the order of derivation. Substituting n = 0, 1 or 2 into Eq. (2) gives the PSD of the Gaussian pulse, the monocycle or the doublet, respectively. The PSDs of these pulses, with σ = 50 ps, are plotted in decibels in Fig. 1(a). As seen in this figure, the monocycle and the doublet can be obtained by filtering the spectrum of the Gaussian pulse. In this case, the nth-order derivative PSD can be written as

S_n(ω) = |H_n(ω)|² S_0(ω), (3)

where S_0(ω) is the PSD of the Gaussian input and H_n(ω) is the transfer function of the filter used to generate the nth-order derivative. The squared magnitudes of the transfer functions calculated from Eqs. (2) and (3) are plotted in Fig. 1(b). It can be seen that the filter attenuates the low-frequency components of the input Gaussian pulse. For this reason, the resulting higher-order derivative pulses are better suited for transmission with antennas and comply better with the FCC regulation [32]. Thus, in order to generate the monocycle or the doublet, one must synthesize the magnitude responses in Fig. 1(b). This filtering technique for photonic IR-UWB generation has been investigated in the past [3,7,14-18]. In this case, the approach essentially reduces to synthesizing a microwave photonic filter (MPF) with an electrical transfer function approximating the filter responses in Fig. 1(b), or even more complicated responses [17]. In this work we demonstrate that the PM-IM conversion using the frequency discriminator chip constitutes an MPF with a transfer function that can be programmed to generate the monocycle, the doublet and potentially more complex responses.
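The pulses and spectra of figure 1 can be reproduced numerically; the sketch below builds the Gaussian, monocycle and doublet for σ = 50 ps and estimates their PSDs via an FFT (Python/NumPy assumed):

import numpy as np

sigma = 50e-12                           # s
t = np.arange(-1e-9, 1e-9, 1e-12)        # 1 ps grid
g = np.exp(-t**2 / (2 * sigma**2))       # Gaussian pulse (A = 1)
mono = np.gradient(g, t)                 # first derivative: monocycle
doublet = np.gradient(mono, t)           # second derivative: doublet

def psd_db(x, dt=1e-12):
    X = np.fft.rfft(x)
    p = np.abs(X) ** 2
    return np.fft.rfftfreq(len(x), dt), 10 * np.log10(p / p.max())

f1, S1 = psd_db(mono)
f2, S2 = psd_db(doublet)
# The monocycle PSD peaks near 1/(2*pi*sigma) ~ 3.2 GHz; the doublet peak
# sits sqrt(2) higher, i.e. the low-frequency content is further suppressed.
print(f1[np.argmax(S1)] / 1e9, f2[np.argmax(S2)] / 1e9)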
The photonic chip frequency discriminator
The schematic of the photonic chip discriminator used in this work is shown in Fig. 2a. The chip consists of five add-drop ring resonators. However, in this work only three of the rings (Rings 1-3) are used, while Rings 4 and 5 are tuned out of resonance to show an all-pass filter response and hence play no role in the experiments. The ORRs have a free spectral range of 21.5 GHz and are fully reconfigurable in terms of resonance frequency and quality factor using a thermo-optical tuning mechanism. The tuning speed to fully reconfigure the ORR response is on the order of 0.5 ms. During the experiments the chip is temperature stabilized to maintain the desired response. For ease of measurement, the chip has been packaged: electrical connections to a pair of PCBs have been established (for tuning purposes) and fiber array units have been connected to the optical inputs and outputs. The measured fiber-to-chip coupling loss is 12.5 dB/facet. The loss is mainly attributed to the absence of spot size converters at the chip facets [31]. A photograph of the packaged frequency discriminator is shown in Fig. 2b. The details of the chip fabrication (waveguide technology, ORR specifications) and characterization (waveguide propagation loss, fiber-to-chip coupling loss) have been reported in [31].
The ORRs in the chip are arranged such that two outputs can be used simultaneously in the experiments. At Out 1, one can observe three different responses depending on the programmed settings, namely the drop response of Ring 1, the through response of Ring 2, or the cascade of these responses. Similarly, at Out 2, the individual through responses of Ring 1 and Ring 3, as well as the cascade of these responses, can be obtained. The programmability of the output responses of the chip is very high, since each ORR can be independently configured in terms of resonance frequency and quality factor (Q-factor). The chip is then programmed to synthesize the output responses that lead to the generation of IR-UWB pulses.
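To make this programmability concrete, the following sketch models the through and drop responses of a single add-drop ORR and the cascade of two through responses, using the standard add-drop ring equations. The coupling coefficients, round-trip loss and detuning below are illustrative placeholders, not the parameters of the actual chip.

```python
# Sketch of add-drop ORR through/drop power responses and a two-ring cascade.
# t1, t2: self-coupling; a: round-trip amplitude; detune: resonance offset.
import numpy as np

FSR = 21.5e9                          # free spectral range from the text, Hz
f = np.linspace(-0.5, 0.5, 2001) * FSR
phi = 2 * np.pi * f / FSR             # round-trip phase

def ring(phi, t1=0.95, t2=0.95, a=0.99, detune=0.0):
    """Field transmission of one add-drop ring (through and drop ports)."""
    p = np.exp(1j * (phi + detune))
    through = (t1 - t2 * a * p) / (1 - t1 * t2 * a * p)
    k1, k2 = np.sqrt(1 - t1**2), np.sqrt(1 - t2**2)
    drop = -k1 * k2 * np.sqrt(a) * np.exp(1j * (phi + detune) / 2) \
           / (1 - t1 * t2 * a * p)
    return through, drop

thr1, drop1 = ring(phi)                 # Ring 1 on resonance
thr3, _ = ring(phi, detune=0.4)         # Ring 3, resonance shifted by tuning
cascade = thr1 * thr3                   # cascade of through responses (Out 2)

print(f"Ring 1 through notch: {20*np.log10(np.abs(thr1).min()):.1f} dB")
print(f"Ring 1 drop peak:     {20*np.log10(np.abs(drop1).max()):.1f} dB")
print(f"cascade notch:        {20*np.log10(np.abs(cascade).min()):.1f} dB")
```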
Experiments
The schematic of the measurement setup is shown in Fig. 3. The light from a high-power DFB laser (EM4 Inc.) is phase modulated in a 10-GHz phase modulator (Covega Mach-10) with electrical Gaussian pulses generated by a pulse pattern generator (PPG) (Anritsu MP17633). The PPG is driven at a rate of 4 Gb/s with a fixed pattern of one "1" per 64 bits, which is equivalent to a Gaussian pulse train with a repetition rate of 62.5 MHz. This relatively low rate was chosen such that the output pulse from the system could still be measured using an oscilloscope (Agilent 54850) with a maximum frequency of 4 GHz (an oscilloscope with a higher frequency range was not available during the experiments). In this case the theoretical Gaussian pulse full width at half-maximum (FWHM) is about 250 ps. Later on, when the compliance of the shaped pulse with the FCC spectral mask is investigated, the PPG is driven at a bit rate of 12.5 Gb/s with a pattern of one "1" per 64 bits. In this case the Gaussian pulse FWHM is 80 ps. In all cases the peak-to-peak amplitude (V_pp) of these pulses is set to 2.5 V. The phase-modulated signal is then converted into intensity modulation in the discriminator photonic chip. To overcome the loss in the optical chip, a pair of EDFAs is used at the outputs of the discriminator prior to the photodetectors. Here we use a 10-GHz balanced photodetector (BPD, DSC 710) whose inputs are addressed one at a time. To measure the spectrum of the shaped pulses, an RF spectrum analyzer (RFSA, Agilent MXA N9020A) is used.

Fig. 3. Schematic of the measurement setup used to demonstrate the pulse shaping. The discriminator chip is configured to shape the Gaussian pulses modulated onto the phase of the optical carrier into a monocycle or a doublet. To overcome the fiber-to-chip coupling loss a pair of EDFAs is placed at the chip output. DFB: distributed feedback laser, PPG: pulse pattern generator, PM: phase modulator, PD: photodetector, RFSA: RF spectrum analyzer.
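The pulse-train numbers quoted above follow from the PPG settings alone; a minimal sketch of the arithmetic:

```python
# A fixed pattern with one "1" per 64 bits turns the PPG bit rate into the
# pulse repetition rate, and the bit slot sets the nominal Gaussian FWHM.
for bit_rate in (4e9, 12.5e9):
    rep_rate = bit_rate / 64          # one pulse every 64 bit slots
    slot = 1 / bit_rate               # a "1" occupies a single bit slot
    print(f"{bit_rate/1e9:>5.1f} Gb/s -> repetition {rep_rate/1e6:.3f} MHz, "
          f"bit slot {slot*1e12:.0f} ps")
# 4 Gb/s gives 62.5 MHz and 250 ps; 12.5 Gb/s gives 195.3 MHz and 80 ps.
```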
Monocycle generation with one ORR
To generate monocycle pulses, the photonic chip is programmed such that only one ORR is in resonance (in this case Ring 1 of Fig. 2a). The other ORRs are tuned out of resonance. We select the through response of Ring 1, observed at Out 2 of the chip (Fig. 2a), to perform the PM-IM conversion. In principle, the drop response (from Out 1) should yield similar results. A technique similar to that reported in [31] has been used to characterize the static (i.e. without modulation) response of the ORR. The result is depicted in Fig. 4a, which clearly shows the notch-filter behavior of the ORR through response. This static characteristic is used to set the laser wavelength for operation in the desired region of the filter response. Next, the laser light is phase modulated with a Gaussian electrical pulse train as described in the previous section. The waveform and the PSD of the input Gaussian pulse are depicted in the inset of Fig. 3. The FWHM of the Gaussian pulse is measured to be 237 ps and the spectral content reaches 4 GHz. To achieve PM-IM conversion, the laser wavelength is aligned to each of the linear slopes (positive and negative slopes in Fig. 4a) of the through response. Depending on the slope used for the PM-IM conversion, Gaussian monocycles with opposite polarities are generated. The measured waveforms of these monocycles are shown in Fig. 4b. The PSD of the positive-polarity monocycle is then measured; the result is shown in Fig. 4c. The measured response is compared to the theoretical PSD of a monocycle obtained from Eq. (2) by substituting n = 1 and σ = 110 ps. A proper attenuation factor has been chosen to obtain a good fit. The measured PSD shows good agreement with the theoretical response. Next, we measure the electrical transfer function from the input of the phase modulator to the output of the photodetector using a vector network analyzer (Agilent PNA N5230A). This constitutes the squared magnitude response of a microwave photonic filter (MPF), as suggested in Eq. (3). The measured result is compared to the calculated MPF response for monocycle generation (Eq. (3) and Fig. 1b). These results are shown in Fig. 4d. The measurement and the calculated response show a similar trend and a good match at lower frequencies (below 3 GHz). Deviations are observed at higher frequencies, which might come from the limited range of the linear region of the ORR response. As analyzed in [31], this is directly limited by the waveguide propagation loss in the optical chip (1.2 dB/cm at 1550 nm). However, a much lower propagation loss of below 0.1 dB/cm has already been achieved using the same waveguide technology as the one used in our photonic chip [33]. Besides the MPF theory, a time-domain approach can also be used to understand the process of monocycle generation with this frequency discriminator chip. As previously reported [6,23], PM-IM conversion with a ring resonator can be regarded as temporal differentiation of the input electrical pulse. This is illustrated in Fig. 5a. Since the phase modulator is driven with a Gaussian pulse, the instantaneous phase of the optical signal, φ(t), resembles a Gaussian shape. The instantaneous frequency, being the first-order derivative of φ(t), takes the shape of a monocycle. This instantaneous change in frequency is linearly transferred into an instantaneous change in optical power via the linear response of the ORR, which essentially acts as a frequency discriminator. This results in a monocycle electrical pulse after photodetection. As explained before, this behavior can also be described in the frequency domain as a simple filtering of the input Gaussian PSD, S_0(ω), to produce the PSD of the monocycle, S_1(ω), at the output. This is illustrated in Fig. 5b.
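A minimal numerical illustration of this time-domain picture: with a Gaussian φ(t), the instantaneous frequency ν(t) = (1/2π) dφ/dt is a monocycle with a zero crossing at the pulse centre and extrema at ±σ. The amplitude scaling and the value of σ below are arbitrary.

```python
# Sketch: phase modulation with a Gaussian pulse followed by an ideal linear
# frequency discriminator yields a monocycle after photodetection.
import numpy as np

t = np.linspace(-1e-9, 1e-9, 2001)
dt = t[1] - t[0]
sigma = 100e-12

phase = np.exp(-t**2 / (2 * sigma**2))             # Gaussian phase phi(t), a.u.
inst_freq = np.gradient(phase, dt) / (2 * np.pi)   # nu(t) = (1/2pi) dphi/dt

# An ideal discriminator with slope s maps frequency deviation onto power,
# P(t) ~ P0 + s*nu(t), so the detected pulse is a scaled monocycle.
monocycle = inst_freq / np.abs(inst_freq).max()
print("zero crossing at t = 0:", abs(monocycle[len(t) // 2]) < 1e-6)
print("extrema near +/-sigma:", t[np.argmax(monocycle)], t[np.argmin(monocycle)])
```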
Doublet and modified doublet generation with cascaded ORRs
To generate Gaussian doublets, the photonic chip is programmed such that two ORRs, Ring 1 and Ring 3, are in resonance. The cascade of the two through responses of these ORRs is observed at Out 2. By means of tuning the phase shifters on the rings [31], the resonance frequency of Ring 3 can be brought closer to the resonance frequency of Ring 1. The measured cascade response is shown in Fig. 6a, together with the simulated response as well as the individual through responses of Ring 1 and Ring 3. Next, the laser wavelength is tuned to the position indicated in Fig. 6a. The electrical signal from Out 2 after photodetection is a Gaussian doublet with the waveform shown in Fig. 6b. The width of the pulse is 162 ps. The measured PSD of this pulse is shown in Fig. 6c. The measured spectrum is compared with the calculated doublet spectrum from Eq. (2), substituting n = 2 and σ = 110 ps (shown as the envelope in Fig. 6c). The measured MPF electrical transfer function of the system programmed for doublet generation is depicted in Fig. 6d. The measured values (thick line) are compared with the calculated response (thin line). As observed in the monocycle generation, a deviation occurs at higher frequencies (beyond 3 GHz). This might be attributed to the limited frequency range of the response in Fig. 6a usable for the doublet generation.
In previously reported investigations of photonic generation of IR-UWB pulses, compliance with the FCC spectral mask is often emphasized. In this case it is desired to fill the spectral mask efficiently without violating it. Recently, Abraha et al. [11] reported a scheme where an FCC-compliant pulse is generated using a linear combination of modified doublet pulses. The modified doublet is a doublet variant in which the amplitude ratio between the positive and negative parts of the doublet pulse is slightly modified [11]. In the frequency domain, this type of pulse can be distinguished from a conventional doublet by a notch in the lower-frequency region of its PSD. Theoretically, the PSD of the modified doublet can be expressed as

S_2,k(ω) ∝ [kσ²ω² − (k − 1)]² exp(−σ²ω²),  (4)

where k is an arbitrary scaling parameter. In the case k = 1 the pulse becomes a conventional doublet. In order to demonstrate the generation of a modified doublet and to check the spectral compliance with the FCC indoor mask, a higher bit rate (12.5 Gb/s) from the PPG is used. The chip is then carefully tuned while observing the output PSD of the generated pulses. This procedure is equivalent to adjusting the scaling parameter k such that the notch in the pulse PSD is aligned with the notch in the FCC mask that corresponds to the GPS frequency band (0.96-1.61 GHz) [32]. The measured PSD is depicted in Fig. 7, together with the calculated PSD from Eq. (4). The best fit of the measured and the calculated PSDs is obtained with k = 1.24 and σ = 50 ps. As can be seen from the figure, the resulting pulse does not fully comply with the FCC regulation. This is expected, since theoretically the modified doublet by itself does not fit the FCC mask. The linear combination of two such pulses with opposite polarity and a time delay between them will eventually satisfy the FCC regulation [11]. To the best of our knowledge, however, this is the first demonstration of modified doublet generation using an integrated photonic chip approach.

Fig. 7. The power spectrum of a modified doublet pulse generated from the cascade of through responses of two ORRs. The envelope is calculated using Eq. (4).
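With Eq. (4) in the form given above, the PSD notch occurs where the bracketed term vanishes, i.e. at ω = sqrt((k − 1)/k)/σ, which requires k > 1. A two-line check with the best-fit values confirms that the notch lands inside the GPS band:

```python
# Notch frequency of the modified-doublet PSD, Eq. (4): the bracket
# k*sigma^2*w^2 - (k - 1) vanishes at w = sqrt((k - 1)/k)/sigma.
import numpy as np

k, sigma = 1.24, 50e-12                       # best-fit values from the text
f_notch = np.sqrt((k - 1) / k) / sigma / (2 * np.pi)
print(f"notch at {f_notch/1e9:.2f} GHz")      # ~1.4 GHz, inside 0.96-1.61 GHz
```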
Simultaneous generation of two modified doublets
So far we have demonstrated the generation of a wide variety of UWB pulses from one output of the photonic chip (Out 2). The other output, which consists of the drop response of Ring 1 and the through response of Ring 2, can also be programmed to generate UWB pulses. In fact, the two outputs can be programmed to yield UWB pulses simultaneously. This is particularly interesting since it has been demonstrated that FCC-compliant pulses can be generated by means of a linear combination of two time-delayed UWB pulses, such as modified doublets [11] or asymmetric monocycles [25]. Thus, it is desirable to have a pair of UWB pulses from the same system that can later be delayed and linearly combined to create more complex pulses. For the sake of demonstration, we programmed the photonic chip such that both outputs simultaneously produce two modified doublets (Figs. 8 and 9). As has previously been demonstrated, these two modified doublets can then be mutually delayed and linearly combined to create a more complex (and FCC-compliant) pulse. It is customary to generate this mutual delay with a length difference between two optical fibers [11]. It would be very interesting, however, to combine the UWB generation technique reported in this work with on-chip delay generation. This, however, requires ultra-low-loss optical waveguides as delay lines. Recently, such a delay line with a propagation loss as low as 0.7 dB/m has been demonstrated using high-index-contrast stoichiometric silicon nitride waveguides [34]. Combined with the work presented here, such CMOS-compatible delay lines will be very relevant for low-cost and compact photonic IR-UWB transmitters.
Conclusions
The generation of IR-UWB pulses using a photonic chip frequency discriminator has been reported. The high degree of programmability of the chip allows the generation of a wide variety of pulses, such as opposite-polarity monocycles and conventional and modified doublets. The generation process has been analysed using both time-domain (temporal differentiation) and frequency-domain (microwave photonic filtering) approaches. We believe this is the first time the generation of UWB pulses beyond monocycles has been demonstrated with an integrated photonic chip. The reported work is very relevant for the development of low-cost photonic IR-UWB pulse shapers and transmitters. The possibility of adapting this pulse-generation technique to various modulation formats will be investigated.
Fig. 1. (a) The power spectral densities (PSDs) of the Gaussian, monocycle and doublet pulses (Eq. (2)) for σ = 50 ps. (b) Squared magnitude of the filter response that shapes the spectrum of the input Gaussian pulse into the spectrum of the monocycle or the doublet.
Fig. 2. The photonic chip frequency discriminator used in this work. (a) Chip schematic. Three ORRs are used to simultaneously generate the IR-UWB pulses from Out 1 and Out 2. (b) The packaged photonic chip with fiber array units and wire-bonded PCBs (from [31]).
Fig. 4. Measurement results on the monocycle generation with one ORR. (a) Measured Ring 1 through response. (b) Waveforms of the generated monocycles. (c) Power spectral density of the positive-polarity monocycle. (d) Comparison of the theoretical and the measured transfer function of the microwave photonic filter synthesized for the monocycle generation.
Fig. 5. Two ways to explain the monocycle generation using the ORR frequency discriminator. (a) Time-domain approach, where the instantaneous frequency change of the optical carrier is linearly transferred to the intensity modulation via linear frequency discrimination. (b) Frequency-domain approach, where the monocycle is generated via MPF spectral filtering of the input Gaussian pulse.
Fig. 6. Measurement results on the doublet generation with a cascade of two ORRs. (a) Measured Ring 1 and Ring 3 through responses depicted together with simulation results. (b) Waveform of the generated doublet. (c) Power spectral density of the generated doublet compared with the theoretical response. (d) Comparison of the theoretical and the measured transfer function of the microwave photonic filter synthesized for the doublet generation.
Fig. 8. The photonic chip discriminator response used for simultaneous generation of two modified doublets from Out 1 and Out 2. (a) Simulated responses. (b) Measured responses. The relative position of the laser frequency is indicated by the arrow.
Fig. 9. Measurement results on the simultaneous generation of two modified doublets from Out 1 and Out 2. (a) Waveforms of the generated pulses. (b) Power spectral density of the generated modified doublets.
"Physics"
] |
Reliable computational quantification of liver fibrosis is compromised by inherent staining variation
Abstract Biopsy remains the gold‐standard measure for staging liver disease, both to inform prognosis and to assess the response to a given treatment. Semiquantitative scores such as the Ishak fibrosis score are used for evaluation. These scores are utilised in clinical trials, with the US Food and Drug Administration mandating particular scores as inclusion criteria for participants and using the change in score as evidence of treatment efficacy. There is an urgent need for improved, quantitative assessment of liver biopsies to detect small incremental changes in liver architecture over the course of a clinical trial. Artificial intelligence (AI) methods have been proposed as a way to increase the amount of information extracted from a biopsy and to potentially remove bias introduced by manual scoring. We have trained and evaluated an AI tool for measuring the amount of scarring in sections of picrosirius red‐stained liver. The AI methodology was compared with both manual scoring and widely available colour space thresholding. Four sequential sections from each case were stained on two separate occasions by two independent clinical laboratories using routine protocols to study the effect of inter‐ and intra‐laboratory staining variation on these tools. Finally, we compared these methods to second harmonic generation (SHG) imaging, a stain‐free quantitative measure of collagen. Although AI methods provided a modest improvement over simpler computer‐assisted measures, staining variation both within and between laboratories had a dramatic effect on quantitation, with manual assignment of scar proportion being the most consistent. Manual assessment also most strongly correlated with collagen measured by SHG. In conclusion, results suggest that computational measures of liver scarring from stained sections are compromised by inter‐ and intra‐laboratory staining. Stain‐free quantitative measurement using SHG avoids staining‐related variation and may prove more accurate in detecting small changes in scarring that may occur in therapeutic trials.
Introduction
Histological assessment of liver scarring is a pivotal endpoint for determining efficacy of potential antifibrotic therapies in clinical development. Conventional scoring systems encompass broad architectural distribution of fibrosis rather than reflecting only the amount of scar deposition [1] and rely on subjective interpretation by a trained pathologist. Therefore, subtle but potentially clinically significant improvements in histology that may predict endpoints such as portal hypertension and liver function may not be reliably captured.
Picrosirius red (PSR) staining is established as the most reliable method for visualising fibrosis in a liver biopsy, shows concordance with other measures of collagen deposition [2], and may show less staining variation than observed in trichrome staining or immunohistochemistry [1]. As measuring the intensity of a single colour on a slide is an extremely simple metric, it lends itself to computerised measurement that removes intra- and inter-observer variation introduced by a pathologist score [3,4]. This has led to the development of several computer-aided methodologies, with varying degrees of success [5,6]. These tools are often described as automated morphometry or collagen proportionate area (CPA) measurement and generally rely on tinctorial staining (Verhoeff's Van Gieson or PSR) to stain elastin and/or collagen fibres. Digital scans of these stained slides are then made and, by using a colour space threshold based on the hue, saturation, and brightness (HSB), a quantitative assessment of collagen or elastin over the entire section can be made. Such methods have been used to demonstrate differences between groups in translational or clinical research studies where staining can be undertaken in a tightly controlled, single/minimal batch manner by a single laboratory [5,7].
The relative ease and declining cost of both acquiring and storing whole-slide images mean that there is now a significant amount of histological data available that can be mined by machine learning algorithms. As opposed to CPA and associated techniques, machine learning enables the characterisation of 'sub-visual' features of a slide: information that would not be consciously captured by a pathologist or simple computational methods [8,9]. Machine learning methods can either be applied as a more sophisticated form of segmentation, whereby an algorithm is taught to distinguish features of a slide rather than using simple thresholds based on colour [10,11], or be used to correlate complex histopathological features with clinical outcomes [12].
In addition to artificial intelligence (AI) methods, stain-free second harmonic generation (SHG) and two-photon excited fluorescence (TPEF) microscopy have been proposed as tools to enable a more accurate and objective assessment of a liver biopsy that is not influenced by staining quality [13]. SHG light is only generated by non-centrosymmetric molecules such as collagen; therefore, by exposing a tissue specimen to a laser and measuring the polarised light produced, an assessment of the amount and distribution of collagen can be made. SHG can be used in conjunction with TPEF microscopy, enabling visualisation of background liver tissue at the same time as collagen [14]. Currently unaffordable for routine diagnostic use, SHG/TPEF imaging can provide an accurate stain-free quantitative measurement of fibrosis on a biopsy [15].
In this exploratory study, we have compared the performance of an AI methodology with simple thresholding and manual assessment in quantifying scar proportion in PSR-stained sections of liver, alongside a stain-free method of scar quantification. For widespread application in large clinical trials or routine clinical practice, the ideal method of scar quantitation from stained sections must be robust to staining variation both between and within laboratories, where sections must be stained daily rather than as study-specific batches. We have used sequential sections from the same blocks, stained on two separate occasions at two independent National Health Service (NHS) clinical pathology laboratories. In the absence of a 'ground truth', the performance of different methods of scar quantitation has been evaluated by the consistency of derived metrics of scar amount across the set of sequential stained sections, testing both inter- and intra-laboratory effects, i.e. an optimal method would produce the same 'result' from each of the four sections from the same block stained on two separate occasions in two independent NHS laboratories. Finally, the stain-based measurement methods of scar quantification were compared to stain-free SHG/TPEF imaging, which gives a similar readout of the amount of collagen on a slide but is not subject to bias relating to either laboratory protocols or stain interpretation. Specifically, the measurement of fibrosis-related parameters in liver tissue by SHG is highly reproducible when test-retest performance has been evaluated [16].
The prevailing orthodoxy is that machine learning methods will provide a significant performance improvement over both simple colour space methods and human measurement, but we demonstrate significant challenges that must be overcome if AI methods are to be applied to histopathology in large multicentre studies and clinical practice.
Human tissue acquisition and staining
Anonymised unstained formalin-fixed paraffin-embedded sections from 20 cirrhotic explant livers (four cases each with alcoholic liver disease, non-alcoholic fatty liver disease, chronic hepatitis C virus infection, primary sclerosing cholangitis, and primary biliary cholangitis as the stated primary aetiology) were provided after approval by the Lothian NRS Human Annotated Bioresource with permission granted under authority from the East of Scotland Research Ethics Service REC 1, reference 15/ES/0094.
From each block, an initial five adjacent 5 μm sections were cut for staining in Nottingham and Edinburgh. A single section from each case was PSR stained according to standard local protocols within two CPA UK-accredited NHS pathology laboratories: Nottingham University Hospitals NHS Trust Queen's Medical Centre Pathology Department and NHS Lothian Department of Laboratory Medicine at the Royal Infirmary of Edinburgh (see Supplementary materials and methods). To assess intra-laboratory variation, staining of each case was repeated at both laboratories 6 months later using another section of the initial set from the same block, generating four stained sets of slides in total. Finally, further sections were cut from the same blocks and stained in Nottingham, where the standard staining protocol was unchanged, within 1 week of sectioning to further evaluate intra-laboratory staining variation; the standard protocol for PSR staining in Edinburgh had changed after the two rounds of staining, so an additional round of staining was not undertaken.
Image acquisition and processing
Stained sections were scanned using identical NanoZoomer scanners (Hamamatsu Photonics, Shizuoka, Japan) at ×20 magnification. The raw scanned .ndpi whole-slide images were split by ndpisplit [17] into ×5 magnification 1,000 × 1,000 pixel tiles in TIFF format. As the scans contained the entire slide, including areas not containing tissue, simple thresholding was used to isolate the tissue from each tile and discard empty space and debris contained in each scan. The script used to isolate tissue is included in supplementary material, File S1. Overview images at ×1.25 were exported from the raw .ndpi file and are available from the University of Nottingham Research Data Repository (https://rdmc.nottingham.ac.uk/handle/internal/9133).
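The actual tissue-isolation script is provided as supplementary File S1; purely as an illustration of the kind of simple thresholding described above, a minimal sketch might look as follows. The near-white cutoff value is a placeholder, not the study's.

```python
# A minimal sketch of near-white background removal on a single tile,
# assuming an RGB TIFF as produced by ndpisplit.
import numpy as np
from PIL import Image

def tissue_mask(tile_path, white_cutoff=220):
    """Return the tile as an RGB array plus a boolean mask of tissue pixels."""
    rgb = np.asarray(Image.open(tile_path).convert("RGB"))
    background = (rgb > white_cutoff).all(axis=-1)   # pixels close to white
    return rgb, ~background

# Usage (hypothetical tile name):
# rgb, mask = tissue_mask("tile_0001.tif")
# print("tissue fraction of this tile:", mask.mean())
```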
Manual scoring
All livers were cirrhotic, so the application of traditional ordinal scores of architecture provided no inter-case discrimination. Instead, whole-slide images of each section were stripped of all identifiers and randomly numbered. Four participants (two qualified pathologists and two non-clinical researchers) provided their assessment of the percentage of tissue on each slide that was PSR positive, scoring each batch of 20 slides with a 'washout' period of at least 1 day in between different, randomly ordered batches.
HSB colour space scar quantification
Each background-cleaned tile was classified by two separate HSB colour space thresholds to calculate the number of pixels representing total tissue and the number of PSR-positive pixels. The threshold values to determine PSR-positive pixels were derived by selecting positive pixels within a representative tile and then testing on representative tiles from all cases, adjusting and iterating by eye until the most consistent thresholding was achieved. A single set of threshold values was applied to tiles from all stained image sets.
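As an illustration of this approach, a minimal HSB (HSV) thresholding sketch could look like the following; the threshold values here are hypothetical placeholders, since the study's values were tuned by eye on representative tiles and are not reproduced in the text.

```python
# Sketch of colour-space scar quantification: fixed HSV thresholds pick out
# PSR-positive (red) pixels and remaining tissue pixels in one tile.
import numpy as np
from matplotlib.colors import rgb_to_hsv

def psr_percentage(rgb, red_hue=(0.95, 0.05), min_sat=0.3, max_val=0.9):
    """PSR-positive pixels as a percentage of all tissue pixels in one tile."""
    hsv = rgb_to_hsv(rgb / 255.0)
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    red = (h >= red_hue[0]) | (h <= red_hue[1])   # hue wraps around red
    psr = red & (s >= min_sat)
    tissue = (v <= max_val) & ~psr                # stained, non-PSR pixels
    total = psr.sum() + tissue.sum()
    return 100.0 * psr.sum() / total if total else 0.0

# rgb would be an (H, W, 3) uint8 array from a background-cleaned tile.
```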
Classifier development in WEKA
Following the pre-processing described above, the Waikato Environment for Knowledge Analysis (WEKA) [18], an open-source Java-based tool available as a plugin within Fiji [19], was used to build a PSR classifier. Tiles were randomly selected from the data set and used to train the classifier. Classes were defined as 'Space' (empty space surrounding extracted portions of tissue), 'Lumen', 'PSR positive', and 'Tissue'. Areas of each tile were selected using a graphics tablet and manually defined as one of the four classes. WEKA was set up to use mean, minimum, maximum, median, and variance as training features for the selected pixels. The balance classes setting was used to account for differences in the amount of training data used for each of the four classes. Once the areas of each tile were defined, these training data were used by WEKA to extrapolate across the entire tile, giving an image segmented into four colours based on the defined classes. Training was then repeated on each tile until it was segmented into the four classes accurately, as judged by a pathologist. Training was then continued using at least one tile from each of the 20 slides in the study. Following training, a script was used to apply the classifier to every tile in the study and count the number of pixels in each class. This script is included in supplementary material, File S2. PSR positivity was defined as the number of PSR-positive pixels divided by the total number of PSR-positive, tissue, and lumen pixels, expressed as a percentage.
Individual WEKA classifiers (WEKA_i) specific to each single set of images were trained by using only tiles from that stained image set. For combination WEKA classifiers (WEKA_c1 and c2), classifiers were trained using tiles drawn from all stained image sets.

SHG/TPEF imaging

SHG/TPEF imaging was carried out by Histoindex Pte Ltd (Singapore) using an unstained section from each of the 20 cases, at ×20 magnification. The raw SHG percentage (a measure of the amount of collagen) and qFibrosis, a score taking into account both the amount and distribution of collagen [20], were correlated with the other stain-based scoring methods.
Statistical comparisons between methods
The combination of metrics from the four stained sections of the initial set from each case (Edinburgh 1 and 2, E1 and E2; Nottingham 1 and 2, N1 and N2) and the three different measurement approaches resulted in six pairs of observations used to compare the different methods (Figure 1). Scores derived from the unstained set (SHG/TPEF) were correlated with scores from each measurement method on the E1-stained set. Spearman correlation coefficients were calculated for each pair of observations. Metrics from the freshly recut sections stained in Nottingham (rN3) were compared with those derived from N1 and N2 alone. Scores for each section using each of the measurement methods are included in supplementary material, File S3.
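The four stained sets give the six possible pairs (the number of ways to choose 2 from 4): two intra-laboratory (E1-E2, N1-N2) and four inter-laboratory. A sketch of the pairwise analysis, with illustrative numbers in place of the real per-case scores:

```python
# Pairwise consistency analysis: Spearman's rho over the six stain-set pairs.
# `scores` maps stain set -> per-case values for one scoring method.
from itertools import combinations
from scipy.stats import spearmanr

scores = {
    "E1": [12.1, 8.4, 15.0, 9.9],     # illustrative values, one per case
    "E2": [10.5, 9.1, 13.2, 8.8],
    "N1": [14.0, 7.2, 16.1, 10.5],
    "N2": [11.2, 8.0, 12.9, 9.4],
}

for a, b in combinations(scores, 2):          # six pairs in total
    rho, _ = spearmanr(scores[a], scores[b])
    kind = "intra" if a[0] == b[0] else "inter"
    print(f"{a}-{b} ({kind}-laboratory): rho = {rho:.2f}")
```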
Results
Inter- and intra-laboratory staining variation significantly reduces consistent PSR quantification using computational methods

Slides were stained at two centres (January 2018), followed by staining of a second set 6 months later at both centres (July 2018), using the next sections from each FFPE block, which had been cut at the same time at the start of the study, to allow evaluation of intra-laboratory staining variation in a routine, real-world clinical laboratory context. We observed substantial qualitative differences in the PSR colour and intensity in each batch stained, even when comparing slides stained at the same centre (Figure 2). Measurement of scar proportion using a single HSB colour space threshold applied to all staining sets (E1, E2, N1, and N2) showed large differences between derived values of PSR percentage for each given case. The Spearman's rank correlation of values was poor for both inter-laboratory (ρ = 0.26) and intra-laboratory (ρ = 0.19) stain pairs (Figure 3A).
A WEKA classifier was trained on, and applied to, each individual set of PSR-stained slides (E1, E2, N1, and N2) in isolation (WEKA_i). These individually trained classifiers produced no increase in consistency compared with the simple colour space thresholding method. Spearman's correlation coefficients were similarly low for both intra- (ρ = 0.24) and inter-laboratory stain sets (Figure 3B).

Figure 1. (A) Four sets of stained slides were generated (E1, E2, N1, and N2). The stained slides were then scored using three different methods (human, HSB, and WEKA). A fifth set of slides was sectioned and left unstained for SHG/TPEF imaging. (B) Each stained set of slides gives six measurement pairs that can be compared to assess inter- and intra-laboratory variation with each scoring method. A single set of stained slides (E1) was used as the comparator with the stain-free SHG/TPEF set.
A second, unified WEKA classifier was trained with images from both sites (WEKA_c1) and applied to all cases. Unified training marginally increased the consistency of the classifier, with slightly increased Spearman's correlation coefficients for both intra- (ρ = 0.31) and inter-laboratory (ρ = 0.29) stain sets (Figure 3C).
To explore the potential of using further training to iteratively improve classifier accuracy, sections that produced especially divergent results (inter-laboratory pairs displaying over 2× divergence in scoring) were used to further train an improved combined classifier (WEKA_c2). This targeted training led to further improvements in classifier consistency across all images, and increased Spearman's correlation coefficients for both intra- (ρ = 0.53) and inter-laboratory (ρ = 0.37) stain sets (Figure 3D).
By comparing the derived scar proportion (PSR-positive percentage of tissue) using each classifier applied to each set of stained images (E1, E2, N1, and N2), it is evident that both inter- and intra-laboratory staining differences have a significant impact on classification (Figures 2 and 3). Most importantly, the change in colour of the PSR stain led to a significant reduction in the number of pixels classified as PSR positive by certain classifiers (Figure 2). Some of this was corrected with further training (Figure 2, WEKA_c2). This staining variation also led to significant misclassification of non-PSR-positive tissue, in particular the incorrect classification of liver tissue as vessel lumen (Figure 2, Nottingham stain 2).
To explore whether the duration between block sectioning and staining could account for some of the intra-laboratory staining variation, a freshly cut set of sections from the same blocks was stained in Nottingham (rN3) in February 2021, where the PSR staining protocol remained unchanged. The derived scar proportion using the WEKA_c2 classifier from rN3 most closely correlated with the derived values from N1, suggesting that the duration of time between section preparation and staining was responsible for a proportion of the intra-laboratory variation in PSR staining (see supplementary material, Figure S1).

Figure 2. Representative illustration of intra- and inter-laboratory PSR staining differences and the effect on segmentation using HSB and WEKA classifiers, and comparison to SHG/TPEF imaging. WEKA features are coloured as purple = PSR positivity, yellow = lumen, green = tissue, and red = blank space. WEKA_c1: WEKA classifier trained on sections from both laboratories. WEKA_c2: WEKA classifier c1 with further targeted training on sections with greater than 2× divergence in PSR quantification between stain pairs. SHG/TPEF image is coloured as collagen in green/yellow and parenchyma in red.
Human assessment of scar proportion is significantly more consistent than computational methods
Although the current gold standard is ordinal scoring of architecture by a pathologist, the scales are crude. For example, all cases in this study were cirrhotic and so would be assigned the same score in any system used. Computational analysis is purported to outperform a human observer in determining the absolute amount of a feature of interest, such as the percentage of tissue that is PSR positive, and so such estimates by an observer are rarely used. However, we tested whether such confidence in computational methods, or more correctly scepticism about the performance of human observers, was valid. Four observers, two consultant pathologists and two research scientists, were asked to give an estimate of the percentage of the tissue that was PSR positive from randomly ordered low-power thumbnail images of each stain set (E1, E2, N1, and N2), repeating the process on renumbered image sets at least 24 h later. No prior training for the task was provided.
Against all expectations, the scar proportions given by each individual observer were much more consistent across the stain sets than those derived from any computational method (Figures 4 and 5), regardless of whether the scorer was a pathologist (hu1 and hu2) or a non-clinical researcher (hu3 and hu4). This indicates that an observer is much better able to compensate for variations in staining than any computational method.
Comparison of computational methods on stained sections with SHG/TPEF imaging
Having assessed an AI-based method against existing methods of stain-based measurement, we then compared these methods to commercially available, stain-free SHG/TPEF imaging (Figures 1 and 2). HSB colour space thresholding, the most consistent AI classifier (WEKA_c2), and the most consistent human scorer (hu1), based on median correlation between all stain pairs, were used as comparators with SHG for readouts on the E1 set. Scores were compared against both the raw SHG value (expressed as a percentage of the total amount of tissue scanned) and the qFibrosis score, which adjusts the SHG percentage based on its distribution across the scanned section. In both instances, human scoring gave the strongest correlation with SHG/TPEF quantification (Figure 6).
Discussion
In 2015, a report commissioned by the UK government's Minister for Digital and Culture outlined potential benefits and opportunities of AI and machine learning tools, including how these could be applied to health care [21]. In 2019, £250 m was invested in a National Artificial Intelligence Lab, to be based within NHSX [22]. Thus, there is a clear drive among both politicians and the largest technology companies in the world to apply AI methods wherever possible in medicine.
In the research setting, large, multi-centre trials using liver fibrosis as a primary efficacy endpoint currently rely on ordinal scores such as the non-alcoholic fatty liver disease (NAFLD) activity score or Ishak fibrosis stage. Even when best practices are followed (central review by more than one pathologist, central staining if practical, and consistency in biopsy technique) [23], there is potentially a significant amount of information lost through the use of these ordinal scoring systems. Ideally, computational methods including those using AI would provide a way to both extract information from liver biopsies that is not represented by ordinal scoring, whilst also removing the subjectivity inherent to the process of scoring.
However, there are several challenges that machine learning-based tools need to overcome if they are to be utilised in a clinical or research setting, including, but not limited to, a reliance on retrospective rather than prospective studies, the lack of standardisation to enable comparison between different AI tools, and the 'AI-chasm', a term defining the gulf between reported accuracy measurements of a given machine learning tool during development and its actual diagnostic efficacy when used in the field [24]. The AI-chasm problem was illustrated by a Google-developed tool for detecting diabetic retinopathy using scanned images of retinas, which displayed high accuracy during training but was significantly affected by inter-site variation when applied in a live setting [25]. A systematic review of AI tools published in 2019 highlighted that few studies make direct comparisons between a tool and healthcare professionals, and even fewer use external validation. In studies where external validation was compared to internal validation, internal validation was shown to overestimate the effectiveness of AI compared to healthcare professionals [26]. Using readily accessible, open-source machine learning tools to measure a simple histological feature, we have demonstrated that staining variation both within and between laboratories will pose significant challenges if these tools are to be applied even in the tightly controlled environment of multi-centre studies.

Figure 6. Using a single stained set of slides (E1), the stain-based scoring methods were compared to percentage SHG measured using stain-free SHG/TPEF imaging and the qFibrosis index derived from the measured parameters. WEKA_c2: WEKA classifier c1 with further targeted training on sections with greater than 2× divergence in PSR quantification between stain pairs.
As there is a lack of an established gold standard for measuring the amount of scarring in a liver biopsy, we have used consistency of the derived scar proportion across the four sets of stained sections as the metric to assess the performance of each method. Methods that are more robust to staining variation will produce more consistent values and give a tighter correlation between stain pairs.
Our study demonstrates that a trained AI-based method does increase consistency compared with simple colour space thresholding, an increase in performance that is enhanced by further training. As expected, this increase in consistency was higher between intra-laboratory stain pairs, with protocol and environmental differences between laboratories more likely to produce significant changes in staining compared to reagent changes within a single laboratory. However, there was still considerable residual inconsistency in the calculated scar proportion and human observers were easily able to outperform these methods, despite the task putatively favouring computational methods. As observed for the computer-based scoring methods, the human scores also showed a slightly higher consistency between intra-laboratory pairs compared to inter-laboratory pairs (Figure 4).
The age of a histological section is known to affect a variety of stains [27]; therefore, staining was repeated in the Nottingham laboratory on a freshly cut set of sections (rN3), ensuring 1 week between sectioning and staining. Classifying with the most consistent AI classifier (WEKA_c2) produced significant correlations between the newly stained set rN3 and both the N1 and N2 stains, with a closer correlation observed between the N1 sections and the rN3 set (see supplementary material, Figure S1). As the time between sectioning and staining was greater for N2 than for N1 sections, this indicates that section age may contribute to intra-laboratory variation if a standard interval between sectioning and staining is not used. The other sources of intra-laboratory staining variation can only be speculated upon, but may include inter-operator differences in the application of hand-staining protocols, the age of reagents, and seasonal and diurnal variation in laboratory air and water temperatures.
We present this not to suggest that by-eye estimations of scarring should be used but to highlight that staining variation is an inevitable factor in real-world laboratories. Whilst iterative training will undoubtedly increase the consistency of methods used to assess scarring in stained sections and more sophisticated tools are in development to effectively allow 'normalisation' between multiple staining sites to attempt to account for the variation introduced [28,29], stain-free methods that are not affected by such variation should be considered. Existing quality control efforts in histopathology focus on maintaining consistency particularly with regard to immunohistochemistry, where there is a greater variation in staining protocols and reagents compared to tinctorial staining. This study indicates that similar efforts (protocol standardisation, the use of tissue controls, and colour calibration) would be required if AI-assisted scoring of tinctorial stains is to be applied widely.
SHG/TPEF imaging has been proposed as a gold standard for the measurement of liver fibrosis [30], particularly in the context of clinical trials, where quantifying potentially small changes through the course of a study is required. Our comparison of each of the stain-based methods of collagen quantification with both raw SHG percentage and the qFibrosis score demonstrated that human scoring is the most strongly correlated, again suggesting AI methods are more vulnerable to inter-and intra-laboratory staining variation than humans. The common advantage of both humans and SHG/TPEF is their ability to consider beyond colour quantification. Both can utilise information from the tissue architecture and 'landscape' of the scar either unconsciously by humans or using feature recognition processes that are not biased by staining to quantify based on different aspects of fibrosis.
The study is limited in the type of specimen and the stain assessed. Only sections from explant livers were used. Whilst this type of specimen is only encountered by laboratories at liver transplant centres, it was chosen because the available tissue for research was abundant, with no risk of exhausting the blocks. Whilst the use of explants meant that the study was limited to cirrhotic livers, rather than representing the full spectrum of disease stage, there is no clear reason that the findings cannot be extrapolated to PSR-stained sections with any amount of fibrosis. The examination of a set of cases where 'gold-standard' ordinal scoring of fibrosis is unambiguously non-informative (i.e. all cases are assigned the same score under any pan-aetiology or aetiology-specific scoring system) serves to illustrate the potential value of formal computational quantification. Finally, only PSR-stained sections were examined. We would suggest that whilst the sources and extent of staining variation will vary depending on the specific stain, the susceptibility of computational methods of feature quantification in stained sections to such variation should always be evaluated where studies use anything other than self-contained, single-batch staining.
In conclusion, we demonstrate that computational tools are not yet able to satisfactorily compensate for differences in tissue staining both between and within laboratories. The results here suggest that caution should be exercised when applying such methods to stain-based quantification in histopathology, particularly in large multi-centre studies, without applying extremely rigorous standardisation between staining centres.
"Medicine",
"Biology"
] |
Controlled self-aggregation of polymer-based nanoparticles employing shear flow and magnetic fields.
Abstract. Star polymers with magnetically functionalized end groups are presented as a novel polymeric system whose morphology, self-aggregation, and orientation can easily be tuned by exposing these macromolecules simultaneously to an external magnetic field and to shear forces. Our investigations are based on a specialized simulation technique which faithfully takes into account the hydrodynamic interactions of the surrounding, Newtonian solvent. We find that the combination of magnetic field (including both strength and direction) and shear rate controls the mean number of magnetic clusters, which in turn is largely responsible for the static and dynamic behavior. While some properties are similar to comparable non-magnetic star polymers, others exhibit novel phenomena; examples of the latter include the breakup and reorganization of the clusters beyond a critical shear rate, and a strong dependence of the efficiency with which shear rate is translated into whole-body rotations on the direction of the magnetic field.
Introduction
Star polymers, a family of macromolecules where f polymeric arms (each consisting of n_A monomers) are tethered to a central, colloidal particle, have received a rapidly increasing share of interest within the soft matter community during the past years (see e.g. [1,2]). The reason for their popularity rests upon both the tunability of their architecture via variations of f and/or n_A and the possibility to functionalize star polymers by selectively designing the polymeric arms [3]. This functionalization can, for instance, be realized by tethering block copolymers to the central colloid, leading to so-called telechelic star polymers [3]; alternatively, as recently put forward in [4], one can attach (super-para-)magnetic particles as terminal monomeric units onto each of the arms. This latter manner of functionalization is particularly attractive in that it allows for well-controlled and practically instantaneous tuning of the interaction, and hence of the system properties, via the external magnetic field, so that one does not have to rely on slow and inaccurate changes in temperature. In addition, it introduces a strong anisotropy of the interactions between the end groups, thereby modifying the morphology of the terminal aggregates from spherical into linear ones [4].
Star polymers show, in their different architectures, a broad range of interesting physical equilibrium properties; examples include (i) the ability to cover in their single-molecule properties, by tuning their functionality f, the range from ultrasoft to spherical, essentially hard colloidal particles [1,2], (ii) association, where telechelic star polymers form self-assembled, reconfigurable, soft patchy colloids, which then further self-organize at a supramolecular level into a variety of micellar or network-forming structures [3,5], and (iii) the ability of the above-mentioned magnetically functionalized star polymers to form, under equilibrium conditions, clusters of particles ("valences"), whose number and size depend on f, n_A, and the strength and orientation of the external magnetic field B. The wealth of emerging scenarios (in terms of valence and molecular shape) has been thoroughly discussed in [4]. In addition to equilibrium situations, conventional star polymers exhibit a variety of intriguing properties in a stationary, non-equilibrium setup, as demonstrated in the investigations by Ripoll et al. [6], who exposed these macromolecules to shear forces while faithfully including hydrodynamic interactions: depending on the values of f and n_A, the particles show, upon increasing the shear rate γ̇, strong deformations and distinctively different types of motion.
In this contribution, we extend these non-equilibrium simulations to the aforementioned magnetically functionalized star polymers and expose these particles both to shear forces and to an external magnetic field B, considering three orientations of the latter: along the shear flow direction (ê_x), the shear gradient direction (ê_y), and the vorticity direction (ê_z). As compared to the related investigations on conventional star polymers [6], we face here an entirely new situation due to the emergence of patches which can or cannot be broken up under the influence of the external fields, their stability being governed by an interplay between shear rate, magnetic field strength, and the relative orientation of B to the shear-cell geometry.
Employing the multi-particle collision dynamics (MPCD) technique [7], which incorporates hydrodynamic interactions, we provide evidence that conformational properties, such as the number, the size, and the location of the magnetic clusters, the shape of the macromolecule, or its flexibility can easily and accurately (but not necessarily independently from one another) be triggered via suitable combinations of the two above-mentioned external fields. With this contribution we thus introduce magnetically functionalized star polymers as a novel system of very flexible particles featuring specific numbers of self-associating aggregates with versatile and easily addressable conformational properties.
To the best of our knowledge, magnetically functionalized star polymers have not been synthesized in experiment, but realization of magnetic nanoparticles [8], their successful chemical coating and linkage [9], and a rich history of the study of other types of star polymers [10] make the synthesis of magnetically functionalized star polymers feasible and render them, as we hope to show in the following, interesting candidates for future experiments.
Model and Methods
In our investigations, we employ a bead-spring model for the magnetically functionalized star polymers: f linear polymer arms are attached to a core particle (index 'C'), each of them containing n_A arm particles (index 'A'); to the end of each arm, a super-paramagnetic particle (index 'M') is attached. The steric interactions of all these spherical monomeric units have two concentric interaction ranges: an inner, impenetrable part with diameter D_α and an outer, soft part with range σ_α, with α = C, A, or M. The masses of all types of monomers are assumed to be equal in order to avoid introducing features dependent on specific mass asymmetries. Any pair of monomers, separated by a distance r, interacts via a modified Weeks-Chandler-Andersen (WCA) potential V_WCA(r) [11], given by

V_WCA(r) = ∞ for r ≤ D_αβ;
V_WCA(r) = 4ε_αβ {[σ_αβ/(r − D_αβ)]^12 − [σ_αβ/(r − D_αβ)]^6} + ε_αβ for D_αβ < r ≤ D_αβ + 2^(1/6) σ_αβ;
V_WCA(r) = 0 otherwise,

with D_αβ = (D_α + D_β)/2, σ_αβ = (σ_α + σ_β)/2, and ε_αβ = √(ε_α ε_β), to be set to specific values in what follows.

Figure 1: Schematic representation of our simulation setup: a magnetically functionalized star polymer is exposed to shear flow (as specified in the text) and to an external magnetic field B, pointing in independent experiments along the Cartesian axes. Blue: velocity profile of the flow; red: schematic representation of the super-paramagnetic end monomers, forming two magnetic clusters (suppressing the arm monomers and the central core bead, which would be situated approximately at the origin in this sketch). In the three panels, the magnetic field B points along the flow, gradient, or vorticity direction (from left to right).
Spring bonds between (i) the core monomer and the first arm monomer, (ii) adjacent arm monomers, and (iii) the last arm monomer and the functionalized monomer are modeled via the generalized finitely extensible non-linear elastic (FENE) potential [12,13], specified via

V_FENE(r) = −(K_αβ R_αβ²/2) ln{1 − [(r − l_αβ)/R_αβ]²};  (1)

here, K_αβ specifies the interaction strength, l_αβ is the equilibrium bond length between monomers α and β, and R_αβ is the maximum deviation from l_αβ. In addition, the magnetic monomers interact via the standard dipole-dipole interaction, i.e.,

V_dd(r; m_1, m_2) = (μ_0/4π) [(m_1 · m_2) − 3 (m_1 · r̂)(m_2 · r̂)]/r³,  (2)

with m_1 and m_2 being the dipolar moments of two interacting particles which are separated by a vector r (with r = |r| and r̂ = r/r). The dipole moments are assumed to be equal in magnitude (i.e., |m_1| = |m_2| = m), and μ_0 is the vacuum permeability.
For reasons of simplicity, we assume that the moments of the super-paramagnetic particles are always perfectly aligned with the external, spatially homogeneous magnetic field, B = B ê_B. With all this in mind, the expression (2) reduces to

V_dd(r) = (μ_0 m²/4πr³) [1 − 3 (ê_B · r̂)²].

We introduce the dimensionless magnetic parameter λ = μ_0 m²/(4πεσ³), with the length and energy scales σ and ε defined below. λ represents the relative strength of the magnetic interaction compared with the other potentials, as well as the thermal and hydrodynamic interactions. Assuming, due to the super-paramagnetism, that m ∝ B, one can consider λ ∝ B² a measure of the magnetic field strength, and thus view dependencies on λ and ê_B as dependencies on the external B-field in (computer) experiments.
In an effort to reduce the large number of system parameters, we have used a fixed set of WCA parameters which mimics a simple, yet reasonable model of a magnetically functionalized star polymer; in these parameters, T is the temperature, k_B is Boltzmann's constant, and a is the MPCD length unit, to be specified below‡. For the FENE parameters we use K_αβ = 30 ε_αβ σ_αβ⁻², l_αβ = D_αβ, and R_αβ = 1.5 σ_αβ, with α = C, A, or M.

‡ σ = a has been chosen in order to achieve, on average, a spatial separation of two monomers sufficient to place them in different MPCD collision cells.
To quantify the shape of the star polymer under arbitrary external conditions, we employ the radius-of-gyration tensor S, with elements S_μν = (1/N) Σ_{i=1}^{N} r_{iμ} r_{iν} (μ, ν = 1, 2, 3), where r_{iμ} is the μ-th component of the Cartesian position vector of particle i with respect to the molecule's center-of-mass frame and N = 1 + f(n_A + 1) is the total number of monomers. From the eigenvalues of this tensor, termed Λ²_α (α = 1, 2, 3), and assuming, without loss of generality, that Λ²_1 ≤ Λ²_2 ≤ Λ²_3, one can calculate the acylindricity c, the asphericity b, the radius of gyration R_g, and the relative shape anisotropy κ² of the macromolecule [14]. To shed light on the tunability of these particles under non-equilibrium conditions, we have exposed in this contribution a single functionalized star polymer to shear forces, assuming the flow direction, the velocity-gradient direction, and the vorticity direction along the x-, y-, and z-axes, respectively; the strength of the flow is measured by the shear rate γ̇. In addition, we have applied an external magnetic field B, which we have assumed in distinct computer experiments to be oriented along each of the Cartesian axes; see figure 1 for a schematic representation.
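A compact numerical sketch of these shape descriptors; the explicit formulas of equation (5) were lost in extraction, so the standard gyration-tensor conventions are assumed here.

```python
import numpy as np

def shape_descriptors(positions):
    """Gyration-tensor shape descriptors (standard conventions assumed):
    with ordered eigenvalues L1 <= L2 <= L3 of S, b = L3 - (L1 + L2)/2,
    c = L2 - L1, Rg^2 = L1 + L2 + L3, kappa^2 = (b^2 + 0.75 c^2) / Rg^4."""
    rel = positions - positions.mean(axis=0)     # center-of-mass frame
    S = rel.T @ rel / len(positions)             # 3x3 gyration tensor
    L1, L2, L3 = np.sort(np.linalg.eigvalsh(S))  # eigenvalues Lambda^2_alpha
    rg2 = L1 + L2 + L3
    b = L3 - 0.5 * (L1 + L2)                     # asphericity
    c = L2 - L1                                  # acylindricity
    kappa2 = (b**2 + 0.75 * c**2) / rg2**2       # relative shape anisotropy
    return np.sqrt(rg2), b, c, kappa2
```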
To avoid a scan of the high-dimensional parameter space, we have restricted ourselves to star polymers with functionality f = 10, each arm being formed by n_A = 30 monomers, a situation which is computationally very tractable and still exhibits rich physics and phenomenology already in the equilibrium case [4]. For the reduced magnetic interaction strength λ, two values have been assumed, namely λ = 100 and λ = 200. From the diagrams of states (as shown in [4]) we know that, for this set of parameters, star polymers form under equilibrium conditions two to three magnetic column-shaped clusters; these are assemblies of interacting magnetic end-monomers, aligned along the external magnetic field, where two magnetic beads are considered part of the same cluster if their interparticle distance is at most 2.5a.
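The cluster criterion can be implemented, for instance, with a simple union-find pass over the magnetic end-monomers (a sketch; periodic boundary conditions are ignored for brevity):

```python
import numpy as np

def count_magnetic_clusters(end_positions, cutoff=2.5):
    """Number of clusters of magnetic end-monomers; two beads belong to the
    same cluster if their separation is at most `cutoff` (2.5 a in the text)."""
    n = len(end_positions)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(end_positions[i] - end_positions[j]) <= cutoff:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})
```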
In the Multi-Particle Collision Dynamics (MPCD) technique, the macromolecule is surrounded by microscopic fluid particles of mass m_f which are treated as point particles; their positions and momenta are not constrained to a lattice (for details cf. [7]). In this simulation technique, two steps are carried out alternately: (i) in the streaming step, the point particles move ballistically for a time ∆t, such that r_i(t + ∆t) = r_i(t) + v_i(t)∆t, with r_i(t) and v_i(t) being the position and velocity of particle i, respectively. (ii) In the collision step, interaction takes place: in the variant of MPCD that we have employed in this contribution, Stochastic Rotation Dynamics, the point particles are sorted, according to their instantaneous positions r_i(t), into collision cells, i.e. cubic boxes of side length a, which tessellate the simulation volume. Then, for each collision cell k, one transforms the velocities of all particles i in that cell according to v_i → u_k + R(k, t, α)[v_i − u_k], where u_k = Σ_i m_i v_i / Σ_i m_i is the center-of-mass velocity of collision cell k, m_i is the mass of particle i, and R(k, t, α) is a rotation matrix about a randomly chosen axis and a fixed angle α, with independent choices for each collision cell k and time t. In order to suspend the star polymer in this MPCD fluid, its beads are treated like fluid particles, except that their masses are m_b = 5 m_f and, instead of ballistic streaming, the intra-star forces are integrated in five consecutive iterations of a velocity-Verlet algorithm [15], each with timestep ∆t/5.
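A minimal sketch of one SRD streaming-plus-collision cycle, under the simplifying assumptions that grid shifting and the thermostat are omitted:

```python
import numpy as np

def rotation_matrix(axis, angle):
    """Rodrigues rotation matrix about a unit axis."""
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def srd_step(pos, vel, mass, dt, a, alpha, box, rng):
    """One streaming + collision step of Stochastic Rotation Dynamics."""
    pos = (pos + vel * dt) % box                        # ballistic streaming
    n_cells = int(box / a)
    idx3 = np.floor(pos / a).astype(int)                # cell index per particle
    cell_id = np.ravel_multi_index(tuple(idx3.T), (n_cells,) * 3)
    for k in np.unique(cell_id):
        members = np.where(cell_id == k)[0]
        u = np.average(vel[members], axis=0, weights=mass[members])
        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)                    # random rotation axis
        R = rotation_matrix(axis, alpha)                # fixed angle alpha
        vel[members] = u + (vel[members] - u) @ R.T     # rotate fluctuations
    return pos, vel
```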
Simulations were initialized with equilibrium configurations of the stars and a random fluid configuration; representative data were taken only after an equilibration period to avoid correlations with the initial state. The simulation volume was chosen to be cubic with side length 30a. Lees-Edwards boundary conditions [16] were employed to enforce a shear flow. Units are chosen such that a = 1, m_f = 1, and k_B T = 1, the temperature T being enforced via the Maxwell-Boltzmann scaling thermostat [17]. The pure fluid's mass density was set to ρ_f = 10 m_f a⁻³, such that in total 10 × 30³ = 270 × 10³ MPCD fluid particles were simulated. The rotation angle α was set to 2.27 radians, corresponding to approximately 130 degrees. The OpenMPCD simulation package used can be found at [18].
Results
We find that the observed conformational and dynamic properties can qualitatively be classified into four categories: (i) the mean number of magnetic clusters, N_C, which is of particular importance and thus warrants separate treatment, (ii) quantities that are largely controlled by N_C, (iii) quantities that are unaffected by the presence of magnetic moments in the model, and (iv) quantities that, on top of an N_C-dependence, are sensitive to the orientation of the external magnetic field ê_B relative to the shear-flow and shear-gradient directions.
Mean Number of Clusters N_C
The main plot in the top-left panel of figure 2 shows the mean number of clusters, N_C, as a function of the shear rate γ̇. For low γ̇-values, the mean cluster count is roughly 2, until a critical shear rate γ̇* is reached, which depends on the orientation ê_B and strength (encoded in λ) of the external magnetic field. At this γ̇*, shear-induced forces overcome the attractive magnetic interactions, breaking up columns of end-monomers (which form along the ê_B direction) into successively smaller, more stable units as the shear rate is increased; to be more specific, we observe N_C ∝ ln(γ̇). This critical γ̇* is largest for ê_B = ê_z and smallest for ê_B = ê_y, where the magnetic columns are particularly exposed to the shear-flow gradient (cf. figure 1). Furthermore, γ̇*, or equivalently, the robustness of magnetic clusters, increases with λ. The inset shows that, upon scaling the shear rate with an empirical ê_B- and λ-dependent factor τ(ê_B, λ), all curves collapse onto a master curve, with the scaling chosen such that γ̇*(ê_B, λ) · τ(ê_B, λ) ≈ 1.
N_C-Controlled Quantities: Shape Descriptors
The shape descriptors [cf. equation (5)], when viewed as functions of the scaled shear rate, exhibit comparable qualitative behavior for the various orientations ê_B and magnetic interaction strengths λ. This is significant in that the shape is largely determined by the number of magnetic columns formed in a given situation, but is otherwise relatively unaffected by the details of the magnetic interaction.
The top-right panel of figure 2 shows the relative shape anisotropy κ² as a representative member of this category of quantities. A value of κ² near 0 would roughly be indicative of a spherically symmetric arrangement of the star polymer's beads; even for low shear rates γ̇, this condition is not met, since the magnetic columns formed by the end-monomers break rotational symmetry as they align with the external magnetic field. For higher shear rates, the polymer is strongly elongated along the flow direction. Also, note that there is a sudden increase in κ² at γ̇ · τ(ê_B, λ) ≈ 1, i.e. at the critical shear rate γ̇* where magnetic clusters start breaking apart, particularly pronounced for ê_B = ê_z (see panel inset). Conversely, given the rather well-defined dependence of N_C on B and γ̇, one can manipulate the shape and size of the star polymers by tuning the external fields in their strength and/or relative orientation.
Universal Properties: Orientational Resistance
One can measure the extent of alignment between the flow direction (ê_x) and the major axis of the instantaneous configuration of the star polymer, i.e. the eigenvector associated with the largest eigenvalue Λ²_3 of S, and denote the corresponding angle by χ; then, one can define the orientational resistance m_G = γ̇ τ_eq tan(2χ), where τ_eq is the longest relaxation time of the star polymer in equilibrium.
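A sketch of this estimator; the in-plane (flow-gradient) evaluation of χ below is the usual convention and an assumption, not necessarily the authors' exact procedure:

```python
import numpy as np

def orientational_resistance(positions, shear_rate, tau_eq):
    """m_G = shear_rate * tau_eq * tan(2 chi), with chi the orientation angle
    of the gyration tensor's major axis in the flow-gradient (x-y) plane."""
    rel = positions - positions.mean(axis=0)
    S = rel.T @ rel / len(positions)
    chi = 0.5 * np.arctan2(2.0 * S[0, 1], S[0, 0] - S[1, 1])
    return shear_rate * tau_eq * np.tan(2.0 * chi)
```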
The bottom-left panel of figure 2 shows m_G/τ_eq as a function of γ̇. For sufficiently large shear rates (γ̇ ≳ 10⁻² in inverse MPCD time units), the orientational resistance follows a power law m_G ∝ γ̇^μ with a characteristic exponent 0.4 < μ < 0.6. This behavior is shared by the majority of polymeric systems (each with a corresponding value of μ), ranging from linear chains to block copolymers, randomly cross-linked single-chain nanoparticles, dendrimers, and non-magnetic star polymers [6,19,20,21,22]. Thus, while the exponent μ varies with B, the characteristic power law of star polymers is conserved despite the addition of a magnetic interaction and the associated introduction of another distinguished axis.
While parts of the literature predict [23] or report [6] that m_G approaches a constant plateau for low γ̇, the large fluctuations observed in our data at low shear rates allow neither confirmation nor dismissal of this claim.
ê_B-Sensitivity Beyond N_C: Angular Velocity
Although the star's shape is largely determined by N_C, as discussed above, the rotational dynamics of the star are peculiar in that they have an additional dependence on the orientation of ê_B: when considering the angular velocities ω_α around the Cartesian axes α, or more appropriately, the Eckart-frame angular velocities Ω_α, constructed so as to remove spurious contributions of vibrational modes to the (apparent) angular velocity [24,25,26], one finds that there is no net rotation around the x (shear-flow direction) and y (shear-gradient direction) axes, but a significant rotation Ω_z ≠ 0 (shear-vorticity direction); this fact additionally and decisively distinguishes the case ê_B = ê_z from the other ones, even when scaling the shear rates (cf. figure 1 and the bottom-right panel in figure 2).
In particular, for ê_B = ê_z, the magnetic interaction parameter λ plays no role below the critical shear rate γ̇* (see inset), and as soon as magnetic columns start breaking up, the different curves approach a common master curve, corresponding to the case of little to no magnetic clustering. The most pronounced change in the (Eckart) angular velocity occurs, again, at γ̇ = γ̇* (cf. inset) or γ̇ · τ(ê_B, λ) ≈ 1 (cf. main panel), respectively.
Conclusions and Outlook
Decorating the arms of star polymers with magnetic particles opens up a rich, new facet of the phenomenology of polymer physics. The resulting magnetically functionalized star polymers are sensitive to both direction and intensity of an external magnetic field, as well as to the relative orientation and strength of shear flow. Said sensitivity manifests in the self-aggregation behavior of columns of the star's magnetic monomers, and the stability of the resulting magnetic columns. This in turn largely determines size, shape, anisotropy, and dynamic responses, some aspects of which (e.g. orientational resistance) behave qualitatively as in the non-magnetic case, while others (e.g. whole-body rotation) exhibit entirely novel phenomenology.
The tunability of the star conformation, anisotropy, and of the stability of magnetic aggregates via manipulation of the external magnetic field B allows for new avenues in which (computer) experiments can be conducted. For example, upcoming research will discuss self-aggregation of magnetic columns in dense solutions of magnetic stars, how changes in the external fields can influence e.g. rheology or the formation of large-scale structures in a given system, and what types of phase behavior can be observed. Possible applications might include micro-fluidic devices, such as micro-mixers with tunable efficiency due solely to the geometry of flow and magnetic field.
How Magnetic Erosion Affects the Drag-Based Kinematics of Fast Coronal Mass Ejections
In order to advance our understanding of the dynamic interactions between coronal mass ejections (CMEs) and the magnetized solar wind, we investigate the impact of magnetic erosion on the well-known aerodynamic drag force acting on CMEs traveling faster than the ambient solar wind. In particular, we start by generating empirical relationships for the basic physical parameters of CMEs that conserve their mass and magnetic flux. Furthermore, we examine the impact of the virtual mass on the equation of motion by studying a variable-mass system. We next implement into CME propagation magnetic reconnection, which erodes part of the CME magnetic flux and outer-shell mass, and we determine its impact on the drag acting on CMEs and on their time and speed of arrival at 1 AU. Depending on the strength of the magnetic erosion, the leading edge of the magnetic structure can reach near-Earth space up to ≈ three hours later, compared to the non-eroded case. Therefore, magnetic erosion may have a significant impact on the propagation of fast CMEs and on predictions of their arrivals at 1 AU. Finally, the modeling indicates that eroded CMEs may experience a significant mass decrease. Since such a decrease is not observed in the corona, the initiation distance of erosion may lie beyond the field of view of coronagraphs (i.e. 30 R⊙).
Introduction
A coronal mass ejection (CME) is the release of a significant amount of magnetized plasma from the solar corona that moves away from the Sun and, once it reaches the interplanetary medium, can be measured as an interplanetary CME (ICME). Although the magnetic field plays the most important role in the initiation and early evolution of CMEs, many studies suggest that from approximately 15 R⊙ outwards, the Lorentz forces are negligible and it is the momentum coupling between a CME and the solar wind via a drag force that dominates their dynamics (e.g. Gopalswamy et al., 2001; Tappin, 2006; Sachdeva et al., 2015).
Drag models are based on the concept of magnetohydrodynamic (MHD) drag, which, in contrast to the kinetic-drag effect in a fluid, is thought to be caused primarily by the emission of MHD waves in the collisionless solar-wind environment (Cargill et al., 1996). Observations of slow (fast) CMEs' acceleration (deceleration) towards the ambient solar-wind speed led to the conclusion that it is the drag force that is responsible for the relative equalization of CME and solar-wind speeds (see Vršnak et al., 2010; Poomvises, Zhang, and Olmedo, 2010), and therefore drag-based models could be used for the prediction of the arrival times of CMEs at various locations in the inner heliosphere and beyond. Vršnak and Žic (2007) proposed that the equation describing aerodynamic drag can be utilized to establish a simple drag-based model of CME propagation. Although the drag force between the CME and the solar wind is rather well established and can be quite easily modeled, several effects such as CME deformation, front flattening, deflection, rotation, erosion, and expansion can be relevant for CME propagation (see, for example, the review by Manchester et al., 2017).
Understanding the interactions between CMEs and the ambient solar wind has presented the scientific community with formidable challenges. Predicting the time and speed of arrival of CMEs (ToA and SoA, respectively) in the near-Earth space environment, or other locations in the inner heliosphere using, apart from drag-based modeling (DBM) (e.g. Vršnak et al., 2014;Shi et al., 2015;Dumbović et al., 2018), a variety of other methods (e.g. empirical, physics-based, time-dependent MHD), frequently leads to significant errors (e.g. Vourlidas, Patsourakos, and Savani, 2019). The complexity of CME-solar-wind interactions, lack of critical observations of CMEs and the solar wind, and gaps in theory/modeling are examples of factors that induce significant difficulties in predictive schemes of CME impacts. With all these methods considered, the mean absolute error (MAE) for predicted ToAs of CMEs at 1 AU is greater than 12 hours (e.g. Vourlidas, Patsourakos, and Savani, 2019).
Magnetic reconnection is ubiquitous in space (e.g. Priest and Forbes, 2007). It is important for the formation of most of the solar-dynamic phenomena, and it has been detected from the low solar atmosphere up to the Earth's magnetotail. Recently, magnetic reconnection has been shown to occur regularly in the solar wind (e.g. Gosling et al., 2005;Gosling and Phan, 2013). During its propagation in the interplanetary medium, a CME interacts with the interplanetary magnetic field (IMF) and magnetic reconnection may occur (see for example the schematic of Figure 1). If it happens at the front boundary of a magnetic cloud (MC), i.e. a coherent CME structure resembling a magnetic-flux rope, characterized by enhanced magnetic-field intensity, a smooth rotation of its magnetic-field components, and a much lower proton temperature with respect to the background solar wind (Burlaga et al., 1982), then magnetic reconnection erodes part of its entrained magnetic flux and peels off the flux rope's outer layers (Dasso et al., 2006). Magnetic erosion leads to an imbalance of the azimuthal magnetic flux in the front and rear part of CMEs. This is indeed used as an observational signature of magnetic erosion (Dasso et al., 2006).
Figure 1
Magnetic erosion applied to a CME. Magnetic erosion is due to magnetic reconnection between oppositely directed CME and IMF magnetic fields for a cylindrical CME (Panels a and b), and it extracts concentric magnetic cells and the frozen-in mass from the outer shell of the CME (Panels c and d). Used with permission of John Wiley & Sons, from J. Geophys. Res., Understanding the twist distribution inside magnetic flux ropes by anatomizing an interplanetary magnetic cloud, Wang et al., 123, 3238, 2018; permission conveyed through Copyright Clearance Center, Inc.

Ruffenach et al. (2012) and Lavraud et al. (2014) examined several CMEs and found evidence that magnetic reconnection took place at their fronts and gave rise to magnetic erosion. In addition, Ruffenach et al. (2012) reported magnetic-flux decreases of 44% and 49% as the studied CME propagated to the Advanced Composition Explorer (ACE) and the still-operational Solar Terrestrial Relations Observatory (STEREO-A), respectively. Ruffenach et al. (2015) performed a statistical study of 109 magnetic clouds observed by Wind, 78 by STEREO-A, and 76 by STEREO-B during the period 1995 – 2012. Due to the importance of reliable boundary determination in the implementation of the analysis methods, they investigated each event in detail to define the magnetic-cloud boundaries with the best accuracy possible. They suggested that magnetic clouds may be eroded at the front or rear in similar proportions, with a significant average erosion of about 40% of the total azimuthal magnetic flux. On average, 42% of the magnetic clouds were eroded at the front and 33% at the rear (relative to the total azimuthal flux content). Their results are consistent with the frequent (up to ≈ 30%) observation of reconnection signatures locally at both the front and rear boundaries (Tian et al., 2010).
In this work, we incorporate magnetic reconnection caused by interactions between CMEs and the IMF-solar-wind couple, resulting in magnetic erosion, into the drag-based CME propagation model and examine its impact on the time and speed of arrival of a CME structure at 1 AU. Section 2 contains our theoretical framework. In Section 2.1 we discuss the drag-based propagation of non-eroded CMEs, and in Section 2.2 we derive empirical profiles of the radial evolution of key physical parameters of non-eroded CMEs. In Section 2.3 we incorporate magnetic erosion into a new drag-based model of CME kinematics. In Section 3, we discuss our model's results for both eroded and non-eroded CMEs. Finally, in Section 4, we provide a summary of our work and discuss future plans.
Theoretical Framework
This section contains our theoretical framework for the formulation of a drag-based model incorporating magnetic erosion.
Aerodynamic Drag Force Acting on Non-eroded CMEs
We start with a discussion of the drag force acting on non-eroded CMEs. Consider a cylindrical flux-rope CME with radius R and height L which propagates radially in the interplanetary medium. The assumption of radial CME propagation essentially leads to 1D equations for the structure's kinematics. The drag force [F_D] acting on it, and in particular the associated rate of change of the velocity, is given by Cargill (2004) as dV_i/dt = −γ (V_i − V_e)|V_i − V_e| (Equation 1). Subscripts i and e represent quantities internal and external to the ICME, with V_i and V_e corresponding to the bulk speed of the CME (i.e. the speed of the center of the cylindrical structure) and the ambient solar-wind speed, respectively; M_tot is the total mass of the CME-solar-wind system and will be discussed in detail later in this section. The γ-parameter is an inverse deceleration length given by γ = C_D A ρ_e/[τ(ρ_i + ρ_e/2)] (Equation 2), where τ and A are the volume and reference area (i.e. lateral cross-sectional area) of the cylindrical CME structure, respectively, and ρ_i and ρ_e are the mass densities of the CME and of the ambient solar wind, respectively. Finally, C_D is the drag-force coefficient, a dimensionless number that encapsulates all the complex dependencies of a given structure's drag on shape, inclination, flow conditions, etc. For the calculation of C_D, we followed the formulation of Sachdeva et al. (2015), which is based on a microphysical prescription for viscosity in the collisionless, turbulent solar-wind plasma (Subramanian, Lara, and Borgazzi, 2012). In our calculations of C_D, we used the radial profile of proton density from Hellinger et al. (2013).
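For illustration, a minimal numerical sketch of Equations 1 and 2; writing γ directly in terms of M_tot is our simplification of the reconstruction above:

```python
def drag_gamma(C_D, A, rho_e, M_tot):
    """Inverse deceleration length, gamma = C_D A rho_e / M_tot
    (assumed Cargill-2004-type form; SI units)."""
    return C_D * A * rho_e / M_tot

def drag_acceleration(V_i, V_e, gamma):
    """Quadratic aerodynamic drag: dV_i/dt = -gamma (V_i - V_e) |V_i - V_e|."""
    dv = V_i - V_e
    return -gamma * dv * abs(dv)

# Euler step of the 1D kinematics (sketch only):
# V_i += drag_acceleration(V_i, V_e, gamma) * dt; x += V_i * dt
```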
We now discuss important aspects of the total mass [M_tot] of the CME-solar-wind system. We have M_tot = M_i + m_virtual, with M_i corresponding to the mass of the cylindrical CME; m_virtual is the so-called added mass or virtual mass of fluid dynamics (e.g. White, 2011), applying when a body moves through a fluid. It relates to the inertia added to a system because an accelerating or decelerating body must displace some volume of surrounding fluid as it moves through it. In other words, virtual mass naturally arises because the object and the surrounding fluid cannot simultaneously occupy the same volume. The inclusion of the virtual mass leads to the requirement of an extra force to accelerate a body moving in a fluid compared to the case when it moves in a vacuum. This force is sometimes referred to as the apparent mass force (Crowe, 2011) because it is equivalent to adding a mass to the existing body. The concept of virtual mass can be incorporated in the study of CME propagation (e.g. Cargill, 2004; Vršnak, 2021), where the moving CME body gives rise to an accumulation, in other words a pile-up, of solar-wind mass around it. The piled-up, i.e. compressed, plasma is contained within the sheath formed around CMEs.
Frequently, in applications of the drag-force model, the virtual mass is neglected (e.g. Vršnak et al., 2013). As we can see from Equation 2, when ρ_CME ≫ ρ_sw the virtual mass becomes negligible. Temmer et al. (2021) combined remote-sensing and in-situ observations with the Graduated Cylindrical Shell model (see Thernisien, 2011) and ascertained that the sheath region should be treated as a significant extra mass. An indication that the sheath becomes much more prominent in the interplanetary medium is also given by a relative increase of the sheath duration from Mercury to Earth (Janvier et al., 2019).
Obviously, the virtual mass corresponding to a CME increases with distance, since more and more mass is piled up as the CME propagates outwards. We furthermore assume that the CME mass [M_i] is constant during propagation in the interplanetary medium. This is justified on the grounds that CMEs attain a constant mass relatively close to the Sun, at ≈ 10 R⊙ (Vourlidas et al., 2010). Therefore, the mass of the CME-sheath system [M_tot] varies with distance, and hence we have to generalize the 1D drag-force equation along the radial direction for a variable-mass system.
The force [F_vm] required to accelerate the fluid surrounding a moving submerged rigid body is given by Crowe (2011), with D/Dt denoting the material derivative. Generalizing the virtual-mass force for a varying external (i.e. solar-wind) density and an expanding CME structure, we obtain an expression in which u_rel is the relative velocity of the body (CME) with respect to the fluid (solar wind). The origin of the notion of "virtual mass" becomes evident when we consider the momentum equation, with P_i corresponding to the momentum of the CME. Moving the derivative of the CME's velocity from the right-hand side of the equation to the left-hand side and assuming a steady background solar wind, we find that the second term inside the parentheses on the left-hand side is the virtual mass. The virtual mass of a cylindrical CME is equal to m_virtual = (1/2) ρ_e π R² L (the added mass of a cylinder can be derived by considering the hydrodynamic force of an axisymmetric flow acting on it as it accelerates). The resulting equation of motion is Equation 9, in which we introduced the speed of the CME leading edge [V_LE], given that mass pile-up occurs ahead of a CME that is faster than the ambient solar wind, as studied here. The second term on the right-hand side of Equation 9 is due to the relative motion of the CME with respect to the solar-wind flow. We have V_LE = V_i + V_exp, where V_exp is the expansion speed of the cylindrical CME. Solving Equation 9 for dV_i/dt and using Equation 1, we finally obtain Equation 12, which describes the propagation of a CME faster than the ambient solar wind subject to drag and incorporates mass pile-up. It is identical to the drag equations of Cargill (2004) and Vršnak et al. (2010), with the addition of a second term on the right-hand side corresponding to the varying virtual mass. Note that the above studies neglected the virtual mass altogether. Both terms on the right-hand side of Equation 12 are negative for fast CMEs, i.e. they slow down the CME structure. The first term is associated with the aerodynamic drag and the second with the mass pile-up around the CME.
The numerical solution of Equation 12 requires the properties of the ambient solar wind. The solar-wind density is given by the Leblanc, Dulk, and Bougeret (1998) empirical formula, and its speed results from the application of the continuity equation n_e V_e r² = const, describing the constant flow (conservation of mass flux) of solar-wind particles, where n_e is the number density of the solar wind and r the heliocentric distance. For our study, the solar-wind electron number density and speed at 1 AU were set as follows: n_e(1 AU) = 7 cm⁻³ and V_e(1 AU) = 400 km s⁻¹, respectively.
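These two ingredients translate into a few lines of Python (the Leblanc et al. coefficients are quoted from the original paper; the rescaling to n_e(1 AU) = 7 cm⁻³ is our reading of the setup):

```python
R_SUN_PER_AU = 215.0

def n_e_leblanc(r_rs, n_1au=7.0):
    """Leblanc, Dulk, and Bougeret (1998) electron-density profile (r in solar
    radii), rescaled so that n_e(1 AU) = n_1au cm^-3."""
    raw = lambda r: 3.3e5 * r**-2 + 4.1e6 * r**-4 + 8.0e7 * r**-6
    return n_1au * raw(r_rs) / raw(R_SUN_PER_AU)

def v_solar_wind(r_rs, v_1au=400.0, n_1au=7.0):
    """Solar-wind speed from mass-flux conservation, n_e V_e r^2 = const."""
    return v_1au * n_1au * R_SUN_PER_AU**2 / (n_e_leblanc(r_rs, n_1au) * r_rs**2)
```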
Radial Profiles of Physical Parameters for Non-eroded CMEs
For non-eroded CMEs, it is reasonable to expect that their mass and magnetic flux should not vary with distance. This means that, in the numerical solution of Equation 12, we need to consider radial profiles of CME density, radius, and (axial) magnetic field that lead to an approximately constant CME mass and magnetic flux with distance. There exist several works deducing the radial profiles of various CME physical parameters (e.g. Bothmer and Schwenn, 1998; Liu, Richardson, and Belcher, 2005; Wang, Du, and Richardson, 2005; Forsyth et al., 2006; Leitner et al., 2007). They are mainly based on CME observations by the Helios mission at ≈ 0.3 – 1 AU and describe the evolution of several CME physical parameters as power laws of the radial distance. We then have that a CME parameter y is given by y = A r^b, with A and b corresponding to the constant and index of the power law, respectively. Calculating average values for the constants and indices of the power laws describing the CME radius and density in the above-mentioned works, we have R = 0.138 r^0.69 AU and n_i = 6.59 r^−2.384 cm⁻³ which, under the assumption of a cylindrical CME, leads to an almost constant CME mass as a function of distance (i.e. M_i ∝ r^−0.004). Following a similar procedure, this time for the CME magnetic-field magnitude, and using the works of Liu, Richardson, and Belcher (2005), Wang, Du, and Richardson (2005), Forsyth et al. (2006), and Leitner et al. (2007), we end up with a power law for B_i (Equation 15). Combining this power law with the corresponding power law for the CME radius (i.e. Equation 14), we end up, for cylindrical CMEs, with a practically constant CME magnetic flux with distance, i.e. Φ_B ∝ r^−0.003. In both Equations 14 and 15, the radial distance r is expressed in AU.
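A quick consistency check of these power laws (the B_i exponent of ≈ −1.383 is inferred from the stated flux scaling, since Equation 15 itself is not reproduced here; constant angular width is taken to imply L ∝ r):

```python
import numpy as np

# Averaged literature power laws (r in AU): R = 0.138 r^0.69 AU and
# n_i = 6.59 r^-2.384 cm^-3; a constant angular width implies L ~ r.
r = np.array([0.1, 0.3, 1.0])
R = 0.138 * r**0.69
n_i = 6.59 * r**(-2.384)
mass = n_i * R**2 * r                    # M ~ rho_i * pi R^2 L, constants dropped
print(mass / mass[-1])                   # ~ r^-0.004, i.e. effectively constant

# With the inferred B_i exponent, the flux ~ B_i R^2 varies as r^-0.003:
B_i = r**(-1.383)
print(B_i * R**2 / (B_i[-1] * R[-1]**2))
```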
Aerodynamic Drag Force Acting on Eroded CMEs
We now consider the impact of magnetic erosion on the drag force acting on CMEs. This is done by essentially assuming that erosion removes part of the mass and magnetic flux of the CME.
Consider Figure 1, depicting the propagation of a cylindrical CME in the radial direction. If the CME encounters oppositely directed IMF, then reconnection occurs and progressively peels off concentric shells from the CME. This decreases both the mass and the magnetic flux of the structure. Given that, when magnetic reconnection occurs in simple geometries, post-reconnection magnetic-field lines, and therefore the entrained mass as well, typically move perpendicularly to the inflow direction (see Panels b and c of Figure 1), the addition of extra terms into Equation 12 associated with the momentum of the outflowing plasma is not warranted. As a matter of fact, in-situ observations of reconnection jets ("CME exhausts") occurring around or within CMEs showed that they mostly lie in planes that are perpendicular to the radial direction (Gosling et al., 2007).
The strength of this CME-IMF magnetic reconnection, and hence of the erosion that a CME undergoes, depends on the associated reconnection rate. The reconnection rate is proportional to the Alfvén speed, which is much higher near the Sun (e.g. Manchester et al., 2017). Given that the postulated reconnection involves two systems (i.e. CME and IMF) with different physical properties (e.g. densities, magnetic fields, etc.), estimating the magnetic-reconnection rate from the classical Sweet-Parker or Petschek reconnection models, which are based on symmetric inflow conditions, is not appropriate. We therefore opted to use a hybrid magnetic-reconnection rate derived by Cassak and Shay (2007) (Equation 16), where C is a dimensionless coefficient that depends on the geometry of the magnetic-reconnection process (≈ 0.1) and S is a hybrid Alfvén speed multiplied by a hybrid magnetic-field strength, as deduced in Borovsky et al. (2008) (Equation 17). In the latter equation, B and ρ are the magnetic field and the mass density, respectively, of the two inflowing systems, with subscripts 1 and 2 corresponding to the CME and to the ambient solar wind and IMF at the CME's front position, respectively. Equations 14 and 15, the Leblanc, Dulk, and Bougeret (1998) density profile, and the azimuthal component of the IMF from Parker (1958) supplied the CME and IMF-solar-wind parameters required for the calculation of S. Finally, μ_0 is the magnetic permeability of vacuum.
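A sketch of this hybrid rate; the explicit hybrid Alfvén speed and field below are assumed forms consistent with Cassak and Shay (2007) and Borovsky et al. (2008), since Equations 16 and 17 are not reproduced here:

```python
import numpy as np

MU0 = 4.0e-7 * np.pi

def hybrid_reconnection_rate(B1, rho1, B2, rho2, C=0.1):
    """Asymmetric reconnection rate: rate = C * B_hybrid * vA_hybrid
    (assumed hybrid forms, SI units; subscript 1: CME, subscript 2: ambient
    IMF/solar wind at the CME front)."""
    vA_hyb = np.sqrt(B1 * B2 * (B1 + B2) / (MU0 * (rho1 * B2 + rho2 * B1)))
    B_hyb = 2.0 * B1 * B2 / (B1 + B2)
    return C * B_hyb * vA_hyb
```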
Having established a means to calculate the CME-IMF magnetic-reconnection rate as a function of distance in the inner heliosphere, we are now in a position to incorporate the impact of magnetic erosion on the CME kinematics. Magnetic erosion essentially boils down to a reduction of the CME radius at any given distance, compared to its value when no erosion occurs (Equation 18). In essence, Equation 18 supplies the radius R of the CME at a given radial position i by reducing its value [R*_{i−1}] at the previous radial grid position with index i − 1, calculated when no erosion occurs (i.e. following Equation 14), by a factor depending on the magnetic-reconnection rate at positions i − 1 and i. The magnitude of the erosion, and therefore its impact on R, is controlled by the exponent α.
To determine α, we considered a total CME magnetic-flux reduction from 20 R⊙ (i.e. the starting distance of the application of the drag-force model discussed in the next section) to 1 AU of 20%, 40%, and 50%, consistent with CME observations showing evidence of magnetic erosion (Ruffenach et al., 2015; Pal, Dash, and Nandy, 2020), leading to α-values of 0.038, 0.089, and 0.121, respectively.
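Since the exact functional form of Equation 18 is not reproduced here, a model-agnostic way to obtain such α-values is a simple bisection against the target flux reduction (a sketch; `run_model` is a hypothetical placeholder for the full radial integration of Equations 14 – 18):

```python
def calibrate_alpha(target, run_model, lo=0.0, hi=1.0, tol=1e-4):
    """Bisect the erosion exponent alpha so that the achieved total
    magnetic-flux reduction between x0 and 1 AU matches `target` (0.2,
    0.4, or 0.5 in the text). `run_model(alpha)` must return the achieved
    fractional flux reduction; monotonic growth with alpha is assumed."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if run_model(mid) < target:
            lo = mid   # erosion too weak, increase alpha
        else:
            hi = mid
    return 0.5 * (lo + hi)
```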
Magnetic erosion is then incorporated into the drag-based CME kinematics (i.e. Equation 12) via the associated reduction in CME radius (i.e. Equation 18). Note that the above effect, i.e. the radius reduction, does not only have a geometrical impact but also leads to a decrease in the CME mass ∝ R². Therefore, our treatment of magnetic erosion leads to a reduction in the structure's size along with an abatement of the virtual mass piling up in front of it. We will discuss in detail the implications of our prescription of the impact of erosion on the CME radius in Section 4.
Results
Having presented in the previous section our theoretical framework dealing with both eroded and non-eroded CMEs, we now apply it in order to study the structure's behavior. The propagation is studied via the numerical solution of Equation 12 from x_0 = 20 R⊙ to 215 R⊙, i.e. 1 AU. We considered four cases: a CME without erosion, and three CMEs experiencing erosion leading to a total magnetic-flux reduction between x_0 and 1 AU of 20%, 40%, and 50% (see the previous section). Typically, applications of the drag model use the same x_0 as starting distance (see, for example, Table 1 of Dumbović et al., 2021). The initial mass of the modeled CMEs was 1.74 × 10¹² kg, consistent with observed distributions of CME masses in the corona (e.g. Vourlidas et al., 2010), and their initial radius was ≈ 5.75 R⊙, consistent with forward modeling of CMEs observed by STEREO (e.g. Thernisien, Vourlidas, and Howard, 2009). In addition, all modeled CMEs have an angular width of 45°, consistent with STEREO observations of CMEs (Thernisien, Vourlidas, and Howard, 2009), which was kept constant during their propagation. This is valid even for eroded CMEs, given that, as discussed in the previous section, in the framework of our model erosion affects only the radii of the postulated cylindrical CMEs and not their heights. Finally, the initial CME bulk speed was 1000 km s⁻¹; we were therefore dealing with fast CMEs.
Figure 2
Radial evolution of the normalized CME-IMF magnetic-reconnection rate (green line) and the mass of the magnetic flux rope (blue line) for a CME undergoing a total magnetic-flux reduction of 40% between x_0 and 1 AU. The i-index (see the caption of the y-axis) captures the distance of the magnetic structure in units of solar radii from x_0 (x = 20 R⊙) up to 1 AU (x = 215 R⊙).
Figure 3
Radial evolution of the CME radius for a non-eroded CME (blue line) and eroded CMEs with total magnetic-flux reductions of 20%, 40%, and 50% (red, yellow, and green lines) between x_0 and 1 AU. The i-index (see the caption of the y-axis) captures the distance of the magnetic structure in units of solar radii from x_0 (x = 20 R⊙) up to 1 AU (x = 215 R⊙).
Impact of Magnetic Erosion on CME-IMF Reconnection Rate and CME Mass and Radius
Before studying in detail the kinematics of eroded and non-eroded CMEs, we give examples of the impact of erosion on pertinent CME characteristics. Figure 2 corresponds to a CME undergoing a total magnetic-flux reduction of 40% between x_0 and 1 AU. It shows that the CME-IMF reconnection rate is stronger near the Sun and falls off rapidly with distance. This is expected on the grounds of higher Alfvén speeds close to the Sun. In addition, magnetic erosion leads to a significant decrease in the CME mass, of the order of 40%, from x_0 to 1 AU. Figure 3 contains the radial evolution of the CME radius for the non-eroded case and three eroded cases with 20%, 40%, and 50% magnetic-flux reduction. Once more, the impact of erosion is obvious: the greater the magnetic-flux reduction, the smaller the increase of the CME radius. We find that the CME radius at 1 AU is smaller, in comparison to its value for the non-eroded case, by factors of ≈ 3.5, 4.5, and more than 5 for magnetic-flux reductions of 20%, 40%, and 50%, respectively.

Figure 4. Radial evolution of the CME bulk speed for a non-eroded CME (blue line) and three eroded CMEs with total magnetic-flux reductions of 20%, 40%, and 50% (red, yellow, and green lines) between x_0 (x = 20 R⊙) and 1 AU (x = 215 R⊙).
Kinematics of Eroded and Non-eroded CMEs
We now consider in detail the propagation of the four modeled CMEs. For this, we solved numerically the CME equation of motion (Equation 12) along with Equation 10, describing the evolution of the CME radius. Recall here that all four CMEs were identically initialized at x_0. Figures 4 and 5 contain the CME bulk speed and the transit time of the center of the structure, respectively, as functions of heliocentric distance for the four modeled CMEs. The non-eroded case exhibits lower speeds than the eroded ones, and as a result its CME center reaches 1 AU later, with the differences increasing with the magnitude of the erosion. The eroded CMEs' centers reach 1 AU at speeds ≈ 6 – 18 km s⁻¹ higher and 0.38 – 1.21 hours earlier with respect to the non-eroded case (Figure 5). By next adding the CME expansion speed to the CME bulk speed and the CME radius to the CME-center radial distance, we obtain the CME leading-edge speed and heliospheric position, respectively. Figures 6 and 7 contain the radial evolution of the CME leading-edge speed and transit time, respectively. Depending on the magnitude of the erosion, the CME leading edge reaches 1 AU at speeds ≈ 2 – 4.5 km s⁻¹ smaller and ≈ 0.62 – 1.79 hours later than in the corresponding non-eroded case.
From the above, and by juxtaposing Figures 4 and 5 with Figures 6 and 7, we can readily reach an interesting conclusion: magnetic erosion affects the CME center and leading edge differently, these being faster and slower, respectively, than those of the corresponding non-eroded CME. To understand this behavior, we have to investigate the varying impact of erosion on the CME bulk and expansion speeds.
To better understand the impact of magnetic erosion on the CME bulk speed, we plot in Figure 8 the radial evolution of the two deceleration terms appearing on the right-hand side of Equation 12 for a non-eroded CME and for the CME with maximum erosion, i.e. corresponding to a magnetic-flux reduction of 50% from x_0 to 1 AU. We first note that the deceleration term due to the drag force (blue lines) is significantly higher than the varying virtual-mass term (red lines). Their differences are more pronounced for distances of up to ≈ 100 R⊙, and beyond this distance both terms become progressively closer. Given that both terms are negative for the fast CMEs of our study, we conclude that the CME bulk speed is controlled in the same fashion by the drag and the varying virtual mass. We next note that for the eroded CME, the deceleration due to the drag force is higher than that corresponding to the non-eroded CME, while the opposite is true for the varying virtual mass. However, the sum of both deceleration terms (green lines) is smaller for the eroded CME up to a distance of ≈ 100 R⊙, and therefore the eroded CME experiences a smaller deceleration compared to its non-eroded counterpart. This leads to a higher CME bulk speed for the eroded CME, and therefore to a shorter CME-center transit time at 1 AU compared to that of the non-eroded CME.

Figure 5. Transit time of the CME center in the 20 – 215 R⊙ interval for a non-eroded CME (blue line) and three eroded CMEs with total magnetic-flux reductions of 20%, 40%, and 50% (red, yellow, and green lines) between x_0 (x = 20 R⊙) and 1 AU (x = 215 R⊙).

Figure 6. Radial evolution of the CME leading-edge speed for a non-eroded CME (blue line) and three eroded CMEs with total magnetic-flux reductions of 20%, 40%, and 50% (red, yellow, and green lines) between x_0 (x = 20 R⊙) and 1 AU (x = 215 R⊙).
On the other hand, since magnetic erosion essentially strips off concentric cells from CMEs (see, for example, the schematic of Figure 1), we could expect that an eroded CME would have a slower expansion speed compared to the corresponding non-eroded case (see also Figure 3). Since the CME leading-edge speed is essentially the sum of the CME bulk and expansion speeds, which as seen above exhibit opposite dependencies on magnetic erosion, i.e. they respectively increase and decrease with respect to the non-eroded case, the leading-edge behavior is controlled by the competition between the bulk and expansion speeds. For our studied cases, the expansion speed is influenced more strongly by erosion than the bulk speed, and therefore eroded CMEs have delayed transit times of their leading edges compared to the corresponding non-eroded CMEs.

Figure 7. Radial evolution of the CME leading-edge transit time for a non-eroded CME (blue line) and three eroded CMEs with total magnetic-flux reductions of 20%, 40%, and 50% (red, yellow, and green lines) between x_0 (x = 20 R⊙) and 1 AU (x = 215 R⊙).

Figure 8. The deceleration terms (absolute values) of the CME bulk motion corresponding to the drag force (first term on the right-hand side of Equation 12; blue lines) and the varying virtual mass (second term on the right-hand side of Equation 12; red lines) for a non-eroded (solid lines) and an eroded (dashed lines) CME with 50% magnetic-flux reduction. The green solid and dashed lines correspond to the total deceleration (i.e. the sum of both terms on the right-hand side of Equation 12) acting on the eroded and corresponding non-eroded CME, respectively.
Varying Initial CME Bulk Speed and Starting Distance of Erosion
In this section, we perform two parametric studies of eroded and non-eroded CMEs by varying two of the parameters used in the initialization of the modeled CMEs: the initial CME bulk speed and the starting distance of the application of erosion to CMEs. Figure 9 contains the CME leading-edge transit time to 1 AU as a function of the initial bulk speed. We readily note that the higher the initial bulk speed, the smaller the impact of erosion, i.e. the smaller the difference in transit time at 1 AU between an eroded and a non-eroded case. For the considered initial CME bulk speeds in the range 500 – 2000 km s⁻¹, the transit-time difference between an eroded and a non-eroded CME is ≈ 1.6 – 2.8 hours.
The decrease in the transit-time differences at 1 AU between an eroded and non-eroded CME with increasing initial bulk speed could be rather readily understood because we assumed drag-dominated CMEs. Magnetic erosion, when considered, is also incorporated into the drag prescription. Since drag essentially attempts to bring the structure's speed closer to the ambient solar-wind speed, a high initial bulk speed case is expected to be less affected by interactions with its environment compared to a lower speed case, and hence, whether erosion occurs or not, has less impact on its kinematics.
Given that the starting distance of the CME-IMF reconnection (i.e. the starting distance of the magnetic-erosion application) for an eroded CME is largely unknown, we assumed, for simplicity, that it coincided with the starting distance x_0 of the CME drag-dominated propagation. We next investigated the impact of the (common) starting distance for magnetic erosion and drag-based CME dynamics on the CME transit time at 1 AU. Figure 10 illustrates the transit time at 1 AU as a function of x_0 for an eroded and a non-eroded event.
For each starting distance, we calculated new α-values (Equation 18) so that the magnetic-flux reduction was 50%, i.e. the same as in the considered case with maximum erosion discussed in the previous sections. From Figure 10 we have that the transit-time difference between an eroded and a non-eroded CME decreases from ≈ 2.6 down to 1.8 hours for x_0 taking values in the interval 5 – 20 R⊙.
Figure 10
The transit time of a CME at 1 AU as a function of the starting distance of application of magnetic erosion and drag-based kinematics for two different values of magnetic-flux reduction: 0% (blue dots) and 50% (red dots). The transit-time difference between the eroded and the non-eroded case (ΔTT = TT_eroded − TT_non-eroded) is given next to each pair of dots.
Overall, the decrease in the transit-time differences between eroded and non-eroded CMEs with increasing x_0 could be attributed to the fact that the reconnection rate increases sharply when approaching the Sun (e.g. Figure 2). While the magnetic-flux reduction at 1 AU is the same for all studied cases, the non-identical reconnection rates applied to CMEs with different x_0 may account for these differences. In other words, an eroded CME with a smaller x_0 would experience high reconnection rates close to the Sun, and hence the impact of erosion on its kinematics is expected to be more pronounced than for a CME with a larger x_0 that "encounters" lower reconnection rates.
Discussion and Conclusions
In this work we developed a new drag-based model for the propagation of fast CMEs in the inner heliosphere that incorporates two significant additions over its predecessors: it includes the virtual mass via a variable-mass-system formulation, and it includes CME magnetic erosion due to CME-IMF reconnection. In our model, the magnetic-flux and mass reduction due to magnetic erosion is controlled by the reconnection rate between the CME and IMF magnetic fields, which removes outer-shell mass from the cylindrical CME perpendicularly to the propagation direction. Removing outer shells from the CME gives rise to a reduction of its magnetic flux as well, an essential attribute of eroded CMEs. Magnetic erosion influences the bulk and expansion speeds of the postulated cylindrical CME differently. While magnetic erosion increases the CME bulk speed with respect to a non-eroded case, it slows down the expansion speed at a higher rate. As a net result, the eroded CME's leading edge reaches 1 AU later than its non-eroded equivalent. This delay depends on the strength of the erosion.
Our results suggest that the addition of magnetic erosion into drag-based models has a significant impact on ToA predictions. Note here that our model treats only the kinematics of the magnetic ejecta associated with CMEs and not their corresponding shocks and sheaths. Comparisons with actual in-situ CME ToA observations therefore need to focus on the ejecta (i.e. the magnetic obstacle). For the small number of cases we studied, we found that in the presence of erosion, a CME could reach 1 AU with a delay of up to ≈ 3 hours with respect to the corresponding non-eroded case. Such delays represent a substantial fraction of the error in existing arrival-time predictions of 10 – 12 hours, as discussed, for example, by Vourlidas, Patsourakos, and Savani (2019). However, the majority of these predictions refer to shock/sheath ToAs rather than ToAs pertinent to the magnetic ejecta.
Drag-based models typically predict earlier CME arrivals at 1 AU. For instance, Dumbović et al. (2018) found, for a large sample of analyzed CMEs, that the drag model leads to a mean ToA error of −9.7 hours, with the minus sign corresponding to an earlier predicted CME arrival at 1 AU compared to the associated in-situ observations. Their earlier-than-observed predicted CME ToAs were particularly pronounced for fast CMEs (check Figure 6a in Dumbović et al., 2018). Although the observed in-situ arrival times of the Dumbović et al. (2018) study are relevant to the CME shock and not the magnetic ejecta per se, our findings of delayed CME arrivals for eroded cases nevertheless suggest that adding erosion to the prescription of CME propagation could improve the drag-based predictions of fast-CME arrivals at 1 AU.
Note here that the plasma and magnetic-field conditions upstream of the modeled CMEs of our study, as required for the calculation of the CME-IMF reconnection rate, correspond to a quiescent inner heliosphere, i.e. without any large-scale transients such as CMEs. However, CMEs compress and distort the sheath region, which is the actual CME-"background" interface. Therefore, updated calculations of reconnection rates in CMEs incorporating the properties of CME sheaths would add further realism to our model. Recently, Hosteaux, Chané, and Poedts (2019) performed 2.5D (axisymmetric) MHD simulations of CMEs, modeled as magnetized and dense plasma blobs, and investigated the role of the polarity of the internal magnetic field in their evolution. They noticed that for the same initial conditions (e.g. CME speed and density, solar-wind density, etc.), inverse CMEs (i.e. with the same magnetic-field polarity as the IMF in front of them) reach 1 AU faster than normal CMEs (i.e. with opposite magnetic-field polarity compared to the IMF in front of them). Therefore, when it comes to magnetic reconnection at the front of the magnetic structure, inverse (normal) polarity CMEs correspond to non-eroded (eroded) CMEs.
For fast CMEs, with speeds in the inner corona (2 R⊙) of 800 and 1200 km s⁻¹, Hosteaux, Chané, and Poedts (2019) concluded that the magnetic ejecta of eroded CMEs could be delayed by ≈ 0.5 – 1.5 hours with respect to a non-eroded CME, which is consistent with our results. The follow-up study of Hosteaux, Chané, and Poedts (2021) found that for CMEs undergoing erosion, both the mass and the magnetic flux decrease with distance, again along the lines of our model. The delayed ToAs of the magnetic ejecta of eroded CMEs in the Hosteaux, Chané, and Poedts (2019) simulations were attributed to magnetic reconnection occurring at the front of the CME, which strips off magnetic shells from it and leads, in the CME frame, to a recession of its front. This is similar to the erosion-related decrease in the CME radius expansion of our work via Equation 18. The apparent similarities between our much-simplified model and the fully fledged MHD simulations of Hosteaux, Chané, and Poedts (2019) are encouraging and prompt further analysis.
Based on our core hypothesis that magnetic erosion peels off the outer layers of CMEs, it is reasonable to expect that it could diminish the tension force exerted by the azimuthal magnetic field on CMEs. This reduction in tension could, in turn, lead to a decrease in the confinement of the CME internal plasma and magnetic field, resulting in an over-expansion (compared to a non-eroded case) of its cross-sectional area, which could affect its kinematics and ToA at 1 AU. This phenomenon is most likely to occur in regions closer to the Sun, where magnetic reconnection is facilitated by the higher Alfvén speeds. While erosion could potentially introduce an even greater imbalance between internal pressure and magnetic tension and consequently affect the transit times of CMEs at 1 AU, the MHD simulations presented in Hosteaux, Chané, and Poedts (2019, 2021) suggest that the postulated erosion-related CME over-expansion does not have a major impact on the kinematics of eroded CMEs, which, as discussed in the previous paragraph, reach 1 AU later than their non-eroded counterparts.
Moreover, Démoulin and Dasso (2009) studied the causes of CME expansion with analytical solutions. They modeled the evolution of cylindrical flux ropes as a series of force-free field states, both under ideal MHD and under minimization of the magnetic energy at conserved magnetic helicity, the latter reproducing a situation in which the CME undergoes reconnection. Although they concluded that the ambient pressure has the most significant impact on the expansion, the various cases deviated slightly from each other due to the different magnetic-field configurations and evolution, with the reconnection case exhibiting a shallower radius-expansion rate in the inner heliosphere (see Figure 4 of Démoulin and Dasso, 2009), in qualitative agreement with our results.
Magnetic erosion could also influence CME kinematics by modifying the exerted Lorentz force, particularly closer to the Sun. The effect of erosion on the acceleration of a CME is not straightforward, and it depends on various factors. However, stripping away the outer azimuthal magnetic field of a CME due to erosion could impact its early kinematics. The magnetic field plays a central role in driving the acceleration of a CME through the Lorentz force. Erosion-induced weakening of the magnetic field of CMEs could potentially alter their acceleration, with a more pronounced effect expected in regions closer to the Sun. However, investigating the impact of erosion on the Lorentz force in the proximity of the Sun is outside the scope of this study, which focuses on the drag-based kinematics of CMEs. This is because drag forces increasingly dominate the kinematics of fast CMEs beyond 15 solar radii, as reported by Sachdeva et al. (2015). Thus, further research is necessary to determine the precise effect of erosion on the acceleration of CMEs closer to the Sun by exploring the impact of magnetic erosion on the Lorentz force.
Although the core of this work was the incorporation of magnetic erosion into drag-based CME kinematics, our results have broader implications for the impact of magnetic erosion on CMEs. In Section 2.3, we laid down a general reconnection-based framework to determine how magnetic erosion influences the CME radius, and accordingly the CME mass, by considering Equations 14 and 16 – 18. Therefore, our prescription of the CME mass evolution under the influence of erosion is not directly linked to the drag-based CME kinematics (or any other CME propagation model), and hence it represents a generic prediction of the effect of magnetic erosion on CME mass, depending only on the specifics of the employed reconnection model. The significant decrease of the mass of the eroded CME of Figure 2 from 25 R⊙ to 1 AU is strongest within ≈ 25 – 75 R⊙. This interval is covered almost exclusively by the heliospheric imagers of STEREO which, however, lack the sensitivity required to fully distinguish CMEs from their sheaths. This prevents detailed CME mass observations from being compared against our predictions. Placing the starting distance of the magnetic-erosion application, i.e. x_0, closer to the Sun shifts the most noteworthy erosion-related CME mass depletion deeper into the corona, namely into the field of view (FoV) of the LASCO C2 and C3 coronagraphs, where CME-mass observations abound. LASCO observations of the mass evolution of more than 10,000 events found that, on average, CMEs reach a constant mass above around 10 R⊙ (Vourlidas et al., 2010). Note here that these observations are not directly comparable with our model predictions, since Vourlidas et al. (2010) consider both the upper part and the legs of CMEs, while our study considers only the CME upper part. On the other hand, measurements of the mass evolution of the upper parts (fronts) of a small sample of 13 CMEs within the LASCO FoV found no evidence of pile-up, contrary to the general expectation (Howard and Vourlidas, 2018). The authors attributed the lack of pile-up to the sensitivity of the observations, but our modeling here suggests that erosion could be a factor. It is therefore unclear whether mass measurements are at odds with magnetic erosion initiated within the LASCO-C2 and LASCO-C3 FoV.
The focus of our study was on fast CMEs, i.e. CMEs with speeds above that of the ambient solar wind. For slow CMEs, an additional complication arises from the fact that both their front and rear parts could exhibit mass pile-up. For instance, for particular values of the bulk and expansion speeds of a slow CME, it is possible that its front (rear) part could be faster (slower) than the ambient solar wind, and therefore pile-up could occur at both parts. On the other hand, for fast CMEs, both front and rear parts are expected to be faster than the ambient solar wind, and therefore pile-up occurs only at the front. Thus, more detailed studies of the mass/density evolution of CME fronts are needed to understand the impact of magnetic erosion on the kinematics of slow CMEs. In addition, our model may be extended to non-cylindrical geometries (e.g. spheromak, 3D flux rope).
Deducing power laws for the physical properties of eroded and non-eroded CMEs represents a crucial task for the future improvement of our model, for which data from the Parker Solar Probe and Solar Orbiter missions, covering the corona and the inner heliosphere both remotely and in situ, could be utilized. In addition, we used power laws of CME properties based on Helios observations, which are valid from 0.3 AU outwards. It is therefore essential to extend such power laws "deeper" into the corona with Parker Solar Probe observations.
The most obvious extension of our model is to apply it to actual CME events undergoing erosion. Given that our study did not aim to model specific CMEs, we assumed various starting distances for the application of drag and erosion to the CME. For real events, however, the starting distance could be derived using the methodology of Sachdeva et al. (2015), which allows the determination of the distance at which the drag force dominates over the Lorentz force. Finally, given that our model depends on several, mainly empirically deduced, parameters (the properties of the studied CME, including the amount of magnetic-flux erosion, as well as the properties of the background solar wind and IMF), introducing probability distributions for its input parameters instead of single values will allow for an estimation of the ToA forecast uncertainty (e.g. Napoletano et al., 2018), as sketched below.
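As a concrete illustration of such a probabilistic approach, the minimal sketch below draws the drag parameter and ambient solar-wind speed from assumed, purely illustrative priors (not values fitted to any event), integrates the standard drag-based equation of motion a = -γ(v - w)|v - w| to 1 AU, and reads the ToA uncertainty off the ensemble spread:

```python
import numpy as np

# Illustrative probabilistic drag-based-model (DBM) sketch: sample the drag
# parameter gamma and the ambient solar-wind speed w from assumed priors,
# integrate a = -gamma * (v - w) * |v - w| from the starting distance to
# 1 AU, and estimate the time-of-arrival (ToA) uncertainty from the ensemble.
AU_KM = 1.496e8           # 1 AU in km
RSUN_KM = 6.957e5         # solar radius in km

def time_of_arrival(v0, r0, gamma, w, dt=300.0):
    """Integrate the DBM from r0 [km] to 1 AU; returns ToA in hours."""
    r, v, t = r0, v0, 0.0
    while r < AU_KM:
        a = -gamma * (v - w) * abs(v - w)   # km/s^2
        v += a * dt
        r += v * dt
        t += dt
    return t / 3600.0

rng = np.random.default_rng(42)
n = 1000
gammas = rng.lognormal(np.log(0.2e-7), 0.5, n)   # km^-1, assumed prior
winds = rng.normal(400.0, 50.0, n)               # km/s, assumed prior

toas = np.array([time_of_arrival(1000.0, 20 * RSUN_KM, g, w)
                 for g, w in zip(gammas, winds)])
print(f"ToA: {toas.mean():.1f} +/- {toas.std():.1f} h (1-sigma)")
```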
As mentioned above, additional comparisons between our model and 3D MHD simulations are needed to shed more light on the complicated processes (i.e. magnetic erosion and pile-up) that underpin the behavior of our model.
"Physics"
] |
The GPI-Anchored Protein Thy-1/CD90 Promotes Wound Healing upon Injury to the Skin by Enhancing Skin Perfusion
Wound healing is a highly regulated multi-step process that involves a plethora of signals. Blood perfusion is crucial in wound healing, and abnormalities in the formation of new blood vessels define the outcome of the wound healing process. Thy-1 has been implicated in angiogenesis, and silencing of the Thy-1 gene retards the wound healing process. However, the role of Thy-1 in blood perfusion during wound closure remains unclear. We proposed that Thy-1 regulates vascular perfusion, affecting the healing rate in mouse skin. We analyzed the time of recovery, blood perfusion using Laser Speckle Contrast Imaging, and tissue morphology from images acquired with a Nanozoomer tissue scanner. The latter was assessed in tissue samples taken with a biopsy punch on several days during the wound healing process. Results obtained with the Thy-1 knockout (Thy-1−/−) mice were compared with those from control mice. At day seven, Thy-1−/− mice showed delayed re-epithelialization, an increased micro- to macro-circulation ratio, and lower blood perfusion in the wound area. In addition, skin morphology displayed a flatter epidermis, fewer ridges, and almost no stratum granulosum or corneum, while the dermis was thicker, showing more fibroblasts and fewer lymphocytes. Our results suggest a critical role for Thy-1 in wound healing, particularly in vascular dynamics.
Introduction
Lack of healing is a significant problem in skin wounds because it contributes to chronic ulcers, inflammation, and persistent infections, among other pathological manifestations [1]. Wound healing involves multiple processes, including extracellular matrix remodeling, synthesis of pro-inflammatory mediators, and the formation of new vessels from preexisting vessels (or angiogenesis) [2][3][4].
The formation of new blood vessels promotes proper tissue perfusion and wound healing [4]. Therefore, impaired vascular function, including alterations in the angiogenesis process or blood perfusion of the tissue, compromises dermal wound healing outcomes [4,5]. Analysis of dermal blood perfusion represents a challenge due to the involvement of small blood vessels (i.e., microcirculation) and their high dynamism. Several invasive and non-invasive approaches have been described in the literature to analyze dermal blood perfusion, including Laser Doppler blood flowmetry (LDF) [6]. Laser speckle contrast imaging (LSCI) enables non-contact, real-time, and non-invasive monitoring of changes in the blood perfusion of the skin [7,8]. This technique involves imaging time-integrated speckle patterns generated by low-power laser irradiation, captured by a high spatiotemporal resolution charge-coupled device (CCD) camera. In addition, depending on the experimental setting, the LSCI technique generates a large amount of data that requires sophisticated analysis. The Fourier transform (FT) can be used to decompose skin blood perfusion signals in the spectral domain, which may reveal various physiological rhythms associated with blood flow control mechanisms [9]. For instance, wavelet analysis of LDF signals from the human forearm or feet has revealed five characteristic frequency bands, corresponding to heartbeat dynamics, rhythmicity of breath, myogenic and neurogenic rhythmic activity of vessels, and metabolic activity [9,10]. Analysis of the relative contribution of these wavelets may shed light on the underlying vascular function and reactivity.
Thy-1(CD90) is a glycosyl phosphatidyl inositol (GPI)-anchored protein that resides in lipid rafts. It is essential in cell migration and is strongly upregulated in endothelial cells during pro-inflammatory cytokine-induced angiogenesis [11][12][13][14]. These observations have indicated that Thy-1 might be relevant to the process of wound healing following lesions to the skin. Indeed, neutrophils cross the endothelium during the inflammatory phase and target the injured area in a Thy-1-dependent manner [14]. Furthermore, the adhesion of neutrophils to activated endothelial cells with high Thy-1 expression promotes the binding of these cells [15] through Thy-1/Mac-1(CD11b-CD18; integrin αMβ2) interaction [16].
In recent years, increased emphasis has been granted to studying the role of Thy-1 and its downstream signaling pathways in wound healing [17]. Specifically, Lee and co-workers [18] investigated these events using Thy-1 knockdown in a mouse model of skin wound healing and showed that wound repair is retarded when Thy-1 levels are decreased. So far, only a few studies have suggested a clear association between Thy-1 expression and angiogenesis [18,19]; these have reported that Thy-1 expression is crucial in the process of new vessel formation. We recently reviewed the role of Thy-1 and its receptors, integrins and Syndecan-4, in wound healing [17]. From this analysis of the literature, it became clear that the function of Thy-1 in the angiogenic process remains poorly understood.
This study explored the role of Thy-1 in a pre-clinical wound healing model using Thy-1 knockout (Thy-1 −/− ) mice and evaluated the timeline of wound closure, microvessel number in the wounded area, blood perfusion, and tissue remodeling. We also analyzed the relative contribution of three wavelet components of blood perfusion (i.e., metabolic, neurogenic, and myogenic wavelets) during the wound healing process. We provide evidence showing that Thy-1 regulates vascular perfusion, thereby affecting the healing rate in mouse skin.
Lack of Thy-1 Expression Delays Wound Healing
Previous studies have indicated that the lack of Thy-1 in a localized area of a knockdown mouse model delays the wound healing process, suggesting a role for Thy-1 in wound repair [18]. We confirmed these findings with a proof-of-concept experiment using the Thy-1 knockout (Thy-1 −/− ) mouse model. First, we challenged wild-type (WT) and Thy-1 −/− mice with a wound biopsy punch of 2 mm in the head skin between the ears; the day of the biopsy punch was called T0. The wounded area was measured 4 and 7 days after the T0 ( Figure 1A). We found that the wound was almost 100% closed in WT mice on day 7, while Thy-1 −/− mice showed a significant delay in the wound healing process compared with the WT group on days 4 and 7 ( Figure 1B,C).
Skin Morphology during Wound Healing Is Altered in Mice Lacking Thy-1
Skin morphology exhibits layers that protect our body from foreign threats, where the epidermis, dermis, and hypodermis form part of the human body's largest organ [20]. Regarding tissue morphology, even though the wound was nearly closed after seven days in the WT group, the skin tissue was still undergoing remodeling (compare with normal skin, Figure 1B, see arrows). Furthermore, the tissue of the wound area in both mouse groups was analyzed by extracting the skin region with a bigger biopsy punch (4 mm) on days seven and fourteen ( Figure 2A). The tissue was stained with hematoxylin-eosin to perform a morphological analysis ( Figure 2B,C). Additionally, the thicknesses of the dermis and epidermis layers were measured in both groups ( Figure 2D-G). [Figure 2 caption, partial: (4) no stratum granulosum or stratum corneum present; (5) extravascular red blood cells; (e-h) stratified squamous epithelium restored to its original size by day fourteen (6 and 7 = complete re-epithelization); no major histological differences were observed between the two groups. Scale bar = 50 µm. (D-G) Estimation of the epidermis (D,F) and dermis (E,G) thickness in the wounded area on day seven or day fourteen after injury in WT or Thy-1 −/− mice, respectively. * p-value < 0.05.]
For histological observation of the injured and uninjured tissue samples, sections were stained with hematoxylin-eosin. The healthy mouse skin was organized into the epidermis, with epithelial cells or keratinocytes in various stages of differentiation, and the dermis, composed of extracellular matrix, fibroblasts, immune cells, and vascular elements, as well as a variety of hair follicles, sebaceous glands, and sweat glands ( Figure 2B). After an injury, the skin of the Thy-1 −/− mice exhibited a thinner epidermis than that of the WT mice. In Thy-1 −/− mice, seven days after wounding, the damaged area consisted of a flat, multilayered epithelium, and the cells in the stratum spinosum were arranged in a disorderly manner ( Figure 2C). Moreover, compared to WT mice, not all the epidermal layers were present. In particular, the granular layer and the stratum corneum were absent, and the basement membrane was discontinuous ( Figure 2C). In addition, elements characteristic of the inflammatory phase were observed, with an increased number of fibroblast-like cells and fewer lymphocytes in the Thy-1 −/− model, likely indicating inflammatory and proliferative phases more prolonged than in WT mice. Extravascular red blood cells were also visualized ( Figure 2C). The proliferative phase of wound repair was more prolonged in Thy-1 −/− than in WT mice. After 14 days of wound healing, the keratinized flat multilayered epithelium had remodeled to its original size in tissue sections from both animal models. In addition, we observed the first formations of hair follicles. No major histological differences were distinguished between the two groups at this stage of repair.
Quantitative analysis indicated a pronounced difference in re-epithelialization speed, and that wound closure was faster in WT than in Thy-1 −/− mice ( Figure 1C). Quantification of the dermal and epidermal layers indicated that 7 days after injury, the epidermal layer exhibited similar thickness (around 340 µm) in both WT and Thy-1 −/− mice ( Figure 2D). At the same time point, the dermis of Thy-1 −/− mice was, on average, slightly thicker than that of the WT animals. However, the differences were not statistically significant ( Figure 2E). On the other hand, after fourteen days, the epidermis was noticeably thicker in WT, compared to Thy-1 −/− mice ( Figure 2F); however, again, given the dispersion of the data in the WT model, the difference was not statistically significant. Finally, the dermal layer showed a significantly increased thickness in Thy-1 −/− mice, compared to the WT mice, fourteen days after injury ( Figure 2G). Thus, our data indicated that the absence of Thy-1 reduces the speed of normal wound closure and changes the remodeling of the wounded tissue.
Micro and Macrovascular Vessels Are Regulated by Thy-1
Vascular irrigation is essential to restore skin function after tissue damage. Among blood vessels, we can differentiate macro and microvessels, which are defined depending on the size of the lumen. To determine the proportion and size of micro and macrovessels in WT and Thy-1 −/− mice, we analyzed histological samples 7-and 14-days post-injury ( Figure 3). An increased percentage of microvessels was observed seven days after injury in Thy-1 −/− mice, compared with WT mice ( Figure 3A).
The mean area of the macrovessel ( Figure 3B) and microvessel ( Figure 3C) lumen was similar in WT versus Thy-1 −/− mice. We also found a similar ratio ( Figure 3D) and size of lumen in macro ( Figure 3E) and micro ( Figure 3F) vessels at day fourteen for both groups. Therefore, more microvessels are observed in the wounded area of Thy-1 −/− mice, compared with WT mice seven days post-injury.
Absence of Thy-1 Decreases Blood Perfusion in Wound Healing
Blood perfusion after an injury is crucial for successful wound healing and, while the blood clot is necessary to stop the bleeding, the bloodstream delivers nutrients, immune cells, and oxygen to the damaged area. Since Thy-1 −/− mice showed an increased number of microvessels in the wounded area on day seven after injury and considering that Thy-1 has been previously implicated in angiogenesis [19], we next wondered whether those blood vessels were functional. To this end, we performed an LSCI analysis in the wound area and the surrounding tissue in WT and Thy-1 −/− mice. Representative images of blood perfusion data from basal (prior to injury) and time zero (T0: immediate post-injury) are shown for both groups ( Figure 4A). The wounded area was observed as an intense red circle, which, according to the color code bar, represents an area with high blood perfusion ( Figure 4A). Perfusion quantification was performed prior (basal) and post-injury (T0) in the wounded area (inside the 2 mm diameter area) and the peripheral area (zone surrounding the wounded area).
Basally, Thy-1 −/− mice showed increased perfusion compared to WT mice when analyzing head skin areas, both directly where the punch had been performed ( Figure 4B) and in the peripheral area ( Figure 4C). However, at T0, Thy-1 −/− mice exhibited similar blood perfusion in the wound area ( Figure 4D), and significantly higher perfusion in the peripheral area ( Figure 4E) compared to WT mice.
Because differences in the basal levels of blood perfusion were detected between both groups ( Figure 4B,C), we decided to normalize the data to the basal level in each animal per group to compare the values obtained on the indicated days after injury. Increased perfusion levels were observed in the wounded area on day one after injury in both groups of mice. However, the increase in WT mice was higher than in Thy-1 −/− mice ( ## p < 0.01). This elevated perfusion was dynamic in WT mice since it remained significantly higher until day 10, compared to its respective basal condition, despite showing a continued decline until day 14. Notably, the increase in perfusion was significantly higher in the WT group, compared with the Thy-1 −/− group on days 1, 7, 10 and 14 after injury ( Figure 5A). However, perfusion in the wounded area declined more in Thy-1 −/− than in WT mice at day 14 ( Figure 5A). In contrast, blood perfusion in the peripheral area showed similar, unaltered behavior in both groups ( Figure 5B), except on day 14 where perfusion was significantly lower in both models, compared to their respective basal conditions ( Figure 5B). The perfusion (wound and peripheral) signal data were transformed into the wavelet components using Fourier transformation to determine the participation of the metabolic ( Figure 5C,F), myogenic ( Figure 5D,G), and neurogenic components ( Figure 5E,H) of blood perfusion in both WT and Thy-1 −/− mice at basal, T0, day four, day seven, and day fourteen after injury. A significant increase in the myogenic and neurogenic components of the peripheral perfusion signal was found in Thy-1 −/− mice immediately after skin injury (T0), compared with WT mice. Significant differences were also found in the metabolic and neurogenic components in both wounded and peripheral areas on day seven after injury, in which the Thy-1 −/− group exhibited higher participation of these components than WT mice. Furthermore, we also observed significant differences on days four and fourteen in the Thy-1 −/− group for the myogenic and neurogenic components, compared to the WT mice in the peripheral area. Together, these data indicate that Thy-1 is important for skin perfusion dynamics during wound healing. Potential underlying mechanisms include the metabolic and neurogenic activity associated with the metabolic regulation of blood flow [9,10].
Discussion
Wound healing is a complex multistage process involving many molecular interactions and signaling pathways that permit successful repair of the skin. Thy-1 has been considered a protein of interest in this process because its upregulation during the inflammatory stage has been associated with angiogenesis [16,17]. Here, we demonstrated that the lack of Thy-1 in mice retards the wound healing process and impairs re-epithelialization associated with decreased blood perfusion during the healing process. Relevant in this context is the participation of metabolic, myogenic, and neurogenic components of blood perfusion. Indeed, wavelet analysis suggested that metabolic and neurogenic components are the major contributors to the impaired blood perfusion observed during the healing process in Thy-1 knockout (Thy-1 −/− ) mice. Moreover, we observed increased microvessel-to-macrovessel ratios in damaged tissues. In addition, we observed altered skin morphology with disorganized epidermal layers and even the absence of granular and stratum corneum layers in the wounded area. Altogether, these results highlight the relevance of Thy-1 in regulating blood vessel formation and blood flow, revealing this protein as a new wound therapy target.
Thy-1 levels are low in healthy skin; however, levels can increase more than 20-fold in endothelial cells 1-3 days after injury [21]. As reported, the Thy-1 promoter is active at this stage and remains active for up to three weeks, until it starts declining [22]. Upregulated Thy-1 promotes the migration of inflammatory cells; for example, neutrophils, the first cells to arrive in the wounded area, migrate following a chemokine gradient generated by activated platelets, but can also undergo transendothelial migration promoted by the αMβ2 integrin-Thy-1 interaction [16]. A subpopulation of fibroblasts also displays elevated expression of Thy-1 in injured skin, which potentially differentiates into myofibroblasts inducing tissue contraction [23]. Therefore, it is not surprising that organisms lacking Thy-1 exhibit an altered wound-healing process.
Even though the finding that decreased Thy-1 levels delay the wound healing process has been previously reported in mice [18], those studies used an in vivo model in which Thy-1 expression was blocked with a siRNA in the wounded area. In the present study, similar outcomes were described in an organism that completely lacks this molecule, corroborating that Thy-1 presence is required in the wound healing process. Additionally, our results demonstrated that on day 14, the wound closes similarly in WT and Thy-1 −/− mice, suggesting that Thy-1 is dispensable for the process in the long run. However, delayed wound healing in mice lacking Thy-1, might have severe consequences for the individual since infections occur during the initial stages of the healing process [24]. Moreover, neutrophils help maintain the damaged area free of pathogens, and delayed arrival of these cells due to Thy-1 absence would favor infection of the area, which could then favor a chronic non-healing wound process.
The re-epithelization speed was slower in the Thy-1 −/− wounded tissue compared with WT. These findings agree with the results reported by Lee et al. 2013 [18], where Thy-1 siRNA-treated wounds exhibited abnormally delayed re-epithelialization and an altered epidermal structure in the wound area. As in our findings, abnormal re-epithelization was present seven days after injury. Here, we observed the absence of the stratum granulosum and stratum corneum ( Figure 2C(c,d)) in the Thy-1 −/− mouse group. In wounded Thy-1 −/− mice, the inflammatory phase dependent on blood perfusion was altered. This alteration might explain deficient re-epithelialization, since this process depends on tissue perfusion and oxygenation [25]. On the other hand, in a recent report, Shemesh, Fuchs, and co-workers noticed a difference in the epidermis of mice where Thy-1+ stem cells had been ablated [26]. In Figure 2B, the unwounded tissue of both mouse groups shows no major differences in the thickness of the epithelium; therefore, these Thy-1+ stem cells, described as cells with a non-redundant function in the epidermis, are likely not essential for normal epithelialization. Since the epidermal compartment of the injured skin differs between WT and Thy-1 −/− mice ( Figure 2C), it is possible that the delayed wound closure detected in Thy-1 −/− mice could involve defects within the epidermal compartment itself. This possibility would require further experimentation.
A critical role in vascular maturation has been ascribed to pericytes. Pericytes subjected to endothelial cell-derived factors, such as PDGF (DD and BB), endothelin-1, TGF-β, and HB-EGF, assemble around endothelial cells for tube formation [27]. Moreover, the association of pericytes and collagen type IV promotes the maturation of microvessels [28], and the promotion of endothelial cell junction and ECM deposition to the vascular basement membrane are vital to maintaining vascular stability and homeostasis [29].
Thy-1 presence or absence in pericytes constitutes an interesting avenue of research. Park and co-workers reported that the lack of Thy-1 in brain pericytes increases the ECM protein deposition in the basal membrane, compared to those not lacking Thy-1. However, the stimulation of Thy-1-positive pericytes with TGF-β1 greatly enhances fibrotic activity, suggesting that perivascular Thy-1-positive pericytes could be involved in fibrotic scar formation [30]. In injured skin, Thy-1-positive fibroblasts are relevant due to their role in tissue contraction [23]. However, in the present study, we provide new evidence regarding impaired blood perfusion, which may implicate alterations in endothelial cells and angiogenesis. These alterations include high basal perfusion in the Thy-1 −/− mouse group, associated with reduced perfusion (compared to WT mice) during the healing process, with an increased number of blood vessels (i.e., microcirculation) at day seven after injury. Additionally, a significant drop in blood perfusion in the wounded area was observed for both groups of mice, but the decline was even higher in Thy-1 −/− than in WT mice at day 14 ( Figure 5A). Although we did not analyze the underlying mechanisms, the results are indicative of increased vascular remodeling in Thy-1 −/− mice. Therefore, angiogenesis and vascular remodeling processes may be altered in the Thy-1 −/− group. Thy-1 could also have a role in microvessel maturation and ECM deposition to form the granulation tissue, a possibility that would explain the morphological differences detected between WT and Thy-1 −/− mouse skin wounds.
In agreement with this last possibility, we also found changes in the dynamics with which different components of the blood perfusion participated, either basally or during the healing process. A possible interpretation of these results is that although basal levels of blood perfusion are higher in Thy-1 −/− mice, the lack of Thy-1 decreases the capacity to fully compensate for the required blood perfusion over fourteen days of wound healing. This reduced response in blood perfusion in the Thy-1 −/− group was associated with more significant changes in the wavelet components in the peripheral area, rather than the wounded area, and was mainly observed in the metabolic and neurogenic components. These changes were to be expected, since injury generates a sensitive (i.e., neurogenic) and reddened (i.e., vasodilation or metabolic) area in the periphery of the wound. Interestingly, the dynamics of the participation of wavelet components suggest that the reduced capacity to fully compensate for the required blood perfusion on day seven observed in the Thy-1 −/− group is most likely associated with an enhanced metabolic and neurogenic response in both the peripheral and wounded area. Furthermore, participation of the neurogenic component appears more persistent since it remains enhanced up to day fourteen in the peripheral area. This analysis provides the groundwork for future research evaluating the formation of new vessels in Thy-1 −/− mice.
The role of Thy-1 presence or absence in angiogenesis has been reported by Wen et al., who described that Thy-1 could promote healing in an early stage of wound closure but delay the process if overexpressed or absent [31]. These data correlate with our findings, where after seven days (an early stage of wound healing), the lack of Thy-1 impaired the process, while at a later stage (fourteen days), the differences between groups were not significant. Our study thus provides further insight into the relevance of Thy-1 in regulating wound healing and blood perfusion dynamics, as well as the angiogenic process. However, further research is required to elucidate the molecular mechanisms explaining the role of Thy-1 in these processes.
An exciting and novel finding of the present study was that blood perfusion decreased in the wound area of Thy-1 −/− mice. This finding could explain the delayed wound healing process and highlights the relevance of Thy-1 in the promotion of wound closure. The lower levels of blood perfusion in wounds observed in Thy-1 −/− mice, compared with WT mice, could be related to a thicker dermis, where the angiogenesis process occurs, together with a higher microvessel to macrovessel ratio detected in mice lacking Thy-1. In this context, chronic wounds, such as those found in patients with type 2 diabetes or obese individuals [32,33], could perhaps be stimulated to heal faster by treating with soluble Thy-1. However, this is a possibility that is currently being investigated.
During the preparation of this manuscript, a paper stating that Thy-1 absence promotes, rather than delays, wound healing was published by Sedov et al. in Nat Cell Biol [34]. These authors reported that the wound healing process and hair follicle regeneration were accelerated in mice lacking Thy-1 (Thy-1 −/− ) due to proliferation dependent on YAP signaling. Of note, this is the first article reporting that Thy-1 deficiency promotes wound healing, while other authors have described delayed wound closure in mice with low levels of Thy-1 in the injury site [18]. Importantly, we corroborate Lee's findings using Thy-1 −/− mice, which is the same model used in Sedov's paper. These opposing results could be explained by differences in the inflicted wound area (1 cm² versus the 2 mm diameter used in the present study, or the 4-6 mm diameter used in Lee's paper [18]) and the localization of the wound (the dorsum versus the region between the ears). The latter is relevant because mice have a thin subcutaneous muscle layer that makes their skin heal mainly by the initial contraction of the wound area [35]. The contraction would differ if the lesion were more extensive and if it were located where the skin is looser (mid-dorsum) or tighter (between the ears). Therefore, cells are affected by different mechanical forces, which are known to affect the responses of the Thy-1/integrin/syndecan-4 complex [17]. Unfortunately, the paper by Sedov et al. did not discuss the differences between their findings and those previously reported by Lee and co-workers [18], which demonstrated the opposite effect of Thy-1 deficiency. These last results were confirmed in the present report.
Animals
Male C57BL/6 WT (n = 12) and Thy-1 −/− (n = 12) mice, 6-8 months old and weighing 20-25 g, were used. The original mouse colony was kindly donated by Dr. James Hagood from the University of North Carolina at Chapel Hill, NC, USA (received from Kevin Kelley, Mt. Sinai School of Medicine) and kept in the Faculty of Medicine's facilities according to the bioethics committee protocol, CBA1200-22540-MED-UCH.
In vivo experiments were performed under anesthesia using isoflurane (USP, Baxter, Deerfield, IL, USA) as the inhalation anesthetic at 3% in a mixture with oxygen. In addition, 2 mg/kg ketoprofen (Rhodia Merieux, Paulinia, Brazil) was applied for pain management after injury.
In Vivo Wound Healing Assay
The in vivo wound-healing assay was performed as previously described by our laboratory [36,37], with slight modifications. Briefly, WT and Thy-1 −/− male mice were maintained separately in individual cages. Animals were anesthetized using 3% isoflurane mixed with oxygen and maintained with 1.5% isoflurane during the procedure. Then, the dorsal portion of the head was shaved to generate a full-thickness excisional wound between the ears, using a 2 mm biopsy punch. The depth of the wound was approximately 0.5 mm. Wound closure was measured with a vernier caliper and photographically recorded every day, placing a metric label in each photo to normalize the measurement. The measurements were performed until day fourteen, at a controlled temperature (22 °C). The percentage of wound closure was calculated as wound closure (%) = 100 × (A0 − An)/A0, where A0 represents the wound area at time 0 and An the wound area after "n" days of follow-up. We defined the wound area considering the boundary edges of the wound. We also defined a peripheral area, as the zone with erythema surrounding the wound, which extended about 5 mm beyond the edges of the wound.
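For clarity, a minimal script implementing this closure formula (with invented areas, not measured data) might look as follows:

```python
import math

# Wound closure as described above: closure(%) = 100 * (A0 - An) / A0,
# where A0 is the wound area at T0 and An the area on day n.
def wound_closure_percent(a0: float, an: float) -> float:
    return 100.0 * (a0 - an) / a0

# Invented example values: a 2 mm punch gives A0 = pi * (1 mm)^2,
# and the wound has shrunk to 0.3 mm^2 by day n.
a0 = math.pi * 1.0 ** 2                                  # mm^2
print(f"{wound_closure_percent(a0, 0.3):.1f}% closed")   # ~90.5%
```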
Speckle Laser Perfusion Analysis
Tissue perfusion analysis was performed using the Pericam ® PSI-HR system (Perimed Ltd., Stockholm, Sweden), as previously reported by our group [38,39], with some minor changes. The instrument uses an invisible near-infra-red laser to measure blood perfusion. A diffuser spreads the laser beam over the region of interest producing a speckle pattern, which is monitored by a CCD camera. Blood perfusion is calculated by analyzing the variations in the speckle pattern, which generates a color map with a chromatic scale ranging from blue (lowest blood flow) to red (highest blood flow). The analysis matrix included an area of 64 × 64 points. Blood flow was recorded in the dorsal portion of the anesthetized animal's head (Isoflurane at 3% for 5 min). Both WT and Thy-1 −/− mice were analyzed side-by-side. Briefly, mice had their dorsal cranial area shaved one day before initiating the tissue perfusion analysis. Such prior depilation avoids the detection of changes in blood perfusion due to skin irritation. In addition, blood perfusion was measured without any pharmacological or physical stimulation. Blood flow was recorded for 5 min in the wound area and normalized to the perfusion in an area located approximately 5 mm away from the wound in the same animal (regions of interest, ROIs). Two different blinded observers analyzed the images to consider inter-observer variability.
Wavelet Analysis
Skin perfusion data were used for the wavelet spectral analysis. Each experimental condition was analyzed every 2 min. By applying the Discrete Fourier Transform (DFT) function in Excel (Microsoft, Redmond, WA, USA), we transformed the data from the time domain to the frequency domain, calculating the amplitude-frequency spectrum of the waveform. This analysis generates a signal intensity distribution profile over a range of frequencies. The perfusion data were sampled every 0.2 s, yielding a frequency spectrum between 0.0095 and 2.5 Hz. This range allowed us to analyze the contribution of each regulatory component in the perfused zone of interest. Each component has a specific frequency band: metabolic (0.0095-0.016 Hz), neurogenic (0.02-0.06 Hz), myogenic (0.06-0.15 Hz), respiratory (0.15-0.4 Hz), and cardiac (0.6-2 Hz). Because we sought to identify differences in microcirculation, we analyzed only the metabolic, neurogenic, and myogenic wavelets. Once the frequency spectra were obtained, each band's contribution was calculated by evaluating the area under the curve and displayed as a percentage of the relative energy of all the analyzed bands.
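A minimal sketch of this band-power analysis (using NumPy's FFT in place of Excel's DFT function, and a synthetic signal rather than real perfusion data) could look as follows:

```python
import numpy as np

# Band-power sketch following the ranges above: transform a perfusion trace
# (sampled every 0.2 s, i.e. fs = 5 Hz) to the frequency domain and report
# each regulatory band as a percentage of the summed energy of the three
# analyzed bands. The ten-minute signal here is synthetic; resolving the
# metabolic band requires a record long compared with 1/0.0095 s.
FS = 5.0
BANDS = {"metabolic": (0.0095, 0.016),
         "neurogenic": (0.02, 0.06),
         "myogenic": (0.06, 0.15)}

def band_contributions(signal):
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    amp = np.abs(np.fft.rfft(signal - np.mean(signal)))
    energy = {name: amp[(freqs >= lo) & (freqs <= hi)].sum()
              for name, (lo, hi) in BANDS.items()}
    total = sum(energy.values())
    return {name: round(100.0 * e / total, 1) for name, e in energy.items()}

t = np.arange(0, 600, 1.0 / FS)          # ten minutes of fake data
fake = np.sin(2 * np.pi * 0.04 * t) + 0.3 * np.sin(2 * np.pi * 0.1 * t)
print(band_contributions(fake))          # neurogenic-dominated, by design
```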
Histology
Skin samples were fixed in formalin (in PBS, 4%, v/v) for 48 h. Tissue inclusion in paraffin was performed automatically (Leica Biosystem, Wetzlar, Germany) using a previously described protocol [40]. Subsequently, tissues were sectioned (4 µm) and stained with hematoxylin/eosin. Photos were taken at different magnifications using a Nanozoomer XR tissue scanner, Hamamatsu (REDECA, Universidad de Chile). The histology of samples was evaluated in a blinded manner by a trained veterinarian (J.López.) and an odontologist (MR), considering previous publications [41].
Six individual, random images of tissues stained with hematoxylin/eosin were used to manually estimate the thickness of the dermis and epidermis (in micrometers, µm) from high-resolution photographs taken with the tissue scanner. We used three animals per group on day seven or fourteen after injury. The measurements were performed three times in each image; the epidermis was measured from the margin of the skin to the origin of the dermal layer, whereas the distance from the epidermal ridge to the dermal-fat junction was considered the dermis. The number of blood vessels was also counted in six random images per condition, with three animals per group. We identified blood vessels in each photo by their morphology and the presence of a lumen. Furthermore, the lumen area of the blood vessels was used to categorize them into macro- or micro-vasculature, considering a cut-off area of 100 µm²: values higher than 100 µm² are considered to reflect a macrovessel, and lower than 100 µm², a microvessel [42]. Values are therefore expressed as the percentage or area of macro- and microcirculation on day seven or fourteen after skin injury. Photos were processed using Image-Pro Plus software (Media Cybernetics, Silver Spring, MD, USA).
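The lumen-area rule can be expressed in a few lines; the areas below are invented for illustration:

```python
# Classification described above: lumen areas (um^2) at or above the
# 100 um^2 cut-off count as macrovessels, below it as microvessels.
lumen_areas_um2 = [12.5, 480.0, 95.0, 210.0, 33.0, 150.0]

micro = [a for a in lumen_areas_um2 if a < 100.0]
macro = [a for a in lumen_areas_um2 if a >= 100.0]

pct_micro = 100.0 * len(micro) / len(lumen_areas_um2)
print(f"{pct_micro:.0f}% microvessels, "
      f"mean micro lumen {sum(micro) / len(micro):.1f} um^2")
```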
Statistical Analysis
Values are presented as the mean ± standard error of the mean (SEM) or percentage, where appropriate, and were compared between groups using the Mann-Whitney U-test. For perfusion analysis, values are presented as the mean and SEM. Differences between WT and Thy-1 −/− mice were also compared using non-parametric analysis. Data and statistical analyses were performed using GraphPad Prism 6.00 (GraphPad Software, San Diego, CA, USA).
Institutional Review Board Statement:
The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of Universidad de Chile (protocol code CBA1200-22450MED-UCH, approved December 2021).
Informed Consent Statement: Not applicable.
Data Availability Statement: Any additional information required to reanalyze the data reported in this paper is available from the authors upon request.
"Medicine",
"Biology"
] |
Dynamic Phase Changes: Integrating Information in Complex Biological Systems
Here we examine the potential relationship between applied exogenous EMFs and their ability to generate phase-modulations with information-carrying capacity. We systematically examine, through dimensional analysis, the potential sources and interactions of these generated phase-modulations. Furthermore, we introduce the concept of generating phase-modulated signals through the application of weak, time-varying, amplitude-modulated EMFs. Information generated through the magnetic pressure of zero-point fluctuations (i.e. the Casimir effect) is also discussed.
INTRODUCTION
COMPLEXITY AND ITS RELATION TO SPACE-TIME STRUCTURE
Complexity describes the behaviour, or series of responses, of a system to a set of stimuli, resulting in a net alteration in the spatial-temporal composition of the entire set. If the degree of complexity is such that the parameters which limit the spatial and temporal degrees of freedom of the system of observation are exceeded, the result can manifest as an emergent property. Here we operationally define complexity as the available degrees of freedom through which information can be transferred (gained or lost) between a system and its environment (surroundings). Traditionally, however, complexity has been taken to be the degree of ordered states, or structure, which can be acted upon [29,31]. That is to say that complexity, at least from the perspective of space, is static and does not evolve in time. This initial conceptualization of complexity is incomplete, as it does not consider the influence of time, or underlying change, as a contributory variable to observed phenomena. In our definition, we describe the complexity of a system (under observation) as the sum of its processes. Temporal complexity, under our working hypothesis, would then be the available number of potential interactions, or processes, which can impact the structural complexity of fixed space-like degrees of freedom.
The representation of the entire observable set of parameters or degrees of freedom, the Universe, can then be dichotomized into space and time, or, more specifically, space- and time-like degrees of freedom. Space can be described as a physical substrate which behaves, or more aptly can be modeled, in terms of particulate interactions [50]. Matter is the organization of a discrete set of tangible units into an ordered state. The occupation of space, or the change/distortion of the curvature of a subset of space-time proportional to the mass of material [24,37], would constitute the presence of matter. Due to its spatial properties, matter can be conceptualized as crystallized or fixed, whereas energy, the antithesis of matter, would be fluid. The former case, that which is described as having permanence, would allow for the representation of information or energy over protracted periods of time. In contrast, the fluidity of energy does not favour permanence, but the rapid dissipation of organized states. Information exchanged between two fragile albeit ordered states (energy) could only be represented transiently. An exchange between static and fluid (or static and static) states would allow for a prolonged representation of said information. That is to say, the representation of information (e.g., memory) would be predicated on the presence of a fixed structure.
Time, in our case, evolves and is manifested when matter undergoes a measurable change. The complementary analogy in this context would be that time is reflective of energy, or the measured unit (quantum) necessary to elicit a change in the aggregate occupying space. In essence, then, the superposition between space and time, which is manifested as space-time, is the interaction or exchange of energy to matter or matter to energy. The whole of the exchange is what we define as a process. If a process is the result of matter interacting with energy, then there should be an identifiable change that occurs with respect to spatial degrees of freedom. Consequently, if energy can be affected by matter, then we would expect, in light of our hypothesis, that energy, the representation of the temporal structure of the system, should also undergo a change in its temporal degrees of freedom (i.e. pattern or frequency modulation). While examining the elegant case of an electron orbiting a proton as modeled by the Bohr atom, we can approximate the number of spatial degrees of freedom, or available spatial phase changes, as being akin to the number of potential locations that a classical electron could occupy within the context of that volume. Provided that the classical radius of an electron is 2.818·10⁻¹⁵ m and that the radius of the Bohr atom is approximately 5.29·10⁻¹¹ m, the quotient of the volumetric equivalents of the latter and former produces a value of 10¹²-10¹³ possible positions which can be occupied at any given time. Now, in order for the electron to occupy any one given position (degree of freedom), energy must be absorbed or emitted. For instance, if the electron is in its wave state, then energy must be absorbed in order to constrict its size from that of its Compton wavelength to that of its classical wavelength. In this light, Persinger [40] calculated the relative energy necessary to transform the wave-like electron to its matter-like counterpart; the value approached 10⁻²⁰ J. Furthermore, the exchange of a photon between the proton and the electron, provided that the energy is within the appropriate band, can also elicit this form of transition. The exchange of energy from wave to particle, or vice versa, alters the phase of the particle, which has been intimately linked to the functional collapse of the wave-function.
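This order-of-magnitude estimate is easy to verify numerically; the snippet below simply cubes the ratio of the two radii quoted above:

```python
# Order-of-magnitude check of the position count quoted above:
# (Bohr radius / classical electron radius) cubed.
r_e = 2.818e-15      # classical electron radius, m
r_bohr = 5.29e-11    # Bohr radius, m

print(f"~{(r_bohr / r_e) ** 3:.1e} positions")   # ~6.6e12, within 10^12-10^13
```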
Irrespective of the amount of energy involved in the translation of one position, or the functional energy equivalent of the collapse of the Compton wavelength to the classical radius of the electron, the number of discrete phases stays the same. Employing Cahn's equation for the binary information stored within the phase modulation of a signal yields an approximate amount of information in the order of 3.98·10¹³ bits. This value inherently reflects the amount of information available within a Bohr atom, as well as the potential limit of information processing by the system.
Our provided example dealt with the idea that spatial degrees of freedom correspond to a degree of space-like phase modulations. Classically, however, phase modulation is more generally described as a change in the distance between successive peaks, or troughs, of a given wave, and is most widely employed as a means of optical or fiber-optic communication and information transfer [2,5]. This directly implicates the photon as the primary carrier of the information contained in a series of phase modulations. Examining this relationship, Mérolla et al. [34] experimentally demonstrated the ability to encode, send, and receive information using phase-modulated light transmission, with an emphasis on the practical application of cryptology. Additionally, Sun et al. [55] presented the idea that information could be represented within the difference between successive wavelengths of light. This suggests that not only is the wavelength of the light carrier important to the information contained within that carrier, but so are the spatial distances between each individual wavelength. Alternatively stated, the breaks or pauses in the carrier wave also contain relevant information.
The photon is conceived to be the carrier of the electromagnetic force, and we have presented data to suggest that the photon, and by extension electromagnetic radiation, plays an intimate role in the transmittance of information. Along this line, Ikonen and colleagues [29] demonstrated the potential of using an AC magnetic field to produce transient phenomena resembling mechanical phase-modulation of ⁶⁷Zn nuclei in ZnO crystals. This suggests that, if the photon is the carrier particle for the electromagnetic force, then it could be acted on by electromagnetic radiation falling outside the spectrum of "light" ranging from infra-red to ultraviolet. Information transmitted as phase-shifts is not solely limited to photons and can be acted upon by an exogenous electromagnetic field. Fang et al. [26] postulated that the phase of a photon carrier can be modulated using a dynamic Aharonov-Bohm effect.
CHANGING PHASE IN AN AMPLITUDE MODULATED TIME-VARYING FIELD
We describe phase as the offset in angle, distance, or position of a given point traveling in a wave with respect to a reference wave at a given time. This can be considered the structural, or locational, equivalent of a particle (matter) or wave (energy) as it progresses through time relative to a reference structure (position) or pattern. The degree to which one can influence the number of phase transitions is contingent upon the amount of energy provided to the system, where a phase transition is the response of a particle's position to the presence of an external modulation or applied process. One way to elicit a phase change in a system is by exposing the particle to an electromagnetic potential, which can be modeled using the Aharonov-Bohm [1,57] equation (1):

Δφ = (q/ħ)Φ (1)

where Δφ is the phase change, q is the elementary charge, ħ is the reduced Planck constant, and Φ is the magnetic flux.
If the process were occurring in time, then the form would be:

Δφ/t = (q/ħ)(Φ/t) (2)

where t is the point duration of a time-varying electromagnetic field. Now, if the elementary unit charge and Planck's constant do not appreciably change with respect to time (i.e., are constants), then the only parameter left to change in time is the magnetic flux. Conceptually, magnetic flux is defined as the intensity (the degree of bunching, or number of magnetic lines of force) of a magnetic field penetrating a given surface. The resultant mathematical representation takes the form:

Φ = B·A (3)

where B is the intensity of the magnetic field (Tesla) and A is the surface area (m²) of a material immersed in the incident magnetic field. Now, if the magnetic flux is changing with respect to time, then:

Φ/t = B·A/t (4)

This would permit the magnetic field to change in time. Furthermore, if the intensity of the magnetic field is subject to change, through amplitude modulation, then Equation 4 takes on the form of the Maxwell-Faraday equation for induction. That is to say, an amplitude-modulated, time-varying electromagnetic field would elicit a phase change in a particle's wave-function through Faraday induction, represented in the Aharonov-Bohm-Faraday equation:

Δφ/t = (q/ħ)(B·A/t) (5)

The derived equation for Aharonov-Bohm induced phase change is remarkably similar to the equation relating phase change in coherent-domain water, outlined by Del Giudice et al. [19], where an applied electromagnetic field generates an electric potential as described by the equation:

Δφ/t = (e/ħ)(Φ/t) (6)

where Φ/t is the changing magnetic flux, ħ is the reduced Planck constant, and e is the elementary unit charge.
This implies that water, or at least a coherent state of water, in the presence of an external electromagnetic field, can have its phase modulated thus allowing for the potential of information storage or processing.
It should be noted that the original value for phase change (Δφ) is dimensionless. However, if the phase changes with respect to time, then its dimensions become 1/s, the equivalent of a 'frequency of phase change'. Alternatively, the resulting dimensions could be interpreted as the number of phase transitions evolving in time (a process). The time-varying change of phase will herein be referred to, interchangeably, as the "dynamic phase" or the "time-varying change of phase". Equation 5 suggests that changing the intensity (degree of amplitude modulation) of the applied EMF would increase the magnitude of the 'dynamic phase', altering the potential degrees of freedom. Additionally, changing the point duration (e.g., duration of current presentation, timing of the field) of the applied field would result in a constriction (when the point duration is increased) or dilation (when the point duration is decreased) of the 'dynamic phase', changing the available degrees of freedom. Finally, changes in the structure, and incidentally the physical geometry, of the matter under observation penetrated by the incident magnetic field will result in a change in the 'dynamic phase'. Taken together, this system describes the "application geometry" of the applied EMF by accounting for the intensity, timing, and geometric structure of the field, where the resultant synthesis of these parameters is manifested as a pattern of time-varying phase modulations.
Phase modulation is the process by which the phase of a carrier wave is altered to follow the changing amplitude of an incident signal. The peak amplitude and frequency of the carrier wave remains constant, but as the amplitude of the information signal changes the phase of the carrier wave changes correspondingly. If this were applied to a physical apparatus capable of undergoing a change in electric potential, we would call it Faraday induction. In this instance, we would induce a change in one wave through the application of another. This convergence provides the necessary parameters to potentially modulate and encode information in an incident signal in order to appreciably change the structure-function relationship of the observed system. Alternatively, any system which demonstrates the capacity to generate its own magnetic potential, via the production of its own magnetic field, has the ability to self-modulate producing a complex, time-varying dynamic phase. The systematic evaluation of pertinent, biologically relevant systems may be revealing.
Hydrogen is the most abundant element, and consequently matter, pervading the Universe. In fact, Persinger [41] demonstrated that the ratio of the mass of the Universe (10⁵² kg) to its volume (10⁷⁸ m³) results in a density of approximately 10⁻²⁶ kg·m⁻³, or the equivalent of roughly 1 proton or hydrogen atom per unit volume. Perhaps the dynamic phase of hydrogen is revealing of the nature of the set, or reflects some emergent property which can be appreciated in a local space-time reference. Here we assume that any given hydrogen atom in the Universe is immersed in the intergalactic magnetic field, whose intensity ranges from 10⁻¹² to 10⁻¹⁵ T. If we assume that an intergalactic magnetic field penetrates an area equivalent to the cross-section of the neutral hydrogen line (21.12 cm), and that the temporal parameter is also set by the neutral hydrogen line (1.42 GHz, or 7.04·10⁻¹⁰ s), then substituting these values into Equation 5 yields a dynamic phase in the order of 0.965·10¹¹ Hz. If we consider the dynamic phase to be akin to a frequency, which is really just the number of iterations of 'X' events per unit time, then we can compute an energy and wavelength associated with the hydrogen dynamic phase. Using the Planck-Einstein equation (Equation 7), provided that the frequency is 0.965·10¹¹ Hz, the resultant energy would be in the order of 6.39·10⁻²³ J, with a corresponding wavelength of 3.1·10⁻³ m. The former is approximately twice the value of Landauer's limit for a temperature of 4 K (cosmic microwave background; 3.80·10⁻²³ J). In addition, bio-electromagnetic fields falling within the millimeter range were postulated by Devyatkov et al. [35] and Betskii et al. [4] to be generated by the geometric and mechanical properties and asymmetries of polar membranes. Furthermore, Fröhlich [27] theorized that electromagnetic field coupling within the millimeter range would recruit resonant, vibrational processes of biomolecular structures in response to applied electromagnetic fields.
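These hydrogen-line figures can be reproduced numerically from Equation 5 and the Planck-Einstein relation; the sketch below assumes the penetrated area equals the square of the 21.12 cm wavelength and takes the upper quoted field intensity of 10⁻¹² T, choices that recover the stated values:

```python
# Reproducing the hydrogen-line example with Equation 5,
# dphi/dt = q*B*A / (hbar*t), and the Planck-Einstein relation.
Q = 1.602e-19        # elementary charge, C
HBAR = 1.055e-34     # reduced Planck constant, J s
H = 6.626e-34        # Planck constant, J s
C = 2.998e8          # speed of light, m/s

B = 1.0e-12          # intergalactic field, T (upper quoted intensity)
A = 0.2112 ** 2      # penetrated area, m^2 (assumed geometry)
t = 7.04e-10         # period of the 1.42 GHz line, s

dyn = Q * B * A / (HBAR * t)
print(f"dynamic phase ~ {dyn:.2e} Hz")       # ~0.96e11 Hz
print(f"energy ~ {H * dyn:.2e} J")           # ~6.4e-23 J
print(f"wavelength ~ {C / dyn:.2e} m")       # ~3.1e-3 m
```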
In addition, the resulting dynamic phase, in the order of 10¹¹ Hz, would fall within the range of proton movement and hydronium complex formation as denoted by Pollack [46,47,48] and Del Giudice [22,33], as well as others [17,23].
E = hν = hc/λ (7)

where h is Planck's constant, ν is the frequency (Hz), c is the speed of light, and λ is the wavelength.
Next, we consider the electron and assume that the energy associated with its rotation around a Bohr nucleus can be approximated using the equation of kinetic energy as function of rotation around a circle.
E = ½m(2πrf)² (8)

where E is the energy, m is the mass, r is the Bohr radius, and f is the frequency of rotation.
For a mass of 9.11·10⁻³¹ kg, a radius of rotation equal to 52.9·10⁻¹² m, and a rotation period of 1.52·10⁻¹⁶ s, the resultant energy would be in the order of 6.93·10⁻¹⁹ J, well within the range of visible light. The internal magnetic field generated by an electron as it orbits the central proton can be approximated by the quotient of the rotational energy and the orbital magnetic moment (9.27·10⁻²⁴ A·m²), generating a field whose strength would be 7.48·10⁴ T. Again using our dynamic-phase equation (Equation 5), and substituting 7.48·10⁴ T for B, an area defined by the Bohr radius (3.51·10⁻²⁰ m²), and the rotation time (field pattern) of 1.52·10⁻¹⁶ s, results in a dynamic phase of 2.63·10¹⁶ Hz, the time equivalent of 3.79·10⁻¹⁷ s, or within the order of half the rotation time around the atom. Persinger [43] postulated that the electron in orbit around the Bohr atom spent half of its rotation in the particle (matter) state, while for the other half of the time the electron took on the state of its wave-function. In essence, this would suggest that the electron is self-modulating, with dynamic phase transitions occurring throughout half its rotation. The most ideal candidate that could allow for dynamic phase shifts would be the waveform equivalent of the electron. Again assuming that this phase transition may involve, or be mediated by, photons, one can calculate the wavelength and energy equivalent of 2.63·10¹⁶ Hz using Equation 7. The energy carried by a 2.63·10¹⁶ Hz photon would be 1.74·10⁻¹⁷ J, with a corresponding wavelength of 1.14·10⁻⁸ m. This latter wavelength is within the average thickness of the cell membrane and the exclusion zones of water [47]. Incidentally, this value corresponds to the peak wavelength shift of water exposed to a physiologically patterned weak EMF in the dark for 28 days, as measured by fluorescence microscopy [37].
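The same relation reproduces the electron example; the field, area, and period below are the values quoted in the text:

```python
# Equation 5 applied to the electron example, with quoted inputs.
Q = 1.602e-19        # elementary charge, C
HBAR = 1.055e-34     # reduced Planck constant, J s
H = 6.626e-34        # Planck constant, J s
C = 2.998e8          # speed of light, m/s

B = 7.48e4           # internal field, T
A = 3.51e-20         # area quoted in the text, m^2
t = 1.52e-16         # orbital period, s

dyn = Q * B * A / (HBAR * t)
print(f"dynamic phase ~ {dyn:.2e} Hz")       # ~2.6e16 Hz
print(f"photon energy ~ {H * dyn:.2e} J")    # ~1.7e-17 J
print(f"wavelength ~ {C / dyn:.2e} m")       # ~1.1e-8 m
```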
In terms of biological relevance, if a cell has a primary operational energy of 2.0·10⁻²⁰ J [39] and a size of approximately 10⁻⁵ m, we can calculate the magnetic field component by rearranging Equation 9 and solving for B.
Where E is the energy, B is the intensity of the magnetic field, μ is the permeability of free space, and V is the volume.
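A minimal numerical sketch, assuming Equation 9 is the standard magnetic field energy E = B²V/(2μ), which is consistent with the variable list above:

import math

mu0 = 4 * math.pi * 1e-7   # permeability of free space, T*m/A
E = 2.0e-20                # primary operational energy of a cell, J
V = (1e-5) ** 3            # volume of a ~10 um cell, m^3

B = math.sqrt(2 * mu0 * E / V)   # solve E = B^2 * V / (2*mu0) for B
print(B)   # ~7e-6 T, in line with the 6.93e-6 T quoted below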
The intensity derived from solving for B would be 6.93·10⁻⁶ T. Provided that the fundamental frequency of operation of a cell is approximately 10 Hz, then, solving Equation 5, the resulting dynamic phase would yield a value of 33 Hz. A 33 Hz dynamic phase approaches the 20-25 ms refresh rate of consciousness [54] and can be elicited in samples of spring water exposed to microtesla-intensity, physiologically patterned electromagnetic fields [36]. This would imply that consciousness may be the dynamic phase of a single cell's primary operational frequency. Furthermore, this would suggest that the whole is reflected in the sum of its parts (Σn = n) [42] and is manifested as the holographic representation of information within a subset of lower levels of discourse. Burke [8] discussed that the record of interference patterns represented in a holographic state contains a unique description of the parameters of phase, direction, and polarity of the electromagnetic radiation which was used to generate the holographic equivalent of a given structure. That is to say, the information of a given system can be represented and stored within the holographic equivalent of the structure under investigation. In this vein, we consider electromagnetic radiation, in any form, as a possible mechanism by which a hologram can be manifested.
The contribution of dynamic phase to the storage and representation of information may involve redundancies or aggregates of units in order to be effectively represented. In fact, to accommodate a dynamic phase of 10¹² Hz (hydrogen line) or even 10¹⁶ Hz (electron circling the Bohr atom) would require approximately 10¹¹ to 10¹⁴ cells that were "phase locked". Additionally, if one changed the intensity of the incident field to 3.0·10⁻⁵ T, with a point duration of 10⁻³ s (1 or 3 ms), the dynamic phases allowed would be 10⁴ per cell, and approximately 10⁸ to 10¹² cells would be required to accommodate them. The former value falls within the number of cells calculated to be involved with consciousness [53], and the latter reflects the total number of cells within the cerebral manifold.
THE INTERACTION OF MAGNETIC ENERGY AND DYNAMIC PHASE
Thus far we have demonstrated the relationship between an incident, or applied, EMF as a means to manipulate or change the phase of a particle's wave-function. However, we can also relate a change in phase, and consequently a change in dynamic phase, to energy. If we rearrange Equation 9 to solve for the intensity of the magnetic field (Equation 10) and substitute this relationship into Equation 5, the result corresponds to a change in dynamic phase contingent upon the energy contained within the magnetic field. B = √(2μE/V) (10).
Where E is the energy, B is the intensity of the magnetic field, μ is the permeability of free space, and V is the volume.
And if we were to isolate the energy, it would then be derived from the area penetrated by the magnetic field, the spatial structure of the observed unit, the volume in which the magnetic field is generated, the point duration (frequency) of the field, and the dynamic phase (Equation 12). E = (12). If we substitute 0.965·10¹¹ Hz for the dynamic phase, an area equivalent to the cross-section of the hydrogen line wavelength (21.12 cm), the volume equivalent of the neutral hydrogen line, and the time equivalent of 1.42·10⁹ Hz (7.04·10⁻¹⁰ s), the resultant energy would be in the order of 7.39·10⁻²¹ J. This latter value is within the range of energy necessary to transform (gain or lose) 1 bit of information according to Landauer's limit for a system operating at 37 °C. Consider the potential overlap between the neutral hydrogen line and the operational parameters of the human cerebrum: this would provide a means by which one could "store" information in a non-local manner. Furthermore, a magnetic field whose energy is 10⁻²⁰ J, applied over the cerebral geometry (volume and area) with a timing in the order of 1 or 3 ms, would produce a dynamic phase in the order of ~10¹¹ Hz, or that which was derived from the hydrogen line. The resulting intensity of the applied EMF would be in the order of 10⁻⁵ T, or that which has been employed by our laboratory.
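A quick check of the Landauer-limit comparisons used in this section (kT·ln 2, assuming 310 K for 37 °C and 4 K for the cosmic microwave background figure quoted earlier):

import math

k = 1.381e-23   # Boltzmann's constant, J/K
for T in (310.0, 4.0):
    # Landauer's limit: minimum energy to transform one bit at temperature T
    print(T, k * T * math.log(2))
# ~2.97e-21 J at 310 K; the 7.39e-21 J above is about 2.5x this value
# ~3.83e-23 J at 4 K, matching the 3.80e-23 J quoted earlier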
CASIMIR-MAGNETIC PRESSURE AND PHASE CHANGE IN TIME
We have demonstrated that, in effect, we can modulate the dynamic phase of a system with the application of exogenous EMFs, and we have demonstrated the potential of internal (self-generated) fields to effectively modulate the phase, and thus the information, within a given space-time system. Here we examine the generation of phase modulation through quantum phenomena.
When examining a set of non-conducting surfaces whose separation is much smaller than the surface area of the conductor, a discrete pressure is generated, which can be modeled using Equation 13: P = π²ħc/(240a⁴) (13).
Where a is the separation between the plates, ħ is reduced Planck's constant, and c is the speed of light.
In order to consider the influence of the pressure generated by a Casimir effect [7] on the phase, and ultimately the dynamic phase, of a particle, we must relate this pressure to a magnetic field. The magnetic pressure of a system is calculated by Equation 14: P = B²/(2μ) (14).
Substituting Equation 14 into Equation 13 gives us: B²/(2μ) = π²ħc/(240a⁴) (15), which can be solved for the separation a. If we consider that the intensity of the applied EMF produces a given dynamic phase change, as provided in our calculations above, then this might provide the necessary insight into the separation of given structures (non-conducting plates) that would accommodate such a dynamic phase change. For instance, if one assumes a magnetic field strength of 10⁻⁵ T, the resulting separation between cells necessary to produce this intensity would be in the order of ~1 µm. This would be within the spatial extent of Pollack's exclusion zones [10,12], as well as within Bohr's spatial limit for thinking and consciousness [43]. Additionally, if one were to take the intensity of the magnetic field generated by the electron rotating around a Bohr atom, ~10⁵ T, the resultant separation would be in the order of 10⁻¹¹ m, or within the spatial extent of the Bohr radius.
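A numerical sketch of this balance, assuming the standard Casimir-pressure and magnetic-pressure expressions given above and solving B²/(2μ) = π²ħc/(240a⁴) for the separation a:

import math

hbar = 1.055e-34           # reduced Planck's constant, J*s
c = 2.998e8                # speed of light, m/s
mu0 = 4 * math.pi * 1e-7   # permeability of free space, T*m/A

def casimir_separation(B):
    # Separation a at which the Casimir pressure equals the magnetic pressure B^2/(2*mu0)
    return (math.pi**2 * hbar * c * mu0 / (120 * B**2)) ** 0.25

print(casimir_separation(1e-5))    # ~2e-6 m, i.e., of order 1 um (cell separations)
print(casimir_separation(7.48e4))  # ~3e-11 m, of order the Bohr radius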
If we postulate that our dynamic, or time-varying, phase change is related to the translation of information by means of phase modulation, then the medium on which a change in phase can occur is a wave. In this light, we can relate our 10¹⁶ Hz time-varying phase change to processes ongoing at the level of packets of photons. Here, then, we can calculate the relative distance between successive phase-shifting points. The wavelength of light that would be associated with a dynamic phase of 10¹⁶ Hz would be 10⁻⁸ m. This value may correspond to the spatial distance necessary to observe the discrete shifts in phase. That is to say, a spatial equivalence of 10⁻⁸ m may be necessary in order to interpret the information stored within a time-varying phase change of 10¹⁶ Hz.
THE NECESSARY PARAMETERS TO INTERPRET INFORMATION WITHIN A PHASE
We have demonstrated that the wavelength of light equivalent to a dynamic phase of 10¹⁶ Hz is 10⁻⁸ m. Here we make the argument that cell membranes have an oscillatory component which accommodates a magnetic moment. Dotta et al. [24] calculated that the intrinsic magnetic moment of a cell would range between 10⁻²³ and 10⁻²⁴ A·m². The energy equivalent of a 10¹⁶ Hz phase shift would be in the order of 10⁻¹⁸ to 10⁻¹⁷ J. Taking the quotient of this phase-change energy and the cell's magnetic moment, the resultant electromagnetic field intensity would range between 10⁻⁶ and 10⁻⁵ T. These latter electromagnetic intensities are within the range necessary to elicit changes in peak fluorescence readings in spring water exposed to patterned electromagnetic fields in the dark [37]. Furthermore, data suggest that the appropriate combination of patterned electromagnetic fields and light applied to B16-BL6 cells increased photon emissions in these cells, which were found to be highly correlated with the incident field/light energy [31], reflecting a potential ability for cells to store electromagnetic radiation, a phenomenon originally postulated by Popp [50]. This may implicate the cell, or at least the cell's constituent structures, as a potential translator of information stored within a phase-modulated pattern.
Considering Del Giudice's interpretation of phase change, generated through electric potential energy, as being modulated by the formation of coherent domains in water [18,19], it could be argued that water in a coherent state has the capacity to interact with the process underlying the transformation of electrical potential to phase change. Experimentally, coherent-domain water was shown to have the capacity to undergo a change in phase with respect to an incident electromagnetic potential [20,21]. Provided that the cell membrane exhibits the spatial structure necessary to process the dynamic, time-varying phase of a magnetic field, and that water in a coherent state can contribute to the integration of information contained within a dynamic phase, we postulate that the dynamics of the membrane-water interface may be the receiver/transformer necessary to decode changes in phase.
Furthermore, if one considers light to be involved in this process, and given that we have demonstrated a space of 10⁻⁸ m as being potentially sufficient to interpret the information stored within a time-varying phase with a rate of 10¹⁶ events per unit time, then we suggest that the cell membrane, or a layer of water approximately 10 units thick, would set the parameters for the separation of information within a phase. The time it takes light to travel a distance of 10⁻⁸ m is approximately 10⁻¹⁶ s. The relationship between this and the Bohr orbital rotational time was detailed by Persinger and Lafrenie [45]. This would potentially allow the time-varying phase to be read by, or interact with, processes occurring at the level of the Bohr magneton. Alternatively, the coherent, initially phase-locked activity of 10⁴ water molecules, whose limit of information processing is determined by the association and dissociation of the hydronium complex (10¹² Hz), would correspond to a mass of approximately 10⁻²² kg. The volume, provided that the density of water is 10³ kg·m⁻³, would be in the order of 10⁻²⁵ m³, the cube root of which corresponds to a linear distance of approximately 10⁻⁸ m. This may suggest that nano-clusters of water [51], or even smaller segments, along the membrane may be used to accommodate higher information processing.
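The cluster arithmetic in this paragraph is easy to verify, assuming a water molecular mass of about 18 u:

N = 1e4                  # phase-locked water molecules
m_h2o = 18 * 1.661e-27   # mass of one water molecule, kg (18 u)
rho = 1e3                # density of water, kg/m^3
c = 2.998e8              # speed of light, m/s

mass = N * m_h2o             # ~3e-22 kg, of order the quoted 1e-22 kg
volume = mass / rho          # ~3e-25 m^3
length = volume ** (1/3.0)   # ~7e-9 m, of order 1e-8 m
print(mass, volume, length)
print(1e-8 / c)              # ~3.3e-17 s, within a factor of a few of the quoted ~1e-16 s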
Finally, when looking at the Casimir effect, which allows for the creation of real particles from virtual particles arising from zero-point fluctuations (Equation 16), and rearranging to solve for the distance of separation between cells whose diameter is 10 µm, a resultant energy of 10⁻¹⁷ to 10⁻¹⁸ J yields a value in the order of 1.42·10⁻⁶ to 3.14·10⁻⁶ m. These values fall within the range of Bohr's postulate of the quantum of energy for consciousness and thinking [43], as well as within the order of error with respect to the peak frequency most associated with effective inflation of the exclusion zone of water around a boundary [10,11]. In addition, this separation is typically what is found between synaptic connections in the neural network associated with memory (Crosby, 1962).
Where E is the energy created by zero-point fluctuations, a is the distance between plates (cells; m), and A is the surface area of the plates (cells; m²).
CONCLUSION
We have demonstrated that an amplitude-modulated signal can produce a time-varying, changing electromagnetic potential. This electromagnetic potential can induce a change in the phase of a charged particle in time, resulting in a rate of change in phase, or a time-varying, dynamic phase. In addition, we have suggested that information can be stored within these time-varying phase modulations. The primary carrier associated with this information's transmission would be the photon.
The Casimir phenomenon generated in the presence of a magnetic pressure of 10⁻⁵ T would be accommodated by a separation between plates in the order of 1 µm. When accounting for the energy equivalent of the time-varying phase of the Bohr magneton, 2.63·10¹⁶ Hz, the value of the separation between Casimir boundaries, provided the surface area is approximately that of the cell, would range between 1.42 and 3.14 µm. Furthermore, we have presented convergent quantification that may also implicate the cell membrane as a functional filter of photon-transmitted, phase-modulated information. Water along a boundary, provided that it is devoid of mechanical stimulation, may also suffice as an information receiver.
"Physics"
] |
Cholesterol crystallization within hepatocyte lipid droplets and its role in murine NASH
We recently reported that cholesterol crystals form in hepatocyte lipid droplets (LDs) in human and experimental nonalcoholic steatohepatitis. Herein, we assigned WT C57BL/6J mice to a high-fat (15%) diet for 6 months, supplemented with 0%, 0.25%, 0.5%, 0.75%, or 1% dietary cholesterol. Increasing dietary cholesterol led to cholesterol loading of the liver, but not of adipose tissue, resulting in fibrosing steatohepatitis at a dietary cholesterol concentration of ≥0.5%, whereas mice on lower-cholesterol diets developed only simple steatosis. Hepatic cholesterol crystals and crown-like structures also developed at a dietary cholesterol concentration of ≥0.5%. Crown-like structures consisted of activated Kupffer cells (KCs) staining positive for NLRP3 and activated caspase 1, which surrounded and processed cholesterol crystal-containing remnant LDs of dead hepatocytes. The KCs processed LDs at the center of crown-like structures in the extracellular space by lysosomal enzymes, ultimately transforming into lipid-laden foam cells. When HepG2 cells were exposed to LDL cholesterol, they developed cholesterol crystals in LD membranes, which caused activation of THP-1 cells (macrophages) grown in coculture; upregulation of TNF-α, NLRP3, and interleukin-1β (IL-1β) mRNA; and secretion of IL-1β. In conclusion, cholesterol crystals form on the LD membrane of hepatocytes and cause activation and cholesterol loading of KCs that surround and process these LDs by lysosomal enzymes.
steatosis. In the majority of patients, hepatic steatosis occurs in the absence of concomitant inflammation or fibrosis. This "simple steatosis" carries a very low risk of progression to cirrhosis or liver dysfunction (1). However, a small subset of patients with NAFLD (10%-30%) develop a more aggressive condition known as nonalcoholic steatohepatitis (NASH), which can progress to cirrhosis (1,2) and is characterized by hepatocellular injury (e.g., manifesting as ballooning of hepatocytes) and varying degrees of hepatic inflammation and fibrosis, in addition to hepatic steatosis.
The factor (or factors) responsible for the development of progressive NASH, as opposed to simple steatosis, remains unclear. Recent studies suggest that cholesterol is an important lipotoxic molecule that promotes the development of NASH in many, diverse animal models (3-12). Human epidemiological studies (13) and clinical trials of cholesterol-lowering drugs (14-20) also appear to support a role of cholesterol in the development of NASH. The mechanisms by which cholesterol might exert lipotoxicity and promote the development of NASH remain unclear (21). We recently reported that cholesterol crystals developed within the LDs of steatotic hepatocytes in patients with NASH and in a mouse model of NASH induced by a high-fat, high-cholesterol (HFHC) diet, but not in patients or mice with simple steatosis (22). We also demonstrated that enlarged Kupffer cells (KCs) surrounded steatotic, dead hepatocytes containing cholesterol crystals and appeared to process the remnant LDs within these hepatocytes, forming "crown-like structures" (CLSs) similar to those previously described in inflamed visceral adipose tissue (23,24). Cholesterol crystals have recently been shown to activate the NLRP3 inflammasome in animal models of atherosclerosis (25,26), thus providing a mechanism by which exposure of KCs to cholesterol crystals can lead to chronic inflammation and NASH. Treatment with ezetimibe and atorvastatin led to resolution of cholesterol crystals and CLSs, together with amelioration of NASH (27).
In the present study we aimed to further characterize the development and implications of hepatic cholesterol crystallization in vivo and to develop an in vitro cell culture model of steatosis and cholesterol crystallization.
Animal procedures
Four-month-old, male C57BL/6J, WT, littermate mice (Jackson Laboratory, Bar Harbor, ME) were assigned to a high-fat (15%, weight/weight) diet for 6 months, supplemented with 0%, 0.25%, 0.5%, 0.75%, or 1% dietary cholesterol (five groups; n = 12 mice/group). Cocoa butter, which contains approximately 60% saturated fat, was the source of the extra fat in these diets (22,27). Their composition is shown in supplemental Table S1. Mice were housed four per cage with unrestricted access to food and water. Mice were euthanized 6 months after initiation of the experimental diets by cervical dislocation following isoflurane anesthesia. All experimental procedures were approved by the Institutional Animal Care and Use Committee of the Veterans Affairs Puget Sound Health Care System.
Histological assessment of steatosis, inflammation, and fibrosis
Formalin-fixed, paraffin-embedded liver tissue sections were stained with H&E, Masson's trichrome, or Sirius red (for collagen). Histological steatosis, inflammation, and fibrosis were assessed semiquantitatively with the scoring system of Kleiner et al. (28) by a "blinded" expert hepatopathologist (M.M.Y.). Sirius red-stained collagen fibers were also quantified by using a polarizing microscope with digital image analysis (National Institutes of Health ImageJ density software), as the average of 10 random 200× fields without major blood vessels (6). Immunohistochemical staining for α-smooth muscle actin was performed as a marker of stellate cell activation.
Assessment of hepatic cholesterol crystals and free cholesterol
Liver pieces were embedded in OCT compound and frozen in liquid nitrogen immediately after harvest. Frozen sections (10 µm in thickness) were allowed to come to room temperature, immediately cover-slipped with glycerol as the mounting medium without applying any stain, and examined using a Nikon Eclipse microscope with or without a polarizing filter to evaluate for the presence of birefringent cholesterol crystals, as we previously reported (22,27). Frozen liver sections were stained with filipin, which identifies free cholesterol by interacting with its 3-hydroxy group to fluoresce blue (29).
To better distinguish LDs with or without cholesterol crystals, we used a special method of osmium tetroxide fixation and staining, followed by methylene blue counterstaining, which we recently described (22). Small pieces (2 mm²) of liver tissue previously fixed in Trump's fixative were submerged in 1% osmium tetroxide for 1 h on ice, frozen in OCT, cut into 5-µm sections, and counterstained with 0.05% methylene blue. Osmium tetroxide binds at the carbon-carbon double bonds of unsaturated fatty acid chains in triglycerides and cholesterol esters and therefore nicely distinguishes the free-cholesterol crystals, which do not stain with osmium, from the triglycerides and cholesterol esters within LDs, which stain gray-black.
Identification of activated KCs and CLSs
Frozen liver sections were stained with anti-CLEC4F antibody (R&D Systems Inc., Minneapolis, MN), which identifies resident macrophages/KCs, and with anti-Ly6C (Thermo Fisher Scientific, Fremont, CA), which identifies recruited myeloid cells, to confirm that CLSs comprised resident macrophages/KCs as opposed to recruited myeloid cells. Frozen liver sections were also stained with anti-CD68 and anti-F4/80 antibodies, which identify macrophages (including hepatic KCs), and with anti-TNF-α antibodies, which identify activated M1 macrophages, as we have previously described (22,27). CLSs can be readily identified by TNF-α staining as rings of activated KCs surrounding and processing steatotic hepatocytes containing cholesterol crystals. The number of CLSs that stained with anti-TNF-α in 10 random 200× fields (area = 0.14 mm²/field) per liver was averaged. Acid phosphatase staining was used to identify the lysosomes of KCs within CLSs.
Evidence of NLRP3 activation
We stained liver sections with anti-NLRP3 antibodies to look for expression of this component of the NLRP3 inflammasome in the KCs of CLSs, as we have previously described (27). Liver sections were also stained for activated (cleaved) caspase 1, using the FAM-FLICA caspase 1 assay kit (27). We used real-time PCR (RT-PCR) to quantify mRNA gene expression levels, as previously described (7), of the following components of the NLRP3 inflammasome in liver tissue: Caspase-1, Nalp3, and apoptosis-associated speck-like protein containing a caspase recruitment domain (Asc) (30).
Hepatic lipid analysis
Lipids were extracted from frozen mouse liver by using the Folch method (31). The neutral lipid fractions were prepared by solid-phase extraction, and the triglycerides, cholesterol esters, and free cholesterol were then separated and quantified by normal-phase HPLC with evaporative light scattering detection (ELSD).
Cell culture
HepG2 cells plated in 24-well culture plates (or 2-well glass chamber slides) were cultured in basal medium with 10% fetal bovine serum and 2 mg/ml of the ACAT inhibitor (32,33) and were exposed to LDL cholesterol, oleic acid, or both (as detailed in the Results). PMA-activated THP-1 cells (macrophages) were then directly cocultured with the HepG2 cells for 3 h (for mRNA expression studies) or for 24 h (for protein expression studies), after washing the HepG2 cells with regular media to remove any residual LDL or oleic acid. PMA-activated THP-1 macrophages have been shown to produce pro-IL-1β and to secrete IL-1β in response to direct exposure to cholesterol crystals in a dose-dependent manner (26). Cell culture systems were stained with Sudan black (fat) and were evaluated for cholesterol crystallization by polarized light microscopy and filipin staining, and for NLRP3 activation by mRNA assays of NLRP3 components and protein studies of secreted IL-1β in the supernatants.
In additional experiments, PMA-treated THP-1 cells were grown on Transwell® inserts (0.4-µm pore size); the inserts were then suspended above HepG2 cells that had been treated with control medium, LDL, or OA, as described above. Media from the top and bottom of the insert were collected at 24 h, and IL-1β was measured by ELISA.
Fibrosing NASH develops at a threshold dietary cholesterol concentration of 0.5% in WT C57BL/6 mice
Increasing the dietary cholesterol concentration from 0% to 1% caused increasing liver weight and liver weight:body weight ratio (Table 1). This was caused primarily by an increase in hepatic cholesterol ester concentration, whereas hepatic triglyceride concentration did not increase. Plasma ALT (a marker of hepatic necroinflammation) and plasma cholesterol levels increased with increasing dietary cholesterol.
Although severe histological hepatic steatosis (grade 3) was evident at all dietary cholesterol concentrations, substantial hepatic histological inflammation and fibrosis, indicative of NASH, occurred only at a dietary cholesterol concentration at or above 0.5% (Table 1 and Fig. 1). Quantitative Sirius red staining confirmed the abrupt rise in hepatic fibrosis at a threshold dietary cholesterol concentration of 0.5%.
Although increasing dietary cholesterol from 0% to 1% led to a 60-fold increase in hepatic cholesterol ester content, it did not lead to any increase in the cholesterol ester content of subcutaneous or epididymal fat. Thus, excess dietary cholesterol appears to be stored in the liver rather than in adipose tissue.
Hepatocyte cholesterol crystals and crown-like structures of activated KCs also develop at a threshold dietary cholesterol concentration of 0.5%
Substantial cholesterol crystallization developed within hepatocyte LDs at the same threshold dietary cholesterol concentration (at or above 0.5%) that induced fibrosing NASH (Table 2 and Fig. 1). Filipin staining confirmed that the crystalline birefringent material within hepatocyte LDs was free cholesterol (Fig. 2). Activated TNF-α-positive KCs (Fig. 2), which were also positive for CD68, F4/80, and CLEC4F but negative for Ly6C (supplemental Fig. S2), surrounded the most intensely birefringent LDs, forming characteristic CLSs, which also became evident at a threshold dietary cholesterol concentration of 0.5%.
Characterization of cholesterol crystallization
In intact hepatocytes that were not surrounded by CLS, cholesterol crystallization was evident in the periphery of their large LDs in association with the LD membrane (Fig. 3A, B). However, in the remnant LDs of dead hepatocytes that were surrounded and processed by KC in a CLS, the entire LD contained crystallized cholesterol (Fig. 3C, D). LDs that were entirely birefringent as in Fig. 3C, D were always in the middle of a CLS. This suggests that additional free cholesterol is formed after the hydrolysis of cholesterol esters by the lysosomal enzymes of the surrounding KCs, leading to more dramatic cholesterol crystallization.
Characterization of crown-like structures and NLRP3 activation
At high magnification, CLSs are seen to consist of multiple KCs and other macrophages that surround large remnant LDs of dead hepatocytes containing multiple cholesterol crystals (Fig. 3C, D). The KCs that make up CLSs appear to be in direct apposition to each other, surrounding and enclosing the remnant LD and directly abutting it without any intervening hepatocyte cytoplasm or cell membrane (supplemental Fig. S3), demonstrating that the hepatocyte is dead. KCs processing remnant LDs of hepatocytes were transformed into characteristic foam cells containing multiple small LDs (Fig. 3E, F). These KCs in CLSs also strongly express acid phosphatase (Fig. 3J), suggesting that lysosomal enzymes are released into the LD in the center of the CLS to hydrolyze its triglyceride and cholesterol ester content. In addition to TNF-α, the KCs in CLSs are shown by immunohistochemistry to express NLRP3 (Fig. 3G) and activated caspase 1 (Fig. 3I). The presence of cleaved caspase 1 confirms activation of the inflammasome pathway in KCs within CLSs as a possible response to cholesterol crystals in NASH.
THP-1 cells cocultured with HepG2 cells induced to develop large LDs and cholesterol crystals
LDs developed in HepG2 cells that were exposed to either LDL or oleic acid but were more numerous and larger in cells exposed to both LDL and oleic acid (Fig. 4). Only cells exposed to both oleic acid and LDL developed cholesterol crystals within their LDs (Fig. 4). Cholesterol crystallization was noted in the periphery of the LDs adjacent to their membrane (Fig. 5), identical to "early" cholesterol crystallization that we observed in vivo in hepatocytes that were not surrounded by KCs in CLS.
Before direct coculture with THP-1 macrophages, HepG2 cells expressed very low levels of Tnfα, Nlrp3, or Il1β mRNA and secreted no IL-1β protein into their supernatants, even after exposure to LDL, OA, or LDL + OA for 20 days. Increased expression of Tnfα, Nlrp3, and Il1β mRNA (Fig. 6A) and increased secretion of IL-1β into the supernatant (Fig. 6B) were demonstrated when PMA-activated THP-1 macrophages were cocultured with cholesterol crystal-containing HepG2 cells (i.e., those previously exposed to LDL + OA) but not when they were cocultured with HepG2 cells without cholesterol crystals (i.e., those previously exposed to LDL alone, OA alone, or control). Direct exposure of THP-1 macrophages to synthetic cholesterol crystals led to profound secretion of IL-1β in a dose-dependent manner (Fig. 6C), despite almost no change in Il1β mRNA.
When THP-1 cells were added in Transwells® above the HepG2 cells (i.e., "noncontact coculture"), there was no stimulation of THP-1-derived IL-1β, regardless of whether the HepG2 cells had been exposed to LDL and oleic acid or not, arguing against the release of a humoral/soluble factor by the HepG2 cells that stimulates the THP-1 cells.
DISCUSSION
We recently reported that cholesterol crystals were present in hepatocyte LDs in experimental and human NASH and that CLSs consisting of activated KCs and macrophages surrounded and processed cholesterol crystal-containing remnant LDs of dead hepatocytes (22). Furthermore, we showed that treatment with ezetimibe and atorvastatin caused resolution of NASH induced by an atherogenic diet in diabetic obese mice, while simultaneously causing dissolution of cholesterol crystals and dispersion of KC-CLSs (27). Here we demonstrate that excess dietary cholesterol is preferentially stored in the liver and that hepatic cholesterol crystals and CLSs develop at the same threshold concentration of dietary cholesterol that also leads to the development of NASH in mice fed a HFHC diet. This suggests a causative association between the development of hepatocyte cholesterol crystals and KC-CLSs and the development of NASH. We demonstrate that early cholesterol crystallization occurs in the periphery of large LDs in association with the LD membrane.
Footnotes to Table 1:
a Median values are reported for histological steatosis, inflammation, and fibrosis, scored as follows: steatosis is graded as <5% (0); 5%-33% (1); 34%-66% (2); and >66% (3) of hepatocytes being steatotic at ×200 magnification (28). Lobular inflammation combines mononuclear cells, fat granulomas, and polymorphonuclear leukocytes and is graded as none (0); <2 (1); 2-4 (2); and >4 (3) foci at ×200 magnification (28). Fibrosis is staged as none (0); perisinusoidal (1a); periportal (1b); periportal and perisinusoidal (2); bridging fibrosis (3); and cirrhosis (4) (28).
b Presented as the percentage of the surface area of the liver section that is positive and calculated as the average of 10 random ×200 fields.
c Average number of CLSs in 10 random ×200 fields.
Recent work with artificial membranes suggested that free cholesterol present at high concentration and regular orientation in a membrane can act as a template for cholesterol crystallization to occur adjacent to it (34). Furthermore, models of cholesterol-phospholipid interactions in membranes, such as the "umbrella model", predict that after the membrane becomes saturated with cholesterol, any additional cholesterol precipitates to form cholesterol crystals adjacent to the membrane (35). This provides a plausible mechanism for the cholesterol crystallization that we observed in association with LD membranes. We speculate that cholesterol saturation disrupts the function of the LD membrane and the large array of proteins now known to reside on LD membranes by dramatically reducing the fluidity of the LD membrane (21). Cholesterol content is known to critically affect the function of the plasma membrane and of the membranes of intracellular organelles, such as the mitochondria and endoplasmic reticulum, by affecting membrane fluidity (36). By analogy, the same could be true of the LD membrane, although a critical difference is that the LD membrane is a phospholipid monolayer rather than a bilayer. Furthermore, the precipitation of cholesterol crystals adjacent to the LD membrane may also affect the fluidity of the membrane and, in extreme cases of profound crystallization, may even disrupt the physical integrity of the membrane (36). Future experiments will be needed to elucidate the impact of cholesterol crystallization on LD function and to investigate whether it can directly lead to hepatocyte death.
Upon sensing a plethora of structurally diverse pathogen-associated molecular patterns and damage-associated molecular patterns, the NLRP3 inflammasome assembles into a multimolecular platform, which activates caspase 1 by autocatalytic cleavage (37). The active form of caspase 1 then cleaves pro-IL-1β and pro-IL-18 to form biologically active IL-1β and IL-18, which engage innate immune defenses (37) and are important mediators of the inflammatory response. The NLRP3 inflammasome is now thought to drive inflammation in response to exposure to crystals that cause human inflammatory conditions such as gout (urate crystals), pseudogout (calcium pyrophosphate dihydrate crystals), silicosis (silica crystals), and asbestosis (asbestos crystals) (38-41). In these conditions, phagocytosis of crystals by macrophages leads to lysosomal swelling and release of cathepsin B, a lysosomal protease, which activates the NLRP3 inflammasome (41). Cholesterol crystals are the latest addition to the group of crystals shown to activate the NLRP3 inflammasome in the macrophages of an important human chronic inflammatory condition, namely atherosclerosis (25,26). Our findings suggest that exposure of KCs to cholesterol crystals within CLSs may activate the NLRP3 inflammasome, leading to chronic inflammation and contributing to the development of NASH.
Our coculture experiments demonstrate that inflammasome activation and production and secretion of IL-1β and TNF-α occurred in activated THP-1 cells (macrophages) exposed to crystal-containing HepG2 cells, but not in crystal-containing HepG2 cells that were not cocultured with THP-1 cells. Thus, even though hepatocytes are in theory capable of NLRP3 expression and activation (42,43), it is macrophages/KCs that have the primary function of secreting inflammatory cytokines when exposed to cholesterol crystals. PMA treatment has been shown to induce pro-IL-1β mRNA in THP-1 cells (26) and therefore acts as the "first signal" in this cell culture model, with subsequent exposure to the cholesterol crystals in the HepG2 cells being the "second signal" that causes NLRP3 activation and cleavage of pro-IL-1β into mature, secreted IL-1β.
It is generally believed that KCs, as with all macrophages, derive most of their cholesterol through receptor-mediated endocytosis of circulating lipoproteins. However, our results suggest a novel mechanism of cholesterol accumulation in hepatic KCs: the processing of hepatocyte remnant LDs, similar to a previously described process (44). The main difference is that the digestion of remnant LDs that we describe by KC-CLSs requires multiple KCs simultaneously forming a tight compartment around the LDs, given the large size of the LDs in relation to the KCs. Future studies will be needed to prove that there is exocytosis of lysosomes by KCs into the remnant LD in the middle of CLSs, as well as acidification of this space, required for the enzymatic activity of lysosomal acid lipase.
Fig. 6. Induction of IL-1β in THP-1 cells by coculture with cholesterol crystal-containing HepG2 cells or by direct exposure to synthetic cholesterol crystals. A, B: HepG2 cells were exposed to 200 µM of oleic acid (OA), 2,000 µg/ml of LDL cholesterol (LDL), or both LDL + OA for 20 days, after which they were cocultured with PMA-activated THP-1 cells (macrophages). A: mRNA expression was measured after 3 h. Increased mRNA expression of Il1β and, to a lesser extent, Tnfα and Nlrp3 mRNA by the THP-1 cells was noted when cocultured with HepG2 cells that had been previously exposed to both LDL + OA and had developed cholesterol crystals in their lipid droplets. B: IL-1β in the media at 24 h as measured by ELISA. Results are shown as a percentage of control cells. Significantly increased IL-1β secretion occurred in the THP-1 cells cocultured with HepG2 cells that had been previously exposed to both LDL + OA. C: Exposure of PMA-activated THP-1 cells directly to synthetic cholesterol crystals causes a dose-dependent secretion of IL-1β. Results are significant at P < 0.05 in comparison with control.
We found that increasing dietary cholesterol led to cholesterol loading of the liver but not of adipose tissue. Excess dietary cholesterol was stored almost exclusively in the liver within hepatocyte lipid droplets after conversion into cholesterol ester rather than in adipose tissue. We found limited confirmatory literature on this potentially important topic (45). Our findings are consistent with the extensive dysregulation of hepatic cholesterol homeostasis that has been documented in NAFLD, leading to increased hepatic cholesterol levels (46,47) and ultimately resulting in crystallization of cholesterol in hepatocyte lipid droplets. We found no cholesterol crystals in adipocyte lipid droplets, consistent with the low concentration of cholesterol in adipose tissue (data not shown).
A schematic representation of our hypothesized mechanism of hepatic cholesterol crystal-induced NASH is shown in Fig. 7. First, in the setting of hepatic cholesterol loading, cholesterol crystallization occurs initially in hepatocytes in the periphery of large LDs in close association with the LD membrane, likely as a result of a high free-cholesterol concentration on the LD membrane. Cholesterol crystallization likely disrupts LD membrane function, because all cellular membranes are exquisitely sensitive to cholesterol content, but does not appear to activate the NLRP3 inflammasome in hepatocytes, based on our cell culture findings. Second, KCs aggregate around dead hepatocytes containing cholesterol crystals, forming CLSs. KCs in the CLS hydrolyze the remnant LDs of dead hepatocytes in the extracellular space, possibly by lysosomal exocytosis, and additional cholesterol crystals are formed because of the hydrolysis of cholesterol esters into FC by lysosomal acid lipase. Third, KCs exposed to cholesterol crystals transform into activated, lipid-laden foam cells. Exposure of KCs to cholesterol crystals causes activation of the NLRP3 inflammasome within KCs, which leads to production of proinflammatory cytokines and chemokines by KCs and propagates the chronic "sterile" inflammation of NASH. Finally, chemotactic signals produced by crystal-activated KCs attract an inflammatory infiltrate of additional KCs and neutrophils, as well as causing aggregation, activation, and transformation of stellate cells into collagen-producing myofibroblasts, leading to fibrosing NASH and, ultimately, cirrhosis.
Fig. 7. Schematic representation of our hypothesized mechanism of hepatic cholesterol crystal-induced NASH, involving the following components. Steps 1-2: As a result of a HFHC diet (A) or other causes of hepatic cholesterol loading, large lipid droplets form within hepatocytes (B), and cholesterol crystallization occurs (C), initially in the periphery of large LDs in close association with the LD membrane, likely as a result of precipitation of supersaturated cholesterol from the LD membrane. Cholesterol crystallization disrupts LD membrane function. Step 3: KCs aggregate around necrotic hepatocytes containing cholesterol crystals in response to the chemotactic signals produced by the hepatocytes, forming CLSs. KCs in the CLS hydrolyze the remnant LDs of dead hepatocytes in the extracellular space, possibly by lysosomal exocytosis (D), and additional cholesterol crystals are formed because of the hydrolysis of cholesterol esters into FC by lysosomal acid lipase. Step 4: Uptake of this FC by KCs and exposure to cholesterol crystals transform KCs into activated lipid-laden foam cells (E). Exposure of KCs to cholesterol crystals causes activation of the NLRP3 inflammasome within KCs, which leads to production of proinflammatory cytokines and chemokines by KCs and propagates the chronic "sterile" inflammation of NASH. Step 5: Chemotactic signals produced by crystal-activated KCs attract an inflammatory infiltrate of additional KCs and neutrophils, as well as causing aggregation, activation, and transformation of stellate cells into collagen-producing myofibroblasts (F), leading to fibrosing NASH and, ultimately, cirrhosis.
This mechanistic hypothesis has therapeutic implications for human NASH. Cholesterol-lowering drugs are widely available and, as we have recently shown, can reverse hepatic cholesterol crystallization and simultaneously reverse experimental fibrosing NASH (27). Recently, a novel, first-in-class, oral, specific NLRP3 inhibitor has been described (48,49); therefore, pharmacological NLRP3 inhibition could also be a potential therapeutic approach in NASH if our hypotheses are proven correct.
"Medicine",
"Biology"
] |
A simple and fast algorithm for estimating the capacity credit of solar and storage
Energy storage is a leading option to enhance the resource adequacy contribution of solar energy. Detailed analysis of the capacity credit of solar energy and energy storage is limited in part due to the data intensive and computationally complex nature of probabilistic resource adequacy assessments. This paper presents a simple algorithm for calculating the capacity credit of energy-limited resources that, due to the low computational and data needs, is well suited to exploratory analysis. Validation against benchmarks based on probabilistic techniques shows that it can yield similar insights. The method is used to evaluate the impact of different solar and storage configurations, particularly with respect to the strategy for coupling storage and solar photovoltaic systems. Application of the method to a case study of utilities in Florida, where solar is rapidly growing and demand peaks in the winter and summer, demonstrates that it can improve on rules of thumb used in practice by some utilities. If storage is required to charge only from solar, periods of high demand driven by cold weather events accompanied by lower solar production can result in a capacity credit of solar and storage that is less than the capacity credit of storage alone.
Introduction
Worldwide, renewable energy is expected to grow by 50% between 2019 and 2024, with solar photovoltaics (PV) making up 60% of all renewables [1]. One factor contributing to the attractiveness of solar PV is its relatively high economic value in regions where solar production is aligned with periods of peak electricity demand [2]. Increasing the share of generation from solar PV, however, can shift the timing of peaks in net demand (demand less solar PV generation) and displace generation with lower variable costs [3]. These changes contribute to a declining economic value of solar with higher penetration [4]. Energy storage stands out as one of the more effective strategies to mitigate the decline in economic value of solar PV. In a scenario with 30% of annual energy met by solar PV, Mills and Wiser [5] find that the marginal economic value of solar PV increases by 80% when low-cost storage is deployed in the power system compared to a reference case without storage.
Energy storage can also be deployed at the same physical location as solar in a hybrid solar + storage facility. In the U.S., storage is eligible for the Federal Investment Tax Credit (ITC), equivalent to a 30% reduction in capital costs, when storage can be shown to charge from solar rather than from the grid. Commercial activity related to hybrid solar + storage plants is growing rapidly in the U.S., particularly in California where recent wholesale electricity market prices indicate the potential additional revenue from adding storage exceeds the additional cost (accounting for the ITC) [6].
One of the sources of economic value of solar + storage plants is its contribution toward meeting resource adequacy requirements. Adequacy is an aspect of overall power system reliability that "relates to the existence of sufficient facilities within the system to satisfy the consumer load demand or system operational constraints" [7]. In this paper the contribution of a resource toward adequacy is called the capacity credit. Estimates of a resource's capacity credit are often based on a probabilistic assessment that considers its reduction to the risk of a loss of load when available supply is less than demand. A prevailing method for estimating the capacity credit with a probabilistic assessment is called the effective load carrying capability (ELCC). The drivers of the capacity credit of stand-alone solar PV using probabilistic assessments are well established [8]. Many regulators, utilities, and regional planners account for the capacity credit of stand-alone solar PV in economic valuation studies [9]. Of particular importance, the capacity credit of solar declines with increasing penetration of solar PV as the timing of periods with the highest risk of insufficient generation can shift from the peak demand in the afternoon to peak net demand periods in the early evening when the sun sets [10]. Munoz and Mills show the importance of accounting for the decline in capacity credit of solar PV in capacity expansion modeling [11].
In contrast, the capacity credit of solar + storage resources is not yet well understood. The objective of this paper is to develop methods for exploring the drivers of the capacity credit of solar + storage. In practice, rules of thumb are used to determine the capacity credit of a combined resource. One approach is to calculate the capacity credit of solar + storage as the sum of the capacity credit of the independent components (e.g., the capacity credit of stand-alone solar plus the capacity credit of stand-alone storage) limited by the capacity of any shared equipment such as an inverter or point of interconnection limit. This rule of thumb is used for evaluating resource adequacy contribution of solar + storage in California [12] and for evaluating candidate resources for procurement in Colorado [13].
More generally, methods for calculating the capacity credit of energy-limited resources, including energy storage, are nascent. Sioshansi et al. [14] develop probabilistic methods to quantify the capacity credit of storage accounting for the storage level at the time of an outage and, through dynamic programming techniques, potential subsequent outages in later hours. Previous estimates of storage's capacity credit with probabilistic techniques from Tuohy et al. [15] do not account for subsequent outages, potentially overstating the contribution of storage. Both Sioshansi et al. [14] and Tuohy et al. [15] calculate the starting storage level in each hour assuming it is dispatched to maximize revenue from energy arbitrage, rather than to maximize the capacity credit. Alternatively, Parks [16] modifies the traditional probabilistic techniques to maximize the capacity credit of energy-limited resources by discharging the energy-limited resources in periods of highest risk of a loss of load. Hall et al. [17] use a similar technique to find the capacity credit of storage for the New York Independent System Operator. Byers and Botterud [18] use probabilistic methods to calculate the capacity credit of energy storage based on Monte Carlo simulations of system-wide chronological unit commitment and economic dispatch.
Additional variations on probabilistic techniques for finding the capacity credit of energy-limited resources include a two-stage optimization approach by Zhou et al. [19] and an approach by Nolan et al. [20] to calculate the capacity credit of a given demand response time series based on the assumption that demand response is dispatched to reduce peak demand.
A common challenge with probabilistic techniques is the large computational burden and detailed nature of the data required to conduct this analysis. This makes it more difficult to quickly explore the way capacity credit varies depending on technology configurations and characteristics of demand. Several simple alternatives to probabilistic techniques have been used in the literature. Denholm et al. [21] estimate the capacity credit of storage as the difference between the maximum demand and the maximum demand net of the storage dispatch. Storage discharge is chosen to maintain the maximum net demand at or below a target level and it can charge any time that does not increase the maximum net demand.
Richardson and Harvey [22] use storage to evenly allocate solar PV production across a day to meet a constant fraction of the demand above the minimum demand. They then calculate the capacity credit of the combined solar + storage facility using the Garver approximation to the effective load carrying capability [23]. Fattori et al. [24] dispatch storage to smooth the net demand profile based on a moving average window of different durations. The capacity credit is then based on the difference between the peak demand and peak demand net of solar + storage. None of these simple alternatives are directly validated against the more detailed probabilistic techniques. This paper presents a simple algorithm for calculating the capacity credit of energy-limited resources that, due to the low computational and data needs, is well suited to exploratory analysis. Importantly, validation against benchmarks based on probabilistic techniques shows that estimates based on the method can yield similar insights. The simple nature of the capacity credit calculations is used to evaluate the impact of a wide variety of different solar + storage configurations, particularly with respect to the strategy for coupling storage and solar PV. The case study focuses on Florida where solar is rapidly growing and demand peaks in the winter and summer.
Methods and Data
The foundation of this capacity credit calculation is the load duration curve (LDC) method employed in two capacity expansion models developed by the National Renewable Energy Laboratory (NREL), called the Regional Energy Deployment System (ReEDS) [19] and the Resource Planning Model (RPM) [25]. With the LDC method, the capacity credit is calculated as the reduction in the average highest peak net load hours relative to the average highest peak load hours. The calculation method can be visualized as the difference between an LDC, which sorts the load from the highest to the lowest over a specified period, such as a year, and a net LDC during the peak hours (Figure 1). Here the peak hours are defined as the top 100 hours of the year (the top 1.1% of hours). The net LDC is created by first reducing the hourly load by the corresponding generation from the resource in the same hour and then sorting the resulting net load from highest to lowest.
Because the load and net load duration curves are sorted independently, the gap between the load and net load duration curves represents the decrease in the highest net load hours, irrespective of when they occur. The LDC method can therefore capture any effects where deployment of a resource leads to a shift in the time of day or season in which the net load peak hours occur. In the case of energy-limited resources, the LDC method defines how to calculate the capacity credit for a given dispatch profile, but it does not specify how to dispatch resources to maximize its capacity credit. Section 2.1 presents an algorithm for finding the dispatch of energy-limited resources, which is central to calculating the capacity credit of storage using the LDC method as summarized in Figure 2. Sections 2.2 and 2.3 describe an approach to validate the capacity credit calculated with the LDC method against two benchmarks based on probabilistic techniques.
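A minimal sketch of the LDC capacity-credit calculation described above; the function and variable names are illustrative, not from the paper's code:

import numpy as np

def ldc_capacity_credit(load, net_load, n_peak=100):
    # Capacity credit via the LDC method: reduction in the mean of the
    # top n_peak hours of the independently sorted load and net load.
    top_load = np.sort(load)[::-1][:n_peak]    # load duration curve, peak hours
    top_net = np.sort(net_load)[::-1][:n_peak] # net load duration curve, peak hours
    return top_load.mean() - top_net.mean()    # MW of peak-hour reduction

# Toy example: 8760 hours of load and a resource generating a flat 50 MW.
rng = np.random.default_rng(0)
load = 1000 + 200 * rng.random(8760)
net_load = load - 50.0
print(ldc_capacity_credit(load, net_load))  # ~50 MW for a flat 50 MW resource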
Dispatch Algorithm
The dispatch algorithm is a linear programming model whose solution maximizes the capacity credit of storage, where the capacity credit is defined based on the LDC method. In the case of storage, the net LDC is created by both reducing demand by the energy generation from discharging storage and increasing demand by the energy required for charging storage.
The approach leverages insights from the literature on optimizing the conditional value at risk (CVaR) [26] and the fact that maximizing the capacity credit of a resource, based on the LDC method, is equivalent to minimizing the area under the net LDC in the peak net load hours. Because the resulting optimization model is linear, it can be constructed and solved within 20 seconds on a computer with a 2 GHz processor using an open source solver (e.g., GLPK [27] with the Pyomo package for Python [28], [29]).
The basics of the CVaR optimization formulation and its adaptation to capacity credit maximization are presented before the specific application of finding the dispatch of energy storage.
Conditional Value at Risk Optimization Formulation
In mathematical finance theory, one of the most widely used coherent measures of risk is the so-called CVaR index [30]. The VaR (value at risk) quantifies the losses of an investment portfolio at a specified probability. For example, the losses of an investment will have a 5% probability of being higher than the VaR at 95%. The conditional value at risk (CVaR) at 95% represents the average losses in those 5%-probability worst cases. Thus, it is directly proportional to the area of the density function over those 5% worst cases.
The CVaR optimization formulation minimizes the risk associated with buying a portfolio of assets by minimizing the CVaR at a certain percentile, according to different returns associated with several scenarios.
Given a portfolio of n assets, let xi be the per-unit amount of each one. Let s = 1,…,S be the different scenarios considered for the evolution of the price of the different assets. For each considered scenario s, the losses per asset can be calculated as the product of the per-unit amount xi times the loss ris for each asset in each scenario. Hence, the total losses for each scenario can be calculated as: Ls = Σi xi·ris (2). Having the density function for the total losses, and given a percentile α, the VaR is defined as the value of the loss distribution at that specific percentile (VaRα). Typically, a 95% percentile is used (VaR95%). As mentioned, the CVaR is defined as the mean of losses in the 5% (or, more generally, the 1−α) tail of the distribution.
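A small empirical illustration of VaR and CVaR on simulated scenario losses (the data and names here are illustrative):

import numpy as np

rng = np.random.default_rng(1)
losses = rng.normal(0.0, 1.0, 10_000)   # total portfolio losses L_s over S scenarios

alpha = 0.95
var = np.quantile(losses, alpha)        # VaR at the 95th percentile
cvar = losses[losses >= var].mean()     # CVaR: mean loss in the worst 5% tail
print(var, cvar)                        # CVaR is always at least as large as VaR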
The CVaR minimization for a percentile α can be expressed as in equation (3): min over the portfolio x of E[ Ls | Ls ≥ VaRα ] (3).
Rockafellar and Uryasev [30] prove that this CVaR minimization can be solved by using the linear optimization problem described by equations (4)-(7).
Objective Function: min ζ + [1/((1−α)·S)]·Σs zs (4)
Subject to: zs ≥ Σi xi·ris − ζ, for all s (5); x ∈ X (the set of feasible portfolios) (6); zs ≥ 0, for all s (7)
This methodology can be applied in other contexts where areas below some curves must be minimized or maximized. In particular, the CVaR model can be adapted to maximize the capacity credit of a resource defined with the LDC method, because the final objective is to minimize the area below the net load curve during peak hours.
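A runnable sketch of equations (4)-(7) in Pyomo; the paper mentions Pyomo with the GLPK solver, but the toy data and names below are ours:

import pyomo.environ as pyo

# Toy data: 3 assets, 5 scenarios of per-unit losses r[i, s]
r = {(0, s): l for s, l in enumerate([0.1, -0.2, 0.3, 0.0, 0.2])}
r.update({(1, s): l for s, l in enumerate([-0.1, 0.1, 0.2, 0.1, -0.3])})
r.update({(2, s): l for s, l in enumerate([0.0, 0.0, 0.1, -0.1, 0.1])})
n, S, alpha = 3, 5, 0.95

m = pyo.ConcreteModel()
m.x = pyo.Var(range(n), bounds=(0, 1))                 # per-unit holdings
m.zeta = pyo.Var()                                     # VaR proxy
m.z = pyo.Var(range(S), within=pyo.NonNegativeReals)   # tail excess, eq (7)

# Eq (4): minimize zeta + 1/((1-alpha)*S) * sum_s z_s
m.obj = pyo.Objective(expr=m.zeta + sum(m.z[s] for s in range(S)) / ((1 - alpha) * S))
# Eq (5): z_s >= sum_i x_i * r_is - zeta
m.tail = pyo.Constraint(range(S), rule=lambda m, s:
                        m.z[s] >= sum(m.x[i] * r[i, s] for i in range(n)) - m.zeta)
# Eq (6): a simple feasible-portfolio constraint (fully invested)
m.budget = pyo.Constraint(expr=sum(m.x[i] for i in range(n)) == 1)

pyo.SolverFactory('glpk').solve(m)
print(pyo.value(m.obj))   # minimized CVaR of the toy portfolio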
Capacity Credit Maximization
The application of the CVaR optimization to the maximization of the capacity credit for storage facilities can be understood through an analogy between the different problems to be solved.
On the one hand, CVaR minimization can be expressed through equation (3). On the other hand, maximizing the capacity credit is equivalent to minimizing the net load, NLh, over the H peak hours. Minimizing the net load can be carried out through a minimization of the area below the net load-duration curve for the first H hours. The primary contribution of this paper is to connect the goal of maximizing the capacity credit of a resource, based on the LDC method, to its equivalent formulation in equation (8), which minimizes the area under the net LDC in the peak net load hours: minX [ H·NL*H+1 + Σh max(NLh − NL*H+1, 0) ] (8).
Where, as shown in Figure 1b, NL*H+1 represents the net load level at the peak hour H+1, NLh the net load for hour h, and X the storage dispatch decisions (subject to all the operational and technical constraints of the storage system, such as chronology, maximum level of storage, and maximum output).
An analogy between the two problems shows how an equivalent linear problem can be used to solve the capacity credit maximization. The losses for each scenario are analogous to the net load of the system after the storage dispatch for every hour has been determined. Scenarios in the CVaR minimization are substituted by hours in the capacity credit maximization. Finally, the confidence level corresponds to the ratio of non-peak hours, that is, (8760−H)/8760. The parallelism between the two formulations is summarized in Table 1. The other major contribution of this paper is to leverage the insight from Rockafellar and Uryasev's solution to the CVaR minimization problem [30] to convert equation (8) into an equivalent linear problem, as described next. Applying the capacity credit formulation to storage requires fully modeling the storage dispatch decisions. This formulation characterizes a storage system by its nameplate capacity (MW), its energy capacity (MWh), and its round-trip efficiency. It also assumes the storage system charges and discharges at rates up to its nameplate capacity. The storage duration (in hours) is therefore the ratio of the energy capacity to the nameplate capacity.
The analysis uses hourly time steps; no sub-hourly constraints or ramping limits have been considered. The model also assumes perfect foresight over the whole analysis period.
Because the model searches for an optimal storage dispatch profile, the hourly system demand net of storage generation (the net load) is also a decision variable obtained from the model. The decision variable for the level of the net load just outside of the peak net load hours, NL*_{H+1}, is especially significant, as it defines the area of the net LDC that, when minimized, leads to the storage dispatch with the maximum capacity credit.
The calculation details are as follows:
Operational Constraints

Objective function (9): $\min\ H\cdot NL^{*}_{H+1} + \sum_{h} NL'_{h}$

Load and net load (10): $NL_h = D_h + c_h - d_h$

Identify peak hours (11): $NL'_h \geq NL_h - NL^{*}_{H+1}$

Ignore net load in non-peak hours (12): $NL'_h \geq 0$

Maximum storage discharge (13): $0 \leq d_h \leq \bar{P}$

Here D_h is the hourly demand net of renewable generation, c_h and d_h are the storage charge and discharge in hour h, NL'_h is an auxiliary exceedance variable, and P̄ is the storage nameplate capacity; these symbol names are reconstructed from the constraint labels. The objective function, equation (9), minimizes the area under the net LDC curve in the peak net load hours, as illustrated in Figure 1b. It is analogous to equation (4) in the CVaR minimization problem. The constraints that identify which hours are peak net load hours, equations (11) and (12), are analogous to equations (5) and (7), respectively, in the CVaR minimization problem. The remaining equations, (14)-(16), define the rest of the net load and storage constraints.
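For concreteness, the following Python sketch solves a toy instance of the linear program sketched above with scipy's linprog. The demand profile, storage parameters, symbol names (D_h, c_h, d_h), and the initial state of charge are illustrative assumptions, not the paper's inputs.

```python
# Toy instance of the capacity-credit LP (eqs. 9-16 as reconstructed above);
# all numeric inputs and the initial state of charge are assumptions.
import numpy as np
from scipy.optimize import linprog

T, H = 48, 4                      # hours modeled, number of peak hours kept
P, E, eta = 20.0, 80.0, 0.85      # power (MW), energy (MWh), charge efficiency
h = np.arange(T)
D = 100 + 30 * np.exp(-((h % 24 - 17) / 3.0) ** 2)   # toy demand, evening peaks

# Variables: [d_h | c_h | e_h | z_h | NL*] -> offsets into the decision vector
oD, oC, oE, oZ, oT = 0, T, 2 * T, 3 * T, 4 * T
nv = 4 * T + 1

obj = np.zeros(nv)
obj[oZ:oT] = 1.0                  # sum of exceedances z_h
obj[oT] = H                       # H * NL*_{H+1}  -> eq. (9)

# Eqs. (11)-(12): z_h >= (D_h + c_h - d_h) - NL*, with z_h >= 0 via bounds
A_ub = np.zeros((T, nv))
A_ub[h, oD + h] = -1.0
A_ub[h, oC + h] = 1.0
A_ub[h, oZ + h] = -1.0
A_ub[:, oT] = -1.0
b_ub = -D

# Storage energy balance e_h = e_{h-1} + eta*c_h - d_h, with e_0 = E/2 assumed
A_eq = np.zeros((T, nv))
A_eq[h, oE + h] = 1.0
A_eq[h[1:], oE + h[:-1]] = -1.0
A_eq[h, oC + h] = -eta
A_eq[h, oD + h] = 1.0
b_eq = np.zeros(T); b_eq[0] = E / 2

bounds = [(0, P)] * (2 * T) + [(0, E)] * T + [(0, None)] * T + [(None, None)]
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")

NL = D + res.x[oC:oE] - res.x[oD:oC]           # net load after optimal dispatch
print(f"peak demand {D.max():.1f} MW -> peak net load {NL.max():.1f} MW")
```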
Validation 2: Western United States Utility Benchmark
In the second validation, the ELCC of 4-hour duration storage calculated by a utility in the western U.S. using the method described by Parks [16] is compared to the capacity credit with the LDC method of 4-hour storage dispatched using equations (9)-(16). The capacity credit from the ELCC and LDC methods both use the same demand and variable renewable energy data from the western U.S. utility. The capacity credit of storage is considered under two different scenarios: one with near-term solar penetration levels ("Low Solar") and one with solar penetration levels that reflect expected expansion in solar by 2030 ("High Solar").
The utility has a peak demand of more than 7 GW. The ELCC is based on the utility having a target LOLE of 2.4 hours/year.
The primary difference between this validation and Validation 1 is that here the utility independently develops the storage dispatch profile for the ELCC calculation. In contrast, in Validation 1 the storage dispatch profile from solving equations (9)-(16) is used to calculate the capacity credit both with the LDC method and with the ELCC method. In conjunction with the results from the first validation, the second validation also illustrates the applicability of the method to multiple regions.
Case Study
The case study applies the dispatch algorithm in Section 2.1 to an evaluation of the capacity credit of different solar + storage configurations (Table 2). In all coupled cases, the ratio of the inverter capacity to the PV module capacity is kept constant, rather than changing the inverter size as storage is added. The different configurations are illustrated in Figure 3.
Results and discussion
Results begin with the validation of the method described in Section 2.1 based on two different benchmarks. The method is then used to explore the capacity credit of different solar + storage configurations for two of the Florida utilities.
Validation 1: Florida Utility Benchmark
The capacity credit estimated with the LDC method is compared to the ELCC calculated with a probabilistic method for different storage durations using data from three Florida municipal utilities in Figure 4. As a point of comparison, the capacity credit for stand-alone PV is also shown using both methods. The sensitivity of the capacity credit to different weather patterns is highlighted by the range of values depending on which year of data between 2006 and 2016 was used. The capacity credit is also calculated using all 11 years of data at once. In this case, the LDC method maximizes the capacity credit over the peak 1100 hours (i.e., the peak hours continue to be defined as the top 1.1% of all hours).
Irrespective of the calculation method, these results show that even though storage is fully dispatchable, its capacity credit is highly dependent on the duration of storage. With too few hours of energy, storage cannot continuously reduce the peak net load hours on days with high, broad peaks. On these days, storage is more likely to be depleted when reducing peak load, leaving it unavailable for discharge during other peak hours. Because storage's capacity credit depends on load shape, it can vary from year to year. Storage with a given duration is more likely to have a higher capacity credit in years with narrower peaks, while achieving a high capacity credit in years with broader peaks requires longer-duration storage. These findings are in line with previous estimates based on detailed probabilistic methods (e.g., [14]). The 20-50% capacity credits of solar are within the range of solar capacity credits, at low penetrations, reported in other studies or assumed in utility planning studies, though at the lower end [38], [39]. The solar capacity credit is somewhat lower than the 54% capacity benchmark for both storage and stand-alone solar. For these two utilities, the main difference is that the LDC method tends to overestimate the capacity credit of storage and stand-alone solar, particularly for longer storage durations. Even for the small utility with a peak demand of less than 1 GW (City of Tallahassee), the solar capacity credit with the LDC method is somewhat similar to, though slightly higher than, the ELCC.
On the other hand, for the small utility (City of Tallahassee), the capacity credit of storage estimated with the LDC method is much greater than the ELCC. This starkly different result stems from the small number of conventional generating stations operated by Tallahassee, some of which are relatively large compared to the load, which leads to a widely distributed risk of outages (or a widely distributed LOLP). Whereas the risk is concentrated in less than about 0.5% of the hours for JEA and FMPP, Tallahassee's risk is distributed over about 17% of the year (Figure 5).⁴ As a result, short-duration storage makes a much smaller contribution to increasing overall system reliability for the City of Tallahassee compared to the contribution of storage in JEA and FMPP. ⁴The concentration of the risk of outages is measured as the percentage of hours in which the LOLP is greater than 5% of the maximum LOLP. A smaller percentage of hours in which the LOLP is greater than 5% of the maximum indicates that the risk of outage is more concentrated in peak hours.
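A minimal sketch of the footnote's concentration metric, assuming an hourly LOLP series is already available (the profile below is synthetic):

```python
# The footnote's concentration metric: share of hours whose LOLP exceeds
# 5% of the annual maximum. The LOLP profile here is a synthetic stand-in.
import numpy as np

def risk_concentration(lolp_hourly, frac=0.05):
    """Fraction of hours whose LOLP exceeds `frac` of the maximum hourly LOLP."""
    lolp = np.asarray(lolp_hourly, dtype=float)
    return float(np.mean(lolp > frac * lolp.max()))

# Synthetic LOLP profile with risk concentrated around one period of the year
hours = np.arange(8760)
lolp = np.exp(-0.5 * ((hours - 5000) / 40.0) ** 2)
print(f"risk concentrated in {100 * risk_concentration(lolp):.2f}% of hours")
```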
Figure 5. Comparison of the concentration of risk of outages in peak hours between three Florida utilities
Though the deviation between the capacity credit estimated with the LDC method and the ELCC for the City of Tallahassee is important to understand, this situation is expected to be rare. Few utilities are as small as the City of Tallahassee, and even among small utilities it is unusual to find individual generators that constitute such a large fraction of the total capacity. More generally, since each utility is modeled in isolation, several factors that could be important in determining the true risk profile for utilities are not addressed.
These include the potential to access generation over other transmission lines and to leverage shared reserves for short-term events. Probabilistic methods that can account for transmission capacity to neighboring utilities exist [41], though they are not considered in this validation.
Validation 2: Western United States Utility Benchmark
Here the capacity credit of storage, found using the algorithm described in Section 2.1, is compared to the ELCC calculated from a detailed probabilistic loss of load probability model. In this validation the dispatch of the storage was also done independently. Figure 6 compares the marginal capacity credit of additional 4-hour duration storage as increasing amounts of storage are added to the system. The capacity credit with the algorithm in Section 2.1 is again directionally consistent and quantitatively similar to the ELCC benchmark. Both methods find that the capacity credit of 4-hour duration storage declines from an initial marginal capacity credit of 85-95% to a marginal capacity credit of about 40-60% as storage nameplate capacity increases to 15% of the utility's peak demand. This decline occurs because the residual peak net demand gets broader as more storage is deployed. Reducing broader peaks requires longer storage durations. Alternatively, deploying more storage with a fixed duration results in a lower capacity credit. Both methods also show that the marginal capacity credit of storage is consistently 10-20 percentage points greater in the high solar scenario compared to the low solar scenario. This finding is consistent with results from Denholm et al. [42], which show that high solar penetrations in California can narrow net load peaks and delay the decline in storage's capacity credit.
Case Study
The dispatch algorithm is used to evaluate the impact of different configurations on the capacity credit of solar + storage for JEA, a utility with similar peak demand levels in winter and summer, and FMPP, a utility whose demand is highest in the summer.
Using FMPP demand and solar data for 2012, along with the assumption that storage and PV both have a nameplate capacity of 100 MW, Figure 7a shows how a coupled solar + storage system can have a capacity credit less than that of an independent system. In this particular case, the capacity credit of solar + storage is not impacted by configuration for short-duration storage (1 hour). Increasing the duration, however, produces a gap between the capacity credits of independent and coupled systems. The capacity credit of solar alone is 50 MW (50% of its nameplate capacity) and, with 4-5 hours of storage duration, the capacity credit of storage alone can exceed 90 MW (90% of its nameplate capacity). Bringing these two resources together with a shared 100 MW inverter limits the capacity credit of the coupled solar + storage system. Similar behavior is observed with the JEA demand and solar data, but here the difference between the capacity credit of the tightly and loosely coupled configurations is greater. In the tightly coupled case, the capacity credit of solar + storage can even be less than the capacity credit of storage alone (Figure 7b). The reason restricting charging to solar impacts the capacity credit in JEA may be that some of the JEA peak hours occur in the winter, when solar production is lower and less able to fully recharge storage during the cold weather events that drive peak winter demand. In addition, demand peaks are wider for JEA than for FMPP; wider peaks require more energy, which is limited by solar generation in the tightly coupled case. At 6 hours of storage duration, requiring storage to charge only from solar (tightly coupled) results in a capacity credit that is less than that of storage alone (which can charge from the grid during off-peak hours). In contrast, when storage size is reduced to only 20% of the solar nameplate capacity, the capacity credit of solar + storage is nearly equivalent across the independent, loosely coupled, and tightly coupled configurations. With storage sized well below the inverter capacity, and the capacity credit of solar less than 50% of its nameplate capacity, there are few opportunities for storage and solar to compete for limited inverter capacity. Likewise, in the tightly coupled case, less energy is required to charge the smaller storage system, making it easier to charge storage only with solar energy.
Even if storage and solar are equally sized, it may be possible to achieve the same (or similar) capacity credit with a coupled system as with an independent system if the inverter capacity is increased in the coupled system. This increases the cost of the coupled system, but it may be worth the cost if increasing resource adequacy is a high priority for the utility. The requirement to only charge storage from solar in the tightly coupled case may continue to be a limiting factor.
Ultimately, the optimal configuration of solar and storage depends on much more than maximizing the resource adequacy contribution [37]. Storage might reduce solar's levelized cost of energy, especially when the photovoltaic panels are oversized relative to the inverter, by charging coupled storage using energy that would otherwise be clipped.
Storage can also provide additional value streams, including energy arbitrage and ancillary services. It can also smooth the solar production profile due to passing clouds. Future analysis could investigate how the different uses of storage alter the optimal solar + storage configuration and whether any of these other factors affect the capacity credit.
Conclusions
Many factors impact the capacity credit of solar and storage, including weather, utility demand profiles, solar and storage deployment levels, and the configuration of solar and storage systems. Exploratory analysis of the relative importance of different factors can be useful before evaluating specific cases via more detailed and resource-intensive modeling.
The algorithm developed in this paper is a fast and relatively simple approach for identifying the dispatch that maximizes the capacity credit of storage and solar, suitable for such exploratory analysis.
Validation of the method shows that it mimics results from more detailed probabilistic methods, except for a very small utility with relatively large generators and widely distributed high-risk hours. Future analysis could more broadly investigate the circumstances that cause deviations from the benchmark based on probabilistic techniques.
Application of this method to data representing Florida utilities illustrates how it can improve upon rules of thumb used in practice. In particular, application of this method shows how, depending on the demand profile of the utility, the capacity credit of tightly coupled solar and storage can be less than the capacity credit of storage alone. This interaction is missed if utilities instead assume the capacity credit of solar + storage is the sum of the capacity credit of the independent components limited by the capacity of any shared equipment.
| 6,879.6 | 2020-11-01T00:00:00.000 | ["Engineering"] |
Mathematical Modeling of Double-Skin Facade in Northern Area of China
This paper focuses on the operation principles of the double-skin facade (DSF) in winter in severely cold regions. The paper discusses the main factors influencing building energy consumption, including the heat storage cavity spacing, the air circulation mode, the building envelope, and the building orientation. First, we study the relationship among the thermal storage cavity spacing, the temperature distribution in the cavity of the DSF, and the indoor temperature. Then, we discuss the influence exerted on the ambient temperature in the building by the air circulation system of the double-skin facade. Finally, we analyze the influence of different building envelopes and different building orientations on the whole-building energy consumption of DSF buildings. Based on the results of the numerical simulation, the paper puts forward an operation strategy analysis for DSF buildings in severely cold regions, with the aim of saving building energy.
Introduction
Modern architecture is dominated by transparent buildings. The large glazed areas result in high building heating and cooling loads, leading to high levels of energy consumption and therefore significant financial and environmental burdens. The double-skin facade is one potential response to these problems. At the same time, in both developed and developing countries, building energy consumption accounts for a large share of total world energy consumption, and countries around the world have made building energy saving a focus of their work [1]. Architects have long hoped to decrease building energy consumption while making the shape of the building more beautiful and unique through the application of glass curtain walls. Thus research on the thermophysical properties of the glass curtain wall has become a hot topic. This paper studies the energy-saving properties of the DSF in severely cold regions based on the winter weather conditions of Shenyang, using the flow field simulation software FLUENT and the building energy simulation software DEST to analyze the temperature characteristics and building energy consumption of the double-skin facade.
Thermal Properties of the Double Skin Facade
The structure of the DSF is shown in Figure 1. The main factors driving airflow within the cavity are buoyancy and wind pressure [2]. The buoyancy (stack) pressure difference can be written as

$\Delta p_b = \rho_o\, g\, H\, \frac{T_{cav} - T_o}{T_{cav}}$

where ρ_o is the outdoor air density (kg/m³), g is gravitational acceleration (9.8 m/s²), H is the cavity height (m), T_cav is the average cavity temperature, and T_o is the outdoor air temperature. The wind-induced pressure difference follows the ASHRAE power-law wind profile, where U_met is the measured wind speed at height h_met, δ_met is the wind boundary layer thickness, a is the ASHRAE local terrain exponent, h is the height of the lower cavity opening, and s is the distance between the inlet and outlet openings. The airflow rate through the cavity can then be determined from

$Q = A \sqrt{\frac{2\,\Delta p}{\rho_o\,(\zeta_{cav} + \zeta_{op})}}$

where ζ_cav and ζ_op are the pressure loss characteristics of the cavity and openings (the explicit forms are reconstructed here in their standard shapes). In summer conditions, the DSF can exploit the chimney effect formed in the cavity, taking away the indoor heat through natural ventilation and reducing the indoor temperature. In winter conditions, the DSF needs to exploit as much solar radiation as possible to form a greenhouse effect in the cavity, which improves the insulation effect of the cavity and reduces the heating energy consumption.
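As a rough check of the driving-pressure relations above, the sketch below evaluates the stack pressure and a resulting cavity airflow; all inputs (density, temperatures, the combined loss coefficient, and the opening area) are illustrative assumptions, not values from the paper.

```python
# A rough evaluation of the stack pressure and cavity airflow; all inputs
# (density, temperatures, loss coefficient, opening area) are assumed values.
import math

rho_o, g = 1.35, 9.8             # outdoor air density (kg/m^3), gravity (m/s^2)
H_cav = 3.0                      # cavity height (m)
T_cav, T_o = 288.0, 263.0        # average cavity / outdoor temperature (K)

dp_b = rho_o * g * H_cav * (T_cav - T_o) / T_cav   # stack pressure (Pa)

zeta_total, A = 4.0, 0.2         # combined loss characteristic, opening area (m^2)
Q = A * math.sqrt(2 * dp_b / (rho_o * zeta_total)) # cavity airflow (m^3/s)

print(f"stack pressure: {dp_b:.1f} Pa, airflow: {Q:.2f} m^3/s")
```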
The DSF is a special three-layer glass curtain wall system. Under the condition of no ventilation in winter, the forms of heat transfer across the heat storage air layer of the glass curtain wall include convective heat transfer, radiative heat transfer, and thermal conduction [3]. The heat transfer coefficient of a common glass curtain wall is calculated as

$K = \left(\frac{1}{h_e} + \frac{d}{\lambda} + \frac{1}{h_{in}}\right)^{-1}$

where K is the heat transfer coefficient of the common glass curtain wall; h_e and h_in are the surface heat transfer coefficients of the outdoor and indoor sides, W/(m²·K); λ is the thermal conductivity of the glass, W/(m·K); and d is the thickness of the glass, m.
The surface heat transfer coefficient of the outdoor side, h_e, is calculated as a function of the outdoor wind speed V (m/s).
In accordance with the standard above, when comparing the heat transfer of the glass, we set h_e to 23 W/(m²·K) for an outdoor wind speed of 3 m/s. The surface heat transfer coefficient of the indoor side is calculated as $h_{in} = 3.6 + 4.4\,\varepsilon/0.83$, where ε is the emissivity of the indoor surface; since the indoor side of the DSF is ordinary transparent glass, ε is taken as 0.83, giving h_in = 8.0 W/(m²·K). The thermal conductivity of ordinary toughened glass is 0.76 W/(m·K); for a glass thickness of 6 mm, the glass thermal resistance is 0.008 m²·K/W. According to the calculation formula above, the heat transfer coefficient of 6 mm ordinary toughened glass is 6.16 W/(m²·K). Similarly, the heat transfer coefficient of 24 mm thick Low-e insulating glass is computed as 1.76 W/(m²·K). In the FLUENT material property settings, all glass is set as a semi-transparent medium [4].
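The series-resistance calculation lends itself to a small helper function. The sketch below reproduces the 6 mm toughened-glass case from the quoted inputs; note that this simple series formula yields about 5.7 W/(m²·K), slightly below the 6.16 W/(m²·K) quoted above, so the standard presumably includes terms beyond this sketch.

```python
# Series-resistance U-value helper; reproduces the 6 mm toughened-glass case
# from the quoted inputs. It yields ~5.7 W/(m^2 K), a bit below the 6.16
# quoted in the text, so the standard presumably includes additional terms.
def glazing_u_value(h_out, h_in, layers):
    """U = 1 / (1/h_out + sum(d_i / lambda_i) + 1/h_in); layers = [(d, lam), ...]."""
    resistance = 1.0 / h_out + 1.0 / h_in + sum(d / lam for d, lam in layers)
    return 1.0 / resistance

print(glazing_u_value(23.0, 8.0, [(0.006, 0.76)]))  # ~5.67 W/(m^2 K)
```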
CFD Numerical Simulations
The building indoor thermal environment in severely cold regions is mainly influenced by the combined effect of the thermal environment, the humidity environment, the air environment of the microclimate, and the atmospheric environment. These effects either directly or indirectly influence the building indoor environment to a certain degree [5]. In severely cold regions, the design of the building envelope, and especially the thermal design of an adaptive glass curtain wall, has its own characteristics [6], different from the other thermal design zones in China. Severely cold regions mainly control the envelope insulation indicator and generally do not take summer heat insulation into consideration. We use the software FLUENT to explore the relationship among different heat storage cavity spacings, different gas circulation systems, the temperature in the cavity of the double-skin facade, and the indoor temperature. Figure 2 shows the geometric model established by GAMBIT when the cavity spacing is 0.4 m. This work mainly simulates the flow state of the air inside the DSF and the influence exerted by the curtain wall on the airflow in the room. The airflow inside the channel of the DSF is a heat and mass transfer problem, so the RNG k-ε turbulence model and the DO radiation model are adopted in this research. The Boussinesq approximation is used to simplify the treatment of the buoyancy terms produced by temperature differences [7]. The climate of Shenyang (latitude 41.8° N, longitude 123.38° E, in the eighth eastern time zone) is selected as the research object. According to the Heating and Ventilation Design Specification, the designed indoor comfort conditions are as follows: temperature 18-22 °C; relative humidity 40-60%; wind speed not more than 0.2 m/s. The solar radiation of the winter solstice in Shenyang is shown in Table 1.
Simulation Results
Analysis. In winter conditions, the DSF generally operates in a closed, non-circulating form in order to create a greenhouse effect and improve the indoor temperature [8]. We select the main hours of the winter solstice for the simulation, since on that day the duration of solar radiation during the daytime is shortest. Under this condition, with thermal channel spacings of 200 mm, 400 mm, and 600 mm, we simulate the temperature in the thermal channels and the changes of the indoor temperature. We use FLUENT to simulate these 27 cases, and the simulation results are shown in Figures 4 and 5. With reference to the change curve of solar radiation on the winter solstice (Figure 3), solar radiation is an important parameter affecting the cavity temperature and the indoor temperature, and the changes of cavity temperature and indoor temperature follow the changes of the solar radiation.
Figure 4 shows that during winter daytime the cavity temperature of the DSF with all three spacing values is obviously higher than the outdoor temperature; the maximum difference between the cavity temperature and the outdoor temperature can be up to 26 °C. The greenhouse effect is thus very significant and plays a role in raising the cavity temperature. Meanwhile, during winter daytime a larger cavity spacing is conducive to gaining more solar radiation heat, which has a positive effect on raising the cavity temperature. As shown in Figure 5, similar to the behavior of the cavity temperature, the changes of the indoor temperature are consistent with the changes of solar radiation. When the solar radiation reaches its maximum, the indoor temperature reaches its maximum; when the solar radiation gradually decreases, the cavity temperature decreases gradually. However, a double-skin facade with a short cavity spacing is conducive to extending the time over which heat is released from the interior and the cavity to the outside, so for a short period it can play the role of thermal insulation when solar radiation declines.
Simulation of the Influence of the Gas Circulation Mode on the Thermal Performance of the Double-Skin Facade
In severely cold regions, the external-circulation DSF is usually adopted in summer [9]. To make full use of solar radiation and achieve a greenhouse effect in winter, the air inlet and outlet on the inner side of the south-facing DSF are generally closed. Under direct sunlight, a closed greenhouse with a high temperature forms in the cavity, so the heat dissipation from the interior to the outer environment is reduced. But the north-facing double-skin facade generally cannot get enough direct sunlight. In order to effectively increase the thermal resistance of the heat storage cavity and reduce the indoor heat loss, the general approach is to send the indoor exhaust air through the delivery outlet on the indoor side into the cavity and then discharge it from the air outlet on the upper side of the cavity to the outdoor environment (as shown in Figure 6). The exhaust, whose temperature is close to the indoor temperature, preheats the ventilated cavity; to some degree this reduces the heat loss through the DSF to the outside and decreases the indoor air-conditioning load, which causes a decrease of air-conditioning energy consumption through the DSF [10].
Scheme Design of the Gas Circulation Mode Simulation.
From the analysis above, it is known that orientation and air distribution mode have an important influence on the thermal performance of the DSF. For different orientations, the operational mode of the DSF will differ because of the solar radiation; for the same orientation, there are apparently large distinctions between the thermal performance of the DSF without air circulation and that with air circulation. Meanwhile, the amount of air circulation influences the thermal performance of the DSF to some degree. The scheme design of the simulation is shown in Table 2. On the winter solstice, we simulate the south-facing DSF hourly from 7:00 to 15:00, calculating the cavity temperature of the DSF and the indoor temperature for increasing amounts of air circulation inside the south-facing DSF in winter.
The Numerical Simulation and Comparative Analysis of the DSF with and without Air Circulation in Winter
(1) Geometric Model. This group of simulations compares the DSF with and without air circulation, so two sets of physical models need to be built. One is the DSF model without air circulation, also known as the closed DSF; it was already built for the study of the influence of the storage cavity spacing on the thermal performance of the DSF, as shown in Figure 2. The other is a DSF model with one pair of air circulation openings, also known as the inner-loop DSF, as shown in Figure 7.
Both models use a storage cavity spacing of 0.4 m; the geometric model of the DSF with one pair of circulation openings, created by GAMBIT, is shown in Figure 8. The model and boundary conditions are selected as previously described.
(2) Analysis of Simulation Results. We again simulate the main hours of the winter solstice hourly, comparing the temperatures of the inner-loop and closed double-skin facades with a thermal channel spacing of 0.4 m, for a total of 18 cases.
The simulation results are shown in Figures 9 and 10. Comparison shows that the inner-loop DSF is more conducive to improving the indoor air temperature during winter daytime, but during periods of low solar radiation or at night the room heat is lost quickly from the inside out. Thus, on winter nights, the DSF without air circulation is more conducive to holding the indoor temperature. A double-skin glass curtain wall with an inner loop is virtually equivalent to one with an increased heat storage cavity spacing, which is unfavorable for delaying indoor heat loss; this corroborates the simulation conclusions of the previous section. At night, or when solar radiation is small, a DSF with a small storage cavity spacing is more conducive to holding the indoor temperature [11].
The Numerical Simulation and Comparative Analysis of the Amount of the DSF Air Circulation in Winter.
This section requires two geometric models: one for the DSF with one pair of air circulation openings, which was constructed for the analysis in the last section, and one for the DSF with two pairs of air circulation openings, whose geometry is shown in Figure 11.
The model and boundary conditions are set as previously indicated. Based on the simulation results shown in Figures 12 and 13, the analysis shows that during periods of strong winter solar radiation, air circulation is conducive to raising the average room temperature, and the indoor temperature improves further as the amount of air circulation increases. However, during periods of weak solar radiation or at night, air circulation does not insulate well: it speeds up heat transmission from indoors to outdoors, causing the room temperature to drop rapidly, and the greater the amount of air circulation, the faster the heat loss. So when solar radiation is small or at night, it is better to use the closed, no-loop DSF [12] to preserve the room temperature and reduce indoor heat loss.
DEST Numerical Simulations
This section mainly uses the DEST energy analysis tool to simulate the energy consumption of the DSF for different envelope structures and different building orientations [13]. From the earlier FLUENT simulation results, the glass surface temperatures of the inner and outer layers of the DSF at the main hours of the winter solstice are available. We first consider south-facing DSFs with cavity widths of 200 mm, 400 mm, and 600 mm [14], substituting into the heat transfer coefficient formula described previously. With an outdoor temperature of 2.1 °C and solar radiation of 650 W/m², the calculated results are shown in Table 3. The table shows that, over the 200-600 mm range, the cavity spacing has little influence on the heat transfer coefficient of the DSF, which changes by only 0.8%. Therefore, the effect of cavity spacing can be ignored in the energy simulation.
Establishing the Model.
The DEST energy analysis tool allows a relatively accurate energy simulation of the office building. The typical office building modeled is a five-story office building; the main room types are ordinary offices, reception rooms, leadership rest rooms, bathrooms, and exhibition rooms. The established model, shown in Figure 14, is the three-dimensional model of the building generated by DEST.
In order to investigate the energy-saving features of the DSF in cold regions, energy simulations are run for the typical office building with three envelope types: the DSF, a Low-e coated hollow glass curtain wall, and the concrete-perlites-85 wall. The characteristics of these three envelope structures are shown in Table 4.
In winter the DSF has no shading measures, the thermal channel is closed, and there is no external circulation ventilation; the Low-e coated hollow glass curtain wall envelope and the concrete wall with steel-framed outer windows are likewise modeled without shading in winter [15].
Influence of the Different External Structure Types on Building Energy Consumption.
DEST simulations of the three building models with different external structures generate the building energy consumption load summary shown in Table 5.
Table 5 shows that the DSF has a better heating energy-saving effect than the Low-e hollow glass curtain wall, saving about 17.2% relative to it, while consuming 6.2% more than the concrete-perlites-85 wall; the heating indicator is 29.3 W/m² [16-18]. For the summer cooling load, the DSF consumes 26.4% more energy than the commonly used concrete-perlites-85 exterior wall and saves 12.5% relative to the Low-e coated hollow glass curtain wall [19].
Thus, the DSF does not play an outstanding energy-saving role in summer. The summer air-conditioning cooling indicators show a rule similar to the cooling load. This reflects that the energy-saving effect of the DSF is more suitable for cold climate regions, which have a short summer with little air-conditioning demand and a long cold winter with higher heating requirements [20, 21]. Figure 15 clearly shows the energy consumption of the DSF compared with the other building envelopes in each season.
Influence of Building Orientation on Building Energy Consumption.
Building orientation has a great impact on the thermal performance of the DSF. Taking the typical five-story office building in a cold region as the example, the three external structures in Table 5 are each simulated with DEST; the DSF in each orientation is closed without internal circulation, for a total of nine cases. Because the DSF area and indoor space of the architectural model differ among the south, north, east, and west orientations, a direct comparison of annual cumulative loads has no practical significance; therefore, the Low-e coated hollow glass curtain wall and the concrete-perlites-85 wall are chosen as references [22-24], and cumulative loads are compared against them. The specific simulation results are shown in Table 6.
Based on the DSF results, and with reference to the cooling, heating, and annual cumulative loads of the Low-e hollow glass curtain wall and the concrete-perlites-85 exterior wall, the energy saving ratio for each orientation was calculated; the energy saving ratios are shown in Table 7.
Figures 16 and 17 compare the total annual heating load and the annual heating energy saving rate of the DSF for different building orientations. Normative provisions for cold regions require heating to be considered, while air-conditioning requirements generally need not be [25]; this work therefore focuses its comparative analysis on winter heating energy consumption. As shown in Figure 16, the concrete-perlites-85 facade shows good energy-saving and insulation performance; its winter heating energy consumption is significantly lower than that of the DSF and the Low-e hollow glass curtain wall regardless of orientation.
The winter heating energy consumption of the DSF is lower than that of the Low-e hollow glass curtain wall for every orientation; for similar materials, the DSF thus shows a strong advantage in structure and function [26, 27]. Referring to the histogram in Figure 17, the energy saving rate descends from east, to west, to north, to south.
Conclusion
Starting from the climatic characteristics of cold regions and combining the winter operation principles of the DSF, this work draws conclusions for the energy-saving design of the DSF adapted to cold regions.
(1) We simulated different cavity spacings of the DSF using CFD software and found that, during the winter daytime when solar radiation is strong, a DSF with a larger cavity spacing yields higher indoor and cavity temperatures, while when solar radiation is small or at night, a DSF with a smaller cavity spacing has better thermal insulation properties. (2) Winter conditions generally use the DSF without circulation, but the simulations found that during the winter daytime a DSF with air circulation yields a more favorable indoor environment, and this phenomenon becomes more apparent as the amount of air circulation increases. Closing the DSF helps maintain the indoor temperature at night.
(3) Building energy simulations were performed for the DSF, the Low-e hollow glass curtain wall, and the concrete-perlites-85 wall; the simulations found that the double-skin glass curtain wall has a good energy-saving effect in winter but does not show outstanding energy-saving features in summer. This energy-saving profile makes it suitable for the climate of cold regions, with their long cold winters and short summers.
(4) Calculation of the total annual heating load for four building orientations shows that the energy saving rate of the DSF, from high to low, is east, west, north, and south.
Figure 1: The structure of DSF.
Figure 3: The solar radiation of the winter solstice.
Figure 4: The interior temperature distributions when the cavity spacing of DSF is different.
Figure 5: The room temperature distributions when the cavity spacing of DSF is different.
Figure 7: The inner loop of DSF.
Figure 8: The structure of DSF which has one pair of inner loops.
Figure 9: The contrast of interior temperature of DSF when the inner loop is set or not.
Figure 11: The structure of DSF which has two pairs of inner loops.
Figure 13: The contrast of indoor temperature when the amount of airflow is changed.
Figure 16: The contrast of heating load in one year when the building orientation is changed.
Table 1: The south solar radiation of the winter solstice in Shenyang.
Table 2: The simulation program of airflow movement.
Table 3: The heat transfer coefficient of DSF.
Table 4: The performance of the walls.
Table 5: Building energy consumption when the external wall is changed.
Table 6: The heat and cold load of the building when the building orientation is changed.
Table 7: The contrast of the ratio of energy conversion when the building orientation is changed.
| 5,252.8 | 2013-03-17T00:00:00.000 | ["Engineering"] |
Observation of an optical anisotropy in the deep glacial ice at the geographic South Pole using a laser dust logger
We report on a depth-dependent observation of a directional anisotropy in the recorded intensity of backscattered light as measured by an oriented laser dust logger. The measurement was performed in a drill hole at the geographic South Pole, about a kilometer away from the IceCube Neutrino Observatory. The drill hole remains open for access, after the SPICEcore collaboration had retrieved a 1751 m ice core. We find the anisotropy axis of 126 ± 3° as measured below 1100 m to be compatible with the local flow direction. The observation is discussed in comparison to a similar anisotropy observed in data from the IceCube Neutrino Observatory and favours a birefringence based scenario over previously suggested Mie scattering based explanations. In the future, the measurement principle, when combined with a full-chain simulation, may have the potential to provide a continuous record of fabric properties along the entire depth of a drill hole.
Introduction
The viscosity of an individual ice crystal strongly depends on the direction of the applied strain. As a hexagonal crystal, ice will most readily deform as shear is applied orthogonal to the c-axis, which leads to slip of the basal planes (Petrenko and Whitworth (2002)). Thus, individual grains elongate, with the major axis being aligned perpendicular to the c-axis. In large-scale systems, such as glaciers or ice sheets, ice is compressed under its own weight and as a result flows away from the accumulation region. This leads to preferential c-axis orientations (see Alley, 1988), most commonly girdle fabrics, where c-axes are predominantly found on a plane, with the plane's normal vector being aligned with the flow direction. Ice fabric can not only be observed through macroscopic imaging of ice cores (Weikusat et al., 2016), but also leads to a directionality in the propagation of mechanical and electromagnetic radiation, in principle allowing for remote-sensing of the ice fabric.
The mechanical anisotropy of ice means that the speed of sound depends on the fabric realization. This has for example been derived and measured by Kluskiewicz et al. (2017). Ice crystals are also a birefringent material, with any incoming electromagnetic radiation being separated into an ordinary and extra-ordinary ray of perpendicular polarization with respect to the c-axis, which propagate with different refractive indices. This is classically observed as a direction-dependent delay in the propagation of radio waves, as for example described by Fujita et al. (2006).
Recently, as part of ice calibration measurements for the IceCube Neutrino Observatory (Aartsen et al., 2017), Chirkin (2013) described the observation of an optical anisotropy, where about twice as much light is observed along the glacial flow axis versus orthogonal to the flow axis, at a receiver 125 m away from an isotropic emitter. The effect was originally modelled as a direction dependent modification to Mie scattering quantities, either through a modification of the scattering function as proposed by Chirkin (2013) or through the introduction of a direction dependent absorption as introduced by Rongen (2019). As also shown by Rongen (2019), both parameterizations lack a thorough theoretical justification and resulted in an incomplete description of the IceCube data.
As the wavelength of ∼400 nm employed in the IceCube studies is significantly smaller than the average grain size, the effect is challenging to derive from first principles. First attempts have been made by Chirkin and Rongen (2019) by attributing the effect to the cumulative diffusion that a light beam experiences as it is refracted or reflected on many grain boundary crossings in a birefringent polycrystal with a preferential c-axis distribution.
In this scenario the diffusion is found to be strongest when photons initially propagate along the flow and smallest when initially propagating orthogonal to the flow. In addition photons are, on average, deflected towards the flow axis.
The deflection per unit distance increases for stronger girdle fabrics, a larger average crystal elongation or a smaller average crystal size. For crystal realizations where the deflection outweighs the additional diffusion along the flow axis compared to the diffusion along the orthogonal direction, the photon flux along the flow axis will increase with distance compared to the photon flux along the orthogonal axis.
We add to the body of anisotropy observations by providing the first direction-dependent measurement of the intensity of back-scattered, optical light returning to the oriented dust logger deployed down a glacial bore hole. If the anisotropy is caused by Mie scattering a reduced return signal is expected when the light source points along the flow, while more light is expected to return in case of the birefringence and absorption explanations.
The oriented dust logger
The dust logger, as sketched in Figure 1 and introduced by Bramall et al. (2005), consists of a 404 nm laser line source, emitting a 2 mm thin, horizontal fan of light about 60° across. A small fraction (10⁻¹⁰ to 10⁻⁶) of all emitted photons is back-scattered or reflected and returns to the bottom section of the dust logger where a 1" Hamamatsu photon-counter module is located.
Scattering and absorption on air bubbles, soot and other impurities as described by Mie scattering theory is traditionally thought to be the dominant contribution to the return signal. However, taking into consideration the findings of Chirkin and Rongen (2019), diffusion on grain boundaries may also contribute non-negligibly to the signal.
The intensity of the light source can be adjusted throughout the logging process. To avoid stray light contamination from reflections on the hole-ice interface, multiple sets of black nylon baffles are attached to the side of the pressure housing. These also sweep ice crystals and debris out of the optical path.

The depth of the logger is monitored through the cable payout and on-board pressure sensors. During offline analysis multiple logs from the same site are further aligned to achieve centimeter depth precision, using characteristic features such as volcanic horizons in the ice, as described by Aartsen et al. (2013). This device has previously been deployed in West Antarctica, East Antarctica and Greenland. Due to excellent imaging properties, deployments down the water-filled drill holes of the IceCube Neutrino Observatory resulted in one of the highest resolution particulate stratigraphies of any glacier available to-date, as described by Aartsen et al. (2013).
To measure a potential directionality of the return signal relative to the direction of the emitted light fan, an optional extension consisting of an Applied Physics Systems Model 547 Directional Sensor has been fitted to the top of the logger. By measuring the local magnetic field it deduces the absolute orientation with an azimuthal accuracy of ±1.2° for magnetic latitudes < ±40°. For our application at the geographic South Pole we estimate the azimuthal accuracy to be ±3°.
SPICEcore deployments
The South Pole Ice Core, SPC14 (see Casey et al., 2014), was drilled by the SPICEcore project in 2014-2016 at a location 2.7 km from the Amundsen-Scott station, using the Intermediate Depth Drill designed and deployed by the U.S. Ice Drilling Program (IDP) (Johnson et al., 2014). It reached a final depth of 1751 m (Winski et al., 2019), surpassing the original 1500 m goal.
The core has been retrieved in 2 m segments with a diameter of 98 mm. The resulting 126 mm diameter drill hole was filled with the non-freezing drilling fluid Estisol-140 and has been preserved for future logging access. Unlike most ice coring sites, the Geographic South Pole is not near an ice divide, but rather on a flank site with a local flow velocity of 10 m per year. The associated accumulation site for the deepest ice is believed to be Titan Dome (Lilien et al. (2018)), meaning that the ice has been transported as far as 200 km. The stress experienced below ∼800 m depth has resulted in a very prominent and continuously strengthening girdle fabric as measured by Voigt (2017). Figure 2 shows example c-axis distributions from the SPC14 ice core at various depths.
The oriented dust logger has been deployed down the SPICEcore hole twice during the 2016/2017 season, both times using the Intermediate Depth Logging Winch provided by the IDP. Due to a limited available cable length, it was only able to reach a depth of 1577 m of the 1751 m cored. During the first log the laser intensity was not yet optimized, leading to saturated and thus unusable data above 1000 m.
Two further deployments down to ∼1700 m were performed during the 2018/2019 season. Due to mechanical problems with the winch cable payout, depth readings from the winch itself are inaccurate. Only one deployment could be depth aligned to the required precision using characteristic features as previously discussed. This deployment includes an additional round-trip between 1354-1703 m. As shown in Fig. 3, the logger rotates as it descends and ascends the hole, mainly due to the residual twist in the logging cable. On ascent, as the cable is pulling the tool up, it undergoes a smooth rotation of slightly varying angular velocity. On descent the logger sinks under its own weight and the rotation is not continuous. The most likely explanation is that the logger is repeatedly stuck on the wall before slipping.
Anisotropy signature
The data obtained from the oriented dust logger consists of orientation, depth and optical return signal measurements at 10 ms intervals. At the usual deployment speed this is equivalent to a sampling distance of ∼2.5 mm.
While the photomultiplier is located ∼850 mm below the laser light source, the depth resolution, as measured by Bramall et al. (2005) as the smearing of an ash layer, is dominated by the vertical extent of the laser beam and is less than a few mm. This allows for a continuous record of optical properties down the entire depth of a drill hole at a vertical resolution of less than a year of deposition, assuming an annual layer thickness of 1-2 cm in the deep ice as reported by Aartsen et al. (2013).
Above the transition region of air bubbles to clathrate hydrate (Miller, 1969) at 700-1300 m the return signal is dominated by scattering on air bubbles. Below, the return signal is primarily proportional to the concentration of impurities contributing to scattering. The resulting high resolution stratigraphy is exemplified in Fig. 4. This figure also shows that the optical return signals at the same depths are not consistent between logs. Instead the signal depends on the absolute orientation of the logger. We extract this anisotropy signature by taking the ratio of two logs. Usually, the ratios of raw data are on average non-unity and show slow, continuous variations. These systematic offsets, caused for example by the changing clarity of the drilling fluid or grime accumulation on the logger, are corrected using a second-degree polynomial fitted to the ratio.
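A minimal sketch of this offset correction, assuming two depth-aligned log arrays; the drift model and signals below are synthetic stand-ins:

```python
# Offset correction for the ratio of two depth-aligned logs: slow drifts
# (fluid clarity, grime) are divided out with a fitted polynomial.
import numpy as np

def corrected_ratio(depth, log_a, log_b, deg=2):
    """Ratio of two logs with a fitted degree-`deg` trend divided out."""
    ratio = log_a / log_b
    trend = np.polyval(np.polyfit(depth, ratio, deg), depth)
    return ratio / trend

# Synthetic stand-ins: a drifting instrument response over a clean signal
depth = np.linspace(1100.0, 1200.0, 4000)
drift = 1.0 + 1e-3 * (depth - 1150.0)
log_a = drift * (1.0 + 0.1 * np.cos(depth))
log_b = np.ones_like(depth)
print(corrected_ratio(depth, log_a, log_b).mean())   # ~1 after correction
```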
Example ratios for ∼100 m depth slices, after fitting and correcting the offset for each depth slice, are given in Fig. 5. When the device's orientation between logs is out of phase, pointing in different directions at the same depth, the ratio in these examples becomes as large as 1.5. When the logs are in phase, the observed intensities are equal and thus the ratio is unity.
Analysis of the 18/19 logging season reveals that two logs, Down3-Leg1 and Down3-Leg2, exhibit strongly correlated orientations. As exemplified in Fig. 6, the orientations of the two descending segments in the depth range between 1354 m and 1703 m show near identical orientations, suggesting that the rotation of the tool may have been governed by the hole geometry itself. As a result, the ratio of the two logs is consistent with unity where the logs align and the spread of 2% standard deviation indicates the typical short term intensity fluctuations seen in the measurement.

Figure 5. Example fits to the intensity ratio in ∼100 m depth slices. The red and black dotted lines denote the orientations of the two used logs. (The orientation is defined as the cosine of the azimuth angle.) Blue is the intensity ratio. Green is the fitted intensity ratio using eq. (1).
In the following analysis, log Down3-Leg2 is excluded as it is fully correlated with Down3-Leg1. The number of remaining usable ratios at each depth range available from N logs is given by the binomial coefficient $\binom{N}{2}$ and varies between 3 and 21.

Figure 6. The intensity ratio of two log segments with near identical orientations is unity and shows the typical spread of the data.
The intensity ratio of two logs is modeled by a 180°-periodic modulation of the form

$r(\alpha_1, \alpha_2) = \frac{1 + a\,\cos\big(2(\alpha_1 - \varphi)\big)}{1 + a\,\cos\big(2(\alpha_2 - \varphi)\big)}$ (1)

Here α₁ and α₂ denote the azimuthal orientations of the logger during the two logs. φ denotes the azimuthal phase angle of the anisotropy effect, also called the anisotropy axis, and is limited to 0°-180°. a is a measure of the strength of the observed effect.
The orientations and the intensity ratio are given by the dust logger data. The free parameters a and φ can be determined by fitting eq. (1) to the data from a given depth range.
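A hedged sketch of such a fit is given below, using the 180°-periodic ratio model reconstructed in eq. (1); the azimuth samples, noise level, and "true" parameters are synthetic assumptions.

```python
# Fit of the 180-degree-periodic ratio model of eq. (1) with scipy;
# azimuths, noise, and the "true" parameters below are synthetic assumptions.
import numpy as np
from scipy.optimize import curve_fit

def ratio_model(alphas, a, phi):
    a1, a2 = alphas                     # azimuths of the two logs (radians)
    return (1 + a * np.cos(2 * (a1 - phi))) / (1 + a * np.cos(2 * (a2 - phi)))

rng = np.random.default_rng(0)
alpha1 = rng.uniform(0, 2 * np.pi, 400)
alpha2 = rng.uniform(0, 2 * np.pi, 400)
ratio = ratio_model((alpha1, alpha2), 0.15, np.radians(126)) \
        * rng.normal(1.0, 0.02, 400)    # 2% short-term intensity fluctuations

(a_fit, phi_fit), _ = curve_fit(ratio_model, (alpha1, alpha2), ratio, p0=[0.1, 1.0])
print(a_fit, np.degrees(phi_fit) % 180.0)  # strength and anisotropy axis (deg)
```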
Note that the chosen relationship, while being very robust and easy to extract, implicitly assumes that the anisotropy causes a relative modulation to the total signal. In case the signal is additive on top of a contribution caused by Mie scattering on impurities, as would be expected from a fabric driven birefringence scenario, the derived strength parameter a can not directly be interpreted as the strength of the underlying effect. For example, assuming an overall constant return signal from the anisotropy, the strength parameter a would be seen to increase as the overall return signal decreases as a function of depth.
Depth evolution
To study the depth evolution of the anisotropy signature the data is binned into 100 m slices. While this only allows for a rather coarse depth resolution, it ensures that at least a few rotations are seen in each ratio. Otherwise the correction polynomial could bias the signal introduced by the anisotropy and the fit would not be able to reliably determine the phase and amplitude of each log. The systematic shift introduced by the correction polynomial was further assessed by varying its degree between 1 and 3 and was found to be below the error on the mean, which is introduced below, for all depth-slices.
In the future, a finer depth resolution may experimentally be achieved by not relying on the natural rotation of the logger as it is deployed but by artificially inducing a fast rotation, or by including several azimuthally offset light sources.
Figure 7. (a) Anisotropy strength; an example log is superimposed as reference. (b) Anisotropy axis with respect to Local Grid bearings. Grid North (0°) aligns with the Greenwich meridian.

The depth evolution of the fitted anisotropy axis φ and strength a as a function of depth are seen in Fig. 7. The spread of these fitted quantities between different ratios far outweighs the statistical error of the fit of each ratio. The errors on the means of each depth bin are thus constructed from the standard deviation of all ratios. While $\binom{N}{2}$ ratios can be constructed, only N − 1 are statistically independent. The error on each mean is thus given as $\sigma_{mean} = \sigma/\sqrt{N - 1}$.
Individual ratios are seen to yield consistent anisotropy axes in the deep ice. In the shallow ice above ∼1100 m, where the mean strength of the observed anisotropy signal vanishes, the phase angle is unconstrained. The average axis in Local Grid bearings (Grid North (0°) aligns with the Greenwich meridian) in the deep ice is 126 ± 1 (statistical)°. Considering the 3° systematic uncertainty of the orientation sensor, this direction is in good agreement with the local ice flow direction as measured by Lilien et al. (2018) as well as the optical axis of 126° or 130° as fitted by Chirkin (2013) and Rongen (2019) respectively, both using IceCube data.
While no anisotropy signatures are seen above 1100 m, the observed strength parameter is continuously increasing in the deeper ice. It is currently unclear which fraction of the increase in observed anisotropy strength versus depth is caused by the continuously stronger girdle fabric or by the decrease in overall scattering. However, no anisotropy signal is observed at 1000 m where the girdle fabric is already clearly developed (see Fig. 2) but bubbles still dominate the scattering at this depth. Therefore, we suspect that the anisotropy signature is smeared out to some extent due to strong local diffusion.
Conclusions
We have presented the first direction-dependent measurement of the intensity of back-scattered, optical light in deep glacial ice. The measurement has been performed using an oriented dust logger deployed down the SPICEcore drill hole.
Below ∼1100 m a consistent increase in received intensity is observed when the laser is aligned with the local flow axis. This is consistent with the birefringence explanation offered by Chirkin and Rongen (2019), where more diffusion is observed along the flow, thus leading to a higher return intensity, and inconsistent with the previous explanation given by Chirkin (2013), where the effect was attributed to reduced Mie scattering along the flow. The observed sign is qualitatively also consistent with an anisotropy based on absorption.
The amplitude of the intensity modulation increases with depth. This is in part likely caused by the strengthening of the girdle fabric as well as the strong reduction in overall scattering, as bubbles are transformed to clathrate hydrates.
Disentangling these two effects, and potentially transitioning from the presented experimental ratios to a quantitative measurement of fabric properties, will require a full photon propagation simulation incorporating both Mie scattering on impurities as well as the diffusion introduced through the polycrystalline, birefringent fabric. While the basics for such a simulation have been outlined by Chirkin and Rongen (2019), more work will be required for the simulation to be efficient enough to be used for this application. It is currently also unclear if the intensity ratio alone will be sufficient to constrain the different fabric properties, namely the Woodcock parameters and the average grain size and shape, or if more information such as the distribution of propagation delays of individual photons may be required.
Data availability. A single stratigraphy is currently available from https://doi.org/10.15784/601222 (Bay, R. (2019)). The full set of logs may be released in the future.
Competing interests. The authors declare that they have no competing interests (financial or non-financial).
Author contributions. RB designed the dust logger. The logging 75 was carried out by RB and SB. MR and RB developed the data processing and analysis. The manuscript was prepared by MR with contributions from all co-authors. | 4,452 | 2020-03-11T00:00:00.000 | [
"Physics"
] |
Synthesis of Novel 1,4-Diketone Derivatives and Their Further Cyclization
One of the important reactions for forming a new carbon–carbon bond is the Stetter reaction, which generally proceeds via a nucleophilic catalyst such as cyanide or a thiazolium-NHC catalyst. In particular, 1,4-diketones with highly useful functional properties are obtained by the Stetter reaction through the intermolecular reaction of an aldehyde and an α,β-unsaturated ketone. In this study, we synthesized new (substituted arenoxy) derivatives of 1,4-diketone compounds (2a–2n) with useful features by a new version of the Stetter reaction. In our work, arenoxy benzaldehyde derivatives with different structures were used as the Michael donor and methyl vinyl ketone as the Michael acceptor in the intermolecular Stetter reaction. The reaction was catalyzed by 3-benzyl-5-(2-hydroxyethyl)-4-methylthiazolium chloride (3b), using triethylamine as the base and dimethyl sulfoxide as the solvent. As a result, several novel arenoxy-substituted 1,4-diketones were obtained in good yields at room temperature within 24 h through an intermolecular Stetter reaction. In addition, new furan and pyrrole derivatives were prepared by performing cyclization reactions with one of the obtained new diketone compounds.
■ INTRODUCTION
The Stetter reaction is one of the significant carbon−carbon bond formation reactions using a nucleophilic catalyst. It has a different mechanism than the classical Michael addition, 1 aldol reaction, 2,3 and Mannich reaction, 4,5 which also make C−C bonds. It takes place by the reaction of aldehydes with Michael acceptors in a 1,4-addition, mediated by nucleophilic catalysts such as cyanide ions or N-heterocyclic carbenes (NHCs). 6 First, the aldehyde undergoes an umpolung reaction with the catalyst. By making the carbonyl carbon nucleophilic, the umpolung step allows carbon−carbon bond formation under milder conditions by reversing the natural polarity of the carbonyl. 7−14 Then, a 1,4-addition to an electrophilic carbon−carbon double bond (the Michael acceptor) takes place, creating the new carbon−carbon bond. 15−17 This reaction was first discovered by Hermann Stetter in 1973 in the production of 1,4-dicarbonyl compounds and is named after him. 18 The reaction allows the synthesis of γ-keto nitriles, γ-keto esters, and γ-diketone products, which are important intermediates or starting materials in the synthesis of various heterocyclic molecules and of bioactive heterocyclic systems found in natural products. 16,19,20 It is very useful and versatile owing to its applicability to substrates such as various heteroaromatic aldehydes and substituted aryl aldehydes. In the Stetter reaction, α,β-unsaturated ketones, esters, nitriles, and aldehydes, as well as nitroalkenes, are preferred as Michael acceptors. 17 Cyanide ion, 18,21 thiazolium salts, 21,22 bis(amino)cyclopropenylidenes, 23 chiral bicyclic thiazolium salts, 20 ThDP-dependent enzymes such as lyases, MenD, and PigD, 24,25 and NHCs 9,13,16,26−32 have been utilized as catalysts in the Stetter reaction. The first isolation of free carbenes was carried out independently by Bertrand et al. 33 and Arduengo et al., 34 and this discovery led to the emergence of suitable approaches for obtaining medically and biologically important compounds. 9,30 Recently, NHCs have enabled effective reactions with homoenolates, enolates, vinyl enolates, acyl azoles, and acyl anion reagents to provide products that are not readily available by other means, and interest in them in the field of catalytic synthesis is increasing. 35 NHCs are excellent donors and form complexes containing strong metal−carbon bonds with thermal stability and high catalytic activity. At the same time, singlet NHCs, as unique Lewis bases, are potent organocatalysts whose basicity and π acidity allow the formation of a second nucleophile during a reaction (the Breslow intermediate). As effective catalysts, NHCs are widely used in a variety of chemical syntheses and applications. In particular, selective reactions mediated by chiral NHCs, with high yields and excellent regio-, diastereo-, or enantioselectivity, have aroused great interest.
1,4-Diketones, with two carbonyl groups in one molecule, are an important structural motif frequently found in biologically active natural products. 36−38 Additionally, they are very useful in the synthesis of some important heterocycles, such as furans, thiophenes, pyrroles, and pyridazines, using the Paal−Knorr synthesis 39−41 (Scheme 1). These heterocycles are valuable building blocks of natural and pharmaceutical substances such as lophotoxin, the non-natural amino acid Fmoc-D-3-Ala(2-thienyl)-OH, minaprine, and Lipitor. 42 The synthesis of 1,4-dicarbonyl compounds is accomplished by oxidative cross-coupling, nucleophile−electrophile coupling, or Stetter reactions. 43 The synthesis of 1,4-diketones is more difficult than that of other 1,4-dicarbonyls, and they are obtained by coupling reactions of multifunctional substrates. In these coupling reactions, either multiple coupling partners are used or multiple pre-steps are required for the polyfunctionalization of a single partner. 44−48 Notably, ortho-substituted alkoxy or arenoxy groups (salicylaldehyde derivatives) confer some very important biological activities, such as EP1 receptor antagonism. 49,50 Also, such original diketones and their furan and pyrrole derivatives are lacking in the literature. Since steric hindrance is more likely in ortho-substituted structures, obtaining these structures in high yields was our first priority in this study.
Here, we developed an optimized procedure of the Stetter reaction using some ortho-(thio)arenoxy benzaldehyde compounds (1a−1n) synthesized by us 51−53 and methyl vinyl ketone. We tried several NHC catalysts (Scheme 2) in order to obtain a series of original ortho-(thio)arenoxy-substituted 1,4-diketone compounds (2a−2n), and 3-benzyl-5-(2-hydroxyethyl)-4-methylthiazolium chloride (3b) was found to be the best NHC catalyst. The 3b catalyst had been used in a previous Stetter reaction study for aliphatic aldehydes. 21 For ortho-substituted benzaldehydes, 3-ethyl-5-(2-hydroxyethyl)-4-methylthiazolium bromide (3a) was used, and low-to-middle yields were obtained in the same study. In our work, we used the 3b catalyst for the first time to convert sterically hindered ortho-substituted arenoxy substrates to diketone derivatives under mild conditions and obtained high yields. In addition, a new furan derivative (4a) and pyrrole derivatives (4b and 4c) were synthesized by cyclization of one of the synthesized 1,4-diketones (2g) via the Paal−Knorr reaction.
As can be seen from Table 1, KCN was the first catalyst used, with different bases and solvents. However, KCN was found to be ineffective for obtaining the 1,4-diketone product of 1a. Further experiments were then made with 3a among the NHC catalysts, trying the solvents and bases most commonly used in similar reactions. For the solvent trials, triethylamine (TEA) was used as the base, with various polar protic (EtOH, i-PrOH, and t-BuOH) and aprotic organic solvents (DMF and DMSO) as well as a nonpolar solvent (THF). Interestingly, among the solvents used, DMSO was the only one that showed good results. DMSO was used both at 100°C (entry 8) and at room temperature (entry 11), and better conversion was obtained at room temperature in the presence of the 3a catalyst. For this reason, DMSO was preferred as the solvent in the investigation of the other NHC catalysts (entries 12−15) at room temperature. It was observed that the thiazolium catalysts, and especially 3-benzyl-5-(2-hydroxyethyl)-4-methylthiazolium chloride (3b), gave the best results (entry 12), while the imidazolium derivatives (3d and 3e) had no activity among the NHC catalysts (entries 14 and 15). In our subsequent experiments, we used the 3b catalyst and tested different bases and catalyst loadings to further increase the yield (Table 2). DBU, DMAP, KOtBu, imidazole, and benzimidazole were used as bases in these experiments. However, none of these bases showed higher conversion than TEA. When we varied the amounts of the catalyst and base, the highest conversion was obtained with 30 mol % of the 3b catalyst and 50 mol % TEA (entry 10). When the reaction time was also examined, the highest result, 98% conversion, was reached in 24 h.
By using the newly optimized Stetter method, the condensation products of phenoxy aldehydes (1a−1n), prepared using different substituted phenols and thiophenols, with MVK were obtained. Thus, the method was shown to be effective over a wide range of products. The isolated yields of the newly synthesized 1,4-diketones were good, ranging from 71 to 96% (Table 3).
We wanted to show that new derivatives of the phenoxy- and thiophenoxy-derived 1,4-diketones with original structures can be obtained by cyclization reactions. For this reason, we carried out a series of derivatization studies using the diketone compound 2g with different reagents. First, we performed its reaction with trifluoroacetic acid in DMSO at 150°C. After 5 h, the desired furan derivative (4a) was formed in 95% yield (Scheme 3a).
Our second derivatization reaction was the synthesis of the pyrrole derivative of 2g using aniline (4b). In this reaction, a 98% yield was obtained using p-toluenesulfonic acid (p-TSA) in toluene at 110°C in 2 h (Scheme 3b). We obtained another new pyrrole derivative using 1-naphthylamine in the third derivatization reaction (4c). In this experiment, a 91% yield was observed after refluxing the diketone and the amine in MeOH for 48 h (Scheme 3c).
■ CONCLUSIONS
In this study, the new 1,4-diketone products obtained as a result of the NHC-catalyzed Stetter reaction are important intermediates that can be used in drug synthesis, thanks to their molecular structures. Apart from that, they can serve as starting materials or intermediates in many different organic syntheses. In particular, the structures synthesized in this study are very suitable for the synthesis of new heterocyclic compounds with increased biological activity via the Paal−Knorr synthesis. Accordingly, we were able to obtain three new heterocyclic derivatives by cyclization reactions using one of the new 1,4-diketones. With the synthesis of such compounds, it will be possible to discover new compounds with high biological activity. In particular, it has been shown that similar structures with ortho-substituted alkoxy or arenoxy groups have increased biological activities. 49,50 Finally, we succeeded in developing a method for the synthesis of arenoxy-derived 1,4-diketones with original structures, which can be precursors of new heterocyclic compounds.
■ EXPERIMENTAL SECTION
General Information. Most of the materials used in this work were commercially available from Acros, Merck, and Aldrich. The starting compounds 1a−1n were prepared by the reaction of 2-fluorobenzaldehyde with substituted phenol or thiophenol compounds. All new products were characterized by IR, 1H NMR, 13C NMR, GC−MS, and elemental analysis. The reactions were monitored by TLC on silica gel plates, and the products were purified by column chromatography on silica gel (Merck; 230−400 mesh), eluting with hexane−ethyl acetate (v/v 9:1). GC−MS spectra were recorded on a Shimadzu QP2010 Plus. The IR spectra were recorded on a Mattson 1000 spectrometer. The NMR spectra were recorded at 500 or 400 MHz for 1H and 125 or 101 MHz for 13C, using Me4Si as the internal standard in CDCl3. Melting points were measured using a Buchi Melting Point B-540 apparatus.
General Procedure for the Stetter Reaction to Synthesize 1,4-Diketones. A solution of the starting aldehyde compound (1a−1n) (0.1 mmol), MVK (2.5 mmol), catalyst 3b (30 mol %), and TEA (50 mol %) in DMSO (1 mL) was stirred at room temperature for 24 h. After completion of the reaction, as monitored by TLC, the solution was concentrated in vacuo and extracted with DCM. Then, the usual reaction workup and concentration were carried out, and the remaining product was purified by column chromatography with a mixture of hexane and ethyl acetate (v/v 9:1).
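For convenience, the stoichiometry above can be scaled to other batch sizes. The following short helper (an illustrative sketch, not taken from the paper) simply keeps the stated ratios of MVK, catalyst 3b, TEA, and DMSO relative to the aldehyde:

```python
def reagent_amounts(aldehyde_mmol=0.1):
    """Scale the stated stoichiometry of the general procedure:
    MVK 25 equiv, catalyst 3b 30 mol %, TEA 50 mol %, and DMSO at
    1 mL per 0.1 mmol of the aldehyde (1a-1n)."""
    return {
        "aldehyde 1a-1n (mmol)": aldehyde_mmol,
        "MVK (mmol, 25 equiv)": 25.0 * aldehyde_mmol,
        "3b (mmol, 30 mol %)": 0.30 * aldehyde_mmol,
        "TEA (mmol, 50 mol %)": 0.50 * aldehyde_mmol,
        "DMSO (mL)": 10.0 * aldehyde_mmol,
    }

# Example: a five-fold scale-up of the published 0.1 mmol run.
for name, amount in reagent_amounts(0.5).items():
    print(f"{name}: {amount:g}")
```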
"Chemistry"
] |
Toward a Greener World—Cyclodextrin Derivatization by Mechanochemistry
The synthesis of cyclodextrin (CD) derivatives is a challenge, mainly due to solubility problems. In many cases, the synthesis of CD derivatives requires high-boiling solvents, whereas product isolation from aqueous methods often requires energy-intensive processes. Complex formation faces similar challenges in that it involves interacting materials with conflicting properties. However, many authors also refer to the formation of non-covalent bonds, such as the formation of inclusion complexes or metal–organic networks, as reactions or synthesis, which makes it difficult to classify the technical papers. In many cases, the solubilities of the starting material and the product in the same solvent differ significantly. The greatest strength of mechanochemistry is the reduced demand for, or complete elimination of, solvents from the synthesis. The lack of solvents can make syntheses more economical and greener. The limited molecular movement in the solid state allows the preparation of CD derivatives that are difficult to produce under solution conditions. A mechanochemical reaction generally has a higher reagent utilization rate. When the reaction yields a good guest co-product, solvent-free conditions can be slower than solution conditions. Regioselective syntheses of per-6-amino and alkylthio-CD derivatives or insoluble cyclodextrin polymers and nanosponges are good examples of what a greener technology can offer through solvent-free reaction conditions. In the case of thiolated CD derivatives, the absence of solvents also results in significant suppression of thiol group oxidation. Insoluble polymer synthesis is also more efficient when using the same molar ratio of reagents as the solution reaction. Solid reactants not only reduce the chance of hydrolysis of multifunctional reactants or of side reactions, but the spatial proximity of the macrocycles also reduces the length of the spacers formed by the crosslinker. The structure of insoluble polymers from mechanochemical reactions is generally more compact, with fewer and shorter hydrophilic arms than the products of solution reactions.
Introduction
The fabrication of cyclodextrins (CDs) is essentially a green process in which the industrial waste can be further utilized in various other procedures, such as industrial alcohol production. Although the mass production of CDs is essentially a green process, the same can hardly be said for CD derivatization in general. The most used CD derivatives, such as (2-hydroxy)propyl- and 4-sulfobutyl-βCDs (HPβCD and SBβCD), are produced in concentrated aqueous solution; the purification and, particularly, the solidification are energy-intensive processes. From a synthetic point of view, due to limited solubility in volatile green organic solvents, the production of many CD derivatives does not meet the requirements of green chemistry. Neither, usually, does the preparation of CD complexes. Although water is an environment-compatible solvent, the energy demand of its removal can considerably increase manufacturing costs.
At the beginning of this century, the Twelve Principles of Green Engineering laid down the fundamental aspects of process design for a sustainable future [1]. Despite all efforts, and mainly due to often conflicting requirements, it is not always possible to comply with all these directives at the same time in the chemical industry. In addition to new synthetic methods, the repurposing of some old large-scale applications can replace many energy-intensive, often polluting technologies. Although many inorganic chemical production processes utilize mechanochemical techniques, the penetration of this technology into organic synthesis is relatively new [2][3][4].
Mechanochemistry combines mechanical and chemical transformations at the molecular level and comprises chemical conversions triggered by physical and physicochemical processes. These transformations include milling, shearing, and sonochemical manipulations; the utilization of hydrodynamic cavitation [5]; and tribology [6][7][8][9]. Often the boundary between these processes is not sharp, and the elementary steps are mixed. Mechanochemistry is principally the transfer of mechanical energy for the chemical activation of the reactant(s); the chemical reactions, whether in the presence or absence of a liquid phase, are rarely restricted to a simple energy transfer. Technologically, some fundamental processes of mechanochemistry, such as changing particle sizes by grinding or influencing the shearing and tribological behavior, have been utilized for thousands of years. The use of ultrasound (US) in laboratory practice, as far as the technology has allowed it, is a relatively old method; exploiting the potential of hydrodynamic cavitation is still a curiosity and is being continuously developed.
From a green point of view, mechanochemical manipulations can be divided into two technologies: (a) reactions in solution and (b) reactions in the solid or quasi-solid state. The standard reaction media of sonochemical and hydrodynamic cavitation-induced reactions are solutions, where the solvent is usually water. When the reaction products are not isolated, or when the manipulations aim to remove or degrade chemicals, e.g., in wastewater treatment, these technologies offer a green and energy-efficient alternative to conventional methods. Solid-state reactions constitute a minor part of mechanochemical technology; in general, however, they can offer a truly green approach when compared with traditional synthetic methods. In many cases, the reactant may also be partially liquid or gaseous, or it may become liquid/gaseous during the operation, i.e., these reactions are not always purely solid-state reactions. Liquefaction can be the result of either temperature or chemical causes. On the other hand, there are cases where small changes in the same technology can turn a mechanical transformation into a chemical reaction. In this mini-overview, we use mechanochemistry in a slightly broader sense and briefly summarize the processes otherwise used in mechanochemical treatments in complex preparation.
Although shear and tribological reactions can represent key chemical transformations, they are less suitable for manufacturing on their own, while acoustic or hydrodynamic cavitation and grinding methods provide green production tools. It is also true that shear and tribological transformations are involved in almost all mechanochemical processes.
The solid-state reactions are carried out characteristically in mills or mortars. A remarkable case of mechanochemical transformations is when the treatment uses a mortar and pestle. This manipulation, often erroneously called milling, can be used to induce chemical reactions as well. Although mortar technology is the simplest and fastest way of mechanochemical activation, the transferred energy is limited and rarely enough to trigger chemical reactions. It is a practical tool for transformations that are little affected by the environment (light, oxygen, CO2, etc., or evaporation of components).
In recent decades, many new grinding technologies have been developed, and existing methods have been significantly improved. The theory of grinding was developed mainly by the mineral industry, principally for particle-size-reduction purposes. Although the literature gives many examples of experimental studies with different types of mills and with various mineral powders, there are still few data available on host-guest complexation and its use in chemical reactions. Another limiting factor is that the mineral manufacturing industries developed approaches aimed at their own needs, so relatively little experience has accumulated on the property changes of organics and pharmaceuticals during milling. The fundamental criteria include the ease of determining operating parameters and the simplicity of cleaning to remove contaminants, which is particularly important for multiproduct installations. Another potentially limiting factor is the physical contamination of the ground materials; fortunately, many practically non-degradable milling media are now available, and all the parameters of the grinding process can be tailor-made.
There are principally two main groups of grinding technology [10], within which there are, of course, further subgroups [11]. Jet mills use high-speed compressed air or inert gas jets to grind materials by colliding particles against each other and the wall. The compressed gas is forced into the mill through nozzles perpendicular to the cylinder wall, creating a vortex. The gas usually exits the mill through a tube along the cylinder axis, while the powdery product discharges at the bottom of the unit.
The other class is the impact mills, in which attrition or grinding mills pulverize materials by mechanical impact. Before the industrial revolution, grinding was mainly performed by attrition, grinding the material between two surfaces, which is still the dominant comminution method for agricultural products today.
Gravity impact mills pulverize the material in a rotating chamber, and then the pulverized material is converted into finer particles by the repeated impact of larger pieces. These autogenous impact mills usually do not contain foreign material. The addition of balls (ball mills, BMs) or rods can intensify the crushing operation. There are several subclasses of gravity impact mills, commonly referred to as dynamic impact mills, because a dynamic method is applied to increase the energy efficiency of the impacts.
While jet technology is typically used for pulverization purposes, impact mills are also suitable for homogenizing materials. From the point of view of chemical reactions, inorganic chemical syntheses were the first to exploit impact mills, and their use in organic chemistry spread only later, especially with the advent of vibrating mills and BMs with higher energy transfer. With the increasing number of publications describing applications in organic synthesis, the view that this method is only utilizable for reactions that proceed almost spontaneously when the reactants are in close spatial proximity is becoming obsolete. Although this observation is more valid for manual manipulation in a mortar, it is less so for syntheses in a BM.
Although the rotation speed of classical BMs is low, and thus the energy transfer in a given time is lower, these devices are also suitable for industrial-scale production. Vibratory, mixer, and planetary versions, often referred to as high-energy grinding technologies, are mainly advantageous for their high conversion of kinetic energy into chemical activation. These technologies have enabled numerous new chemical syntheses, mainly in the solid phase, in recent decades. In this review, the main focus is on these technologies.
Laboratory practice widely uses US, and in the case of low-energy US irradiation, the principal purpose is to disintegrate particles or to prepare solutions and emulsions. In these manipulations, the main goal is to reach smaller solid particles or larger contact surfaces. Laboratory cleaners are commonly used to accelerate the dissolution of solids, remove impurities from glassware, initiate crystallization, or degas solutions, and are rarely used to trigger chemical reactions. The latter usually occurs when a poorly soluble reactant or a heterogeneous catalyst requires further physical destruction by the US. In these cases, the scale of the reaction is limited, and moderate warming of the US-transmission medium is often sufficient, too. Although US-assisted dissolution or cleaning is part of the laboratory routine, only a few scientists and assistants connect the US to cavitation.
When the velocity of a flowing system suddenly increases, Bernoulli's law states that the pressure decreases. Cavitation occurs when the absolute pressure somewhere in the flow decreases to the saturated water vapor pressure at the prevailing temperature; the homogeneity of the flowing medium is then lost, and vapor bubbles form inside the liquid. When the bubbles reach a region of higher pressure again, they collapse. The consequences are unwanted vibrations, oscillations, noise, and structural failures due to the collapse of bubbles under higher pressure. There are cases where the phenomenon can be exploited, such as forming emulsions, cleaning surfaces, cavitation pumping, or energy transfer for chemical reactions. Since violent collapse only occurs when the enlarged bubble is mostly vapor, a high dissolved-gas content is unfavorable in applications where the aim is to induce cavitation. There are two ways of triggering cavitation, described below; the underlying pressure balance is sketched after the two cases.
(a) Acoustic cavitation appears when ultrasound waves entering the liquid medium cause alternating frequency-dependent high- and low-pressure cycles. In the low-pressure period, the high-intensity ultrasonic waves create tiny vacuum bubbles in the liquid. When these bubbles reach a volume at which they can no longer absorb energy, they collapse violently in the high-pressure cycle. The collapse can generate locally high temperatures and pressures. Cavitation bubble collapse results in high-velocity liquid jets, too. Acoustic cavitation transfers about one order of magnitude less energy than hydrodynamic cavitation [12].
(b) Hydrodynamic cavitation can be created by passing a fluid at a defined flow rate through a constricted channel or by mechanical rotation in the liquid. In a constricted channel, based on the specific geometry of the system, the combination of pressure and kinetic energy can create high-energy bubbles that result in hydrodynamic cavitation after the local constriction. Using an ultra-turrax for dispersing or dissolving various materials is the laboratory utilization of cavitation created by a rotating object.
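For orientation, both routes can be expressed with standard textbook relations (not specific to this review): along a streamline of an incompressible, steady flow, Bernoulli's relation ties the local pressure to the flow speed, and the dimensionless cavitation number σ compares the margin above the vapor pressure with the dynamic pressure; cavitation inception is expected roughly when σ falls to order unity:

```latex
p + \tfrac{1}{2}\rho v^{2} = \mathrm{const.},
\qquad
\sigma = \frac{p_{\infty} - p_{v}}{\tfrac{1}{2}\rho v^{2}},
\qquad \text{inception for } \sigma \lesssim 1,
```

where $p_{\infty}$ is the reference pressure, $p_{v}$ the saturated vapor pressure at the prevailing temperature, $\rho$ the liquid density, and $v$ the local flow velocity.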
The reaction mechanism in BMing, particularly when no liquid phase is present, differs significantly from conventional solution reactions [2][3][4][13][14][15]. However, the application of liquids that cannot solubilize the reaction components and rarely participate in the chemical transformations can influence the reaction energetics. Wet-milling technologies use minimal amounts of solvents, and the reactions still have solid-solid character. In many cases, water formation can significantly affect the consistency of the milled materials. In general, water increases the density of the solids by sticking the particles together, resulting in a hardly crackable solid. This unwanted phenomenon can restrict the movement of the reacting particles and can reduce the reaction speed. The shearing forces can disrupt the formed hard solids, which finally revitalizes the reaction.
All physical treatments transfer energy to the medium, which, depending on the conditions, can accelerate the targeted reaction or initiate decomposition. Thermal changes in solutions are relatively easy to control on both laboratory and industrial scales. Although both heat and mass transfer are limited in solids, this feature offers many exploitable advantages. The thermal effect depends on the grinding conditions; under milling conditions, local overheating often occurs within the reactor, because the mass of the colliding balls is much greater than the mass of the particles entrapped between them. Despite the intensive energy transfer, due to the limited heat transfer, the bulk temperature can remain below 100-120 °C [16][17][18].
Common reaction vessels are usually not transparent, and this prevents visual control. When a liquid phase appears during milling, the reaction mechanisms may become more complex and mix with solution mechanisms. Not only is a solid-state reaction setup usually significantly easier than in conventional cases, but the absence of solvents can also reduce the time and energy required for workup and isolation. Many grinding media are available, but not all are suitable for all types of reactions. The choice of the best grinding medium depends on the chemical and physical properties of both the reactants and the products and often influences the reaction efficacy. The vast and long experience in mineral processing can also help to select the most suitable conditions.
In this review, because essentially only physical processes are involved in many mechanochemical methods of complex preparation, the complexation methods are discussed only superficially.
Complexes
The publication distribution of mechanochemical methods used in the preparation of CD complexes is seen in Figure 1.
Complexation in Mortar
Mechanical agitations are usually complicated processes, particularly in complex preparations, and cannot always be described by a single methodology.
The mixing of solids in a mortar is a complex mechanical process, crushing mixed with shearing, with tribological effects often also playing a significant role. Simple mixing of solid components rarely results in complexation, but the addition of some water or organic solvent transforms the process into kneading, which is another frequent CD complexation method. Although many papers discuss complexes prepared by kneading, the publications often contain a typical theoretical error, namely that if the components were mixed in a 1:1 molar ratio, the composition of the resulting complex is 1:1. Complex formation is an equilibrium process, and one or both of the components may dissolve in the solvent used. The finally obtained solid is a mixture of complex(es) and free species. Though it is possible to induce a mechanochemical reaction in a mortar, this manipulation does not usually provide sufficient mechanical energy to break or form chemical bonds. In the case of complexes prepared by kneading, chemical reactions are unlikely to occur, except for hydrolysis at extreme pH, photochemical oxidation of sensitive compounds in the open system, or carbonate formation due to oxygen or CO2 in the air. These reactions can usually occur without mortar treatment for compounds that are sensitive to environmental influences. The mechanochemical points of mortar grinding are discussed in a later section.
Complexation in Mills
Although milling technologies have been around for thousands of years, they have changed more than ever in the last century. Whereas in the past industrialization was characterized primarily by the increase in size, milling technologies have changed significantly in recent decades, in both theoretical and materials terms, allowing much more energy-efficient processes to be designed and applied. In contrast to mills, crushers are used primarily for comminution, and homogenization of various materials is usually a secondary target. Grinding combines both the comminution and homogenization tasks. The use of milling technology in solid complex production or targeted selective extraction is relatively new. With the development of mechanically more resistant milling media materials, such as agate or zirconium oxide (ZrO2), the fundamental problem of grinding technology, namely contamination due to mechanical wear of the grinding media, has been solved. Intensive developments have not only made complex preparation on an industrial scale possible, but the methodology has also become applicable in the pharmaceutical industry. The milling process can be quite complex and lead to physical transformations that are difficult to describe. The first such study concerned the phase transformations during grinding of an aspirin/DIMEB complex; although the ground complex has a better solubility profile, the chemical stability of aspirin was reduced [19].
A naproxen/βCD complex prepared in a ceramic BM showed a significantly better solubility profile than the crystalline version [20]. The similarly produced complex of amphiphilic carboxymethylated ethyl-βCD and diltiazem showed excellent controlled release in an acidic aqueous solution [21]. Almost ten years later, studies of the effect of shearing forces on the physicochemical properties of ibuprofen/βCD complexes in a roll mill were a significant step toward the industrialization of solid-state complex preparation [22]. The first movement toward the use of high-energy BMing studied the phase transitions of permethylated βCD in a vibration mill [23]. The phase transitions in another high-energy milling study in a planetary BM revealed that the glass transition of the amorphized solid occurs above the thermal degradation point [24]. Milling-condition-optimization studies on steroid/βCD complexes in a planetary BM showed an optimal revolution speed of 200-300 rpm [25]. Since the planetary mill used has a sun-wheel-to-jar ratio of 1:2, and the maximum speed of laboratory mills is about 400 rpm, the results suggest that the method is transferable to roll mills. Although this is still far from the 20-50 rpm of pilot plant/industrial mills, and the limited number of experiments does not allow generalization, the production of drug/CD complexes in kilogram quantities seems feasible with laboratory roller mills.
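To put the quoted speeds in perspective, a rough kinematic comparison of the centrifugal acceleration a = ω²r can illustrate the energy gap between laboratory and pilot-scale mills. This is a sketch under assumed geometries, not data from the cited studies; both radii below are hypothetical:

```python
import math

def centrifugal_accel_g(rpm, radius_m):
    """Centrifugal acceleration a = omega^2 * r, expressed in units of g."""
    omega = 2.0 * math.pi * rpm / 60.0  # angular velocity in rad/s
    return omega**2 * radius_m / 9.81

# Hypothetical geometries: 0.065 m jar-orbit radius for a laboratory
# planetary mill, 0.25 m drum radius for a pilot-scale roll mill.
print(centrifugal_accel_g(300, 0.065))  # lab planetary mill at 300 rpm, ~6.5 g
print(centrifugal_accel_g(30, 0.25))    # pilot roll mill at 30 rpm, ~0.25 g
```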
Since the first application of BMs in CD complex preparation, more than 100 technical papers have appeared, with the publication dynamics seen in Figure 2 demonstrating the slow penetration of grinding technologies into everyday practice. A recent review [26] summarizes well the various aspects, from amorphization to the preparation of ground CD complexes; as a general conclusion, the ground complexes have improved physicochemical properties.
Complexation with Kneading
The kneading preparation of CD complexes is among the oldest methods, and it is usually carried out in a mortar by hand. Technically, this is a slight variation of wet grinding, where the wetting agent is usually water or EtOH; however, it is not a literal mechanochemical process. The energy transfer is both moderate and of low efficiency; with prolonged hand kneading of the viscous mass, the energy transfer becomes personally perceptible.
This method targets structural changes by secondary forces, such as H-bonds, conformational energies, or hydrophobic interactions [27,28]. Even though this method uses an open system and is difficult to reproduce due to the manual manipulation, many publications report that it is one of the best complex preparation methods. However, it is also true that the descriptions are often too superficial for correct reproduction. In the early days of the widespread investigation of CD complexes, a systematic study revealed the importance of choosing the appropriate CD/water and guest/water ratios [29]. Freeze-drying of a homogeneous solution of the ketoprofen/SBβCD complex was more efficient than kneading in producing the solid complex; however, the host/guest ratios for both methods resulted in similar trends in the physicochemical properties of the solids [30]. However, the complexation of natural βCD with oleoresin behaved in the opposite way [31]. Most of the publications deal with comparisons of the complexation efficiency of the various complex preparation methods, but a general conclusion is difficult to state [32][33][34].
Complexation with Ultra-Turrax, a Transition to Hydrodynamic Cavitation
The high-speed stirring of solid/liquid or liquid/liquid systems is a daily laboratory homogenization method. The operating speed of the ultra-turrax is 15,000-30,000 rpm, and although the conditions are theoretically suitable for cavitation effects, cavitation rarely occurs. Therefore, although complexation articles mention this method, the high-speed mixing is not associated with the development of mechanochemical activation conditions and cavitation effects. Primarily, in these cases, the dissolved gas content of the liquid may be high, partly due to the design of the device and partly due to the lack of a system to remove dissolved gases. The initial nucleus size at which the cavitation phenomenon can start is about 100 µm. Bubbles of this size can exist in a liquid in the absence of cavitation if the dissolved gas content of the liquid is high. Therefore, in general, ultra-turrax-induced cavitation is transient and short-lived, and thus this phenomenon is not significant, unlike in ultrasonic homogenization. For this reason, and due to the incomplete literature data, we omit further discussion.
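A quick order-of-magnitude check of the rotor tip speed v = π·d·n helps to judge the velocities an ultra-turrax reaches at the quoted speeds. The rotor diameter below is hypothetical and only serves the illustration:

```python
import math

def tip_speed_m_s(rpm, rotor_diameter_m):
    """Rotor tip speed v = pi * d * n, with n converted from rpm to rev/s."""
    return math.pi * rotor_diameter_m * rpm / 60.0

# Hypothetical 18 mm dispersing head:
print(tip_speed_m_s(15000, 0.018))  # ~14 m/s
print(tip_speed_m_s(30000, 0.018))  # ~28 m/s
```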
Spray-Drying, a Transition to Hydrodynamic Cavitation
Spray-drying is a mild process, applicable on a large scale, for solvent (dominantly water) removal. Many factors on both the inlet and outlet sides influence the particle size of the obtained solids. Although hydrodynamic cavitation can further reduce the particle size produced during spray-drying, this new technology has not yet been included in the toolbox for the solidification of CD complexes. However, spray-drying, similar to freeze-drying, is mainly used to remove solvents from homogeneous solutions of complexes [35,36].
Another new technology related to spray-drying, namely electro-spinning, uses an electric field to produce polymeric nanomaterials [36]. The method combines many different micronization techniques for further size reduction of the solution droplets [37]. Although spray-drying technology is not new, its connection with mechanochemistry is not always clear.
Complexation with Hydrodynamic Cavitation
The effect of hydrodynamic cavitation on non-covalent associations, particularly through short-term temperature and pressure shocks, needs further study, and the currently available information on CD complexes is limited [37,38].
The homogenization process using Tween 80 and Span 80 surfactants was energetically more effective by hydrodynamic cavitation than by the US [39]. CD complexation in insect-repellent formulations can exploit the potential of the hydrodynamic cavitation process [40].
Supercritical-assisted spray-drying has recently become one of the most efficient techniques for the production of nanoparticles. In this process, a mixture of supercritical CO2 and a complex solution is prepared, and this mixture is then injected into the cyclone at near-atmospheric pressure. Since the surface tension of the expanded solution is almost zero, together with the solution's low viscosity, the formation of a solid complex in the solution is limited, which improves the efficacy of the spraying process. A hydrodynamic cavitation mixer can further enhance this effect by increasing mass transfer, which is beneficial for solidifying complexes of thermosensitive materials [41].
Complexation with Ultrasound
Another method of triggering cavitation effects is US irradiation [38]. Although the sonication of suspensions and emulsions is a common practice, the manipulation details are rarely published. There are many US laboratory cleaners on the market, but not all of them provide the same wavelength and energy. Comparisons of probe (horn) and bath methods are even less frequently published. This type of comparison would be necessary for method transfer from the experimental laboratory scale to kilolab or pilot plant operation. Although the probe version seems more efficient in particle disintegration and possibly also in complex formation, the available information is too limited to draw a general conclusion [42,43].
The preparation of βCD-templated CdSe spherical hollow quantum dots used a 75 W/cm² flux of 20 kHz US [44][45][46]. The formed uniform solid showed electrochemical luminescence in sensor applications. Hexavalent chromium, Cr(VI), is a carcinogenic metal ion, and its quantitative analysis is mandatory in many products, in workplaces, and in the environment, such as ground and potable water or wastewater plants. In a colorimetric method development, βCD was complexed with functionalized gold-iron nanoparticles (βCD/Au-FeNPs) using high-energy US irradiation, which reduced the spectrophotometric limit of detection to the 50 nM level [47]. The higher sensitivity can be associated with the clustering of chromate ions with the βCD/Au-FeNPs complexes.
Studies of the concentration, frequency, and DS dependence of the (2-hydroxy)propyl-βCD (HPβCD) [42] templated preparation of 1-hexyl-3-methylimidazolium tetraphenylborate nano-assemblies showed significantly smaller particle sizes (hydrodynamic radii) in comparison with the non-templated method. Although microwave irradiation further reduced the particle size of the salts produced by non-templated sonication (55 and 20 kHz), the average hydrodynamic radii were still 5 to 6 times larger than in the presence of HPβCD. Zeta-potential differences showed no concentration or frequency dependence.
βCD, HPβCD, and 4-sulfobutylated βCD (SBβCD) could effectively complex a nonsteroidal anti-inflammatory drug, salicyl salicylate (salsalate), using 60 W of 24 kHz sonication [48]. The preparation of HPβCD complexes of cumin aldehyde [49] and isoeugenol [50] used 60 W horn-type 20 kHz US irradiation and was significantly more effective than classic methods using the native form of the host molecules. Resveratrol is a naturally occurring, pharmacologically active, light- and oxygen-sensitive antioxidant. A nanoemulsion of resveratrol/HPβCD prepared by acoustic cavitation resulted in 20-30 nm particles with higher encapsulation efficiency than other attempts, with ≈10% resveratrol content. The aqueous solution of the formed composite was almost transparent after 7 min of US treatment in both batch (40 kHz, 450 W) and probe (26 kHz, 60 W) sonication [51]. Extraction of bioactive compounds from various physical forms of olive pomace, using βCD and US (40 kHz, 100 W), significantly increased the total phenolic content of the aqueous solution [52].
Imidazolinone herbicides are popular herbicides in soybean and other legume plants to control different grasses and broadleaf weeds. These herbicides have phytotoxic effects on various plants such as cotton, rape, potatoes, etc. The pH of soils and the organic and clay content significantly influence the adsorption and persistence of the imidazolinone herbicides, and their cheap and effective removal would be desirable. With 30 min of 40 kHz sonication, various chitosan/βCD supramolecular associates were prepared, and their biocomposites were suitable for the decontamination of Indian soil samples [53].
The degradation of CD complexes goes in the opposite direction to the traditional preparations, and as CDs become utilized for the recovery of certain valuable biological materials, these CD-recovery technologies will soon come to the fore. An example is the cholesterol removal from foods. Although the method was patented more than half a century ago, the benzene and hexane applied are not food-friendly organic solvents [54]. Twenty years later, as CDs became more accessible and economical, the production of cholesterol-free foods became safer, and slowly an entire industry developed around this methodology [55,56]. Green production requires a green regeneration process for cholesterol and βCD as well, instead of wasting them. Usually, due to the solubility issues of cholesterol, the regeneration of CD from the complex requires low-polarity solvents. The solid/liquid extraction methodology of cholesterol complex decomposition requires the safe handling of organic solvents and has a high energy demand. The food industry is the largest segment producing cholesterol/CD complexes, and among other continuously developed methods, a US-assisted extraction procedure showed the highest recovery ratio compared to reflux and Soxhlet extractions. From an energy-efficiency point of view, a 10% EtOH suspension using 40 kHz US at an acoustic energy density of 0.49 W/cm³ (250 W input power) showed the best result [43].
The first well-characterized 4,6-benzylidene glucosamine catalyst prepared by acoustic cavitation used 26 kHz and 200 W probe sonication to achieve a soluble complex with βCD [57].
The sonication-generated microturbulence in organic liquid/water mixtures results in fine emulsion that intensifies the interphase transport of a contaminant of liquid fuels, dibenzothiophene. The radicals produced by transient cavitation oxidize dibenzothiophene to sulfone and sulfoxide, which also enhances microbial biodesulfurization. Transport of substrate and product across the cell wall followed Michaelis-Menten kinetics. Due to high shear and cavitation induced by ultrasound, the presence of βCD negatively influenced the dibenzothiophene transport [58].
Complexes under Shearing
In systems with solid particles, shearing forces occur whenever the particles collide, and fragmentation occurs as a result. It is difficult to separate these forces in individual mechanochemical processes. Although there are currently only a few publications showing that shear forces alone influence complex formation, various CD complexes can exploit these forces. Shear forces are also present in liquids, because adjacent layers of the fluid move with different velocities relative to each other. As Newton's law of viscosity defines [59], in a laminar flow the shear stress between adjacent flowing layers is proportional to the velocity gradient between the two layers. The energy in fluids is usually insufficient to be utilized for complex preparation, but the reverse effect, i.e., the decomposition of complexes by shear forces, is more common.
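The relation referenced above [59] can be written compactly in its standard form for laminar flow, with τ the shear stress between adjacent layers, μ the dynamic viscosity, and du/dy the velocity gradient perpendicular to the flow direction:

```latex
\tau = \mu \, \frac{du}{dy}
```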
The combination of Fe3O4 nanoparticles and a βCD-based ethanolamine-functionalized poly(glycidyl methacrylate) showed anticancer efficacy through magnetic nanoparticle accumulation in tumor cells. An alternating magnetic field can generate a high shear force that destroys tumor cells [61].
In many cases, complexes can increase the shear strength of a formulation, but the role of the CDs or their complexes is not always clear. CD maleate, copolymerized with acrylamide and 2-acrylamido-2-methylpropanesulfonic acid, as a silica gel [62] or montmorillonite hybrid [63], showed significantly higher physical stability and shear strength compared to the organic polymer alone. A betaine/cyclodextrin hyperbranched copolymer showed similar favorable property changes [64,65]. βCD-modified alginate and a methacrylated gelatin enabled the design of a shear-thinnable hydrogel that disintegrates under the shear forces applied during injection and becomes self-healing after they cease [66]. A combination of αCD/nonanyl-modified poly(vinyl alcohol) with α-tricalcium phosphate showed excellent adhesion properties with dental implants. The complex exhibits thixotropic properties and provides significantly higher bonding and shear adhesion than titanium plate-based commercial products [67]. Supramolecular αCD/4-PEG hydrogels also exhibit thixotropic properties, and they form a shear-stress-controlled reversible gel-solution transition. From these hydrogels, the release kinetics of a drug, such as the glaucoma drug brimonidine, can be controlled by shear stress, making them suitable for the preparation of injectable drug formulations [68]. The αCD content can govern both the shearing dynamics and the strength properties of a methoxy polyethylene glycol-conjugated, arginine-functionalized poly(L-lysine) dendron complex. The formed hydrogel was suitable for tailored shRNA plasmid gene therapy [69]. Another hydrogel complex with αCD-content-controlled shearing properties, formed with a glycol chitosan-Pluronic F127 conjugate and doxorubicin, showed good local cell-targeting properties for chemotherapy [70]. The γCD can regulate the drug-release enantiopreferences by shear forces in sodium deoxycholate/TRIS hydrogels [71]. Polyrotaxanes prepared from αCD and PEG-2000 also showed CD-content-dependent shearing properties in DMSO [72]. Three-dimensional bioprinting technology offers a promising strategy for the production of artificial tissues and organs. To this end, the main goal of the bioprinting process is to produce a bioink with ideal mechanical properties without sacrificing biocompatibility; thus, PEG/chitosan, αCD, and gelatin-based hydrogel bioinks seem suitable. Aggregation of the pseudopolyrotaxane-like side chains formed by host-guest interactions between αCD and the PEG side chains causes structural changes in the pre-crosslinked hydrogel under shear forces [73].
Hydrophobic ethoxylated urethane is another type of hydrophobically modified polyethylene glycol that can form a temporary network structure in water. Methylated βCD in the solution can control the effective elastic chain density and relaxation time and can potentially be used to release various drug molecules [74].
Tribology of Complexes
Although tribology is less suitable for complex preparation, complexes can be utilized to improve the tribological properties of materials or to protect the materials used from various kinds of chemical decomposition.
Possible applications for polystyrene/poly(dimethylsiloxane) blends include hydrophobic surfaces, membranes, and tribology [75,76]. Chemically dissimilar polymers generally have a positive enthalpy and minimal entropy of mixing, so they are immiscible. Improving the mixing and stabilization of incompatible blends can usually be achieved in several ways: reactive compatibilization physically binds the blended polymers in a crosslinked microstructure, or a compatibilizer is used, a block copolymer consisting of blocks chemically identical to the homopolymers in the blend. The compatibilizer reduces the interfacial energy at the interface between the two phases, resulting in tighter mixing. Compatibilizers with a γ-cyclodextrin core and polystyrene arms have been developed with several advantages over conventional block copolymer compatibilizers. Many different polymers are compatible with the CD core, which means that the same CD-star molecule is applicable to different polymer blends. In addition, the diameter of the CDs limits the polymers that can fit into the cavity, which also limits complex formation, and the different cavity diameters of the CDs are suitable for incorporating the desired polymers selectively. While the CD-star blends have a very homogeneous morphology, blends without the CD-star resulted in a high degree of phase separation. The CD-star/poly(dimethylsiloxane) films tested exhibited significantly different thermal and mechanical properties with improved retention of poly(dimethylsiloxane). About twelve of the 24 γCD hydroxyl groups were involved in the formation of the polystyrene arms, using a brominated initiator for atom transfer radical polymerization [77]. Although the presented preparations of CD-star polymers and composites are far from green chemistry processes, the prepared materials can lessen the environmental impact of tribological aids by reducing the amounts of environmentally incompatible technical materials.
Chemical Transformations
While solution reactions take a smaller share of the complexations, they are more dominant among the mechanochemical manipulations, as shown in Figure 3, whereas the simplest milling method virtually disappears in chemical transformations.
Mechanochemical Transformation in Mortar
Thousands of years passed between the first inorganic synthesis in a mortar, the conversion of cinnabar to mercury in a copper mortar [78], and the first organic synthesis by milling. After the mechanochemical synthesis of tetrachloroquinhydrone [79], silence reigned for nearly a hundred years [80]. In both cases, the prepared charge-transfer complexes are co-crystals, and the processes did not lead to the formation of covalent bonds; these attempts were the first steps in organic mechanochemistry. After that, it took another 30 years or so for the first true mortar-and-pestle organic mechanosynthesis. Although the preparation of phenylaldoxime from cavitands only appeared about a decade ago [81], this simple synthetic method has now begun to penetrate educational laboratory practice [82]. Although the synthesized cavitand is somewhat similar to cyclodextrins, reports on the production of CD derivatives in mortars have not appeared in recent decades.
Mortar-and-pestle manipulation of αCD and bis(3-aminopropyl)-terminated polytetrahydrofuran resulted in a complex, a pseudorotaxane, whose terminal amines then reacted with 3,5-dinitrobenzoyl chloride or 2,4-dinitrofluorobenzene, also in a mortar, providing a rotaxane after one hour of grinding. Both reactions use a mortar, unlike the PEG and polytetrahydrofuran with allyl ether end-cappings, where an N-oxide converted the pseudorotaxanes of native and permethylated αCDs into the rotaxane derivatives. In the latter case, the preparation of the pseudorotaxane used US, which is also a green technique.
Mechanochemical Transformation in Mills
As described earlier, BMs are suitable for the preparation of CD complexes. The advent of high-speed or high-energy BM variants has radically changed and expanded the spectrum of synthetic possibilities; today, these devices are capable of producing up to kilograms of compounds. Not all BM types are suitable for all reactions, and the experimental conditions are not always comparable. In the so-called Finkelstein reaction, the conversion of benzyl chloride to benzyl iodide, a better conversion was achieved in a significantly shorter grinding time in a planetary mill than in a vibrating mill [16].
The limited solubility of CD intermediates in various solvents often makes CD derivatizations laborious, and sometimes the solvent incompatibility of the reagent also poses a challenge to chemists. The potentially contrasting solubility profiles of the starting and target CD derivatives can further complicate the syntheses. Frequently, but not always, a high-boiling solvent can be a reasonable compromise, but difficulties in removing the solvent, reagent residues, or byproducts make production expensive or environmentally unsuitable. In many cases, the syntheses, except for the easiest-to-produce CD derivatives, are only possible in DMF or DMSO, whose removal has a high energy demand. For CD derivatives prepared in aqueous media, energy-intensive water removal is also necessary. Water can also be very reactive under certain reaction conditions, primarily by hydrolyzing the reagents, which complicates the preparation of many CD derivatives, such as HPβCD or SBβCD.
A good example is the preparation of HPβCD. This reaction occurs in an aqueous solution in the presence of a strong base, usually NaOH. In a BM reaction, the much smaller amount of water significantly suppresses the hydrolysis, and the better utilization of the reagent also allows a reduction in its molar amount. As long as the DS of the product is low, HPβCD can be "crystallized" from acetone, which can reduce the propylene glycol content to an acceptable level. Acetone, while not exactly a green solvent, is at least easily regenerated. At a higher DS, the removal of propylene glycol can be a complication, and a multistep synthesis is necessary [83]. The aqueous solubility of the epoxides used in CD derivatizations is low, so in anhydrous media the contact area in BM increases [84]. This method enables the preparation of novel lipidyl CDs, which were previously much more difficult or impossible to prepare, as well as the reproducible preparation of polymerized CDs [85].
The p-toluenesulfonic acid derivatives are proper guest molecules for βCD, and the orientation of the sulfonyl group depends on many factors. In water, the continuous exchange of the host considerably reduces the secondary hydroxyl substitution. In solids, on the other hand, the orientation of the chlorosulfonyl group is fixed in the βCD cavity after complexation for geometrical reasons. Thus, when the chlorosulfonyl group is in spatial proximity to the secondary alcohol groups, selective derivatization of the secondary hydroxyls is possible without protecting the primary ones. The reaction center is usually O(2), because of its more favorable position compared with the O(3) groups. In the presence of a strong base, a mannoepoxide forms immediately from the O(2) tosyl ester [86]. Although in BM the complex with the oppositely oriented tosyl group also forms, the primary tosylation is minimal. This reaction can also provide indirect evidence for the greater reactivity of O(2), as has previously been deduced from the substitution patterns of hydroxyalkylation and methylation reactions. The use of per-6-iodo CDs in the exchange of the halogens for 3-carboxyethylthio moieties is less favorable than that of the per-6-bromo CDs, which reacted not only faster but also in higher yield [17]. A similar effect exists in the halogen-azide exchange, too. When the NaI complex did not affect a further reaction, as in monosubstitution, only minimal reaction-rate differences were observed among the halogen (or tosyl) groups. In the 6-persubstituted case, the formed iodide complex can leave the cavity only by slow diffusion, which reduces the rate of further reactions. Although the chloride salts, the worst guests among the alkali salts, would make the 6-perhalogeno derivatives the most suitable substrates (the fluorides are inappropriate for a nucleophilic exchange), the reduced reactivity of per-6-chloro CDs did not allow a productive reaction [17].
Random substitution, although broadly following the reactivity trend observed in solution reactions, resulted in more substitution on the primary hydroxyl side. This is not surprising: because of the limited diffusion, the reactive species reacts with the closest appropriate hydroxyl, while in solution the reagent has more time to find the most reactive partner [18].
Only a few kinetic studies have appeared in the short history of CD-related BM reactions. In a vibrating BM, the reaction rate between mesitylenesulfonyl chloride and CDs initially showed a particle-size dependence [87], which is absent in rotational BMs. The production of transition metal/CD composites takes advantage of the higher comminution effectiveness of rotary BMs compared with vibrating versions [88]. The water content of the components can cause difficulties in both types of BM, as warming of the ground solid can result in phase transitions. These can transform the powders into rocky or sticky solid cores, lumps, or glassy materials [24,89]. Clumping can, but does not always, disappear as the reaction proceeds [85]. Dehydration of the CDs or an inert liquid additive can reduce these unwanted side effects, and, finally, even mechanical intervention by the operator can break up the initially formed hard materials [90].
BM reactions are suitable for the synthesis of SiO2/CD composites [91] and chemically stable CD polymers [85,92]. The more water- and pH-sensitive CD-nanosponge preparations are also an excellent target for mechanochemical syntheses. Classical nanosponge syntheses also use high-boiling solvents [93,94], and the potential pharmaceutical applications require removal not only of the solvent but also of unreacted and degraded reagents. The suppressed decomposition of the reagents makes the workup of nanosponge crude products prepared in a BM simple, and filtration of the water-insoluble residual reagents can provide high-purity products [95].
Mechanochemical Transformation Using Ultra-Turrax
Although the ultra-turrax works well in homogenization, solubilization, disintegration, and complexation, its use in chemical reactions has a technical limit. The design of the devices does not allow continuous long-period operation, and since most organic reactions are not instantaneous, the use of these devices for synthetic purposes is severely limited. Even though the more effective energy-transfer green methods use 30-60 min reactions, fast-stirring-triggered reactions are rare. Ultra-turrax-assisted preparation usually results in non-covalent associations between CDs or CD complexes and the matrix. These composites can be stable for a long time and are suitable for the controlled release of the complexed drug. The preparation of tuned CD-containing hydrogels is possible by high-speed mixing technology [96], but these are not chemical reactions in the classic sense.
Mechanochemical Transformation Using Ultrasound
Sonication, like the ultra-turrax, is very popular in hydrogel preparation, which rarely results in stable, covalently bound CD derivatives [97,98]. Without reading the article, it is usually hard to deduce whether the topic is a genuine sonochemical reaction or merely solution preparation, homogenization, and eventually complexation. Technical papers often also refer to the formation of non-covalent interactions, H-bonds, or ionic associations as syntheses or derivatizations. Obviously, not all metal ion-CD interactions are stable enough, or of sufficiently specific composition, to function as compounds in various applications. Although sonication has a long history in dissolution, crystallization, and homogenization, the application of US in CD derivatization by chemical reactions is relatively new [99]. A complicated mechanism of complexation and zinc-promoted dehydrohalogenation of glycosyl bromides provided glycals [100]; however, the role of βCD and the reaction mechanism are not entirely understood. Alkaline earth metal oxides and hydroxides are strong enough bases to ionize the secondary alcohols of CDs and can form stable complex 3D structures with appropriate inorganic salts for analytical applications [101]. Another example of the utilization of stable salt formation is the preparation of copper nanoparticles. The synthesis of microporous Cu(I) or Cu(0) solids [102-104] also uses sonication, exploiting the salt formation of Cu2+ with βCD [105] as a template. US irradiation provided a stable Si-O-CD copper catalyst compound with enhanced CD and Cu content [106].
Silver-doped nanosponges and magnetic SiO2/CD hollow spheres showed limited stability under US irradiation and demonstrated restricted recyclability caused by acoustic cavitation. In contrast, sonication of zinc peroxide and a citric acid/βCD adduct produced a composite that was more stable and more effective in hydrogen peroxide decomposition than ZnO2 alone [107]. A magnetic-nanoparticle CD complex is an example of the non-conventional CD complexes; in this case, 6-monoamino-βCD is the ligand, which forms a stable complex with the magnetic particles in a deep eutectic solvent [108]. Amino acid-modified βCD can form a stable association with CdSe/CdS quantum dots after sonication in a hexane/water emulsion [44,46,109]. Many publications report the synthesis of metal-organic networks with ultrasound participation. Although these composites can be stable under various conditions, they form a transition between complexes and covalently or ionically bound components. A recent review summarizes the state-of-the-art knowledge on the preparation methods of various metal-organic networks; these composites are outside the scope of the present paper [110].
In situ preparation of permethylated αCD polyrotaxanes from the initially prepared PEG pseudorotaxanes was effectively carried out by sonication [111].
The synthesis of various per-2,3-O-alkyl-6-TBDMS-βCDs was carried out in DMF using US [112,113]. These fluidizable derivatives are suitable for coating glass surfaces and showed excellent chiral gas-chromatographic separation properties. US-assisted synthesis is best utilized when an activated CD derivative reacts in a neat liquid reagent. The classic preparation method uses dry DMF with a few-fold molar excess (relative to the CD) of the reagent, which is very effective for monosubstitution; but when multisubstitution is the aim, traces of dimethylamine, a decomposition product of DMF, inevitably form contaminants. In these cases, the high molar ratios of the required amines allow their use as solvents. The poor solubility of these activated CDs can be overcome by sonication, as was done in the synthesis of per-6-alkylamino βCDs. Even though the non-conventional reaction conditions, MW or US irradiation, did not improve the yield in the reaction of βCD and methyl (3-bromopropyl)-2-iodobenzoate, the long reaction time was successfully reduced from three days to 4 h (US) and 1 h (MW) [114]. The 6-monotosylation of CDs in water is always a challenge because of reagent and product instabilities: hydrolysis of the activated tosyl reagent can reduce the yields, while in the case of α- and γCDs the reaction is unsuccessful. Use of US in a more concentrated solution of CDs and tosyl imidazole resulted in 6-monotosylated α/β/γCDs in good yield [115]. A slight modification of the reaction conditions gave 2-monotosylated CDs, whose further conversion provided the mono-altro-azido analogs of the CDs. A combination of US and MW activation of the copper powder allowed the IRIS3 and IRIS5 cyanine dyes to react successively with 6-monoazido-βCD in 1,3-dipolar cycloadditions [116].
A nanosponge synthesis used high-temperature (90 °C) sonication to obtain uniform solid particles; however, the purification used Soxhlet extraction, which reduces the green value of the synthetic method [117,118]. A hexamethylene diisocyanate-crosslinked βCD nanosponge prepared in warm DMF was obtained in high yield and had an almost three-times-higher BET surface area than that prepared without sonication. US treatment does not always result in significant reaction acceleration, as seen in the conversion of 6-monoazido-βCD by Pd/H2 to the monoamino derivative [119], compared with the classic transfer hydrogenation method [120].
Although no chemical reaction occurred, US treatment produced nanoparticles of an insoluble βCD polymer in organic solvent mixtures, and the resulting gel was suitable for coating capillaries for chiral gas chromatography [122].
Mechanochemical Transformation Using Hydrodynamic Cavitation
In many cases, acoustic cavitation is also called hydrodynamic cavitation, despite their different physical backgrounds: unlike in flowing systems, the US waves have, in the ideal case, no translational movement. To the best of our knowledge, no publication on the use of hydrodynamic cavitation for the synthesis of CD derivatives, or for the destruction of the macrocycle, has been registered in the major literature databases.
Mechanochemical Transformation Using Shearing
Shearing can occur in any mechanochemical manipulation and is usually cumbersome to study on its own. Although the effect alone is difficult to exploit synthetically, in many cases shear enables the controlled release of a drug substance or, as mentioned in Section 2.1.4, allows the prepared cell-targeting magnetic CD nanoparticles to destroy cancer cells through shear-triggered decomposition of macromolecules.
Mechanochemical Transformation Using Tribology
A mechanochemical transformation modified the tribological properties of iron surfaces in an aqueous PEG 600 solution containing a βCD/dialkyl pentasulfide (DPS) inclusion complex. The complex showed better tribological properties than βCD alone and better anti-friction properties than DPS alone. Under friction conditions, the βCD molecule decomposed into different molecular fragments, releasing the DPS molecules. The iron sulfide films formed from the DPS and the iron surface created an anti-friction FeS-FeS interface. In this process, the mechanochemical transformation decomposed the cyclodextrin, which released the guest molecule and finally led to the reaction between the released DPS and the iron [123].
Conclusions
Mechanochemistry is a constantly, almost exponentially, evolving field with many green aspects. There is still little information available on macrocyclic mechanochemistry, and although many chemists have used mechanochemical manipulations for a long time, these manipulations are rarely consciously directed at exploiting the benefits of mechanochemistry. Any mechanochemical manipulation comprises different micro-processes, and it is impossible to create a clean environment controlled by a single process. Energy transfer also varies over a wide range, which significantly affects the behavior of the studied system. In cyclodextrin chemistry, mechanochemical methods are used predominantly for particle-size reduction, homogenization, or complexation; in the traditional sense, the formation or modification of non-covalent interactions is not a true chemical transformation.
The most popular technique is the application of ultrasound for complex preparation, whether it is a CD complex, the injection of a CD into a matrix, the dispersion of a CD on the surface of a metal compound, or the preparation of metal-organic networks. US-assisted chemical transformation is usually, but not always, faster than the classical reactions, yet it often still suffers from the use of solvents.
In terms of energy efficiency, solid-state transformations appear to be the most eco-friendly technology, owing to the significantly reduced amounts of solvent used. The limited mobility and degradability of the components in solid-state reactions can open up efficient synthetic routes, and sometimes they allow the synthesis of CD derivatives that are too complicated to achieve by conventional means, if possible at all. It should also be kept in mind that the production of the starting materials for these syntheses is often far from meeting the requirements of green chemistry. The various mechanochemical transformations are useful tools but, as with all processes, they are not universally applicable. Recent and emerging research could significantly expand the greener transformations in CD derivatization.
Author Contributions: Both authors, L.J. and G.C., equally contributed to the conceptualization and writing the manuscript. Both authors have read and agreed to the published version of the manuscript.
Funding: The University of Turin, Turin, Italy, is warmly acknowledged for its financial support (Fondi Ricerca Locale 2021).
Conflicts of Interest:
The authors declare no conflict of interest.
"Chemistry",
"Environmental Science",
"Materials Science"
] |
Conditional entropy of glueball states
The conditional entropy of glueball states is calculated using a holographic description. Glueball states are represented by a supergravity dual picture, consisting of a 5-dimensional graviton-dilaton action of a dynamical holographic AdS/QCD model. The conditional entropy is studied as a function of the glueball spin and of the mass, providing information about the stability of the glueball states.
I. INTRODUCTION
AdS/QCD models, inspired by the AdS/CFT correspondence [1][2][3], provide an important phenomenological tool for describing hadronic properties in the low-energy regime, where QCD is non-perturbative. The hadronic states are represented in the dual supergravity picture by normalizable solutions of fields that live in a five-dimensional anti-de Sitter (AdS5) space, endowed with a hard [4][5][6] or soft [7] infrared (IR) cutoff. The cutoff in AdS space breaks conformal invariance, introducing a mass parameter into the models that sets the scale for the mass spectra of hadrons.
Glueballs are bound states of gluons that are expected to appear in high-energy physical processes as a consequence of the self-coupling of gluons in QCD. Conclusive experimental data about this type of particle are still lacking. Lattice QCD provides an important tool for calculating glueball masses (see, for example, [8][9][10]). On the other hand, the decay process of glueballs (and other hadrons) is in general difficult to describe. One of the problems faced is that radially excited states with the same quantum numbers get mixed in lattice imaginary-time numerical simulations.
The conditional entropy is an interesting tool for investigating the configurational stability underlying physical systems. It has recently been shown, for the case of mesons, that the conditional entropy measures the relative occurrence of the physical states [11], suggesting that the entropy indeed provides information about the relative stability of states. Here we apply the lattice approach to the Shannon entropy [12,13], and its underlying statistical-mechanics structure described in [11], to the glueball case. We represent the glueballs using a model recently proposed in refs. [14,15], which is a modified soft-wall model and provides good fits of glueball masses for even and odd spins. We develop a procedure that leads to a relation between the glueball spins (and the glueball masses) and the associated conditional entropy. This analysis can shed some light on the relative stability of the different glueball states.
The so-called information entropy setup is related to the irresolution of information in a physical system [13,16]. Moreover, the conditional entropy can extend the Shannon information entropy [17] to a continuum limit of the modes that comprise the physical system. Recently, the modal fractions (in information entropy theory) were defined as the ratio between the collective coordinates and the structure factor (in the thermodynamical entropy setup), further providing the statistical-mechanics analogue of the conditional entropy setup [11].
This work is organized as follows: in Sect. II, the anomalous dynamical AdS/QCD holographic model is introduced through a dilaton-graviton bulk action, with a subsequent scalar glueball action. A beta function with an IR fixed point at finite coupling is then used in the model. The dimension of the operators in the N = 4 CFT is used to define the 5-dimensional glueball mass and, hence, the 4-dimensional glueball mass as a function of the glueball spin and the beta function. Employing the collective coordinates and the structure factor, calculated from the energy density of the system, the thermodynamical entropy provides the foundation for computing the conditional entropy associated with glueball states. The information entropy, and thus the stability of glueballs, is then quantitatively studied for different values of the model parameters. Our concluding remarks are presented in Sect. III.
II. CONDITIONAL ENTROPY AND GLUEBALL STABILITY
The energy density of the bulk modes associated with the glueball states is a relevant tool for the information-entropy analysis of glueball stability in the AdS/QCD framework. Glueballs are predicted by QCD and are modeled using lattice gauge theory. The ground state is the scalar glueball 0++, which is expected, from lattice computations, to have a mass of 1.6 to 1.7 GeV [18]. The search for this state has been, and still is, at the center of vivid activity in the framework of low-energy QCD. This state is also important because it is related to two basic phenomena of QCD: the generation of the gluon condensate and the anomalous breaking of dilatation invariance [19].
Recently, a new holographic model for calculating glueball masses appeared in Ref. [14]. It consists of a modification of the soft-wall model [7] that is analytically solvable and provides the masses for the high-spin states. In this framework, the 5-dimensional action (1) for the graviton-dilaton coupling is written in the Einstein frame, where φ = φ(z) denotes the dilaton field, V(φ) stands for the dilatonic potential, and the conformal metric (2) carries indices µ, ν = 0, 1, 2, 3, with gµν the Minkowski metric, whereas the 5-dimensional AdS indices take the values M, N, Q, R, S = 0, 1, 2, 3, 4. The 5-dimensional metric determinant is denoted by g, and the Einstein-Hilbert part of the action (1) involves the scalar curvature R. Hereon, normalized units 16πG5 = 1 shall be adopted, where G5 is the 5-dimensional Newton coupling constant. The equations of motion (3) and (4) follow [20][21][22], where G_RS is the Einstein tensor.
Using the conformal metric given by Eq. (2), the equations of motion (3) and (4) yield, denoting B′(z) = dB/dz for any quantity B, the relations (5) and (6) [15,23]. Solving Eqs. (5) and (6) for the quadratic dilaton background yields expressions for the warp factor and the potential, respectively, where R denotes the AdS radius and ₀F₁ is a confluent hypergeometric limit function, related to the Bessel functions. Using Eqs. (7) and (9), it follows that the metric in Eq. (2) of this dynamical model is an asymptotically AdS5 metric in the ultraviolet (UV) limit [15,23,24]. Now, the 5-dimensional action (12) for the scalar glueball, represented by the field G, leads to equations of motion in the metric (2); using the quadratic dilaton of Eq. (8), one finds a Schrödinger-like equation (15), where, for simplicity, ψ(z) is denoted by ψ. A similar Schrödinger-like equation was solved numerically in [25]. The masses found for the scalar glueball and its radial (spin-0) excitations are compatible with those obtained by lattice QCD. Up to linear order in k, and relating pµp^µ to the 4-dimensional glueball masses mₙ², one obtains Eq. (16) [15], where n = 0, 1, 2, .... It is worth mentioning that for the lightest scalar glueball state, corresponding to spin 0++ and dual to bulk fields of zero mass, M₅² = 0, this yields mₙ² = (4 + 4n)k [15]. The energy density associated with the glueball states follows immediately from the action (12), taking into account Eq. (14), as Eq. (17) [26]. In order to take into account dynamical corrections as well as anomalous dimension effects, the glueball states have a full dimension ∆ as a function of the spin J, Eq. (18), where the spin J = 0, 1, 2, ... defines the even and odd glueball states. It is worth mentioning that this expression comes from the correspondence between supergravity on AdS5 × S⁵ and chiral fields in the N = 4 (super)conformal theory in 4 dimensions [1]. In this setup, the mass of a 0-form on AdS5 is related to the dimension ∆ in Eq. (18) of a 4-form operator in the CFT [27]. This means that the full dimension ∆ in Eq. (18) gives the expression for the bulk glueball mass M5 [15,23]. In order to describe even- and odd-spin glueball states, one can substitute Eq. (18) into the Schrödinger-like equation obtained from the dynamical soft-wall model, Eq. (15), and solve it numerically for the glueball states. Following ref. [23], one chooses a beta function with a finite-coupling IR fixed point, Eq. (20). In ref. [23], masses of glueball states with even and odd spins were calculated for different values of the model parameter k, with λ = 350. Table I shows the results of [14,23], according to Eq. (16).
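Before turning to Table I, a minimal numerical sketch of the zero-bulk-mass tower mₙ² = (4 + 4n)k quoted above may help fix the scales. Taking k in GeV², matching the values k = 0.04, 0.09, 0.16 used below, is our assumption here; the spin-J states with M₅ ≠ 0 require the full numerical solution of Eq. (15) instead.

```python
import numpy as np

# Sketch of the M5 = 0 scalar (0++) tower m_n^2 = (4 + 4n) k quoted above.
# Assumption: k is expressed in GeV^2, matching the Table I values below;
# states with M5 != 0 need the full Schrodinger-like equation (15) instead.
for k in (0.04, 0.09, 0.16):
    masses = np.sqrt((4 + 4 * np.arange(4)) * k)  # n = 0..3, masses in GeV
    print(f"k = {k:.2f} GeV^2:",
          ", ".join(f"m_{n} = {m:.2f} GeV" for n, m in enumerate(masses)))
```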
Table I. Glueball states (odd and even spins).
It is worth mentioning that the results of Table I are in agreement with glueball mass-spectrum models that predict a maximum value of 1.7 ± 0.1 GeV for the mass of the ground state [18]. The states f0(1500) or, alternatively, f0(1710) have been proposed as candidates for the scalar glueball [19,28]. Now the conditional entropy can be employed as the lattice approach to the Shannon information entropy, which was shown in [11] to have underlying statistical-mechanics grounds. The entropic information, realized by the conditional entropy, has been used on the lattice to study physical systems [12,13]. We can use it to study the stability of glueball state configurations within the AdS/QCD setup. In fact, for any physical system the classical field configuration corresponds either to a critical point of the action, in the classical field theory setup, or to a critical point of the effective action, in a semiclassical approximation of a quantum theory. Furthermore, for any physical system the critical points of the conditional entropy correspond to the most stable configurations, from the information-entropy point of view [29]. States of higher conditional entropy either require a higher amount of energy to be created, or are more seldom detected (or observed) than their counterparts that present configurational stability, or both [11].
The conditional entropy generalizes the information entropy for density functions that are naturally spatially localized, such as the Fourier transform of the energy density function related to the physical setup. The information entropy was originally defined, for a system with N modes, by S_c = −∑_{j=1}^{N} h_j ln(h_j), where {h_j} is a set of probability density functions ruling the physical system [17]. The conditional entropy, hence, has critical points that define configurations that are stable and that correspond to the best compression of information in the system. More than a single stable configuration can appear, as in the case of oscillating configurations for the evolution of domain walls; in this case, phase transitions occur by the decay of the false vacuum [30]. Moreover, the conditional entropy can be analogous to the thermodynamical entropy, being also potentially related to the entanglement entropy [31].
To implement the conditional entropy for glueball states we use, as before, z as the usual bulk coordinate of AdS space and write the corresponding Fourier transform of the energy density as in Eq. (21). This can be thought of as the continuum limit of the well-known collective coordinates of statistical mechanics, ρ(z) = ∑_{j=1}^{N} ρ(ω_j) exp(−iω_j z). The structure factor, s_N = (1/N) ∑_{j=1}^{N} ⟨ρ(ω_j)ρ*(ω_j)⟩, normalizes the correlation of the collective coordinates, as in Eq. (22). The structure factor measures energy-density fluctuations and, hence, how the system approaches homogenization. Taking the N → ∞ limit and regarding Eq. (21), the structure factor is then used to define the modal fraction as the ratio of the correlation of the collective coordinates to the structure factor, Eq. (23). The lattice approach to the conditional entropy, S_c, is given by Eq. (24) [12,13]. Now the anomalous dynamical soft-wall model is employed to derive the relationship between the conditional entropy and the glueball spins or, equivalently, the glueball 4-dimensional masses. In fact, the energy density in Eq. (17) is appropriate for computing the conditional entropy, since it is a spatially localized function, encoded in the T₀₀ component. Using Eqs. (23) and (24), which define the conditional entropy, together with Eq. (17) (which takes into account Eqs. (8) and (10) for the warp factor and the dilaton potential, respectively), the profiles of the conditional entropy as a function of the glueball spins are then obtained.
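A minimal numerical sketch of this pipeline follows; a Gaussian profile stands in for the holographic energy density T₀₀(z), whose exact form depends on the warp factor and dilaton of the model, and all names here are illustrative only.

```python
import numpy as np

# Sketch of the conditional (configurational) entropy pipeline described
# above: Fourier-transform a localized energy density rho(z), build the
# modal fractions f = |rho(w)|^2 / sum |rho(w)|^2, then S_c = -sum f ln f.
z = np.linspace(-20.0, 20.0, 4096)
rho = np.exp(-z**2)                      # placeholder for T00(z)

rho_w = np.fft.fft(rho)                  # collective coordinates rho(w)
power = np.abs(rho_w) ** 2
f = power / power.sum()                  # modal fractions
S_c = -np.sum(f[f > 0] * np.log(f[f > 0]))
print(f"conditional entropy S_c ~ {S_c:.3f}")
```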
To compute the conditional entropy for the energy density (17), it is worth observing that, when the UV limit is taken into account, the metric (2) is asymptotically AdS. The conditional entropy (24) is calculated from the modal fraction, with Eqs. (19) and (20) taken into account. The numerical results for three different values of k, as functions of the spin, are depicted in Fig. 1, for k = 0.04 (line 1 of Table I), k = 0.09 (line 2 of Table I), and k = 0.16 (line 3 of Table I). Figs. 2, 3 and 4 illustrate the dependence of the conditional entropy on the masses of the glueball states, for k = 0.04, k = 0.09 and k = 0.16, respectively. It is noticeable, as in the analysis regarding Fig. 1, that the lower the glueball mass, the lower the conditional entropy. Hence, the conditional entropy is an additional technique that can indicate the behavior of glueball states regarding their stability, implying that the states with higher masses are more unstable. Moreover, Fig. 1 shows that for different values of the constant k, which defines the quadratic dilaton in Eq. (8), the higher the value of k, the higher the conditional entropy, for any fixed glueball spin J. In the next section we present our conclusions.
III. CONCLUDING REMARKS
Glueballs are not particularly light and have no nontrivial flavor content; the extraction of a signature in the presence of vacuum fluctuations is therefore more difficult than for many other hadrons. Fig. 1 shows that the higher the glueball spin J, the higher the associated conditional entropy. Although glueballs still lack phenomenological support, this study points toward a way to analyze glueball stability and production in the context of lattice AdS/QCD. Figs. 2, 3 and 4 provide a quantitative analysis relating the conditional entropy to the glueball masses, for different values of the constant k that defines the quadratic dilaton (8). Irrespective of the value of k studied here, the conditional entropy increases as a function of the glueball masses. Moreover, the conditional entropy is a monotonically increasing function of k, for fixed values of the glueball spin, according to Fig. 1. This analysis is a useful technique for pointing toward quantitative physical features of glueball states, which are still lacking in the literature despite the advances in lattice QCD. Topological mass constraints could be further employed, in the deformed-defects setup, to refine the analysis presented here [32].
"Physics"
] |
Laminar Burning Velocity and Ignition Delay Time of Oxygenated Biofuel
The need for lowering the environmental impacts has incentivized the investigation of biomass and biofuels as possible alternative sources for energy supply. Among the others, oxygenated bio-derived molecules such as alcohols, esters, acids, aldehydes, and furans are attractive substances as chemical feedstock and for sustainable energy production. Indeed, the presence of oxygen atoms limits the production of aromatic compounds, improves combustion efficiency (thus heat production) and alleviates the formation of carbon soot. On the other hand, the variability of their composition has represented one of the major challenges for the complete characterization of combustion behaviour. This work gives an overview of the current understanding of the detailed chemical mechanisms, as well as experimental investigations characterizing the combustion process of these species, with an emphasis on the laminar burning velocity and the ignition delay time. From the review, the common intermediates for the most relevant functional groups and combustion of biofuels were identified. The gathered information can be intended for the sake of core mechanism generation.
Introduction
Fossil fuels are still the main feedstock for global energy production [1,2]. However, sustainable sources like biofuels may offer many economic, technological, and environmental advantages due to the significant reduction of particulate matter, soot formation, unburned hydrocarbons, and NOx emissions [3]. On the other hand, their incomplete combustion produces small amounts of chemical components harmful to the environment and human health (e.g., acetic acid, aldehydes, and ketones) [1,2,4]. Recently, the use of oxygenated bio-derived fuels (oxy-biofuels) such as alcohols, esters, acids, aldehydes, and furans has attracted the attention of researchers worldwide [5][6][7][8][9]. This trend is due to their positive response to environmental issues and their compliance with the strict emission regulations of the transportation sector [2,4]. Indeed, the existence of oxygenated functional groups in the molecular arrangement changes the electronic structure of the fuel, thus limiting the production of aromatic compounds and carbon soot [3,10,11]. Besides, the presence of oxygen reduces the C-H bond strength, the bond dissociation energies being 80.6 kcal mol−1 and 257.3 kcal mol−1 in the absence and presence of oxygen, respectively [12]. In this framework, the design and optimization of any combustion process based on oxy-biofuels requires the definition of a detailed chemical kinetic model. However, many experimental studies are hindered by technical difficulties [13][14][15] related to the different functionalities of oxygen-rich biomass, intermediates, and products; to the temperature sensitivity of the products [4,16]; to the short lifetime of intermediate products; and to the dependency of the products on the residence time of volatiles.
This work gathers the available information in the current literature on the progress in experimental and modelling efforts geared towards the combustion of common primary alcohols (methanol, ethanol, and butanol); organic acids (acetic acid and crotonic acid); and other important oxygenated substances like acetaldehyde and furan.
Research Metrics of Oxy-Biofuel
The International Energy Agency (IEA) released a report on the current state and trends of renewable energy sources, including global policies and market distributions. As part of this, an increase in worldwide biofuel production by 9.84 × 10⁶ tons, to 1.5 × 10⁸ tons, has been reported from 2018 to 2020. Based on the rising energy production, the IEA forecasts a consistent increase of up to 25% in the next five years, reaching 1.87 × 10⁸ tons by 2024. Figure 1 shows data collected from the most important scientific and patent databases over the past ten years (2009-2020). The figure contains information on the number of publications devoted to the production or consumption of liquid and solid biofuels (Figure 1a), distinguished in terms of chemical class (Figure 1b), and authors' affiliations (Figure 1c). Quite clearly, the number of scientific articles per year has been approximately constant in recent years. Similarly, scientific research on oxygenated biofuels (such as alcohols, furans, aldehydes, esters, phenols, and acids) is increasing [5,6,9,17]. Indeed, most biofuel research is devoted to the investigation of these chemical classes, as shown in Figure 1b, which reports data referring to 2020. Additional information can be gained by comparing the share of published articles by country, as shown in Figure 1c. It is worth noting that Europe, the United States of America, and China lead the innovation in this field, followed by India and Brazil, confirming the relevance of local policies. In addition to their scientific contributions, the European Union set two new policy directives on biofuel in 2003 [18]. The first aimed at deriving 20% of automobile fuel from biofuel, hydrogen, natural gas, and other renewable fuels by 2020, as agreed by all EU member states in the Renewable Energy Directive (RED) 2009/28/EC. The second EU directive introduced tax deductions on biofuels. Besides, under the 2030 climate and energy framework, the EU member countries agreed to reduce greenhouse gas emissions by 40% by 2030 (compared to 1990), to obtain 27% of their energy from renewable sources, and to increase energy efficiency by at least 27% [18]. To achieve this and realize the EU bio-based economy beyond 2020, the member states urged and prioritized detailed biofuel research, the development of safe and environmentally friendly combustors, financial incentives, and the upgrading of biorefineries [18]. Alongside such concerns of the EU Member States, the United States Energy Independence and Security Act (EISA) of 2007 defined the Renewable Fuel Standard program known as RFS2.1 [19]. EISA directed the use of biofuels and established the attainment of a 50% reduction in greenhouse gas emissions up to 2022. Despite the number of publications, some unsolved problems still limit the application of this technological solution. In this light, an extensive review focused on the combustion mechanisms of oxygenated fuels can highlight the existing gap of knowledge.
Combustion Chemistry of Oxy-Biofuels
Oxygenated species, as potential replacements for conventional fuels, must be strictly reviewed from different practical viewpoints. In addition to the sustainability of the source, the compatibility of the fuel with transportation sectors and combustion machinery needs to be analyzed [20]. In advance, it is worth knowing the common pyrolysis products and the pathways of biomass degradation in general. Besides, the chemical behaviour of flammable mixtures can be estimated by aggregating the kinetic mechanisms of each component, in agreement with the hierarchical approach adopted for mechanism generation [21]. Biomass can be transformed into biofuel by using different processes, as recently reviewed by Cossu et al. [22]. A schematic representation of alternative routes to produce biofuels is given in Figure 2.
Figure 2. Simplified pathway representative of alternatives for biomass transformation toward biofuels, adapted from Cossu et al. [22].
Recalling that most of the species included in syngas, alkanes, and alkenes have been widely studied as intermediates of traditional fuels as well, this review focuses on oxygenated compounds (i.e., alcohols, aldehydes, heterocyclic organic compounds, and acids), which are pertinently a newer topic. Due to the oxygenated functional groups, biofuel combustion can result in reaction sequences and primary reactions different from those of conventional fuel chemistry [23]. More specifically, the oxygen atom in the hydrocarbons affects the electronic structure and reactivity of the fuel, because it modifies the bond dissociation energies and initiates, enhances, or hinders various reaction pathways compared to the parent fuel molecule (e.g., the alkane) [24]. To make this clear, the bond dissociation energies of methane and methanol at room temperature are 104.5 kcal mol−1 and 96.1 kcal mol−1, respectively [23]. To understand combustion behaviour and to identify the decomposition patterns, it is important to look at typical groups of potential biofuels. The growing interest in biofuels from biomass pyrolysis has motivated systematic investigations of different chemical families such as alcohols [7,25], aldehydes [26][27][28][29], acids [6], and oxygenated aromatics [30][31][32] to elucidate the effects of oxygenated groups on combustion chemistry. In this regard, the accurate evaluation of the fuel decomposition and oxidation reaction mechanisms of the alcohol, acid, furan, and aldehyde classes of oxygenated fuels is a valuable step toward awareness of the reaction paths ruling the formation of relevant intermediates [20]. A shortlist of oxygenated species representative of the most relevant functional groups is reported in Figure 1, and some related properties are provided in Table 1. More specifically, the lower heating value (LHV), the heat of vaporization (λ), and the autoignition temperature (AIT) were included as macroscopic properties. Additionally, the most relevant chemical groups involved in the H-abstraction reaction, referred to as abstracting agents from now on, were listed for each species, since H-abstraction rules the activation step of biofuels in a wide range of conditions [33]. Other than the abstracting agent, the combustion behavior of flammable species can be expressed in terms of the overall reactivity under the given initial conditions. In this sense, an overview of the conditions used so far to collect either experimental (Exp) or numerical (Mod) data for the ignition delay time (IDT) and laminar burning velocity (Su) of these species is provided in Table 2, whereas a detailed analysis of the combustion mechanisms is provided in the following sections.
Light Alcohols
Energy production via alcohols primarily relies on their use as alternative fuels or as additives in blends [7,25]. In the kinetic field, however, alcohols are commonly adopted in the definition of surrogates to mimic the combustion behaviour of more complex mixtures characterized by flexible compositions (e.g., biodiesels) [77]. Among them, the primary alcohols (such as methanol, ethanol, and butanol) are ideal for engine combustion [78]. These fuels show no negative temperature coefficient (NTC) behaviour and are all water-soluble [78]. Additionally, their moderate tendency to form soot and their elevated octane ratings make the light alcohols (i.e., ≤C5) good candidates for lean-to-rich stratified combustion [79] and low-temperature combustion [7,80]. Moreover, in homogeneous charge compression ignition, methanol and ethanol have limited sensitivity to the equivalence ratio but high sensitivity to the temperature, while n-butanol responds to equivalence ratios and temperatures similarly to gasoline [81,82]. The average bond dissociation energies of alcohol fuels are around 105 kcal mol−1. Due to the good electron-donating ability of the hydroxyl functional group, the bond dissociation energy of the secondary C−H bond in the α-position largely decreases, to ∼95 kcal mol−1, and that of the β-position to ∼100 kcal mol−1 [7]. In addition, the location of the hydroxyl group (-OH) on the carbon skeleton plays a crucial role in the physical-chemical properties. Further, this functional group acts as a radical chain-terminating group following H-abstraction, which ends up hindering cool-flame reactivity [83,84]. The presence of the -OH functional group also helps to suppress the NTC behaviour of other fuels [78].
Methanol
The high H/C ratio, the lack of C-C bonds, and the high latent heat of methanol help to reduce the peak temperature and, ultimately, result in low NOx emissions. Besides, the low molecular weight and high oxygen content of methanol result in a high combustion speed and a high octane number, thereby providing elevated thermal efficiency [47]. Bowman [43] conducted both experimental and numerical studies of methanol-air mixtures behind reflected shockwaves over the temperature range of 1545-2180 K and pressures of 1.5-4.2 atm; the times required to reach the maximum concentrations of CO and O-atoms were taken as the ignition delay times. Fieweger et al. [85] reported the self-ignition features of various fuels, including stoichiometric methanol/air mixtures, at pressures of 13 and 40 bar and a temperature range of 800-1200 K; the point at which the CH band emission and the maximum rate of pressure change occurred was defined as the ignition delay time. Moreover, the high-temperature ignition delay times of C1-C4 primary alcohols at pressures of 2, 10, and 12 atm were studied by Noorani et al. [45], with CH emission considered as the measure of the ignition delay time. Methanol oxidation in a rapid compression machine (RCM) has rarely been reported in the literature; the most commonly used data were reported by Kumar and Sung [44], obtained at equivalence ratios of 0.25-1.00, pressures of 7-30 bar, and a temperature range of 850-1100 K, with the maximum rate of pressure increase used to define the ignition delay time. Cathonnet et al. [86] performed pyrolysis experiments on methanol using a static reactor at a pressure of 0.3-0.5 atm and a temperature range of 875-975 K. Additionally, methanol oxidation in a stirred reactor at nearly atmospheric pressure and a temperature range of 650-700 K was studied by Aniolek and Wilk [87]. Recently, a comprehensive study of the ignition phenomena of a stoichiometric methanol/oxygen/argon mixture in an RCM was reported for the pressure range of 12-24 bar and the temperature range of 840-1000 K by Wang et al. [47]. On the other hand, a flame speciation study was reported by Akrich et al. [34]; several species such as CH3OH, O2, H2O, H2, CO, and CO2 were measured as a function of the distance from the burner, and H, OH, and HO2 were found to be the species responsible for H-abstraction during methanol oxidation. A laminar flame speed study of methanol at atmospheric pressure and a temperature range of 298-368 K using the counter-flow twin-flame method has been reported [88]. Furthermore, Liao et al. [89] studied the laminar burning velocity of methanol-air mixtures at 358 K using the spherical combustion bomb technique; this study emphasized the decrease in Markstein lengths with increasing equivalence ratio. Veloo et al. [8] experimentally measured the laminar flame speed of methanol using the counter-flow configuration at atmospheric pressure and an unburned-mixture temperature of 343 K; CH2O, HCO, and H were reported to be the dominant radical species. Additionally, the laminar burning velocity of methanol-air mixtures at atmospheric pressure and temperatures ranging from 298 to 358 K was reported using the heat flux method [90].
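Several of the studies above define the ignition delay time operationally as the instant of the maximum rate of pressure rise. A minimal sketch of that extraction from a recorded pressure trace follows; the synthetic trace is purely illustrative and stands in for RCM or shock tube data.

```python
import numpy as np

def ignition_delay(t, p):
    """Ignition delay time from a pressure trace, defined here as the
    time of the maximum rate of pressure rise (max dp/dt)."""
    dpdt = np.gradient(p, t)
    return t[np.argmax(dpdt)]

# Synthetic pressure trace: slow drift followed by a sharp ignition rise.
t = np.linspace(0.0, 5e-3, 5000)                                   # s
p = 20.0 + 0.5 * t / 5e-3 \
    + 40.0 / (1.0 + np.exp(-(t - 3e-3) / 2e-5))                    # bar
print(f"IDT ~ {ignition_delay(t, p) * 1e3:.2f} ms")
```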
Given this robust experimental database, the need for a detailed kinetic model has been growing, and Mech 15.34 was the first kinetic mechanism available to predict the experimental data under engine-relevant conditions [91]. Westbrook and Dryer [92] developed the first comprehensive detailed methanol kinetic model accounting for both high and intermediate temperatures; many important rate constants for the thermal decomposition of methanol and hydroxymethyl (CH2OH), and the abstraction rate constants for H and OH, were estimated. However, the lack of elementary rate constants, of methoxy radical (CH3O) formation, and of reaction-path information hindered this work. Later, Norton and Dryer updated the kinetic model using more reliable rate constants and a coherent set of thermochemical parameters [93]. Similarly, a model applied to both premixed laminar flame speeds [94,95] and autoignition in a spark-ignition engine [95] was developed, and good agreement with the experimental data was observed; HO2 + H → products and the decomposition of hydroxymethyl (CH2OH) were identified as important steps in the determination of the flame speed. The OH abstraction reaction was reported to be the predominant fuel-consumption route in the methanol mechanism. Decades later, Aranda et al. [96] developed a detailed kinetic model for methanol oxidation and validated it with experimental data reported at high pressure (20-100 bar) and intermediate temperatures (600-900 K); the rate constants of important reactions (Equations (1)-(4)) were obtained by ab initio calculations. Oxidation pathways similar to those of the high-temperature, low-pressure reactions were revealed, while the model predictions at high pressure for the onset reactions were particularly sensitive to H-abstraction by the hydroperoxyl radical, Equation (5).
To better illustrate the reported works, some experimental data for the ignition delay time and laminar burning velocity of methanol oxidation reported in the current literature are shown in Figure 3. The most relevant steps involved in the oxidation of methanol are reported in Figure 4.
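For readers who wish to exercise such mechanisms numerically, a minimal constant-volume ignition-delay sketch using Cantera is given below. GRI-Mech 3.0 is used here only because it ships with Cantera and contains C1 methanol chemistry; a dedicated, validated methanol mechanism should replace it for quantitative comparisons, and the conditions merely echo the RCM ranges cited above.

```python
import cantera as ct

# Stoichiometric methanol/air near RCM-like conditions (20 atm, 1000 K).
gas = ct.Solution("gri30.yaml")          # placeholder mechanism (see text)
gas.TP = 1000.0, 20.0 * ct.one_atm
gas.set_equivalence_ratio(1.0, "CH3OH", "O2:1.0, N2:3.76")

reactor = ct.IdealGasReactor(gas)        # constant-volume, adiabatic
sim = ct.ReactorNet([reactor])

# Define IDT as the time of the maximum rate of temperature rise.
t, idt, dTdt_max = 0.0, None, 0.0
T_prev, t_prev = reactor.T, 0.0
while t < 0.05:                          # 50 ms simulation window
    t = sim.step()
    dTdt = (reactor.T - T_prev) / max(t - t_prev, 1e-12)
    if dTdt > dTdt_max:
        dTdt_max, idt = dTdt, t
    T_prev, t_prev = reactor.T, t

print(f"ignition delay ~ {idt * 1e3:.3f} ms")
```

The same loop, with the fuel string changed, applies in principle to the other alcohols discussed below, provided the mechanism contains the corresponding species.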
Ethanol
Due to its high demand in gasoline engines, ethanol combustion has been widely studied using different experimental systems (e.g., shock tubes [97][98][99], rapid compression machines [50,100,101], plate burners [102,103], counter-flow twin flames [104,105], and constant-volume chambers [106,107]). In addition, the oxidation of ethanol under several conditions has been experimentally investigated in flow reactors as well [39]. Barraza-Botet et al. [50] reported that H-abstraction by HO2 (Equation (6)) significantly affects the overall reactivity in ethanol oxidation.
The oxidation of ethanol in shock tubes was investigated by Mathieu et al. [52], who measured the ignition delay times and water time-history profiles at temperatures of 944-1580 K, pressures of 1.3-53 atm, and equivalence ratios of 0.5, 1, and 2. The study revealed that most of the models used did not accurately reproduce the experimental data at temperatures < 1300 K. Similarly, recent studies performed by Laich et al. [51] have shown that the CO time-histories and ignition delay times behind reflected shockwaves at elevated pressures are fairly well predicted by the existing mechanisms at elevated temperatures, whereas significant deviations were observed at low temperatures. The bimolecular methyl radical and hydroperoxyl radical reaction (Equation (7)) and the H-abstraction at the α position of ethanol (Equation (6)) were identified as the reactions significantly affecting the low-temperature chemistry.
Xu et al. [58] conducted an experimental study on premixed laminar combustion with laser-induced spark ignition (LISI) and electric spark ignition (SI) at a low initial temperature and atmospheric pressure. Similar conditions were investigated using the counterflow flame by Veloo et al. [8]. Katoch et al. [59] performed an experimental study of the laminar burning velocities of ethanol-air mixtures at different initial temperatures. Moreover, the laminar burning velocities of ethanol-water-air mixtures have been studied using the heat flux method under adiabatic conditions [56]; C2H4, CH3CHO, CH2O, CH4, and CH3 were revealed as major intermediate species. Besides, three H-abstraction sites (at CH3, CH2, or OH) were observed: abstraction from the CH3 site leads to C2H4 and OH production, abstraction from the CH2 site leads to CH3CHO and H, and abstraction from the OH group leads to the formation of CH3CH2O. With the extensive experimental data present in the literature, the demand for detailed kinetic models predicting the combustion parameters is growing and attracting attention. Given this, Dunphy and Simmie [108] developed a kinetic mechanism for ethanol comprising 30 species and 97 reactions. The authors used the detailed mechanism previously reported for methanol, assembled with additional reactions accounting for ethanol combustion, obtaining satisfactory predictions of the shock tube experimental data at high temperatures and pressures of 2-3.4 bar. Marinov [109] developed a kinetic model for ethanol oxidation by assembling the sub-mechanisms reported in the literature for methane, hydrogen, ethane, ethylene, and propane oxidation. The model was validated using numerous experimental data on ignition delay times, laminar flame speeds, and species concentrations in the temperature range of 1000-1700 K and the pressure range of 1-4.5 atm, and was in excellent agreement with the experimental data. In earlier decades, ethanol kinetic mechanisms were developed based mostly on shock tube ignition delay analyses [108,110,111] and validated over limited experimental conditions. Using the mechanism developed by Marinov as a base, Li et al. [112,113] updated the kinetic model. A new mechanism called AramcoMech1.3 was developed [114] for the combustion of C1-C2 hydrocarbons (methane, ethane, ethylene, acetylene, and acetaldehyde) and of oxygenated species such as methanol and ethanol. Mittal et al. [101] validated the model against experimental ignition delay time data and confirmed its higher accuracy compared with other reported kinetic models. The detailed kinetic mechanism developed by the University of San Diego is comparatively small yet detailed, and has been widely used in research works [115]. Alternatively, reduced mechanisms were produced [56,116] based on the one developed by LLNL, resulting in a significant reduction in computational time with little impact on the accuracy of the estimations. Recently, a mechanism for ethanol pyrolysis at high pressures was published by Hashemi et al. [117], whereas a new detailed mechanism was produced by Zyada and Samimi-Abianeh [118] through automated tools, i.e., the reaction mechanism generator (RMG). Several kinetic mechanisms have been reported in the literature [36] for ethanol at different operating conditions. Through improvements in computational methods, numerically generated mechanisms are becoming ever more powerful.
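As an illustration of how the laminar burning velocity is obtained from any of these mechanisms, a freely propagating premixed flame can be solved in Cantera as sketched below. The mechanism file name is a placeholder for an ethanol-capable model (e.g., a Marinov- or Aramco-type mechanism converted to Cantera format), and the conditions mimic those of Veloo et al. [8].

```python
import cantera as ct

# Stoichiometric ethanol/air at 1 atm and 343 K unburned temperature.
gas = ct.Solution("ethanol_mech.yaml")   # placeholder mechanism file
gas.TP = 343.0, ct.one_atm
gas.set_equivalence_ratio(1.0, "C2H5OH", "O2:1.0, N2:3.76")

flame = ct.FreeFlame(gas, width=0.03)    # 3 cm computational domain
flame.set_refine_criteria(ratio=3, slope=0.06, curve=0.12)
flame.solve(loglevel=0, auto=True)

# Su is the inlet velocity of the converged freely propagating flame.
print(f"Su ~ {flame.velocity[0] * 100:.1f} cm/s")
```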
However, based on the information available in the current literature, it is still hard to construct a comprehensive mechanism suitable for all operating ranges [109,116]. Roy and Askari [119] developed a new detailed kinetic mechanism for ethanol, called PCRL-Mech1 (67 species and 1016 reactions), at engine-relevant conditions using RMG. The important reactions were selected by sensitivity and path flux analyses, and the rate parameters of these reactions were adjusted during the development of the new mechanism. The model showed excellent agreement with experimental laminar burning velocities (temperatures of 300-600 K, pressures of 1-10 atm, equivalence ratios of 0.6-1.4) and ignition delay times (temperatures of 820-1450 K, pressures of 3.3-80 atm, equivalence ratios of 0.3-2). Besides, H, OH, O2, and HO2 were reported to be the radicals responsible for H-abstraction during ethanol oxidation. The experimental ignition delay time and laminar burning velocity data available for ethanol/air mixtures in the current literature are shown in Figure 5, and a reduced reaction pathway is presented in Figure 6.
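Since much of the validation work surveyed here rests on simulated ignition delay times, a minimal sketch of how such a computation is typically set up in Cantera may be useful. The mechanism filename ("ethanol_mech.yaml"), the species name, the thermodynamic state, and the dT/dt ignition criterion below are illustrative assumptions, not the protocol of any cited study.

```python
# Minimal sketch: constant-volume ignition delay of a stoichiometric
# ethanol/air mixture, the quantity used to validate the mechanisms above.
# "ethanol_mech.yaml" is a placeholder for any detailed mechanism that
# contains ethanol; the species name C2H5OH is mechanism-dependent.
import numpy as np
import cantera as ct

gas = ct.Solution("ethanol_mech.yaml")          # hypothetical mechanism file
gas.set_equivalence_ratio(1.0, "C2H5OH", "O2:1.0, N2:3.76")
gas.TP = 1100.0, 10.0 * ct.one_atm              # engine-relevant T and p

reactor = ct.IdealGasReactor(gas)               # adiabatic, constant volume
sim = ct.ReactorNet([reactor])

times, temps = [], []
while sim.time < 0.05:                          # integrate up to 50 ms
    sim.step()
    times.append(sim.time)
    temps.append(reactor.T)

# One common ignition criterion: the time of maximum temperature rise rate
# (studies also use OH*/CH* emission or pressure-based criteria).
dTdt = np.gradient(np.array(temps), np.array(times))
tau_ign = times[int(np.argmax(dTdt))]
print(f"ignition delay ~ {tau_ign * 1e3:.3f} ms")
```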
Butanol
Due to its high octane number, high energy density, hydrophobicity, and compatibility with existing internal combustion engines, the use of butanol as a fuel has attracted the attention of many researchers [120]. In light of this, Feng et al. [121] and Gu et al. [122] studied the laminar burning velocity of butanol-air mixtures at various temperatures, pressures, and equivalence ratios. The latter article concluded that functional groups and branching are the main factors affecting the laminar burning velocity of butanol/air mixtures, whereas the molecular structure has the least effect on flame instability. The laminar flame speed of n-butanol was measured experimentally at atmospheric pressure in the counterflow configuration by Veloo et al. [8] and Veloo and Egolfopoulos [123]. The results showed that n-butanol/air flames propagate slightly faster than sec-butanol/air and iso-butanol/air flames, while tert-butanol/air flames propagate significantly more slowly than those of the other three isomers. Wu and Law [124] studied the laminar flame speeds and flame chemistry of the butanol isomers at pressures of 1-5 bar.
From the computational study, the designed kinetic model was found to accurately predict the laminar burning velocity of n-butanol and sec-butanol, whereas the mechanism overestimated and underestimated this parameter for iso-butanol and tert-butanol, respectively. On the other hand, Moss et al. [125] studied the ignition delay times of all four butanol isomers behind reflected shockwaves and showed that hydrogen abstraction by OH is the reaction most responsible for alcohol consumption. The reaction mechanisms of all butanol isomers were developed and validated against various experimental data: tert-butanol pyrolysis in a shock tube [126], butanol and tert-butanol oxidation products in a flow reactor [127], n-butanol oxidation in a jet-stirred reactor [36], and iso-butanol counterflow non-premixed flames [128]. Moreover, Stranic et al. [60] measured the ignition delay times of butanol isomers containing 4% O2 diluted in argon using shock tubes over a wide range of reaction conditions. To evaluate the effect of n-butanol/heptane mixtures, Zhang et al. [129] performed an experimental and numerical study of n-butanol/heptane auto-ignition behind reflected shockwaves. Recently, Pelucchi et al. [80] performed an experimental and modelling study on the combustion of C3-C6 linear alcohols at temperatures of 550-1100 K, pressures of 10 and 30 bar, and equivalence ratios of 0.5, 1.0, and 2.0. Good agreement between the experimental and modelling results was obtained, and no NTC behaviour was observed for ethanol and propanol at either pressure or for n-butanol at P = 10 bar. Furthermore, the study by Dagaut et al. [36] on the detailed combustion chemistry of n-butanol indicated that, for comparable experimental conditions, the partition between different reaction channels depends mostly on the equivalence ratio (flame stoichiometry). Based on that, the authors recommended similarly detailed combustion analyses for the other typical biofuel families and intermediate species. From this, one can understand that, to accurately predict the combustion of a particular fuel, providing global parameters such as octane number or ignition delay is not enough, since combustion is a complex process in which different free radicals and reactive intermediates play a key role. Besides, it has been reported that the combustion process depends sensitively on the molecular properties of the corresponding fuel [130]. Much progress has also been made in the development of kinetic models describing the chemical kinetics of butanol oxidation. Dagaut et al. [36] studied the chemical kinetic modelling of n-butanol oxidation at a pressure of 10 atm and a wide range of equivalence ratios in a jet-stirred reactor. CO, CO2, H2, H2O, C1-C4 hydrocarbons, and C1-C4 oxygenated compounds were the main decomposition products. The proposed kinetic mechanism indicated H-atom abstraction from the α, β, and γ carbon atoms as the dominant decomposition pathway for n-butanol oxidation. Similarly, Sarathy et al. [67] performed kinetic modelling of n-butanol combustion at atmospheric pressure and equivalence ratios of 0.5, 1, and 2 in a jet-stirred reactor. Considering laminar flame speeds and species concentrations, the authors modelled the oxidation using an improved detailed chemical kinetic mechanism containing 118 species and 878 reactions. H-atom abstraction and β-scission were shown to be the key reaction pathways of the combustion process. Grana et al.
[131] conducted kinetic modelling of the combustion of the butanol isomers (n-C4H9OH, sec-C4H9OH, iso-C4H9OH, and tert-C4H9OH) using a hierarchical approach and validated it against burning velocity experimental data. The flame structures and overall combustion characteristics of the four butanol isomers were found to be similar. Moreover, to better understand the combustion chemistry of both linear and branched-chain alcohols, Sarathy et al. [132] performed a comprehensive chemical kinetic model study on the combustion of the four butanol isomers. A model accounting for the high- and low-temperature chemistry of linear and branched alcohols was proposed. The reaction of the 1-hydroxybutyl radical with O2 to give an aldehyde/ketone and water was reported to be the key step at low pressure. The model implicated H-atom abstraction reactions as prevalent in the oxidation of butanol in premixed flames at low pressure, while β-scission reactions were revealed to be important at higher temperatures (above 800 K). The experimental data available in the current literature on the overall reactivity of n-butanol in an oxidative environment are reported in terms of ignition delay time (IDT) and laminar burning velocity (Su) under several initial conditions (Figure 7), whereas simplified reaction pathways are presented for all the isomers (Figures 8 and 9).
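The laminar burning velocities discussed throughout this section are commonly computed from a freely propagating one-dimensional flame; the following is a minimal sketch of such a calculation in Cantera. The mechanism file ("butanol_mech.yaml") and the n-butanol species name ("NC4H9OH") are placeholders that depend on the chosen mechanism, and the domain width and refinement criteria are illustrative choices.

```python
# Minimal sketch: laminar burning velocity of a stoichiometric
# n-butanol/air mixture via a freely propagating 1D premixed flame.
import cantera as ct

gas = ct.Solution("butanol_mech.yaml")          # hypothetical mechanism file
gas.set_equivalence_ratio(1.0, "NC4H9OH", "O2:1.0, N2:3.76")
gas.TP = 298.0, ct.one_atm                      # unburned-mixture state

flame = ct.FreeFlame(gas, width=0.03)           # 3 cm computational domain
flame.set_refine_criteria(ratio=3, slope=0.06, curve=0.12)
flame.solve(loglevel=0, auto=True)

# The inlet velocity of the converged solution is the laminar flame speed.
print(f"S_u ~ {flame.velocity[0] * 100:.1f} cm/s")
```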
Carboxylic Acids
Oxygenated fuels with carboxylic acid functionality, especially acetic acid, are the dominant fractions in the tar released from biomass pyrolysis [133][134][135], and an accurate description of biofuel combustion must take into account the formation of these relevant intermediates. Most importantly, on top of their use as fuel surrogate components [134], oxygenated species can be regarded as intermediates formed through the decomposition of hydrocarbons. Hence, they are essential to the hierarchical nature of kinetic models [7].
Acetic Acid
Experimental research on acid combustion poses a major challenge to the combustion community due to issues related to adsorption [136], corrosion [14], and dimerization [137]. Indeed, only a few experimental studies are available in the literature, as reviewed in recent works [14,15]. Many researchers have measured and reported organic acid emissions from spark-ignition engines [138] and rapid compression engines [139]. These studies indicated that organic acids account for 4-27% of the total hydrocarbon emissions measured in spark-ignition engines, with acetic acid being the most important. Numerical and experimental studies of acetic acid combustion in laminar premixed flames were reported by Leplat and Vandooren [70]. Apart from its combustion chemistry, the study also reported ketene as an intermediate product. Mackie and Doolan [68] studied the thermal decomposition kinetics of acetic acid and its products in a single-pulse shock tube within the temperature range of 1300-1950 K. As part of this, a decomposition mechanism comprising 21 species and 46 reactions was modelled and validated against the experimental data. From the decomposition kinetics, decarboxylation and dehydration were confirmed to be the two key decomposition reactions, producing methane and carbon dioxide through Equation 8 and ketene and water through Equation 9, respectively. Ketene further decomposed to a methyl radical and CO2, followed by a further reaction of the methyl radical with CH to form C2H4 and CO. Besides, methyl radicals were revealed to play an important role in determining the main products.
CH3COOH → CH4 + CO2 (8)

CH3COOH → CH2CO + H2O (9)

Similarly, Gg. Wagner and Zabel [69] studied the further decomposition kinetics of ketene (CH2CO) behind reflected shocks at low pressure and reported the degradation rate coefficient k = 3.6 × 10^15 exp(−248 kJ mol^−1/RT) cm^3 mol^−1 s^−1. In the same way, the gas-phase reactivity analysis of acetic acid, rate constant estimation, and kinetic simulation were studied by Cavallotti et al. [37]. A 1D master equation was also integrated over the potential energy surface (PES) to determine the rate coefficient of acetic acid degradation over a wide range of temperatures (700-2100 K) and pressures (0.1-100 atm). The simulation showed a gradual decrease in the reaction rate at temperatures above 1200 K and pressures below 10 atm. Besides, H, OH, OOH, O2, and CH3 were reported to be the radicals responsible for H-abstraction in the decomposition of acetic acid [37]. Lately, Zhang et al. [13] studied the laminar flame propagation and kinetic modelling of acetic acid at a low initial temperature and atmospheric pressure. The authors indicated the pathway related to ketene consumption (Equation 10) as the main one in the propagation of acetic acid flames.
CH2CO + H → CH3 + CO (10)

The laminar burning velocities of acetic acid measured by Christensen and Konnov [14] at different initial temperatures are reported in Figure 10. Based on the reported observations, a simplified reaction pathway representative of the oxidation of acetic acid is presented in Figure 11.
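The rate expressions quoted in this subsection all follow the Arrhenius form k(T) = A exp(−Ea/RT) and can be evaluated directly. Below is a minimal sketch using the ketene degradation coefficient reported above as an example; the evaluation temperatures are arbitrary illustrative choices.

```python
# Minimal sketch: evaluating an Arrhenius rate expression, here the ketene
# degradation coefficient quoted above (A = 3.6e15 cm^3 mol^-1 s^-1,
# Ea = 248 kJ/mol).
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius(A, Ea_J_per_mol, T):
    """k(T) = A * exp(-Ea / (R*T)); the units of k follow those of A."""
    return A * math.exp(-Ea_J_per_mol / (R * T))

for T in (1200.0, 1500.0, 1800.0):  # illustrative temperatures
    k = arrhenius(3.6e15, 248e3, T)
    print(f"T = {T:6.0f} K -> k = {k:.3e} cm^3 mol^-1 s^-1")
```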
Crotonic Acid
Crotonic acid is the major intermediate product of bioplastic (e.g., polyhydroxybutyrate, PHB) degradation, as reported by many researchers demonstrating the conversion of PHB to 3-hydroxybutyric acid (3HBA) and crotonic acid. However, the degradation kinetics and reaction mechanisms of these monomers have been largely overlooked. As a result, reliable combustion parameter data (such as ignition delay times, laminar burning velocities, or species profiles) are lacking. For instance, Li and Strathmann [16] studied the hydrothermal degradation and kinetic mechanism of PHB conversion to 3HBA and crotonic acid and, further, the decomposition of crotonic acid to carbon dioxide and propylene. From the developed kinetic network model, it was found that crotonic acid is mainly consumed by hydration to 3HBA, followed by a synergistic dehydration-decarboxylation route to propylene and CO2. Additionally, the rate constants for each reaction were determined [16]. Nevertheless, the ignition delay and laminar flame speed of crotonic acid have not been reported so far.
Light Aldehydes
Acetaldehyde is a key intermediate in the oxidation of hydrocarbons and alcohols, especially ethanol, which is increasingly being used as an automotive fuel. However, it is also one of the most abundant toxic emissions from the combustion of biofuels [140,141], and its atmospheric reactions generate several secondary pollutants [142,143]. Thus, studying the pyrolysis mechanism of this intermediate under various reaction conditions can help in understanding the overall combustion mechanisms of hydrocarbon- and alcohol-based fuels [71]. In this regard, several authors have reported on the degradation kinetics and combustion chemistry of acetaldehyde. For instance, Sivaramakrishnan et al. [144] conducted theoretical calculations of the acetaldehyde (C2H4O) and ethoxy (C2H5O) potential energy surfaces (PES) and updated the kinetic model of acetaldehyde pyrolysis. The study revealed C-C bond fission, with a minor contribution from the roaming mechanism forming CH4 and CO, as the main decomposition pathway of acetaldehyde during high-temperature processing. The model developed by the authors incorporates a master equation analysis of H + CH2CHOH as a primary reaction mechanism for the removal of CH2CHOH. The governing H-abstraction route at the aldehydic site was found to form a carbonyl radical (Rn-CO), which quickly decomposes further to an alkyl radical (Rn) and CO. Based on that, the general implication is that the low-temperature oxidation of a generic Cn aldehyde proceeds through Cn−1 alkyl radicals [26].
To better understand the combustion parameters, the ignition delay times of acetaldehyde behind shock waves over a range of reaction conditions were reported by Mével et al. [72]. Additionally, a sensitivity analysis, an energy release analysis, and a rate-of-production analysis were conducted, indicating four important elementary reactions (Equations 11-14) taking place during acetaldehyde pyrolysis and oxidation:

CH3CHO → CH3 + HCO (11)

CH3CHO + CH3 → CH3CO + CH4 (12)

CH3CHO + H → CH3CO + H2 (13)

In the end, due to the large discrepancies observed during the research, the authors recommended new experimental and detailed numerical studies. Tao et al. [145] reported nearly 40 species in laminar premixed flames of acetaldehyde. Christensen et al. [146] studied the laminar burning velocities at atmospheric pressure and different initial temperatures. Similarly, Christensen and Konnov [147] reported the laminar burning velocity of diacetyl and updated the sub-mechanisms of acetaldehyde and CH3CO in their model. Halstead et al. [148] studied the kinetics of acetaldehyde from the perspective of its cool-flame behaviour and suggested a model containing 14 steps. From the study, acetyl was found to play a significant role in the chain-branching process through CH3CO → CH3CO3 → CH3CO3H → CH3CO2 + OH. The theoretical work reported by Felton et al. [149] and the detailed kinetic model developed by Cavanagh et al. [150] supported the result of Halstead et al. [148].
Nevertheless, Gibson et al. [151] proposed an alternative cool-flame mechanism for acetaldehyde proceeding through CH3OOH (CH3 → CH3OO → CH3OOH → CH3O + OH). On the other hand, the study conducted by Kaiser et al. [152] revealed the radical decomposition reaction (Equation 15) and O2 addition to acetyl (Equation 16) as the main determining steps of the chain-branching process.
CH3CO → CH3 + CO (15)

CH3CO + O2 → CH3CO3 (16)

Recently, researchers [27,153] have developed kinetic models for the low-temperature oxidation of acetaldehyde, as well as of C3 and C4 aldehydes. Zhang et al. [40] studied the oxidation of acetaldehyde under a wide range of conditions and revealed CH3OO, CH3OOH, and HOOCOCHO as the main oxidation products. Besides, H-abstraction was found to proceed via H, OH, HO2, CH3, O2, CH3COOO, CH3OO, and CH3O; under lean conditions, OH was found to be the most important H-abstracting agent. It was concluded that the routes through CH3COOOH and CH3OOH are the main chain-branching decomposition pathways of acetaldehyde oxidation, and the reactions related to methyl oxidation were reported to be very sensitive to CH3OO and CH3OOH under the studied conditions [40]. Bentz et al. [154] studied the shock tube thermal decomposition of CH3CHO and the reaction CH3CHO + H at temperatures of 1250-1650 K and pressures of 1-5 bar. Combining their results with low-temperature data from other studies, the authors reported the rate constant expression k = 6.6 × 10^−18 exp(−800 K/T) cm^3 s^−1 for the temperature range of 300-2000 K. Moreover, Hidaka et al. [155] studied the pyrolysis and oxidation of acetaldehyde behind reflected shockwaves using single-pulse methods. The study considered different fuel concentrations (2.0%, 4.0%, and 5.0% CH3CHO) diluted in Ar over the temperature range of 1000-1700 K and at pressures of 1.2 and 3.0 atm. The reactions in Equations 17-19 were identified as the most important initiation reactions, and those in Equations 20 and 21 as the reactions most responsible for acetaldehyde pyrolysis.
CH3CHO → CH4 + CO (18)

CH3CHO → CH2CO + H2 (19)

CH3CHO + H → CH2CHO + H2 (20)

Similarly, Ernst et al. [156] conducted acetaldehyde pyrolysis experiments behind reflected shockwaves over the temperature range of 1350-1650 K. The results revealed the decomposition to be a first-order reaction with a rate constant expression of k = 1.2 × 10^16 exp(−81.74 kcal mol^−1/RT) s^−1. The experimental ignition delay time and laminar burning velocity data for acetaldehyde oxidation reported in the current literature are shown in Figure 12. Furthermore, Figure 13 reports a simplified schematization of the oxidation pathway of acetaldehyde. Note that the laminar burning velocity measurements refer exclusively to the data reported by Christensen and Konnov [14].
Heterocyclic Organic Compounds
Due to their high energy density (~30 MJ L^−1), better resistance to undesired ignition, high research octane number, better engine efficiency, and lower emissions, furans and their derivatives are a focus of current research [42]. Furan, a promising biofuel candidate catalytically produced from second-generation feedstocks, has attracted the attention of many fuel researchers. Oxygenated fuels like furan were reported to have significantly lower HC, NOx, PM, and CO emissions than the corresponding conventional fuels, without compromising performance [157]. Despite these benefits, some oxygenated fuels have a low energy density, water miscibility, and lower vapour pressure. In addition to their higher octane number and knock-resisting tendency, furan-based fuels have recently been emerging progressively because of their high energy density and immiscibility with water compared to alcohols such as ethanol [74]. The thermal decomposition of furan has been extensively studied because furan and its derivatives play an important role in understanding the combustion of coal and biomass [158]; furan is also an interesting model compound for understanding the vibrational relaxation and unimolecular decomposition of molecules [159]. The auto-ignition behaviour of furan and other biofuels was studied in a rapid compression machine, and the results revealed 2-methylfuran (2-MF) and ethanol to have an analogous knock-inhibition capacity [160]. Wang et al. [161] and Wu et al. [162] compared the combustion and emissions of 2-MF with ethanol, gasoline, and 2,5-dimethylfuran (DMF) in a single-cylinder spark-ignition engine and found a higher efficiency, excellent combustion stability, anti-knock ability, and lower aldehyde emissions for 2-MF than for gasoline. For a better understanding, the laminar flame speed of furan-based fuels has been studied at elevated temperatures in air mixtures [163]. The same authors reported the flame structure of premixed DMF/Ar/O2 mixtures at low pressure [162], showing that furan and 2-MF are stable intermediates in DMF flames. Tian et al. [76] reported species distribution measurements for premixed furan/oxygen/argon flames at low pressure. Moreover, the decomposition chemistry of furan has been studied in shock tubes [164] and flow reactors [165]. Similarly, Cullis and Norris [166] investigated the decomposition of furan at 1173-1323 K and atmospheric pressure. Methane (CH4), acetylene (C2H2), ethylene (C2H4), and benzene (C6H6) were found to be the main products observed during the degradation. Besides, the decomposition of furan over the temperature range of 1050-1460 K and a pressure range of 2.6-3.6 atm was studied by Lifshitz et al. [167]. The two main furan decomposition pathways were reported to yield CO + pC3H4 and C2H2 + CH2CO. Grela et al. [165] conducted low-pressure (10^−3 Torr) pyrolysis of furan over the temperature range of 1050-1270 K and revealed the formation of C3H4 (allene, aC3H4, and propyne, pC3H4) and CO. The authors also proposed the high-pressure Arrhenius expression for furan pyrolysis to be k∞ = 10^15.6 exp(−73.5 kcal mol^−1/RT) s^−1. In the same way, the degradation of furan over the temperature range of 960-1085 K and at a pressure of 1 Torr was recently reported [168]. The study revealed CO and C3H4 as the main degradation products and a rate coefficient expression of k = 10^12.9 exp(−65.7 kcal mol^−1/RT) s^−1.
From the experimental investigation of premixed furan/oxygen/argon flames, H-abstraction was confirmed to occur mainly by H, OH, and CH3 radicals [76]. The authors also developed a kinetic mechanism capable of predicting their experimental results and obtained good agreement between the two. Later on, Wei et al. [74] studied the ignition delay times of furan behind reflected shockwaves over the temperature range of 1320-1880 K and pressures of 1.2-10.4 atm. In addition to these experimental studies, theoretical studies of furan decomposition kinetics were conducted through quantum chemical methods [169,170]. Besides, furan has also proved to be an important component of tobacco smoke [171] and was selected as a model fuel for combustion strategies aimed at reducing NO formation [172]. That study modified the chemical kinetic mechanism developed by Tian et al. [76] to interpret its results and showed that the model produced reasonable agreement with the experiment. The authors concluded that the most important fuel consumption path under those conditions was triggered by unimolecular decomposition. Lately, Liu et al. [9] studied the combustion chemistry and flame structure of furan-group biofuels (furan, methyl-furan, and dimethyl-furan) using molecular-beam mass spectrometry and gas chromatography at low pressures (20 and 40 mbar) and two equivalence ratios (1 and 1.7). On the computational side, despite the abundant experimental investigations of furan decomposition, theoretical calculations concerning these degradation reactions are scarcely reported in the literature. For instance, Liu et al. [173] and Tian et al. [76] investigated the thermal degradation of furan using the B3LYP density functional for geometries and QCISD(T) for energies. However, these studies did not report the decomposition rate constants of furan. Additionally, Sendt et al. [169] provided parameters for numerous crucial reactions associated with furan, calculated at the CASSCF, CASPT2, and G2(MP2) levels. The authors accordingly presented a kinetic mechanism and validated it against the measurements of Organ and Mackie [164]. They concluded that the formation of cyclic intermediates via 1,2-H transfer and the formation of the decomposition products CO + pC3H4 are the main routes of furan pyrolysis. Although there is abundant information on the thermal degradation of this species, its combustion remains poorly understood, and detailed combustion chemistry of furan at engine-relevant conditions is lacking. Figure 14 reports the experimental data available in the current literature for the ignition delay times of furan/air mixtures. The mechanism determining the oxidation of furan is reported, in a simplified version, in Figure 15.
Future Challenges
This review showed that more experimental data on the key combustion characteristics are needed to verify model performance. As an example, methanol may show random preignition during compression ignition, causing the ignition delay time to be much shorter than expected; the source of this preignition is still unknown. Moreover, literature on the relationship between methyl chemistry and acetaldehyde chemistry, which arises from the weak C-C bond in the acetyl group (CH3CO) that eventually decomposes to CH3 and CO, is limited. From the species profiles of CH4 and HCO, the amount of CH4 produced in ethanol/air flames is higher than in methanol/air flames. This is because the amount of CH3CHO produced from ethanol is higher than from methanol, while a higher production of CH2O was observed in the methanol/air flames, favouring the formation of HCO under fuel-rich conditions. However, no clear information is available on these species under lean conditions. Kinetic studies of furan, which is one of the primary structural units of coal, are scarce. Since furan and its derivatives are unsaturated species, the accurate prediction of the branching ratios of pyrolysis and combustion products relies on a detailed analysis of the addition reactions of the H and OH radicals; it should be emphasized that almost no information is available in the literature for these types of reactions. Besides, the low-temperature oxidation mechanism of furan and its derivatives is still unknown. Moreover, assessing the possible harmful emissions during the combustion of oxygenated fuels, especially alcohols and furans, needs further research effort. Not to mention, laminar burning velocity data for furan combustion are very limited in the literature. Due to technical difficulties, the number of experimental studies characterizing the combustion of acid-based fuels has been considerably limited, ultimately hindering numerical studies of these species. It has also been observed that additional experimental data on the high-temperature pyrolysis and oxidation of high-molecular-weight acid-based fuels are needed for an in-depth understanding of the kinetic effects of carboxylic functional groups. More specifically, to the best of our knowledge, no ignition delay time or laminar flame speed data have been reported so far for crotonic acid combustion; similarly, ignition delay time data for acetic acid combustion have not been reported. In the end, for the accurate prediction of chemically sensitive low-temperature combustion systems, detailed knowledge of fuel-specific reaction kinetics is crucial, yet scarcely reported in the literature.
Conclusions
The goal of this review was to analyze recent research trends in the combustion and modelling of common intermediates derived from the decomposition of biofuels. To this aim, several experimental and numerical studies on the ignition behaviour and reactivity of oxygenated species such as alcohols, aldehydes, furans, and acids were reviewed with a view to their commercial exploitation. The important intermediate radicals and the main reactions affecting the combustion parameters were discussed. As part of this, data on the basic combustion parameters (laminar burning velocity, ignition delay times, and pyrolysis species profiles) over a wide range of reaction conditions were extensively surveyed, and detailed modelling efforts were also considered. Generally, the distinctive chemical structures of these fuels, with respect to their intermediate and product species, matter in their kinetic modelling; thus, in-depth knowledge of how these intermediates are formed and consumed is a prerequisite for a better understanding of bio-derived fuel combustion and emissions.
"Engineering"
] |
Processing Printed Words in Literary Arabic and Spoken Arabic: An fNIRS Study
Diglossia refers to a socio-linguistic situation in which two varieties of the same language are used for distinct purposes in everyday life. In Arabic, Spoken Arabic (SA) is the first-acquired dialect, used for oral and informal communication in everyday conversations. Literary Arabic (LA), acquired later in life through formal education, is used for reading and writing and by literate individuals in formal settings such as the media and official speeches. Because of the linguistic distance between the two Arabic varieties, some authors have suggested that SA and LA might cognitively function as first (L1) and second (L2) languages. Up to now, very few studies using imaging techniques have addressed the question of the neural basis of diglossia in Arabic native speakers. In this study, we sought to test whether or not the visual processing of high-frequency LA words (LA-HF), low-frequency LA words (LA-LF), and high-frequency SA words (SA-HF) induces detectable differences in the brain responses collected by functional near-infrared spectroscopy (fNIRS). For this aim, a semantic categorization task, previously assessed in fMRI studies, was used. Based on previous observations, it was predicted that LA words would be processed faster and more accurately than SA ones. Furthermore, it was predicted that a modulation of the responses by language condition would be found in the left frontal areas. At the behavioral level, the analysis of RTs revealed an effect of language variety on individual response variance, and accuracy showed a clear advantage for LA-HF words over LA-LF and SA-HF ones. The analysis of oxygenation levels revealed a significant modulation of responses in frontal and posterior areas by language variety. These results are discussed in the context of diglossia and of the advantages and limitations of this new imaging methodology for assessing language processing in the brain.
Saiegh-Haddad, Levin et al., 2011). For instance, it has been shown that skilled native Arabic speakers are slower in reading Arabic (words and texts) than in reading Hebrew and English (Abu-Rabia, 2001;Ibrahim & Eviatar, 2012;Eviatar, Ibrahim et al., 2019).
Regarding the diglossic issue more specifically, psycholinguistic studies have shown that the linguistic distance between SA and LA impacts a variety of linguistic processing skills in LA (for a review, see Saiegh-Haddad, 2018). SA and LA differ at the phonological, semantic, morphological, and syntactic levels (Abu-Rabia, 2000;Saiegh-Haddad, 2003;Saiegh-Haddad, 2020). To give some examples specifically related to the purpose of this study: at the lexical-semantic level, although SA and LA share many words in common (despite certain phonological nuances), they also have different specific words for the same referents. To illustrate this fact, Saiegh-Haddad and Spolsky (2014) analyzed a lexical corpus collected from five-year-old children's oral language and found that 40% of the words were nonstandard words (non-MSA) that have no conventional written form, another 40% were SA-LA cognates (with varying phonological nuances), and only 20% of the words had identical forms in the SA and LA varieties. The phonological systems of LA and SA are thus quite different, to the extent that some LA phonemes are even absent in certain SA dialects. Accordingly, the phonological distance between SA and LA words has been proposed to underlie the difficulties in reading acquisition among children (Saiegh-Haddad, 2007). For example, previous research has suggested that children's recognition of LA phonemes is poorer than that of SA ones (Saiegh-Haddad, Levin et al., 2011), attesting to the difficulty of constructing phonological representations for LA words, to which children are generally exposed for the first time 2 at the moment of their entry to school (see also Saiegh-Haddad, 2003;Saiegh-Haddad, 2007;Saiegh-Haddad & Schiff, 2016).
Supporting the claim that diglossia might be at the origin of difficulties in reading acquisition among Arabic children, previous studies (Feitelson, Goldstein et al., 1993;Abu-Rabia, 2000) have also suggested that early exposure to LA might improve children's reading abilities in the early grades.
On the other hand, other research based on classical studies of bilingualism has attempted to assess the extent to which SA and LA might cognitively behave as L1 and L2 in the brains of literate Arabic speakers. On the basis of a series of studies using semantic priming tasks, it has been proposed that, despite their common origin and the wide use of SA and LA by adult native Arabic speakers, the two varieties function as first and second languages. In these studies, using lexical decision in the auditory modality, the authors showed a pattern of language dominance for SA over LA and Hebrew, where the latter two seemed to behave as second languages (see also Eviatar & Ibrahim, 2000;Ibrahim, Eviatar et al., 2002;Ibrahim, 2009). In another line of research, Eviatar & Ibrahim (2000) showed that Arabic-speaking children, who had been exposed relatively early to LA, behaved like bilingual children on tests of metalinguistic awareness and differed from monolinguals. The conclusions raised by these authors in the auditory modality appear quite reasonable given the history of acquisition and the patterns of use of the two varieties of Arabic. In fact, few studies have used the visual modality to compare word recognition and reading in LA and SA. The lack of such experimental studies stems from the fact that SA is generally considered an oral language with no consensually agreed-upon written form; hence, most studies relied on auditory paradigms. However, in one early study using visual word presentation, Bentin and Ibrahim (1996) examined the recognition of written LA and SA words (with non-words, in a lexical decision task) and reading aloud (in a word naming task). The authors reported that LA words were processed more rapidly than SA ones, with the latter functioning as low-frequency LA words. Their results suggested that word recognition in SA was more mediated by phonological processes than in LA. From this latter study in the visual modality and others in the auditory modality, it has recently been suggested that the status of SA and LA in terms of dominance is modality-dependent: SA being the dominant variety in the auditory modality and LA the dominant one in the visual modality (Nevat, Khateb et al., 2014). Recent data have indeed confirmed that response times (reaction times: RTs) to words in the auditory modality are faster for SA than for LA words, while in the visual written modality the responses to LA words are faster and more accurate (see also Khateb & Ibrahim, 2020).

2 Although the formal exposure to LA occurs at children's entry to school, they are nevertheless exposed to LA through media and TV programs for children and through oral storytelling by parents and educators at home and in kindergartens (see discussion in Saiegh-Haddad & Spolsky, 2014).
Although at the behavioral level the response to the dominance question appears intuitive, one can ask how these two varieties of Arabic are represented in the brain. Behavioral studies alone do not provide sufficient answers to understand the neurofunctional bases of the diglossic situation. Indeed, unlike neurocognitive studies in the field of bilingualism that have sought to answer the question of how bilinguals' languages are represented in the brain (Keatley, Spinks et al., 1994;Kim, Relkin et al., 1997;Fabbro, 2001;Perani & Abutalebi, 2005;Hull & Vaid, 2006;Emmorey, Giezen et al., 2016;Miller, Bayram et al., 2018), the diglossic question has up to now barely been investigated using neurofunctional methods (Khamis-Dakwar & Froud, 2007;Ahmed, 2012;Nevat, Khateb et al., 2014). Of particular interest to this question is the study by Krayem Abu Ahmed (2012), which analyzed event-related potentials (ERPs) during an auditory lexical decision task comparing brain responses to SA, LA, and Hebrew words. Not only were RTs faster for SA words (than for LA and Hebrew ones), but ERPs also displayed early response differences between SA and the two other language conditions, supporting the dominant status of the SA variety. In a subsequent study (Andria, 2016), ERPs were analyzed during a visual lexical decision task; both at the level of the N170 and of the late P6, a higher response amplitude was observed for LA-HF in comparison to LA-LF and SA-HF, with no differences between the two latter conditions. These findings provided further support for the assumption that LA holds the status of the dominant variety in the visual modality. In line with these findings, the study by Nevat, Khateb and Prior (2014) analyzed fMRI responses during the processing of LA, SA, and Hebrew written words in a semantic categorization task. Here again, the behavioral measures showed that decisions for SA words were slower and less accurate than for LA ones. More importantly, the functional responses in the left inferior frontal, precentral, parietal, and occipito-temporal regions showed stronger activation for SA than for LA, a pattern of difference that mimicked to some extent those reported in L2 vs. L1 comparisons in previous studies (Chee, Hon et al., 2001). The authors interpreted these findings in terms of differences in exposure (and subjective familiarity) to the written forms of SA vs. LA. Altogether, while providing support for the view that the question of dominance in diglossia is modality-dependent, these previous ERP and fMRI findings call for a combination of behavioral and brain functional measures in order to provide new insights into the question of the status of SA and LA in the brain of native Arabic speakers.
In continuity with this vision, and given that each neuroimaging method has characteristics that may present drawbacks and limitations for the study of brain activity, the present study employed functional near-infrared spectroscopy (fNIRS) to further investigate the question of diglossia in Arabic. fNIRS is an optical imaging technique that allows the non-invasive measurement of changes in the concentration of oxygenated (oxyHb) and deoxygenated (deoxyHb) hemoglobin (Sela, Izzetoglu et al., 2012). Regional brain activation is known to be accompanied by increases in regional cerebral blood flow and in the regional cerebral oxygen metabolic rate. When the increase in regional cerebral blood flow exceeds the increase in the regional cerebral oxygen metabolic rate (Fox & Raichle, 1986), the result is a decrease in deoxyHb in venous blood. Hence, in NIRS measurements, an increase in oxyHb and a decrease in deoxyHb are interpreted as indicating activated areas (Hoshi & Michael, 2005). Of note is the fact that NIRS enables the measurement of Hb concentration changes in the cortex immediately beneath the probes, but with a relatively poor spatial resolution. During the last two decades, fNIRS has been used in several language studies conducted with infants and children (Horovitz & Gore, 2004;Minagawa-Kawai, Mori et al., 2008;Gervain, Mehler et al., 2011;Sugiura, Ojima et al., 2011;Jasinska & Petitto, 2013;Ludyga, Mücke et al., 2019) and adults (for a review see Feitelson, Goldstein et al., 1993;Ferrari & Quaresima, 2012;Quaresima, Bisconti et al., 2012;Vanderwert & Nelson, 2014). fNIRS has also been used to study the neural correlates of linguistic and non-linguistic processing in native and non-native languages (Telkemeyer, Rossi et al., 2009;Arimitsu, Uchida-Ota et al., 2011;Plichta, Gerdes et al., 2011;Jasińska & Petitto, 2014;Vannasing, Florea et al., 2016).
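In practice, the oxyHb/deoxyHb changes described above are derived from raw light attenuation via the modified Beer-Lambert law. The following is a minimal sketch of that conversion for a two-wavelength system; the extinction coefficients, source-detector separation, and differential pathlength factor below are illustrative placeholder values, not constants of the instrument used in this study.

```python
# Minimal sketch of the modified Beer-Lambert law (MBLL): optical density
# changes at two wavelengths are converted into oxy-/deoxy-Hb concentration
# changes. All numeric values here are illustrative placeholders.
import numpy as np

# Extinction coefficients [cm^-1 per mM]; rows: wavelengths, cols: (HbO, HbR).
E = np.array([[0.30, 1.50],    # ~690 nm: deoxy-Hb absorbs more
              [1.10, 0.80]])   # ~830 nm: oxy-Hb absorbs more

d = 3.0      # source-detector separation, cm (assumed)
dpf = 6.0    # differential pathlength factor (assumed)

def mbll(delta_od):
    """delta_od: optical density changes at the two wavelengths.
    Returns (delta_HbO, delta_HbR) in mM."""
    return np.linalg.solve(E * d * dpf, delta_od)

d_hbo, d_hbr = mbll(np.array([0.01, 0.02]))
print(f"dHbO = {d_hbo:.5f} mM, dHbR = {d_hbr:.5f} mM")
```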
Relevant to the present purpose, a previous study examined 484 elementary school children (6-10 years) who performed word repetition tasks in their native language (L1, Japanese) and a second language (L2, English), investigating three factors: language (L1/L2), word frequency (high/low), and hemispheric laterality (left/right). The study revealed that the cortical activation pattern associated with language processing in elementary school children involved a bilateral network of regions in the frontal, temporal, and parietal lobes.
One of the major findings was that L1 words elicited significantly greater brain activation than L2 words, regardless of semantic knowledge, particularly in the superior/middle temporal and inferior parietal regions, while unfamiliar L2 words were processed in the brain like non-word auditory stimuli, as indicated by lower activation than that elicited by L1 words in the superior/middle temporal and inferior parietal regions. Moreover, low-frequency words elicited more right-hemispheric activation (particularly in the supramarginal gyrus), whereas high-frequency words elicited more left-hemispheric activation (Sugiura, Ojima, Matsuba-Kurita, Dan, Tsuzuki, Katura, & Hagiwara, 2011). In another study, Kahlaoui and colleagues (2007) examined hemispheric dynamics during a lexical decision task in 10 younger adults (aged 25-35) and 10 older adults (aged 65-84). The results showed significant hemispheric differences between the word and pseudo-word conditions, with increased blood oxygenation observed in the pseudo-word conditions across both hemispheres. In another study, which investigated the role of the dorsolateral prefrontal cortex (PFC) in semantic processing in bilingual adults, the participants performed a semantic judgment task. The results suggested that bilinguals had significantly higher activation than monolinguals in the left dorsolateral prefrontal cortex for correct answers (Oi, Saito et al., 2010). Another study, conducted on French adults who were asked to perform a lexical decision task, showed increasing tHb when participants started reading, with a return to baseline level once they stopped reading (Safi, Lassonde et al., 2012). Building on the study by Nevat et al. (2014), which showed differences between the processing of SA and LA words, the present study capitalized on the differences previously found between high-frequency SA words and high-frequency LA words.
Hence, we hypothesized that reaction times (RTs) and accuracy would show significant differences between written SA and LA words, with LA ones being processed faster and more accurately. Also, we predicted that a modulation of fNIRS responses would be found in left frontal areas (inferior frontal gyrus, Broca's area), together with other more posterior areas.
Participants
Thirty native literate Arabic speakers were recruited from the University of Haifa (15 men, 15 women, aged 18-30; M = 24.3 years, SD = 4.25 years). All participants were right-handed (mean laterality index = 0.92, SD = 0.08, according to the Edinburgh inventory, Oldfield, 1971) and had Arabic as their mother tongue. All had been exposed to formal instruction in LA since first grade. All participants were healthy, with no history of neurological/psychiatric disease or learning disabilities, and had normal or corrected-to-normal vision. They all signed a written consent form prior to their participation in the study and were paid for their participation. The study protocol was approved by the ethics committee of the Faculty of Education at the University of Haifa.
Stimuli
All participants performed a semantic categorization task (based on Seghier, Lazeyras et al., 2004;Nevat, Khateb et al., 2014) in a block design paradigm that alternated between blocks of word pairs (hereafter the Activation condition) and blocks of symbol string pairs (hereafter the Control condition). The word stimulus list included 72 pairs of SA high-frequency words (SA-HF), 72 pairs of LA high-frequency words (LA-HF), and 72 pairs of LA low-frequency words (LA-LF). All words were concrete, imageable nouns. The word pairs from each sub-list were presented in blocks of 12 pairs each, providing for all word pairs a total of 18 distinct blocks (6 for SA-HF, 6 for LA-HF, and 6 for LA-LF). The words in each pair were either categorically related (i.e., the two words were exemplars of the same semantic category, 2/3 of the pairs) or semantically unrelated (i.e., the two words belonged to two different semantic categories, 1/3 of the pairs). The participants were asked to respond after the presentation of each pair whether the words were related or not (see Table 1 for examples). The selection of the words was based on a questionnaire filled in by 18 young native Arabic-speaking participants (who did not take part in the experiment; mean age M = 24.25, SD = 4.95), who were asked to rate the frequency/familiarity of 200 words in each language variety on a scale from 1 to 5 (1 = least frequent/familiar, 5 = most frequent/familiar). Another questionnaire was presented to another group of native Arabic speakers, who were requested to rate the semantic relationship within each of the pairs from 1 to 5. The average relatedness in each language condition was above 4 for the related word pairs in LA-HF (M = 4.98, SD = 0.08), LA-LF (M = 4.97, SD = 0.20), and SA-HF (M = 4.99, SD = 0.14; see Table 1 for examples).
The Control condition used pairs of Greek letter strings (unfamiliar visual stimuli for this population) that were either visually identical or not, i.e., the same string was presented twice or one string differed from the other by one or more characters. The participants had to decide whether the two strings in each pair were visually the same or not. Each experimental run contained 6 blocks of the activation condition (word pairs) and 6 blocks of the control condition (string pairs), yielding a total run duration of 4.8 minutes. All participants underwent three experimental runs (for LA-HF, LA-LF, and SA-HF), the order of which was counterbalanced across participants. Finally, in order to ensure that the tasks were correctly understood, all participants were given detailed instructions before the recording started (see Figure 1 for a schematic presentation of the block design).
Procedure
The experiment was carried out in a sound-isolated room at the laboratory of the Edmond J. Safra Brain Research Center for the Study of Learning Disabilities (University of Haifa). Participants underwent one session of fNIRS recording.
Each trial (both in the activation and the control blocks) lasted ~2 seconds and started with a 500 ms fixation cross, followed by the stimulus pair, which appeared on the screen for 600 ms. An additional blank screen appeared for 890 ms to allow for the participants' response (yielding a total of 1.5 s for the response from stimulus onset; see Figure 2). In the activation blocks, the stimulus consisted of two words presented one below the other. In the control condition blocks, the stimulus consisted of two Greek letter strings presented one below the other (see Figure 1). In both the activation and control blocks, "yes" trials represented 2/3 of the trials and "no" trials 1/3. The participants were instructed to give their responses (semantic judgment for words and visual judgment for letter strings) as quickly and accurately as possible using one of two response buttons, operated with their right-hand middle and index fingers. During the fNIRS measurement, participants were seated about 120 cm from the screen and instructed to look at a fixation point. In order to minimize head motion, subjects were asked to avoid movements as much as possible during the tasks.
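As a quick cross-check, the paradigm timing just described can be reproduced with simple arithmetic. The sketch below only restates values from the text; any rounding is ours.

```python
# Minimal sketch: cross-checking the reported paradigm timing.
FIXATION_MS, STIMULUS_MS, BLANK_MS = 500, 600, 890
TRIALS_PER_BLOCK = 12
BLOCKS_PER_RUN = 12            # 6 activation + 6 control blocks

trial_s = (FIXATION_MS + STIMULUS_MS + BLANK_MS) / 1000.0   # ~1.99 s (~2 s)
block_s = TRIALS_PER_BLOCK * trial_s                        # ~23.9 s
run_min = BLOCKS_PER_RUN * block_s / 60.0                   # ~4.8 min

response_window_s = (STIMULUS_MS + BLANK_MS) / 1000.0       # ~1.5 s from onset
print(f"trial ~{trial_s:.2f} s, block ~{block_s:.1f} s, "
      f"run ~{run_min:.1f} min, response window ~{response_window_s:.2f} s")
```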
Data Collection and Analysis
The data were collected using a 22-channel fNIRS spectrometer. fNIRS data analysis: since the HbO signal is a more sensitive indicator of changes in blood flow (Strangman, Culver et al., 2002), only oxyhemoglobin [oxy-Hb] data were processed off-line using Matlab software. The data were filtered to remove respiration, cardiac variations, and high-frequency noise (mainly due to head motion and to the reduced grip of the optic fibers on hairy areas of the scalp); a low-pass filter with a cut-off frequency of 0.14 Hz was used. In a second step, data were converted to measurements of oxy-Hb and arranged into epochs for the different blocks (from −5 s pre-block to 23.5 s post-block). An average time course was then computed for each participant and each channel in each condition (activation and control). Individual data were then averaged to generate grand means for visualization and illustration purposes. Based on visual inspection of the time course of the grand average signals, two time windows appeared to display signal differences between the activation and control conditions. The mean signal for each channel and condition in each participant was computed in the period between 7-13 s after the beginning of each block for the first time window, and between 17-23 s for the second time window (see Figure 2). A repeated measures analysis of variance (ANOVA) was then conducted on the individual mean [oxy-Hb] signal in each of the two time windows, with language variety, condition, and channel as within-subject factors.
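The preprocessing pipeline just described can be sketched as follows. The study reports a 0.14 Hz low-pass cut-off, epochs from −5 to 23.5 s, and analysis windows at 7-13 s and 17-23 s; the sampling rate and filter order below are our assumptions, as they are not reported in the text.

```python
# Minimal sketch of the oxy-Hb preprocessing described above: low-pass
# filtering at 0.14 Hz, epoching around block onsets, and averaging within
# the two analysis windows. fs and the filter order are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10.0                                  # assumed sampling rate, Hz

def lowpass(signal, cutoff=0.14, order=4):
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, signal)          # zero-phase low-pass filtering

def epoch(signal, onsets_s, t_pre=-5.0, t_post=23.5):
    """Cut epochs (t_pre..t_post seconds) around each block onset."""
    n_pre, n_post = int(-t_pre * fs), int(t_post * fs)
    return np.array([signal[int(o * fs) - n_pre: int(o * fs) + n_post]
                     for o in onsets_s])

def window_mean(epochs, t_start, t_end, t_pre=-5.0):
    """Mean signal in a window (seconds relative to block onset)."""
    i0, i1 = int((t_start - t_pre) * fs), int((t_end - t_pre) * fs)
    return epochs[:, i0:i1].mean(axis=1)

# Example with synthetic data for one channel:
oxy_hb = np.random.randn(int(300 * fs))    # 5 min of fake oxy-Hb signal
onsets = [10, 34, 58, 82, 106, 130]        # hypothetical block onsets, s
ep = epoch(lowpass(oxy_hb), onsets)
early = window_mean(ep, 7.0, 13.0)         # first analysis window (7-13 s)
late = window_mean(ep, 17.0, 23.0)         # second analysis window (17-23 s)
```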
Behavioral Results
The means and standard deviations of the behavioral measures (accuracy and RTs) for the activation and control blocks in the three language conditions are presented in Table 2. Accuracy was computed as the percentage of correct responses (yes and no together, 72 trials) in the activation and control blocks.
The 3 × 2 repeated measures analysis of variance (ANOVA), with language variety (LA-HF, LA-LF, and SA-HF) and condition (activation vs. control) as within-subject factors, showed a significant main effect of language variety (F(2, 58) = 5.552, p < 0.01, η² = 0.160) and of condition (F(1, 29) = 204.153, p < 0.001, η² = 0.876). The language effect was due to higher accuracy for LA-HF (M = 78%) than for LA-LF (M = 76%) and SA-HF (M = 74%). The condition effect was due to the fact that accuracy was higher in the control (M = 86%) than in the activation condition (M = 66%). There was also a highly significant interaction between the two factors (F(2, 58) = 10.924, p < 0.001, η² = 0.274), due to the fact that the condition effect, although significant in all language varieties, was smaller in LA-HF (see Table 2 for details).
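A 3 × 2 repeated measures ANOVA of this kind can be sketched in a few lines with statsmodels; the data frame below is synthetic and purely illustrative, with cell means loosely echoing the accuracies reported above.

```python
# Minimal sketch of a 3 x 2 repeated measures ANOVA (language variety x
# condition) on accuracy, mirroring the analysis reported above.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subj in range(30):                          # 30 participants
    for variety in ("LA-HF", "LA-LF", "SA-HF"):
        for condition in ("activation", "control"):
            base = 0.66 if condition == "activation" else 0.86
            rows.append({"subject": subj, "variety": variety,
                         "condition": condition,
                         "accuracy": base + rng.normal(0, 0.05)})
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="accuracy", subject="subject",
              within=["variety", "condition"]).fit()
print(res)   # F statistics for both main effects and their interaction
```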
The ANOVA conducted on the participants' RTs showed no significant main effects. Turning to the fNIRS data, in the first time window (7-13 s) there was a small but significant three-way interaction between the three analysis factors (F(42, 1218) = 1.505, p = 0.021, η² = 0.049). In order to better understand these interactions, separate ANOVAs were conducted for each language variety, with condition (activation vs. control) and channel (22) as within-subject factors.
fNIRS Results
In LA-HF, there was a highly significant interaction between the factors condition and channel (F(21, 609) = 2.488, p < 0.001, η² = 0.079). Post-hoc Fisher's LSD tests showed that this effect was due to the fact that five channels showed a difference between conditions (activation vs. control; see details in Table 3), with three temporal and parietal channels showing higher oxygenation in the activation condition and two frontal channels showing higher oxygenation in the control condition (Figure 2(B)).
In LA-LF, although the interaction between condition (activation vs. control) and channel failed to reach significance (F(21, 609) = 1.201, p = 0.243, η² = 0.039), post-hoc tests revealed that three channels showed significant differences between conditions (activation vs. control; see Table 3). These included one frontal and two temporo-parietal channels with higher oxygenation in the activation than in the control condition (see Figure 2(C)).

Table 3. Summary of the statistical differences in the three language conditions in the two time windows, for the 10 channels exhibiting a signal difference between the activation and control conditions (see Figure 2).

As for SA-HF, there was a significant interaction between condition and channel (F(21, 609) = 4.755, p < 0.001, η² = 0.141). Post-hoc tests showed that five channels differentiated the conditions (see Table 3). Of these, two parietal channels showed a higher signal in the activation condition, and three frontal channels displayed higher oxygenation in the control condition (Figure 2(D)).
Analysis for the time window 17-23 s: the 3-way ANOVA performed on the individual mean oxygenation signal in this second time window revealed a highly significant two-way interaction between condition and channel (F(21, 609) = 2.883, p = 0.001, η² = 0.09). Although the three-way interaction was not significant (F(21, 609) = 0.739, p = 0.890, ns), separate ANOVAs were again conducted for each language variety, with condition (activation vs. control) and channel as within-subject factors.
In the LA-HF language condition, there was a highly significant interaction between the two factors (F(21, 609) = 2.542, p < 0.001, η² = 0.081). Post-hoc Fisher's LSD tests showed that this was due to the fact that four (out of 22) channels showed higher values during the control condition than during the activation condition (see details in Table 3).
In LA-LF, there were neither significant main effects of condition or channel, nor an interaction between the two factors (F(21, 609) = 0.804, p = 0.716, ns).
However, post-hoc LSD tests showed a significant difference in one frontal channel, due to a larger response in the control than in the activation condition (see Table 3).
Regarding SA-HF, the ANOVA showed a significant interaction between the two factors (F(21, 609) = 1.586, p < 0.05, η² = 0.052). Post-hoc tests showed significant differences only in two frontal channels (see Table 3), due to higher responses in the control than in the activation condition.
Discussion
This study aimed to examine whether or not the visual processing of LA words and SA words induces detectable differences in fNIRS responses, while manipulating word frequency in LA, among native Arabic speakers. For this purpose, a semantic categorization task was used with LA and SA written words in an fNIRS paradigm. This block-design paradigm had previously been used to map left-hemisphere language areas (Seghier et al., 2004) and had recently been used in Arabic to assess fMRI differences between SA and LA (Nevat et al., 2014). Up to now, several studies have investigated the diglossic issue in Arabic, but very few have used brain imaging to characterize the neural basis of diglossia in the brain of native Arabic speakers. In terms of fNIRS responses, we took the option in this study of analyzing the oxygenation measure, which reflects the difference between oxyhemoglobin and deoxyhemoglobin (oxy-Hb − deoxy-Hb), and this was done separately for the activation and the control conditions. The analyses in all language varieties, conducted to assess the differences in terms of activation, were performed in two time windows on all channels. The option of using two windows aimed at avoiding missing effects because of possible differences in the time course of the responses. Hence, we expected that whenever differences were found in one of the two time windows, the direction of the effect would be the same, as shown for instance in channels F7 and F9.
In terms of increased activation, our fNIRS analyses (summarized in Table 3 at the end of the results) showed differences when comparing the effects across the language conditions. In line with the fMRI results, this finding was interpreted as due to the relatively lower familiarity of SA word patterns compared with LA ones, and to the fact that decoding SA necessitated more computation in this region (Nevat et al., 2014).
Taken together, these differences in terms of increased activation suggest that the fNIRS oxygenation measure was quite sensitive in assessing the differences in processing the three varieties of Arabic. Here, the fNIRS analysis could show effects that were not visible in the RT measures. One should, however, be careful about such interpretation of differences found only in an isolated channel. Actually, as long as the placement of the fNIRS recording channels relies on approximate locations, small differences between conditions might also be partially due to small variations in the effects, where in some instances small effects simply failed to reach significance. A more conservative approach would be to use a cluster of channels (that show differences) as a region of interest, in order to avoid relying on effects in single, separate channels. A prerequisite for the use of such an analysis approach would be the need to verify that the direction of the effect in these channels is the same (see for instance channels P7, T7, and P9 [corresponding to channels 16, 18, 22] in Figure 2). In the meantime, having effects in single, separate channels would be an inherent feature of fNIRS recordings as reported here, because of the small number of channels and the very low spatial resolution (more than 2.5 cm between two successive channels).
Conducting similar studies with a denser channel distribution would definitely allow avoiding such hesitation about the interpretation and, at the same time, facilitate the interpretation of the differences thanks to the higher spatial resolution. Future studies using this technology should also use sensors positioned to sample not only left-hemisphere but also right-hemisphere activity. Regarding SA-HF, together with F9 there was an additional decrease in F7 (as in LA-HF) but also in C3 (around the motor areas). As to the functional significance of these effects, one should again highlight the fact that these differences attest to differences in the processing demands of the three language varieties. LA-HF is the condition that showed the highest level of decrease in the frontal region, while LA-LF showed the lowest level of decrease (with SA-HF in between). If one considers a decrease in activation in these frontal areas as an index of lower activation demands, then it appears reasonable to say that LA-HF was the condition that relied on these frontal areas the least to perform the task, followed by SA-HF and then by LA-LF. Finally, as for the fact that SA-HF seemed to necessitate less activation in C3, one might speculate that this is because the motor-articulatory demands of SA words are lower, hence less activation/more deactivation was observed in these motor areas.
As for the question of diglossia, the only study that investigated the neural basis of diglossia using fMRI and visual presentation of SA and LA words was that reported by Nevat et al. (2014). Two other fMRI studies were conducted to assess brain activity during picture naming in SA and LA (Abou-Ghazaleh, Khateb et al., 2018) and to investigate language control mechanisms during the use of SA and LA (Abou-Ghazaleh, Khateb et al., 2020). For our purpose, building on previous observations by Nevat et al. (2014), it was predicted that, since LA is the formal written language, it would be processed faster and more accurately than SA (which is usually not encountered in the written form).
Furthermore, because LA is the first acquired written form and is frequently used by Arabic native speakers, LA-HF was expected to show faster and more accurate responses than LA-LF, which is not used daily. In accordance with the behavioral hypotheses, it was expected that responses in left frontal areas (inferior frontal gyrus, Broca's area) would be modulated by the language conditions, with LA-HF words inducing the smallest responses and SA-HF and LA-LF showing no or little differences. The behavioral results, while contrasting with others suggesting that SA and LA function cognitively as L1 and L2 (Ibrahim & Eviatar, 2009), are in accordance with Nevat et al. (2014) and Bentin & Ibrahim (1996), who showed an advantage for LA in the visual modality. The results observed in terms of decrease in activation seem to suggest that responses in left frontal areas (inferior frontal gyrus, Broca's area) are indeed modulated by the language conditions, with words of the most dominant variety, LA-HF, inducing the smallest responses.
The neurocognitive outcomes of the present study indicate, as in previous studies (Safi et al., 2012), that fNIRS technology might be a useful tool to investigate reading processes and to understand the differences that might be reflected in brain activity. As expected, high activation was observed in the left hemisphere, in regions classically involved in reading processes. Consistent with the behavioral measures, brain oxygenation signals during the word condition (activation) showed higher values in the language and reading areas compared to the symbol condition (control). As for activation in Broca's area, although no direct comparison was made between the different language conditions, the pattern of responses (e.g., more deactivation in LA-HF) observed here seems to be in accordance with previous studies' results. In an fMRI study by Joubert et al. (2004), participants were asked to silently read high-frequency and low-frequency words together with nonwords. The authors showed that nonwords and low-frequency words, relative to high-frequency words, elicited significantly higher activation in the bilateral inferior frontal gyrus.
The activation observed in occipito-temporal areas (visual word form area) indicates higher oxygen concentrations in the activation than in the control condition. This can be explained by the fact that word recognition elicits higher activation than visual recognition of symbols (Dehaene & Cohen, 2011). The review by Mechelli, Gorno-Tempini, and Price (2003) of nine studies comparing words and pseudowords listed six studies with higher activation for pseudowords than words in areas corresponding to or near the visual word form area. However, other studies that used lexical decision instead of reading found the opposite, that is, higher activation for words than pseudowords (Kronbichler et al., 2004; Binder et al., 2003; Fiebach et al., 2002).
To summarize, the behavioral and functional results suggested the presence of response differences between the processing of LA-HF, LA-LF and SA-HF, although no direct comparison was made here between the three language conditions. The current study showed that the activation/deactivation of some areas (the visual word form area and Broca's area) was modulated by language condition. In continuity with previous investigations, the results of this study (behavioral measures and deactivation in the frontal areas) suggested that the status of SA and LA is modality-dependent, with LA appearing as the dominant variety in the visual modality.
This is the first fNIRS study to investigate the diglossic issue. Future studies should probably use other types of analysis in order to better assess the neurofunctional differences between conditions. Also, the use of other systems with more recording channels should definitely improve the spatial resolution. Better control of the stimulus list or of the experiment's timing parameters should also help explain why, in contrast with previous experiments, there were no RT differences here.
"Linguistics"
] |
Manufacturing 4.0 Operations Scheduling with AGV Battery Management Constraints
Industry 4.0 concepts are moving towards flexible and energy-efficient factories. Major flexible production lines use battery-based automated guided vehicles (AGVs) to optimize their handling processes. Optimal AGV battery management can significantly shorten lead times. In this paper, we address the scheduling problem in an AGV-based job-shop manufacturing facility. The considered schedule concerns three strands: job assignments to machines, product transport task allocations and AGV fleet battery management. The proposed model supports outcomes expected from Industry 4.0 by increasing productivity through completion-time minimization and optimizing energy by managing battery replenishment. Experimental tests were conducted on extended benchmark literature instances to evaluate the efficiency of the proposed approach.
Introduction
In manufacturing systems, owners always try to improve profits and optimize the use of production resources. They also aim to eliminate waste of time and energy in order to reduce costs and align with current standards. In this context, Industry 4.0 has been adopted as a revolutionary industrial paradigm to ensure that production concepts adapt effectively to operative changes, with a more intensive focus on sustainability in industrial contexts while increasing economic and ecological efficiency [1]. The key element of such a concept is the implementation of a highly flexible manufacturing process that allows better resource management.
Governments announced, through the United Nations General Assembly in September 2015, that they would demonstrate the scale and ambition needed to develop the knowledge and technological innovations to increase the use of sustainable energy in multiple areas of critical importance, including transport systems [2]. In this field, urban transport systems [3], freight transport systems [4] and industrial material handling systems [5] are nowadays a major stake (e.g., Figure 1 presents a fully automated material-handling-based sortation plant in an e-commerce company). Nevertheless, sustainability must be taken into account not only at the strategic level, but also at the operational level, in which scheduling is one of the key factors [6]. Modern flexible manufacturing systems use automated guided vehicles (AGVs) as part of their material handling system [8]. In fact, AGVs are driverless, their routes can be redefined and transport request fulfillments can be reviewed without infrastructure changes, which offers faster and more intuitive ways of adapting an existing system to new business rules [9]. Battery management on AGVs has an impact on the overall performance of the manufacturing system [8]; however, previous studies omitted this factor [10], which led us to consider it in this work. More precisely, this paper presents a scheduling approach that supports processing, transport and vehicle battery replenishment tasks. The main question that arises is: how can we maintain the economic cost (i.e., makespan) and optimize energy consumption onboard AGVs while considering all these tasks? From our perspective, the key to making this scheduling problem easier to solve is to decompose it into three assignment sub-problems: production tasks to processing machines, transport tasks to AGVs and battery replenishment tasks to handling trips. Consequently, it is important to address battery management adequately to run an AGV system efficiently [11]. Furthermore, this study presents a more realistic view of previous research and should be more reliable when applied in the real world.
Our paper is organized as follows: in the current section, we describe the treated problem and review the state of the art of related works to position our contribution in this context. Next, the proposed approach is presented, the developed model is described and the related algorithms are listed. Section 3 details the experiments and Section 4 reports numerical results, which are later discussed in Section 5. Finally, Section 6 presents conclusions and highlights our perspectives for future research.
Problem Description
In our target job-shop scheduling problem (JSP), production is represented as a sequence of jobs j entering and leaving the system through a loading/unloading station (L/U). Each job j represents an occurrence of a product type's manufacturing process and consists of an ordered list O_j of tasks i. Every i ∈ O_j is executed on a specific uninterruptible mono-task machine m according to a predefined process route. Thus, the manufacturing plan of a job j can be considered as a sequential series of (i, j, m) combinations that have to be scheduled efficiently.
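To make the (i, j, m) notation concrete, here is a minimal Python sketch of this data model; all names and the example values are illustrative, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    i: int  # position of the task in the job's ordered list O_j
    j: int  # job (product occurrence) identifier
    m: int  # machine required by this task

# A job is its ordered manufacturing plan: task i must finish on
# machine m before task i+1 of the same job may start.
job_2 = [Task(0, 2, 3), Task(1, 2, 1), Task(2, 2, 4)]
```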
The transport fleet has a limited number of battery-based uni-charge AGVs starting their first transportation jobs from the L/U station at time t = 0 with fully charged batteries. A transportation job managed by an AGV v to carry a combination (i, j, m) is composed of two sequential transport sub-tasks performed on unidirectional routes to avoid collision: the empty move of AGV v from its current position, which is either the L/U station, if this is its first job, or its last delivery station otherwise, to the call node m' of the preceding combination (i − 1, j, m'); and the loaded move of the combination (i, j, m) to its target station m. During these sub-tasks, the AGV can be redirected to the L/U station to replace its battery before it suffers a deep discharge; i.e., the charge level detected at the L/U station should never go below a certain value λ (also called the maximum discharge capacity in [12] or the threshold charge level in [11]). A typical layout of the studied job-shop problem is provided in Figure 2, which follows the Bilge and Ulusoy benchmark specifications from [13] with an additional battery station at the L/U node. Generally, constraints related to AGV battery replenishment are omitted from the literature [14], and the few papers that do consider them do so only in a dynamic setting. Thus, a static scheduling approach (i.e., predictive behavior) that integrates transport, battery replenishment and task assignment to machine constraints is defined in this paper.
Integrating transport constraints into the JSP yields an NP-hard problem [15] because of the dependence between task allocation and the availability of both machines and AGVs. Our objective is to schedule tasks optimally on machines, AGVs and battery stations to minimize the makespan while keeping the residual charge of each AGV battery above the λ level. Thus, it is necessary to choose a battery management technique meeting the desired optimization objectives.
In the next subsection, works related to JSP with transport constraints and AGV battery management approaches are described, and our contribution is positioned in this context.
Related Works
JSP with transport constraints is an optimization problem in which resources are allocated to perform a predetermined set of tasks. A great deal of effort has been spent developing methods in this context. The first benchmarks in this field were introduced by Bilge and Ulusoy in [13] and by Knust and Hurink in [16]. Both provided reference instances for simultaneous material transfer between machines in identical uni-charge AGV-based JSPs. Reference [13] proposed an approach based on AGV trip time-window constraints that depend on machine operations' completion times. They provided an iterative heuristic procedure to optimize the maximum completion time of job sets in two AGV-based problem examples. The benchmark proposed by [16] addresses the same problem on new instances using a single robot-based material handling system. They used the tabu search metaheuristic to minimize the sum of all traveling and waiting times and proposed an appropriate technique to accelerate solution evaluation in that context. Both benchmarks consider conflict-free unidirectional manufacturing layouts with predetermined shortest-path routing and are widely used and enhanced in the literature. Reference [17] implemented three metaheuristics based on iterated local search, simulated annealing and their hybridization to deal with transport and processing allocations to resources. They obtained new upper bounds for the Bilge and Ulusoy instances and provided new results for minimizing the exit time of the last job after extending the same benchmark instances. Later on, reference [18] developed an approach composed of a disjunctive graph-based framework to model the joint scheduling problem and a memetic algorithm for representing machine and AGV scheduling. Their results on both [13,16] instances came up with new enhancements on both makespan and exit time minimization. Reference [19] used a hybrid heuristic search algorithm based on a timed colored Petri net to optimize both the makespan and the exit time of the last job. Reference [20] explored a biologically inspired whale optimization algorithm in a mono-transport-robot JSP to minimize seven fitness functions (makespan, robot finishing time, transport time, balanced level of robot utilization, robot waiting time, job waiting time and total robot and job waiting time). They also provided a novel mathematical formulation and compared the obtained results with five metaheuristic algorithms. All the papers listed above omitted battery constraints from their studies.
Since most AGVs rely on batteries as their source of energy, battery depletion rates can become limiting factors [12]. In fact, the additional traveling times required to charge or change a depleted battery can significantly affect manufacturing costs. Thus, battery management constraints must be considered in order to get as close as possible to the real behavior of the studied system. Reference [12] presented an overview of the ways battery replenishment can be implemented in AGV systems (see Figure 3). The author presented two techniques: a battery charging technique, in which AGVs are coupled with chargers until each depleted battery reaches a predefined level, and the replacement technique, where the battery is replaced by a new one. Battery replacement can either be manual, through a handling agent, or automatic in a battery swap station. Meanwhile, battery charging covers four possible scenarios: (1) opportunity charging during AGV idle time; (2) automatic charging, in which the AGV is redirected to a charging station until its battery level has recovered; (3) a combination of both; and (4) rail-based charging, where the AGV remains coupled to a charge rail while traveling through a specific area of the manufacturing plant. Battery replenishment techniques have a large influence on the operational times of AGVs. In fact, recharging the battery inside the vehicle until replenishment means that the vehicle is unavailable for the duration of the charging process, which generally exceeds its possible working time [21] and varies from one battery type to another. On the other hand, the battery swap technique has a limited impact on the operational time of the AGV, as batteries are recharged outside the vehicle before being replaced.
To ensure the planned throughput while using these techniques, replenishment (or charging) strategies were developed. They are part of the vehicle dispatching module of transport order processing and depend on the battery type and the replenishment method used [22]. They prevent too many vehicles from entering the replenishment process at the same time and thus reduce vehicle unavailability while managing battery replenishment. Reference [21] listed four strategies an AGV can use to change its battery in a multi-battery-station environment. They take into account the position of the battery station relative to the route of the current transport job: 1. The nearest battery station; 2. The farthest reachable battery station on the current route; 3. The first battery station encountered on the current route; 4. The battery station that leads to minimum delay.
Reference [11] implemented these four routing heuristics in a battery-swap-based AGV system and performed a comparative analysis between them in a large-scale manufacturing plant. Their proposed approach ensures that the battery charge will not go below a threshold level (20% for strategies 1 and 4, 28% for strategy 3 and 33% for strategy 2) by the time the depleted battery is swapped. AGVs are redirected to the battery station when traveling to the pickup node or after getting loaded with the affected part. The obtained simulation results showed that strategy 4 outperforms all others by giving the largest number of total outputs for the used instances.
A common rule among all the previous strategies is that the residual charge of an AGV battery should not go below a certain level λ. This is because a deep battery discharge below the recommended level can greatly affect the battery's life cycle [23]. To highlight the relation between the residual charge and the time required for replenishment, reference [24] proposed three regression formulas to calculate the time required to charge a valve-regulated lead acid (VRLA) battery based on the current depth of discharge (DoD) and the desired state of charge (SoC). The DoD refers to the quantity of energy drained from the battery, while the SoC identifies the quantity of energy available in the battery (or residual charge), with SoC = full battery capacity − DoD. The authors state that the level of charge a battery receives is not proportional to the time it is charged for: batteries receive most of their charge during the initial phase of charging, as opposed to the later phase. They thus proposed formulas (Equations (1)-(3)) to calculate the recharging time t in minutes for targeted DoD d of 90%, 95% and 100%, respectively. They also mentioned that targeting a lower SoC may have undesirable consequences on the battery life cycle (over-discharge and deep discharge both have terrible effects on the life performance of the battery grid structure and are at the origin of poor life cycles [23,25]), but they demonstrated the efficiency of targeting a lower SoC through saving time and increasing system outputs. In the same way, reference [26] proposed three formulas (Equations (4)-(6)) for calculating the recharging time for 90%, 95% and 100% targeted DoD d in lithium-ion batteries.

The previously listed works used AGV battery management constraints in non-deterministic production processes (i.e., the list of jobs to be processed is not known before the production process starts); few papers have considered deterministic ones. Reference [14] presented a graph-based linear programming heuristic in the vehicle routing problem (VRP) setting to minimize the time needed to schedule deadlined AGV pickup/drop-off transportation jobs while using a charging strategy (machine scheduling is not considered in that work). AGVs move from a charging location, perform transportation between two points according to request and finish times, and go back to the charging location without exceeding the battery capacity. The authors in [27] used a genetic algorithm, particle swarm optimization and a hybridization of both in an AGV-based flexible manufacturing system to minimize the makespan and the total AGV count while considering battery charges. The proposed approach adds an AGV to the fleet if the battery charges of the available AGVs cannot cover the current demands. They considered both automatic and opportunity battery charging, in which an AGV charges for 10-12 min every hour, and integrated a parameter that can be changed to adapt to any battery type. To validate their approach, they used two random layouts and implemented their model in a simulation to prove its feasibility. Reference [28] solved the optimal battery swap station location with respect to AGV routing and machine scheduling in a JSP facility. In that study, they fixed a duration of time cht after which an AGV battery was considered depleted and the AGV stopped within a planned horizon h.
In fact, in their model, AGVs transfer parts from a pickup point to a delivery point and return to a home station, and an AGV is not assigned to a part if its battery charge is not sufficient; however, it can stop while returning home due to battery discharge. The mathematical model they proposed minimizes the distance between the optimal location of the battery swap station and the stop points of all warehouse AGVs while optimizing the makespan. It was tested on seven instances using CPLEX, with different numbers of AGVs, to validate the proposed model.
The particularity of our approach lies in using the variable neighborhood search metaheuristic to rearrange a deterministic task schedule (i.e., processing, transport and battery swap tasks). We also consider real AGV battery characteristics and use a widely explored manufacturing systems benchmark with some additional parameters to cover battery constraints. The next section presents the details of our proposed model.
Problem Modeling and Solving Approach
In the proposed approach, a metaheuristic previously unexplored in the context of JSP with transport constraints is used to solve a model that encompasses the three sub-problems treated by this study: task allocation to machines, transportation assignment of parts to AGVs and battery swap scheduling during the manufacturing process, in a production cell identical to that described in the previous sections.
The variable neighborhood search (VNS) algorithm is a metaheuristic for solving combinatorial and global optimization problems [29]. It was first introduced by N. Mladenovic and P. Hansen in 1997 [30]. Since the majority of metaheuristics make use of just one type of neighborhood structure, there is a high probability of them getting trapped in local optima after a certain number of iterations [31]. The VNS inventors bypassed this shortcoming by proposing a technique that systematically diversifies the type of neighborhood structure while searching for the solution, thus escaping local optima. It consists of proposing different neighborhood structures for the problem and randomly jumping from one to another until reaching a stop condition (fixed number of neighborhoods, maximum running time, maximum loops within a local search, etc.).
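As a rough illustration of this idea, the following minimal Python sketch shows a generic VNS loop in the spirit of [30]; the function names and the stop condition are our own assumptions, not the paper's implementation:

```python
import random

def vns(initial, neighborhoods, local_search, cost, max_iters=1000):
    """Generic VNS: shake in neighborhood k, improve locally, move and
    reset k on success, otherwise switch to the next structure.
    `neighborhoods[k](x)` must return a non-empty list of neighbors."""
    best = initial
    it = 0
    while it < max_iters:
        k = 0
        while k < len(neighborhoods) and it < max_iters:
            shaken = random.choice(neighborhoods[k](best))   # shaking step
            candidate = local_search(shaken)
            if cost(candidate) < cost(best):
                best, k = candidate, 0        # success: restart from N_1
            else:
                k += 1                        # failure: next neighborhood
            it += 1
    return best
```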
As metaheuristics work on encoding spaces, an encoding technique is used to define different parts of our problem schedule efficiently. It allows describing the three studied sub-problems for our schedule and distinguishing different neighborhood structures explored by VNS. In the next section, the used coding space is described.
Schedule Representation
To represent the different parts of our problem, the schedule representation includes three parts: 1. The JSP-string, representing the schedule of production tasks on their related processing machines; 2. The transport string (or AGV-string), representing the AGV IDs selected for transporting the processing tasks of the JSP-string; 3. The battery swap string (or BS string), enumerating AGV behaviors regarding battery replenishment during the transport of the assigned task.
In the first part, the system receives an entry list of n integers representing the requested job list to be manufactured, where each number refers to a product type; for example, the job list "132" refers to three products or jobs, one of type "1", a second of type "3" and a last one of type "2". From this job list, a JSP-string is generated by repeating each job type from the job list according to the size of its task set (i.e., if job type "1" has four tasks, it will be repeated four times in the JSP-string). To differentiate tasks of the same job, we employ the appearance order; hence, all tasks take their names from their parent jobs but are interpreted according to their appearance order in the JSP-string (i.e., the first "1" in the JSP-string corresponds to the first task of job "1", the second "1" refers to the second task of job "1", and so on; see the example in Figure 4, where O_ij is task i of job j and |O_j| is the number of tasks of job j). This is called operation (or task)-based representation, as detailed in [32]. The second part is the representation needed to select AGVs for transporting tasks between machines. The simplest approach is used: the JSP-string from the previous step is reproduced, substituting task numbers with AGV IDs to generate the AGV-string. Interpreting the new string depends strongly on the JSP-string, as an AGV ID in a position p indicates that this AGV is selected to transport the task in the same position p of the JSP-string (see Figure 5; for example, the third column states that the AGV with ID = 0 is selected to transport the second task of job "1"). As described in previous sections, a task trip covers two moves: from the current AGV position to the pickup node of the task, and from the latter to the delivery node. Tasks' pickup and delivery nodes are static data stored in a separate dictionary queried for that purpose. In the same way, the last part of the representation corresponds to the behavior of the AGV while transporting the task from the pickup to the drop-off node. The BS string is generated with the same number of elements as the JSP and AGV strings. Each element of the BS string describes the route that the AGV in the same position of the AGV-string will follow during its transport task. We consider three possible scenarios in our study and thus three possible values for each element of the BS string: a zero "0" indicates that the transport task is realized from source to destination without interruption (i.e., no battery swap is performed during the whole AGV trip); a one "1" forces the AGV to perform a battery swap before reaching the pickup station (i.e., the AGV travels from its current position to the battery station, makes a battery switch and is then redirected to the pickup station to perform its transporting task); and finally a two "2" makes the AGV change its battery while transporting the job to its next destination (i.e., the AGV travels to the pickup node, loads the corresponding part, is redirected to the battery station with the loaded part onboard, makes a battery switch and then moves to the drop-off node). Returning to our example from Figure 5, the third column states that the AGV with an ID of 0 will move to the BS station to perform a battery swap, and then travel to the pickup node of the second task of job "1" in order to transport it to its destination.
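The following small Python sketch illustrates how the three aligned strings could be built and read for a hypothetical instance; the task counts, job list and string values are invented for illustration only:

```python
# Hypothetical instance: job types 1..3 with |O_1|=2, |O_2|=1, |O_3|=2 tasks.
task_counts = {1: 2, 2: 1, 3: 2}
job_list = [1, 3, 2]                 # requested products

# JSP-string: each job type repeated once per task; the k-th occurrence
# of a type denotes the k-th task of that job (operation-based encoding).
jsp = [j for j in job_list for _ in range(task_counts[j])]  # [1, 1, 3, 3, 2]

agv = [0, 1, 0, 1, 0]  # AGV id transporting the task at the same position
bs  = [0, 1, 0, 0, 2]  # 0: no swap; 1: swap before pickup; 2: swap while loaded

for pos, (task, v, b) in enumerate(zip(jsp, agv, bs)):
    print(f"position {pos}: task of job {task}, AGV {v}, BS mode {b}")
```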
Using this three-string representation, infeasible solutions for our schedule are avoided, and thus an additional cleaning step in our metaheuristic is unnecessary [33]. Additionally, our approach incorporates three local search stages, one for each sub-string, which operate together to find a better task allocation while minimizing the makespan and keeping the minimum battery level of the whole AGV fleet above a particular level. In the next subsection, we detail the proposed approach.
The Proposed Approach
The flowchart in Figure 6 presents our model's behavior. The objective is to find the combination of three sub-strings (i.e., JSP, AGV and BS strings) producing the best schedule for the studied problem; thus, a cost function is used, at the end of each search step, to calculate both the makespan (C_max) and the minimum detected battery level (MBL) over all available AGVs. A schedule is accepted if the MBL is greater than or equal to the pre-specified battery charge level λ; otherwise it is discarded and a new BS string search is restarted. Three types of neighborhoods are used in the search process according to the representation string of each sub-problem, as detailed in previous sections: JSP, AGV and BS neighborhoods. Since the JSP schedule structure has a fixed letter enumeration (i.e., only the order of letters can change among all possible JSP strings), only neighborhood structures that operate on element orders are allowed (see Figure 7). On the other hand, both AGV and BS schedules can have several neighborhood structures.
At the start of the process, an initial random solution x is generated from the input data (i.e., the requested job list, the processing order of each job and the layout characteristics), having three strings of the same length l and a BS string that gives MBL ≥ λ. At this stage, this solution is considered the best schedule.
The exploration process uses three nested local search levels to look for the best string triplet composing the schedule: initially, an AGV-string is chosen randomly; then a JSP string is picked at random; and finally, a random BS string is generated and sent, together with the aforementioned strings, to the cost function to calculate C_max and MBL. If the resulting MBL is lower than λ, the exploration process passes to the next BS string; however, if the triplet cost has the lowest C_max, or the highest MBL in the case of equal C_max, it is saved as the best schedule. Otherwise, the search process is repeated likewise with a new random BS string to find a better cost until reaching the BS stop condition.
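A minimal sketch of the acceptance rule just described, assuming a candidate is summarized by its (C_max, MBL) pair; the function name and signature are ours:

```python
def is_better(candidate, best, lam):
    """Accept a (C_max, MBL) pair: infeasible if MBL < lambda; otherwise
    prefer a lower makespan, breaking ties with the higher minimum
    battery level."""
    c_max, mbl = candidate
    if mbl < lam:                      # infeasible: retry a new BS string
        return False
    if best is None:                   # first feasible schedule found
        return True
    best_c, best_mbl = best
    return c_max < best_c or (c_max == best_c and mbl > best_mbl)
```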
When a stop condition is reached at the BS, JSP or AGV level, the search process steps back to the upper levels to try new random elements until reaching the global stop condition, which generally refers to a maximum CPU time or number of iterations. The AGV, JSP and BS levels' stop conditions are set to exit the related level if no improvement of the best C_max is detected within a pre-defined number of loops.
At VNS exit, the best selected triplet corresponds to the best schedule, having the minimum C_max and the maximum MBL value.
VNS Implementation
When experimenting with VNS, we realized that its implementation is mainly based on small details which can greatly influence the quality of the obtained results. After choosing the appropriate representation and neighborhood types, it is necessary to carefully determine both the neighborhood structures and the local search heuristics. We chose general VNS (GVNS), one of the most successful VNS variants [34], with variable neighborhood descent (VND) as the local search routine for both AGV and BS schedules. VND can significantly increase the chance of reaching the global minimum, as it offers the possibility of jumping to another neighborhood structure during the local search process, whereas only a single neighborhood structure is explored in the classic local search concept [35]. VND is used only in the AGV and BS schedules for the same reason that differentiates the JSP neighborhood described in Section 2.2. The VND pseudocode is shown in Algorithms 1 and 2 respectively, in which four neighborhood structures are used to allow exploring different regions of the feasible solution space. In both VND algorithms, neighborhood changes are used and a loop is initialized to allow exploration. Initially, a neighbor is randomly selected from the current neighborhood using either the I_1 or I_2 technique (the choice depends on the value of the variable noImprovementCount, which represents the number of iterations without improvement of the solution x). Afterwards, the heuristic requests the other levels' strings to calculate the triplet cost(α, β, γ), where α represents the AGV-string, β the JSP-string and γ the BS string. If the new cost is better, the best solution x is updated and the noImprovementCount variable is reinitialized. Otherwise, noImprovementCount is incremented and a neighborhood change is performed using either the N_1 or N_2 technique according to the new value of noImprovementCount. Note that, in each loop, N_1, N_2, I_1 and I_2 are always applied equitably to the best solution x during the exploration process (which explains the use of modulo(noImprovementCount, 4) in AGV_VND and BS_VND, and modulo(noImprovementCount, 2) in JSP_LS).
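Since Algorithms 1 and 2 are not reproduced here, the following Python sketch only illustrates the modulo-based equitable cycling through neighborhood operators described above; the operator list and parameters are assumptions on our part:

```python
def vnd(x, cost, moves, max_no_improve=20):
    """VND-style local search cycling through `moves` (e.g. N_1, N_2,
    I_1, I_2 as unary operators returning a random neighbor) with
    modulo(noImprovementCount, len(moves)), as described above."""
    best, best_cost = x, cost(x)
    no_improvement = 0
    while no_improvement < max_no_improve:
        op = moves[no_improvement % len(moves)]   # equitable cycling
        candidate = op(best)                      # random neighbor of best
        if cost(candidate) < best_cost:
            best, best_cost = candidate, cost(candidate)
            no_improvement = 0                    # reinitialize the counter
        else:
            no_improvement += 1                   # move to the next structure
    return best
```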
On the other hand, the JSP local search function JSP_LS, described in Algorithm 3, uses only the I_1 and I_2 neighborhood structures to explore possible JSP schedules.
Instance Description
To validate our approach, a series of tests was conducted on the Bilge and Ulusoy manufacturing benchmark presented in [13], which is a common reference for JSP with transport constraints. This benchmark was extended to cover the energy behavior onboard AGVs (i.e., battery constraints) in addition to machine and transport scheduling. We used 40 test instances with two AGVs performing transport in four different manufacturing layouts, exploring 10 separate job sets. The following assumptions and parameters were considered to align with our study's objectives:
• One time unit in the benchmark corresponds to one minute in the real world;
• L/U times are included in the travel duration;
• L/U operations were performed automatically upon AGV arrival at the destination;
• Travel durations were constant whether traveling empty or loaded;
• AGVs were uni-charge vehicles;
• The battery swap operation was performed at the L/U station and took 4 min (the authors in [36] state that this operation takes less than 5 min; we thus chose the first integer value meeting this requirement).
Instance names use the code EX (abbreviation of "example") followed by the job set and layout digits. For example, EX61 represents the instance using job set 6 with layout 1.
Energy Characteristics
To reflect the real behavior of battery discharge onboard AGVs, we distinguished the various AGV activities and their related energy consumption rates using data collected from existing AGV systems [12]. Table 1 lists the amperes consumed by our AGV model in one time unit per activity type, as described in [12] for unit-load AGVs:

Table 1. AGV activity ampere draws.

AGV Activity      | Ampere Draw
Travelling loaded | 60
Travelling empty  | 40
Blocking          | 5

For example, an AGV traveling loaded for six minutes consumes energy equal to: 6 min × 60 amperes = 360 ampere-min = 6 ampere-hours. The battery capacity was fixed at 100 Ah, which is equivalent to a 6 h discharge rate, as mentioned in [12].
The energy consumed when loading/unloading parts is considered null; in the assumed layouts, the AGV passes through a special area that loads/unloads parts onto/from the vehicle automatically upon node arrival.
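Based on the Table 1 draws and the 100 Ah capacity, a small helper can reproduce the worked example above; this is our own illustrative code, not from the paper:

```python
# Ampere draws per activity (Table 1) and the 100 Ah battery capacity.
AMPERE_DRAW = {"loaded": 60, "empty": 40, "blocking": 5}
CAPACITY_AH = 100.0

def consumed_ah(activity_minutes):
    """Energy in ampere-hours for a dict of {activity: minutes spent}."""
    amp_min = sum(AMPERE_DRAW[a] * t for a, t in activity_minutes.items())
    return amp_min / 60.0  # ampere-minutes -> ampere-hours

# Example from the text: 6 min loaded -> 360 A-min = 6 Ah,
# i.e. 6% of the 100 Ah battery.
used = consumed_ah({"loaded": 6})
print(used, 100 * used / CAPACITY_AH)  # 6.0, 6.0
```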
Numerical Results
In the first experiment, two possible cases for energy onboard AGVs were studied: no battery management and opportunity charging (see Section 1.2). First, the energy consumption was monitored by measuring the MBL during the manufacturing process. This allowed us to highlight the gain from using a battery management strategy when applying an AGV scheduling approach in the real world. Then, the behavior of two different battery types was monitored in an opportunity-charge-based system in which AGVs were automatically coupled to battery chargers upon station arrival, during their idle time (i.e., the AGV keeps charging while parked at its last drop-off station). The AGV battery continues charging until the next call, and the quantity of energy replenished depends on several parameters: battery type, idle time duration, the targeted state of charge (SoC) and the current depth of discharge (DoD). Thus, three possible target SoCs for each studied battery type were tested, by jointly exploring the six equations detailed in Section 1.2 and a new Equation (7), to calculate the quantity of replenished energy. The obtained results are presented in Table 2.
Charged energy percentage = (Idle time × Target SoC percentage) / t (7)

The experiments within that table were conducted on a modified version of the proposed GVNS that omits the battery switch level. The benchmark instances' reference lower bounds and the upper bound enhancements provided by Zheng et al. 2014 in [37] are presented in the "LB" and "BKC_max" columns, respectively. The "BFC_max" column contains the best found makespan results obtained by the modified GVNS approach; both "BKC_max" and "BFC_max" are expressed in minutes to comply with the assumptions of Section 3.1. Furthermore, seven columns express the MBLs during the whole working process in seven cases: one column for the results obtained without using a charging strategy, and six columns displaying the findings when employing an opportunity charging strategy on two different battery types (lead acid and lithium-ion), targeting three different SoCs for each battery type (90%, 95% and 100%).
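Assuming Equation (7) divides the product of the idle time and the target SoC by the recharging time t obtained from Equations (1)-(6), a one-line helper illustrates the computation; this reading is our assumption, not necessarily the authors' exact formula:

```python
def charged_energy_percent(idle_time_min, target_soc_percent, t_full_min):
    """Assumed reading of Equation (7): charging for `idle_time_min`
    replenishes a proportional share of the targeted SoC, where
    `t_full_min` is the full recharge time for that target SoC
    (from Equations (1)-(6), depending on the battery type)."""
    return idle_time_min * target_soc_percent / t_full_min
```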
In the last part of the experiments, reported in Table 3, all three scheduling levels of our GVNS model were used (i.e., JSP, AGV and BS schedules). In this series of tests, C_max was minimized while keeping the MBL above 28%, as per [11]. Consequently, benchmark instances in which the MBL already reached this level without battery replenishment were not included in these tests (i.e., if the value in the 5th column of Table 2 is greater than or equal to 28%, the related instance was not considered for this second part of the study). Two different groups of data are provided in this table: 1. Results without battery replenishment: this part reproduces the values obtained in Table 2 for comparison purposes, where the columns "C_max" and "MBL" refer to the "BFC_max" and "MBL without charging" columns, respectively. 2. Results with the battery swap technique: this presents our GVNS model outputs. In addition to the C_max and MBL columns, the "BS count" column refers to the number of battery switches performed to respect the λ level; "AGV_id" indicates the ID of the AGV concerned by the battery switch operation; and the "Status in BS" column describes the status of the AGV while swapping its battery (two values are possible: Empty (or "1") and Loaded (or "2"), as described in the BS string representation in Section 2.1).
Results Discussion
Several remarks can be drawn from the results in Table 2. First, the C_max values obtained using GVNS were almost identical to the best known solutions (BKC_max). An analysis of variance through one-way ANOVA confirmed that there was no significant difference between our GVNS C_max results and BKC_max, as the calculated p-value was almost 90%, far above the 5% threshold.
Additionally, the MBLs without a replenishment technique substantiate previous findings in the literature and further support the idea that battery constraints have a great impact on system throughput [12]. In fact, lines EX103, EX74 and EX104 show that omitting energy usage while scheduling the various tasks may cause one or both AGVs to stop in the middle of a transport operation (as the MBL is lower than zero), involving a huge delay in production, route congestion, degradation of the onboard battery characteristics and extra costs for troubleshooting the failed AGV(s). Furthermore, adopting the opportunity charging technique provides further evidence for our previous deductions. As a matter of fact, the last six columns show that an optimal C_max can be maintained by choosing the right battery type. Additionally, it can be clearly observed that the charge level of lead acid batteries did not improve much when adopting the opportunity charging technique.
Note that stable battery levels for some instances, such as in EX73, were due to the lack of blocking time at the pickup/drop-off stations.
On the other hand, the results of Table 3 further strengthen the case for considering a battery management method. Lines EX84 and EX94 stress that it is possible to preserve the same scheduling quality while managing energy: C_max remained unchanged when adopting a battery replacement strategy. Figures 8 and 9 show, respectively, Gantt diagrams of the instance EX84 before and after using a battery management policy and demonstrate that the scheduling system maintained the same C_max in both situations. Furthermore, an analysis of variance (one-way ANOVA) was conducted to evaluate whether a statistically significant difference exists between the C_max and MBL data groups before and after using the battery swap technique. The obtained p-values (see the last line of Table 3) demonstrate that the C_max results were not significantly changed before and after using the battery swap technique (p-value > 5%), while the MBL values were meaningfully enhanced after using our approach (p-value < 5%). This confirms that our GVNS improved MBL values while keeping the makespan steady.
Finally, the proposed approach demonstrates its ability to minimize the number of battery switch operations: all the proposed schedules have only one BS operation. Additionally, using a single BS station is more beneficial, in terms of installation and maintenance costs, than using a battery charging station at each node, which represents a valuable economic factor.
Conclusions
In this paper, we have presented a novel approach to solve the JSP in an AGV-based manufacturing facility with battery constraints. We studied previous works to highlight their limitations and proposed a GVNS-based metaheuristic that succeeds in preserving the economic quality of the production cells while significantly optimizing energy consumption.
The provided numerical results show that the obtained makespan is very close to the best known solutions. We complemented the existing literature on the topic with new energy-consumption-related results and enhanced the understanding of the feasibility or unfeasibility of previous findings when energy aspects are considered. Our results also put forward the possibility of using an opportunity charge or a battery swap strategy while saving money.
The proposed model is useful to managers for decision making at the operational level, although some potential limitations need to be considered. First, the maximum battery capacity is not constant and is subject to degradation during the charging process [38]; thus, future studies should take into consideration the dynamic nature of this parameter while highlighting battery properties such as the maximum allowable charging/discharging current, allowable warming and cell balancing. Additionally, the battery threshold level parameter (λ), used to prevent batteries from being deeply discharged, was fixed at an arbitrary value, whereas it should be carefully chosen, as it can strongly influence the economic factor by reducing battery cost, which is significant in comparison with energy prices [39]. Finally, it would be wise to integrate other factors such as battery degradation rates, AGV workloads or battery stations' installation and maintenance costs.
Considerable insight has been gained with regard to AGV systems and their usage in different industry types. Some modern companies try to bypass the limitations and logistical problems of battery charging and switching techniques by employing other replenishment systems, such as rail-based charging [40], which can be a very interesting field of investigation, even if this choice still depends on the technical and economical factors of the target system. Additionally, a reactive behavior study, allowing AGVs to manage their own energy replenishment operations with regard to the system's outlined objectives, would also be helpful. Furthermore, additional battery parameters can be monitored during the manufacturing process when considering the opportunity charge strategy, such as the issue of balancing cells and their operating parameters (temperature, charging/discharging currents, service life, etc.). The work described in this paper was conducted within the framework of the joint laboratory "SurferLab", founded by Bombardier, Prosyst and the Université Polytechnique Hauts-de-France. This joint laboratory is supported by the CNRS, the European Union (ERDF) and the Hauts-de-France region.
"Engineering"
] |
Patient-specific neural networks for contour propagation in online adaptive radiotherapy
Objective. Fast and accurate contouring of daily 3D images is a prerequisite for online adaptive radiotherapy. Current automatic techniques rely either on contour propagation with registration or on deep learning (DL) based segmentation with convolutional neural networks (CNNs). Registration lacks general knowledge about the appearance of organs, and traditional methods are slow. CNNs lack patient-specific details and do not leverage the known contours on the planning computed tomography (CT). This work aims to incorporate patient-specific information into CNNs to improve their segmentation accuracy. Approach. Patient-specific information is incorporated into CNNs by retraining them solely on the planning CT. The resulting patient-specific CNNs are compared to general CNNs and to rigid and deformable registration for contouring of organs-at-risk and target volumes in the thorax and head-and-neck regions. Results. Patient-specific fine-tuning of CNNs significantly improves contour accuracy compared to standard CNNs. The method further outperforms rigid registration and a commercial DL segmentation software and yields similar contour quality as deformable registration (DIR). It is additionally 7-10 times faster than DIR. Significance. Patient-specific CNNs are a fast and accurate contouring technique, enhancing the benefits of adaptive radiotherapy.
Introduction
Over the years, advanced radiation delivery paradigms such as intensity-modulated radiotherapy, volumetric modulated arc therapy and intensity-modulated proton therapy have increased the dose conformality to the tumor, resulting in improved healthy tissue sparing (Lomax 1999, Bortfeld 2006, Otto 2008, Tran et al 2017, Moreno et al 2019). However, daily set-up variations and longitudinal anatomical changes throughout the treatment, such as weight loss and tumor shrinkage, result in differences between the planned dose and the delivered dose. This can lead to target coverage degradation for highly conformal radiotherapy that may impact tumor local control. The effect is especially apparent for proton therapy, because the depth of the proton dose peak is highly dependent on the tissue densities along the beam path, which change with changing anatomy (Lomax 2008, Zhang et al 2011). Uncertainties in set-up, anatomy and range are accounted for in the planning process either by applying margins around the clinical target volume (CTV) (Albertini et al 2011) or by incorporating the uncertainties directly using robust optimization (Liu et al 2012, Unkelbach et al 2018). However, both techniques result in an increased dose to normal tissue, reducing the advantage of conformal radiotherapy. With online adaptive radiotherapy, the set-up and anatomical uncertainty can be strongly reduced. The daily treatment plan is reoptimized based on a 3D daily image taken shortly before the treatment (Yan et al 1997, Lim-Reinders et al 2017, Albertini et al 2020, Paganetti et al 2021). The consequent reduction of uncertainty increases the plan conformality and, hence, the sparing of healthy tissue. Online plan adaptation is a time- and resource-intensive process, as it requires the repetition of several planning steps for every fraction. In particular, it requires organ-at-risk (OAR) and target volume delineation on the new images, plan evaluation, adaptation, reoptimization and quality assurance (QA). To be effective, all these steps need to be executed in several minutes because the time between the image acquisition and the treatment needs to be as low as reasonably possible to ensure high correspondence between the image and the treated anatomy. Furthermore, faster adaptation shortens the patient's overall treatment time and therefore increases patient comfort.
The time required for online adaptation calls for automation of each sub-process with as few manual interventions as possible. The most resource-intensive step is daily contouring, so there is great interest in automating it with sufficient accuracy and robustness (Lim-Reinders et al 2017). It can be automated in two distinct ways: automatic segmentation or registration.
Firstly, state-of-the-art segmentation is usually based on deep learning (DL) with convolutional neural networks (CNNs), which learn to segment medical images based on large datasets with manually annotated contours (Chen et al 2021, Nikolov et al 2021). The advantages of these methods are that they are fast, consistent and yield accurate results. On the downside, they require large amounts of annotated data to train and do not always generalize well to out-of-distribution data, e.g. scans that are significantly different from the training data. Furthermore, their applicability for tumor and target volume segmentation is limited (Kosmin et al 2019, Liu et al 2021). CNNs do not require manual contours for the patient under study, which is an advantage for segmentation in general. However, in adaptive therapy, such a reference annotation is always available, i.e. on the planning CT, containing information that is not used in the automatic segmentation of the daily scans. Another set of methods relies on image registration for contouring (Thor et al 2011, Kumarasiri et al 2014, Elmahdy et al 2019). Specifically for adaptive therapy, the manual contours on the reference CT can be propagated to the daily scan by registering the former to the latter and applying the same transformation to the reference contours. The main advantage is that this technique does not require a large training dataset. The disadvantage is that it requires at least one annotated scan per patient and that traditional techniques are slow compared to auto-contouring with CNNs (Klein et al 2009, Costea et al 2022). Furthermore, when anatomical changes occur, deformable image registration (DIR) is needed, which is an ill-posed problem requiring careful hyperparameter tuning and algorithm choice to achieve high performance (Brock et al 2017).
To overcome the long runtime of traditional DIR algorithms, recent works have proposed image registration with deep learning (Fu et al 2020, Haskins et al 2020, Xiao et al 2020). Instead of iteratively optimizing a similarity metric, these CNNs are trained to directly predict the deformation, which strongly reduces the runtime. However, despite their great potential, these techniques have not yet achieved the same performance as iterative algorithms (Fu et al 2020).
Both registration and segmentation have their advantages and disadvantages. On the one hand, iterative deformable registration is slow and can be unreliable in case of large anatomical changes or mass variations (Oh and Kim 2017, Brock et al 2017). On the other hand, CNNs can fail on out-of-distribution data and cannot accurately segment tumors, so they cannot be employed in adaptive therapy without time-consuming manual checks and adjustments made by clinicians. However, by including the information from the (contoured) planning CT in the CNN, its robustness can be increased because the daily images are closely related to the planning CT, so the distribution of the CNN is likely to encompass the daily images. This can be achieved by (re-)training the CNN on the planning CT, also known as patient-specific fine-tuning, which has been explored for prostate cancer on MR and CT (Elmahdy et al 2020, Fransson et al 2022), for a single OAR in the head region on CT (Chun et al 2021) and for brain white matter segmentation on MR (Jansen et al 2020).
Whereas all works report a strong improvement of the quality of the CNN by patient-specific fine-tuning, their implementation details differ and the results are specific to a single anatomical site. A rigorous comparison of this technique to other auto-contouring methods for adaptive therapy has not yet been performed, and it is therefore unclear whether it is usable and optimal.
In this work, we train patient-specific CNNs for automatic contouring in online adaptive proton therapy (PT) and compare this technique to general segmentation networks and registration-based contour propagation for patients with head and neck cancer (HNC) and non-small cell lung cancer (NSCLC). Our work differs from previous publications in the following ways:
• It uses transfer learning, as in Elmahdy et al (2020), Jansen et al (2020), but updates all parameters of the CNN instead of a subset, which enhances the learning capabilities.
• It uses affine and elastic deformations along with noise addition as data augmentations to mimic the set-up and anatomical variations happening in adaptive therapy. This further prevents overfitting, making the quality of the contours less sensitive to the number of training steps during the retraining (see the sketch after this list).
• The technique is evaluated for different anatomical sites. The HNC patients are representative of small anatomical changes whereas the NSCLC patients undergo larger anatomical deformation, therefore also covering a large spectrum of relevant clinical deformations in adaptive radiotherapy. Additionally, both OAR and CTV segmentation is tested.
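As a rough sketch of such patient-specific fine-tuning with augmentation, the following Python code uses PyTorch and the TorchIO library; the model, hyperparameters and transform settings are illustrative assumptions, not the exact training set-up of this work:

```python
import torch
import torchio as tio  # assumes the TorchIO library for 3D augmentation

def fine_tune(model, planning_ct, planning_labels, steps=200, lr=1e-4):
    """Patient-specific fine-tuning sketch: retrain *all* CNN parameters
    on the single planning CT, with affine/elastic/noise augmentations
    mimicking set-up and anatomical variations (details are illustrative)."""
    augment = tio.Compose([
        tio.RandomAffine(scales=0.1, degrees=10, translation=5),
        tio.RandomElasticDeformation(),
        tio.RandomNoise(std=0.02),  # noise is applied to the CT only
    ])
    subject = tio.Subject(
        ct=tio.ScalarImage(tensor=planning_ct),    # (1, X, Y, Z) tensor
        seg=tio.LabelMap(tensor=planning_labels),  # (C, X, Y, Z) tensor
    )
    opt = torch.optim.Adam(model.parameters(), lr=lr)  # all parameters
    loss_fn = torch.nn.BCEWithLogitsLoss()
    model.train()
    for _ in range(steps):
        aug = augment(subject)                     # new variation each step
        x = aug.ct.data.unsqueeze(0).float()       # add batch dimension
        y = aug.seg.data.unsqueeze(0).float()
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model
```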
Materials and methodology
This section describes the different methods for contour propagation used in this study. First, the datasets used for training and evaluation are presented. Then, the registration and segmentation-based methods are described, followed by a short description of the evaluation metrics.
Datasets
This work is based on three datasets. The first dataset is from the Center for Proton Therapy (CPT) in Switzerland and contains patients treated with proton therapy between 2013 and 2021. A total of 388 patients with various indications were included, all having at least one planning CT with or without replanning CTs, yielding a total of 464 scans with annotations. Depending on the tumor location, different OARs were contoured manually by expert medical personnel, resulting in a large variation in the number of ground truth labels for each OAR (table 1). As none of these patients underwent online adaptive therapy, this dataset is solely used to pretrain the segmentation models (see section 2.3). In the remainder of this paper, this dataset will be referred to as the CPT dataset.
The second dataset consists of five patients with non-small cell lung cancer (NSCLC), not included in the CPT dataset. This data has previously been described in Josipovic et al (2016), Nenoff et al (2020), Amstutz et al (2021), Nenoff et al (2021). Each patient has one planning and nine repeated voluntary deep breath hold CTs. The repeated CTs were acquired on three different days, each day consisting of three different acquisitions. However, for this study, we will consider each CT to be representative of a different fraction in online adaptive therapy. All CTs were retrospectively recontoured by expert radiation oncologists according to the clinical protocol (Nenoff et al 2021), which included propagating the planning contours with DIR and slice-wise manual adjustments, either in Eclipse or Velocity (Varian Medical Systems, Palo Alto, USA). This dataset will be referred to as the NSCLC dataset.
The last dataset consists of five patients with various indications of head and neck cancer treated with proton therapy at the CPT. Each patient has a planning CT and 4 to 7 repeated CTs acquired on separate days throughout the treatments. All patients were removed from the CPT dataset so that they were not included in pretraining the networks. Even though these patients were not treated with online adaptive therapy, the repeated CTs are representative of the daily and longitudinal anatomic and set-up variations to be expected during online adaptive therapy. The repeated CTs were retrospectively recontoured by expert radiation oncologists according to the same clinical protocol as the NSCLC scans. We will refer to this data as the HNC dataset.
Registration based methods
In registration-based contour propagation, the reference CT is considered the moving scan, which is registered to the daily CT, i.e. the fixed scan. This registration results in a deformation vector field (DVF), which is used to interpolate the binarized reference contours and transfer them to the daily scan. In this work, we consider two distinct registration techniques.
Rigid registration
The first registration method relies on rigid registration (RR), i.e. the reference CT is only translated and rotated to match the daily CT. In case the anatomy is not strongly deforming (such as the head), this technique can be preferred because of its simplicity, speed and consistency. More specifically, we employ rigid registration implemented in elastix (Klein et al 2010) with mean squared error (MSE) as similarity criterion and four consecutive resolutions.
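As a rough illustration of this pipeline, the sketch below uses SimpleITK rather than the elastix implementation actually employed in the study (Klein et al 2010); the optimizer settings are placeholders and not the study's configuration, and only the MSE criterion, the four resolutions and the nearest-neighbor label warping follow the description above.

```python
import SimpleITK as sitk

def propagate_contours_rigid(daily_ct, reference_ct, reference_labels):
    """Rigidly align the reference CT to the daily CT and warp the contours."""
    # Initialize by aligning the geometric centers of the two scans.
    initial = sitk.CenteredTransformInitializer(
        daily_ct, reference_ct, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMeanSquares()  # MSE similarity criterion
    reg.SetOptimizerAsGradientDescent(learningRate=1.0,
                                      numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    reg.SetInitialTransform(initial, inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)
    # Four consecutive resolutions, coarse to fine.
    reg.SetShrinkFactorsPerLevel([8, 4, 2, 1])
    reg.SetSmoothingSigmasPerLevel([4, 2, 1, 0])

    transform = reg.Execute(daily_ct, reference_ct)  # fixed, moving

    # Interpolate the binarized reference contours onto the daily CT grid;
    # nearest-neighbor interpolation keeps the labels binary.
    return sitk.Resample(reference_labels, daily_ct, transform,
                         sitk.sitkNearestNeighbor, 0)
```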
Deformable registration
The second registration method is deformable image registration (DIR), which is the preferred method for contour propagation in the case of deforming anatomy. The downside of DIR is that the problem is ill-posed, so that different algorithms and even different hyperparameter choices can lead to strongly different results (Brock et al 2017). In this work, we use the b-spline algorithm implemented in plastimatch (Sharp et al 2010) with MSE as similarity criterion. A detailed description of the hyperparameters can be found in (Nenoff et al 2021). Several other DIR algorithms were also tested, but for the sake of clarity we focus on this one, as it led to good results compared to the other DIR algorithms and is publicly available.
Segmentation based methods
We train deep CNNs for the task of contour propagation in adaptive radiotherapy in two different settings: pretrained (or general) and patient-specific. All networks are based on the 3D UNet architecture, which takes the daily CT as input and outputs a set of segmentation maps S, each map corresponding to an OAR or target volume (TV). The network has 16 initial convolutional filters, which are doubled in each of the four encoder blocks. Max pooling with kernel size and stride 2 is used for downsampling between the encoders. Four decoders upsample the features back to the original resolution with nearest-neighbor interpolation. All encoders and decoders consist of two convolutions with kernel size 3 × 3 × 3, each followed by a rectified linear unit activation. A final convolution with kernel 1 × 1 × 1 converts the 16 features into a set of organ-specific activation maps. All networks are trained with binary cross-entropy, i.e. the segmentation allows each voxel to be part of multiple labels. Even though organs generally do not overlap, this makes it easy to handle sparsely annotated scans, i.e. scans on which some organs are visible but were not segmented by the medical personnel because they were irrelevant to the planning. To still leverage all contours in the dataset, the loss function is adjusted so that it ignores the loss contributions from labels that were not manually segmented.
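The loss masking for sparsely annotated scans can be expressed compactly. The following is a minimal sketch assuming PyTorch, where `annotated` is a hypothetical per-scan indicator of which labels were manually contoured; it is not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def masked_bce(logits: torch.Tensor, targets: torch.Tensor,
               annotated: torch.Tensor) -> torch.Tensor:
    """logits/targets: (batch, labels, D, H, W); annotated: (batch, labels)."""
    # Per-voxel binary cross-entropy: each voxel may carry multiple labels.
    loss = F.binary_cross_entropy_with_logits(logits, targets,
                                              reduction="none")
    # Ignore the loss contributions of labels without manual contours.
    mask = annotated.float()[:, :, None, None, None].expand_as(loss)
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)
```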
Pretrained neural network
The pretrained neural networks (PNN) are first trained on the relatively large CPT dataset. The models are trained from scratch with the Adam optimizer for 200 epochs and an initial learning rate of 10^-3, which is halved every 20 epochs. Early stopping is applied by retaining the model with the lowest loss on the validation set (10% of the patients). All scans are resampled to a fixed resolution of 0.97 × 0.97 × 2 mm, and data augmentations include random cropping, rotations within ±5°, ±5% scaling, small localized elastic deformations (Isensee et al 2020) and Gaussian noise with σ² = 10^-4. Note that these networks do not segment any of the target volumes, because the dataset contains a wide variety of indications and previous work has shown the poor quality of CNNs for target volume segmentation (Kosmin et al 2019, Liu et al 2021). We train two networks, one specific for the OARs in the head and neck region, i.e. the pretrained HNC network, and one for the OARs in the lung region, i.e. the pretrained NSCLC network. After training on the CPT dataset, the models are in a second step retrained on the HNC and NSCLC datasets themselves. The evaluation is done with leave-one-out validation, i.e. the pretrained model is retrained on 4 out of 5 patients of either the HNC or NSCLC dataset and the retrained model is evaluated on the remaining patient. In that way, the pretrained model has still never seen the anatomy of the patient under study and should therefore generalize what it has learned from other patients. The retraining parameters are similar to the initial training parameters, but the magnitude of the data augmentations was increased to ±10° rotations, ±10% scaling and σ² = 10^-2 Gaussian noise to avoid overfitting the very small dataset.
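For concreteness, the stated optimization schedule maps directly onto standard PyTorch components. The sketch below assumes hypothetical `model`, `train_loader`, `val_loader` objects and helper functions; it illustrates the schedule and early-stopping logic only, not the authors' training code.

```python
import torch

# Adam with initial learning rate 1e-3, halved every 20 epochs.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)

best_val = float("inf")
for epoch in range(200):
    train_one_epoch(model, train_loader, optimizer)  # hypothetical helper
    val_loss = validate(model, val_loader)           # hypothetical helper
    scheduler.step()
    # Early stopping: retain the model with the lowest validation loss.
    if val_loss < best_val:
        best_val = val_loss
        torch.save(model.state_dict(), "best_model.pt")
```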
Patient-specific neural network
During online adaptive therapy, clinically accepted contours on the planning CT are always available because they were used for the initial planning. The pretrained networks, however, do not leverage these. To include this prior information, we fine-tuned the pretrained networks by retraining the networks only on the reference CT (Elmahdy et al 2020, Chun et al 2021), yielding patient-specific neural networks (PSNN). This retraining results in overfitting of the network to the reference CT, but because the reference CT is very similar to the daily CTs, it can be expected that this overfitted network still performs better than the generalizing pretrained networks. Further, to avoid complete overfitting, training is restarted with a lower learning rate of 10^-4 and, since there is only one scan in the training set, runs for 50 000 epochs. Data augmentations are the same as for the initial pretraining, with the exception of a stronger Gaussian noise with σ² = 10^-2.
Pretraining a network for target volume segmentation is very difficult and would require a lot of data. However, this does not mean that the target volumes (TV) cannot be segmented with deep CNNs. Similar to the fine-tuned models, we can train a neural network solely on the reference CT, which contains the TVs contoured by a clinician. This is commonly referred to as one-shot image segmentation (Shaban et al 2017). In contrast to the fine-tuned models, the training cannot restart from a pretrained neural network that is already able to segment TVs. It is however possible to leverage some prior information during one-shot learning by means of transfer learning (Weiss and Khoshgoftaar 2016), which has shown promising results in e.g. video segmentation (Caelles et al 2017). With transfer learning, the network is first trained on a different task than it is intended for (e.g. lung segmentation). In a second step, the network is then retrained on the original task (e.g. TV segmentation), starting with the weights from the other training. Here, we take the models pretrained on the OARs and use transfer learning to segment the CTV. We restart the training from the final weights of the pretrained models for all layers except the final convolution, as this convolution creates organ-specific maps which are not informative for the TVs.
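A sketch of this transfer-learning step, assuming PyTorch and a hypothetical `final_conv` attribute name for the output layer of the UNet; everything except the organ-specific output layer restarts from the pretrained weights.

```python
import torch
import torch.nn as nn

# Restart from the OAR-pretrained weights.
model.load_state_dict(torch.load("pretrained_oar_model.pt"))

# Replace the final 1x1x1 convolution: its organ-specific activation maps
# are not informative for the target volumes.
n_target_volumes = 1  # e.g. the CTV; placeholder value
model.final_conv = nn.Conv3d(16, n_target_volumes, kernel_size=1)

# All layers (pretrained and new) are then retrained on the reference CT
# only, i.e. one-shot learning.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```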
Commercial segmentation
Finally, the trained CNNs are also compared to a clinically used commercial auto-contouring software, Limbus Contour 1.7 (Limbus AI Inc., 2076 Athol Street, Regina, SK S4T 3E5, Canada). This software has been clinically validated and has been shown to only rarely require manual adjustments of OARs (Wong et al 2020, D'Aviero et al 2022).
Evaluation methods
The performance of the above-mentioned contour propagation methods is evaluated on the HNC and NSCLC datasets by comparing the results with the manually annotated contours on the repeat CTs. We use three well-known geometric metrics for this comparison. First, the dice coefficient evaluates the overlap between the manual and propagated contours. The dice coefficient is however strongly dependent on the size of the structure and is therefore difficult to compare across organs of different sizes. To alleviate this effect, we also include the surface dice, which represents the proportion of the organ surface that lies within a tolerance of the surface of the manually annotated organ (Nikolov et al 2021). We set this tolerance to 2 mm. Both dice and surface dice coefficients give insight into the average difference between segmentations. To also assess the maximal error, we evaluate the 95th percentile of the Hausdorff distance (HD95). A Wilcoxon signed rank test is performed between each method and the patient-specific NNs to test whether they perform significantly better or worse than the other methods.
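As an illustration, the dice overlap and the paired significance test can be computed as below, assuming numpy and scipy. The surface dice (2 mm tolerance) and HD95 additionally require surface-distance computations that are omitted here, and the per-scan score arrays are hypothetical.

```python
import numpy as np
from scipy.stats import wilcoxon

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice overlap between two boolean masks."""
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())

# Paired Wilcoxon signed rank test between per-scan scores of the
# patient-specific NN and another method (hypothetical arrays).
stat, p_value = wilcoxon(scores_psnn, scores_other)
significantly_different = p_value < 0.025  # 2.5% significance level
```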
Two preliminary experiments are performed on the NSCLC dataset to highlight the differences between the proposed method and previous works. Firstly, we compare our approach (i.e. fine-tuning all weights of the network) to fine-tuning only the final layer, as proposed by Elmahdy et al (2020) and Jansen et al (2020); see the sketch below. Secondly, we evaluate the importance of using data augmentations by comparing our approach to fine-tuning without data augmentations.
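To make the first comparison concrete, the two fine-tuning strategies differ only in which parameters the optimizer updates; a sketch under the same PyTorch assumptions as above (`final_conv` is again a hypothetical attribute name).

```python
import torch

# (a) Our approach: fine-tune all weights of the pretrained network.
optimizer_all = torch.optim.Adam(model.parameters(), lr=1e-4)

# (b) Final-layer fine-tuning (cf. Elmahdy et al 2020, Jansen et al 2020):
#     freeze the backbone and update only the last convolution.
for param in model.parameters():
    param.requires_grad = False
for param in model.final_conv.parameters():
    param.requires_grad = True
optimizer_last = torch.optim.Adam(model.final_conv.parameters(), lr=1e-4)
```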
Preliminary experiments
Fine-tuning all weights of the network improves the segmentation compared to only fine-tuning the last layer (table 2). This means that increasing the learning capability by retraining all weights indeed improves the performance of the network.
Including data augmentations during fine-tuning increases contouring accuracy (figure 1). For all patients, the maximum dice score during training is higher with data augmentation than without. Moreover, training with data augmentations avoids overfitting, i.e. the segmentation accuracy on the repeated CTs first increases and then stagnates, without significantly decreasing at the end of the training. In contrast, without data augmentations, the accuracy reaches a maximum after which it steadily decreases. In practice, a fixed number of iterations needs to be defined. When training without data augmentations, the iteration at which the dice score is maximal depends on the patient (figure 1): here, patient 1 reaches its maximal dice after 700 iterations, whereas for patient 3 this is 3500. Therefore, selecting a fixed number will result in suboptimal performance for some patients. In contrast, when training with data augmentations, the number of iterations can simply be set high (50 000 in our case) as the quality stagnates.
Contouring accuracy
Regarding the OAR contours, rigid registration (RR) generally performs worst of all methods for the NSCLC dataset (figure 2), except for the spinal cord. This is because RR aligns the spine well, and, hence, the spinal cord is also accurately contoured. The pretrained NN achieves better contour accuracy but suffers from outliers with low performance for the lungs and esophagus. This happens when the network is evaluated on out-of-distribution data, i.e. data that is significantly different from the training data. Because the training set is small for these OARs (table 1), the probability of this is indeed larger than for the more frequently occurring OARs. The commercial system consistently outperforms the pretrained NN and is especially more robust, i.e. it suffers less from outliers. The large HD95 in the lungs in some cases is due to the presence of a tumor, which, depending on its location, the annotator or the method, is included in or excluded from a contour. This has only a limited effect on the dice and surface dice, but strongly affects the HD95.
Fine-tuning the segmentation networks on a specific patient strongly improves the segmentation accuracy of the OARs, significantly outperforming rigid registration, the pretrained NN and the commercial contouring software for all OARs (figure 3). Note that we only show the significance test results for the surface dice, but similar results are found for the dice and HD95. Fine-tuning also resolves the outliers, because training on the planning CT avoids running the network on out-of-distribution data. The contour quality is similar to DIR for the lungs, significantly lower for the heart and esophagus, and significantly better for the spinal cord (figure 3).
In order to assess whether the obtained performance of the patient-specific NNs is clinically acceptable, it can be compared to the variability between the contours drawn by different observers, i.e. the inter-observer variability. This variability was not studied here, but has been quantified in other works for the relevant OARs in the thorax region (Yang et al 2018). It is important to note that the values are organ and image-modality-specific, as the metrics are strongly affected by the volume or contrast of the organ. The reported inter-observer dice scores were 0.96 for the lungs, 0.93 for the heart, 0.81 for the esophagus and 0.86 for the spinal cord. These values are very close to the dice scores for the patient-specific NNs and DIR (figure 2).
Regarding target segmentation, the patient-specific NNs perform significantly better than RR, but significantly worse than DIR (figure 3). However, these differences are small and only significant for the surface dice, not for the dice. For one patient, the patient-specific NN has much lower contour quality. In this patient, the shape of the tumor changed throughout the treatment, causing the manual delineations to deviate significantly from the reference. Whereas the performance of DIR is also low for this patient, the drop in quality is less pronounced. Such strong outliers did not occur for the patient-specific OAR segmentation, which indicates that one-shot segmentation lacks robustness because of its limited general knowledge.
Most general trends found for the NSCLC data also hold for the HNC dataset (figure 4). The main difference is that RR performs much better. For OARs in the head, close to the skull (e.g. brainstem, chiasm, hippocampus), RR performs as well as the more advanced methods (figure 5), because it matches the skull accurately and the rigid assumption is applicable there. In contrast, for OARs further from the skull (e.g. spinal cord, thyroid), RR performs badly because the rigid transformation matching the skull is not valid there. Lastly, for organs that change strongly during radiotherapy (e.g. parotid glands), RR sometimes performs very badly.
Despite the larger OAR dataset in the HN region compared to the thorax (table 1), the performance of the pretrained NN is still low. This is most apparent for the smaller OARs (e.g. lacrimal gland, optic nerve, chiasm). Again, the commercial system outperforms the pretrained NN. The patient-specific NNs significantly outperform all other methods (including DIR) for all organs in general (figure 5). However, for the individual organs, we find that the difference is not always significant and that segmentation of the optic nerves is even better with registration and the commercial segmentation.
Several other works have investigated the inter-observer variability for OARs in the head and neck region (Deeley et al 2011, Brouwer et al 2012, Mattiucci et al 2013, Verhaart et al 2014, Tao et al 2015, van der Veen et al 2019, Wong et al 2020). Whereas the stated values vary between the publications because of differences in experimental set-up, we found that the mean dice scores of patient-specific NNs and DIR here are similar to or even higher than the reported inter-observer variabilities for all OARs except the thyroid.
Figure 3. Overview of the Wilcoxon signed rank test results for the surface dice of the contours in the NSCLC dataset. Green: patient-specific NN performs significantly better than the method. Red: patient-specific NN performs significantly worse than the method. White: the performance of the method is not significantly different from the PSNN. Grey: the method does not segment the structure. The significance level is set to 2.5%.
Contouring of the main CTV works best with DIR, followed by patient-specific NNs and rigid registration. Rigid registration does not work well because the CTV covers part of the neck region, where significant shrinkage occurred for these patients, which cannot be captured with rigid transformations. The improvement of the patient-specific NNs compared to rigid registration is only significant based on the dice score, but not for the surface dice (figure 5). For the boosted region, rigid registration works well and even significantly better than the patient-specific segmentation, as this region is inside the head, close to the skull.
Contouring speed
The runtime of the algorithms depends strongly on the hardware and potential GPU acceleration. Image registration in plastimatch and elastix runs on CPU, and the runtime is evaluated on a Linux-based system with 8 Intel Xeon E3-1240 v5 CPU cores. The runtime of the in-house trained NNs is evaluated by running inference on an Nvidia Quadro P6000 GPU, and the commercial software was run on an Nvidia RTX 3060 GPU.
Rigidly registering the CTs takes approximately the same time as running inference of the in-house trained CNNs. The commercial segmentation software is approximately 2 times slower, but still significantly faster than DIR, which is 7-10 times slower than rigid registration (table 3). Note that several DIR methods with GPU acceleration exist, which could lead to a significant speed-up (Gu et al 2010, Weistrand and Svensson 2015). Whereas the runtime of the DIR is likely acceptable, the speed of the other methods offers an advantage for patient comfort and for the correspondence between CT and treated anatomy in a particularly time-dependent setting such as adaptive therapy.
Figure 5. Overview of the Wilcoxon signed rank test results for the surface dice of the contours in the HNC dataset. Green: patient-specific NN performs significantly better than the method. Red: patient-specific NN performs significantly worse than the method. White: the performance of the method is not significantly different from the PSNN. Grey: the method does not segment the structure. The significance level is set to 2.5%.
Figure 6 visualizes the trade-off between speed and accuracy. Patient-specific NNs lie on the Pareto front for both the HNC and NSCLC datasets, i.e. none of the other methods can improve accuracy without increasing runtime nor improve runtime without reducing accuracy. For NSCLC, DIR also lies on the Pareto front, yielding slightly higher accuracy but slower runtime. For HNC, the Pareto front is shared with RR, which is faster but yields lower accuracy.
Discussion
Our results show that DIR generally yields the most accurate contours for the targets and OARs in the thorax region. In contrast, the patient-specific NNs are best for OARs in the head and neck region. The differences are however small and not always significant for all metrics. The PSNN is, furthermore, on average 10 times faster, which is advantageous in adaptive therapy. For HNC specifically, rigid registration is both fast and accurate for structures close to the skull, but its accuracy is lower for those far from the skull, which can lead to unacceptable degradation in target coverage.
Although both patient-specific NNs and DIR lead to high-quality contours, they do not perfectly correspond to the manual ones. This can be due to limitations of the methods, but also due to inaccuracies in the manual contours, as it has been shown that substantial intra- and inter-observer variability exists in the delineation of HNC and NSCLC (Zhang and Huang 2022). The PSNN and DIR methods reach accuracies similar to the inter-observer variabilities found in the literature, which impose an upper bound on the average achievable accuracy. This further means that the methods perform similarly to a human, indicating that they can be used directly in adaptive therapy.
In order to meticulously evaluate the use of contour propagation methods for adaptive therapy, the effect on the dose and the corresponding biological effect should be analyzed. Treatment plans reoptimized on automatically propagated contours should be compared to plans reoptimized on manual contours, and these dosimetric differences have to be interpreted clinically before implementation in the clinic. This is the subject of current work at CPT. The size of the evaluation datasets is relatively small, mainly because manual delineation of all daily CTs is a time-consuming process. For the NSCLC dataset, the clinicians recontoured completely manually because of the limited number of OARs. As the number of OARs in the HNC region is much larger, the clinicians manually adjusted contours propagated from the reference using DIR, in accordance with the current clinical protocol for replanning. Even though this creates a bias, the resulting contours are clinically acceptable, and the DIR algorithm used to create these initial contours was different from the one used in this study.
The quality of the pretrained NN for the HNC dataset is low, even though the training dataset is relatively large. Especially for the smaller organs, the segmentation accuracy is largely insufficient. This could be due to the large number of OARs segmented by a single network. During training, the loss function is only slightly affected by these small organs, similar to class imbalance. This could result in the network favoring accurate segmentation of the larger structures over the smaller ones. This can be overcome by simply training one network for each structure. Even though this would lead to an increase in runtime, inference could still be parallelized or hierarchical approaches could be employed instead of splitting the image in patches (Shaheen et al 2021).
This analysis relies on the presence of daily CT scans, which requires an in-room CT. Although such an in-room CT is present at several proton therapy centers, gantry-mounted CBCT scanners are more prevalent. In the future, daily MRI scans might also be used. The registration-based methods could easily be adjusted to allow multi-modal registration between CBCT/MRI and CT to propagate the contours. Further, a general segmentation network for CBCT/MRI could also be developed if an appropriate dataset is available. The patient-specific fine-tuning cannot be applied directly with CBCT/MRI. However, it can be applied on synthetic CTs, which are produced from the daily CBCT/MRI to reoptimize the plan in adaptive therapy. Although it is expected that the networks will work on such synthetic CTs, the quality of the contours will have to be evaluated.
Conclusion
In this work, patient-specific CNNs were compared to general CNNs and (deformable) registration for the task of contour propagation in adaptive radiotherapy. We found that patient-specific fine-tuning leads to higher-quality contours than general segmentation networks, reaching similar quality to DIR but with a significant reduction in runtime. Fine-tuning further allows target volume segmentation, which is not yet feasible with general CNNs.
"Medicine",
"Computer Science"
] |
Novel Models of Crohn’s Disease Pathogenesis Associated with the Occurrence of Mitochondrial Dysfunction in Intestinal Cells
Crohn’s disease remains one of the challenging problems of modern medicine, and the development of new, more effective, and safer treatments for it is a dynamic field of research. To make such developments possible, it is important to understand the pathologic processes underlying the onset and progression of Crohn’s disease at the molecular and cellular levels. In recent years, the involvement of mitochondrial dysfunction and the associated chronic inflammation in these processes has become evident. In this review, we discuss the published works on pathogenetic models of Crohn’s disease. These models make it possible to study the role of mitochondrial dysfunction in the disease pathogenesis and advance the development of novel therapies.
Introduction
Crohn's disease (CD) is a chronic disease that belongs to the group of inflammatory bowel diseases (IBD). Along with ulcerative colitis (UC), CD occupies one of the leading positions among diseases of the digestive system in terms of disease severity, frequency of complications and number of fatal cases [1]. CD can affect any part of the digestive system from the oropharynx to the anus and can involve all layers of the intestine [1]. The inflammatory process in the affected intestine is not homogeneous, with healthy areas of tissue alternating with affected sites [1]. Complications arising in CD can affect not only the gastrointestinal tract, but also other organs: the eyes, joints, skin, and liver [2]. Current therapy for CD is mainly aimed at reducing the excessive inflammatory response. The main types of drugs used for treatment of this disease are 5-aminosalicylic acid derivatives, cytostatics, corticosteroid hormones, and monoclonal antibodies [3][4][5].
Crohn's disease is most common in economically developed countries, with the United States and the United Kingdom being the leaders with the highest CD prevalence rates [1]. This makes CD one of the major public health challenges in countries with high GDP. An additional feature of CD prevalence is the increased incidence of this disease among people aged 20-40 years, the most active and able-bodied period of life, which leads to a significant economic impact of this condition. Moreover, the disease can have a heavy psychological impact on the patient [4]. Available data show that CD is more common among residents of large cities than in people living in the countryside [1], which points to a possible environmental factor in CD development. It has been shown that mutations in certain genes may be responsible for the occurrence of CD [5]; this is, however, not the only etiological factor of the disease.
Currently, the dominant hypothesis of CD pathogenesis is the development of an aggressive immune response to the gut microbiome in genetically predisposed individuals [6]. The resulting acute inflammation affects the intestinal wall and moves into a chronic phase over time [6]. Identifying the different pathways of CD development is an important step in the fight against the disease. One likely cause of chronic inflammation is mitochondrial dysfunction. A possible role of mitochondrial dysfunction in the chronification of inflammation has been shown for diseases such as type 1 diabetes mellitus [7] and atherosclerosis [8]. Mitochondrial dysfunction has also been described in CD [9]. In this review, we focus on the possible role of mitochondrial dysfunction as one of the factors in CD pathogenesis and propose models of CD pathogenesis.
Etiology of Crohn's Disease
Among the causes of CD, no single factor solely responsible for the development of the disease can be identified. Most likely, the etiology of CD is associated with the interaction of several factors, which complicates the understanding of the initial stages of the disease pathogenesis. However, to date, several key etiological factors of CD development have been identified: nutrition, smoking, intake of some types of drugs, gene mutations and intestinal dysbiosis.
Diet
The increased prevalence of CD in economically developed countries may be linked to lifestyle, particularly the diet predominantly followed in these countries [10]. Epidemiological studies have shown that the American diet, with a high intake of fats and carbohydrates and a low intake of fiber, leads to an increased risk of CD development [11]. In addition, dietary antigens, along with bacterial antigens, are the most common in the intestine, which supports the evidence of the diet's contribution to CD development [11]. Possible mechanisms by which food antigens contribute to CD development include changes in gene expression, modification of gut microbiome composition and effects on intestinal wall permeability [12]. Additionally, an unhealthy diet can lead to obesity, which increases the risk of CD [11]. In one study [13], CD patients were found to consume significantly more carbohydrates than those in the control group. Additionally, the researchers found that patients with a high CD activity index (CDAI > 150) had a higher amount of carbohydrates in their diet than patients in remission. In addition to carbohydrates, an effect of proteins on CD development is possible: for example, in the study [14], a positive correlation was shown between increased consumption of animal protein and the occurrence of CD among Japanese residents.
Smoking
A study conducted in France showed that the proportion of active disease course in CD patients who did not smoke was lower than in light smokers (up to 10 cigarettes per day) and heavy smokers (more than 10 cigarettes per day): 33% versus 38% and 41%, respectively [15]. In another study [16], smoking was also shown to increase the risk of surgery in CD patients: smokers had a 20% higher risk than non-smokers. In addition to influencing the occurrence and progression of CD, smoking can affect the body's sensitivity to the effects of drugs. The analysis of clinical outcomes revealed that periodic use of infliximab was effective in 73% of non-smoking patients but in only 22% of smokers [17]. The question was also investigated whether quitting smoking would have a beneficial effect on the disease course in CD patients. In the group of patients who quit smoking (verified by testing for nicotine content in the urine), the risk of exacerbation of the disease was lower than in smokers; at the same time, there was no significant difference between those who quit and non-smokers [18].
Several studies have shown that smoking increases the risk of CD development; however, the exact mechanisms of this influence remain unknown. Four possible smoking targets that may lead to CD have been proposed: the gastrointestinal microbiota, the gut immune system, intestinal epithelium integrity and epigenetic influence [19]. Of the approximately 4500 compounds present in tobacco smoke, about 150 can have a toxic and carcinogenic effect on the organism, among them dioxins, which have a proven immunomodulatory effect [19]. Some studies have provided indications of possible mechanisms of smoking effects on CD development. For instance, in a multifactorial study in which patients were stratified by disease severity, it was shown that the proportion of Bacteroides-Prevotella, which are conditionally pathogenic bacteria, was higher in smokers than in non-smokers (38.8% versus 28.3%) [20]. In another study in patients with UC, smoking patients were shown to have higher intestinal permeability compared to non-smoking patients and a negative control group [21]. In a mouse model study, passive smoking was shown to increase the number of dendritic cells, increase chemokine expression and enhance T-lymphocyte activation [22].
The Effect of Therapeutics on CD Development
An additional factor that increases the risk of CD development is the use of certain groups of therapeutic drugs: antibiotics, nonsteroidal anti-inflammatory drugs (NSAIDs) and contraceptives [23]. According to the current hypothesis, antibiotic intake in childhood causes a disruption of the body's tolerance to the gut microbiota, which can lead to the emergence of CD later in life [24]. Several studies have shown that antibiotic intake positively correlates with CD development [23]. There was also a positive association between women taking contraceptive pills and the risk of CD, with women who stopped taking contraceptive pills having a reduced risk of CD [25]. The exact mechanisms of these drugs' influence on the risk of CD occurrence are unknown. However, the effect intensifies under the action of estrogen, which enhances the immune response, and weakens under the influence of progesterone, which acts as an immunosuppressor [26]. A prospective cohort study of over 76,000 women nurses revealed an increased risk of CD in subjects who took NSAIDs for at least 15 days per month [27]. Moreover, in another study of patients with IBD in remission, taking NSAIDs was associated with relapse within 9 days at a frequency of 15 to 30% [28].
Genetic Factors
The influence of genetic mutations on the development of CD has been firmly established: to date, more than 230 single nucleotide polymorphisms (SNPs) associated with IBD have been identified [5]. It was found that first-degree relatives (FDR) of patients with IBD also have an increased risk of developing IBD: the incidence rate of CD was 7.77, and the degree of concordance in monozygotic twins was 30-35% [29]. The main identified genetic risk factor for CD is the NOD2 gene, which is expressed in intestinal epithelial cells and intestinal mucosal lymphocytes, as well as in monocytes and macrophages [5]. This gene encodes a receptor that recognizes muramyl dipeptide (MDP), a component of peptidoglycan from the bacterial wall. NOD2 interaction with MDP triggers a signaling cascade that activates transcription of proinflammatory cytokines and, accordingly, activates the innate immune response. The most frequent mutations in the NOD2 gene responsible for CD development affect amino acids in the leucine-rich repeat (LRR) domain, which is responsible for binding to MDP [5]. A hypothesis has been formulated according to which mutations in the NOD2 gene lead to increased reproduction of intestinal bacteria, which causes an enhanced immune response of the host [30]. Thus, the loss of regulation of gut microbiota mass and composition, together with disturbance of the immune system, can trigger a sharp immune response that can evolve into a state of chronic inflammation. However, despite the fact that NOD2 is the main genetic locus associated with CD risk, mutations in this locus cannot be considered a necessary and sufficient condition for CD development, since they occur in healthy people with a frequency of 0.5-2%, while, at the same time, 60-70% of patients with CD do not carry NOD2 mutations [31].
Mutations in the autophagy-related genes ATG16L1, LRRK2, and IRGM are also associated with CD risk [32]. Autophagy is a multistage process, orchestrated by various proteins, whose function is to destroy superfluous and dysfunctional organelles. A particular type of autophagy is mitophagy, which is specialized in the destruction of dysfunctional mitochondria. When mitophagy is disturbed, a large number of mitochondria and mitochondria-derived molecules accumulate in the cytosol. These molecules, which are normally sequestered within the outer mitochondrial membrane, are recognized as endogenous antigens, or DAMPs (damage-associated molecular patterns). Like the external PAMP antigens (pathogen-associated molecular patterns), DAMPs are able to trigger an inflammatory immune response [33]. Disruption of intestinal mucosa integrity can lead to impairment of gut microbiome regulation and the triggering of inflammation; therefore, mutations in genes controlling the permeability of the intestinal wall, such as MUC19, ITLN1, FUT2, and XBP1, have been identified as additional risk factors for CD development [5]. A study using GWAS analysis described several variants of gene alleles encoding DNA methyltransferases and other proteins taking part in epigenetic modifications associated with a risk of CD development [34]. In a zebrafish study, inactivation of expression of the uhrf1 gene responsible for DNA methylation resulted in the development of an IBD-like syndrome [35].
Dysbiosis
The gut microbiota is the largest reservoir of bacteria in the human body, with the highest number of bacteria concentrated in the lumen of the large intestine. The bacterial count there can reach 10^11-10^12 cells/g of lumen contents [36]. The human genome includes approximately 23,000 genes, while the gut microbiome contains more than 3 million genes that have an important impact on human health [37]. The gut microbiota performs important functions for the body: digesting substrates that gastrointestinal enzymes cannot cope with, training immune system tolerance, restraining the growth of pathogenic microorganisms, producing biologically active substances, such as butyrate, that support the energy needs of enterocytes, and facilitating the absorption of calcium and iron in the colon [38,39].
It has been proven that gut dysbiosis, a change in the qualitative and quantitative composition of the microbiome, is a common feature in IBD. However, it remains unclear whether this condition is a cause or a consequence of the development of the inflammatory reaction. Studies of the gut microbiota in patients with IBD found changes associated with increased bacterial load and decreased bacterial species diversity [40]. A change in microbiota composition was also found in feces and intestinal mucosa samples from IBD patients. Moreover, the number of bacteria was significantly higher in the regions of the intestine with the greatest inflammation, the colon and ileum, supporting the connection of dysbiosis with the development of inflammation in IBD [40].
A general pattern of the state of dysbiosis in IBD is a decrease in the number of bacterial species that are beneficial for the organism. Several studies reported a decrease in species diversity and in the numbers of the Bacteroidetes and Firmicutes phyla in CD patients, which are the dominant phyla of the normal microbiota; for example, a decrease in the numbers of Faecalibacterium prausnitzii, Roseburia or Eubacterium bacteria [38,41]. A decrease was also found in Bacteroides fragilis, which is involved in the activation of regulatory T cells that have an anti-inflammatory effect [38]. Additionally, the number of Bifidobacterium bacteria, which have important protective and metabolic functions for the organism, was reduced [38]. At the same time, the numbers of conditionally pathogenic Proteobacteria (Escherichia coli, Pasteurellaceae), Firmicutes (Veillonellaceae and Ruminococcus gnavus) and Fusobacterium species increase [38]. In addition, it was shown that Escherichia coli bacteria were able to adhere to the intestinal wall and penetrate through the intestinal epithelial layer, and were also found to persist and multiply within macrophages, which caused an enhanced release of the inflammatory mediator TNFα [42].
Models of CD Pathogenesis Based on Mitochondrial Dysfunction
The effect of mitochondrial dysfunction on CD development can be divided into two distinct pathways. The first is the impact of energy shortage through impaired ATP synthesis, which leads to disruption of a number of processes important for the proper functioning of the intestine. Among these processes are the differentiation of enterocytes, tight junction maintenance and butyrate oxidation. In this model, inflammation develops in response to bacterial antigens. The second pathway is based not on an energy deficit in the intestinal cells, but on increased generation of reactive oxygen species (ROS) and disruption of mitophagy, which leads to the accumulation of defective mitochondria. This process is accompanied by the accumulation of components acting as internal antigens (DAMPs) and the activation of the inflammatory response.
Models of CD Pathogenesis Based on an Energy Deficit in the Intestine Cells
Energy production through ATP synthesis depends on a number of processes: the conversion of metabolite energy into the energy of chemical bonds of reduced NADH molecules, the transfer of electrons from NADH to the electron transport chain and ultimately to molecular oxygen, and the pumping of protons from the mitochondrial matrix through the inner mitochondrial membrane to the intermembrane space, which generates a transmembrane proton gradient. The generated energy is used for phosphorylation of ADP molecules to form ATP [43].
At the molecular level, mitochondrial dysfunction manifests itself in disruption of the processes involved in mitochondrial energy production: loss of electrochemical potential on the inner mitochondrial membrane, disruption of electron transport chain transporters and a reduction in the key metabolites transported into the mitochondria [43]. These changes lead to a decrease in oxidative phosphorylation efficiency and ATP production, which, in turn, leads to energy deficiency in the affected cell. Mitochondrial dysfunction is a well-known feature of various chronic diseases associated with low-level sustained inflammation [44]. This hypothesis is confirmed by the presence of mitochondrial dysfunction in CD. Oxidative phosphorylation deficiency in complexes III and IV of the respiratory chain has been reported in such patients [45]. It can be speculated that, since the proper energy balance is crucial for the correct functioning of the intestinal cells and the controlling of intestinal wall permeability, mitochondrial dysfunction in these cells can contribute to CD development.
Mechanism of CD Pathogenesis Based on Impaired Enterocyte Differentiation
Intestinal epithelium is updated every 4-5 days, which requires the consumption of a significant amount of energy [46]. Intestinal epithelium consists of one layer of different cell types, among which are goblet cells, absorptive enterocytes, Paneth cells, and enteroendocrine cells [46]. All these cells originate as a result of differentiation of intestinal stem cells. Lack of ATP leads to impaired differentiation of intestinal epithelial cells (IEC) [46]. Of particular importance is the disruption of the Paneth cells formation, which play the role of primary protection of the intestine, releasing antimicrobial molecules, such as defensins [47]. A decrease in the number of Paneth cells can lead to intestinal dysbiosis, which is a known characteristic of CD. If dysbiosis reaches a certain level, it may lead to the initiation of a strong inflammatory reaction that develops into chronic inflammation.
The vulnerability of Paneth cells has been demonstrated in several studies. One such study reported that mice defective in the phb1 gene, encoding a main component of the inner mitochondrial membrane, developed spontaneous inflammation in the ileum, preceded by mitochondrial dysfunction observed in Paneth cells [9]. It was also shown that mitochondrial dysfunction could lead to differentiation of dysfunctional Paneth cells from intestinal stem cells, resulting in relapses in patients with CD [48]. Additionally, the expression of some IEC differentiation genes was found to be impaired upon the development of an inflammatory response in the intestine [46]. In a mouse model, increased ATP production as a result of increased oxidative phosphorylation activity was shown to protect mice from colitis induced by administration of dextran sodium sulfate and trinitrobenzene sulfonate, and also contributed to increased enterocyte proliferation, demonstrating a positive effect of ATP generation on the rate of intestinal epithelium renewal [49].
Mechanism of CD Pathogenesis Based on the Disruption of Tight Junction Integrity
The intestinal epithelium acts as a protective barrier, preventing penetration of bacteria, debris and other substances from the intestinal lumen into the intestinal wall [50]. The key role in maintaining the integrity of the intestinal barrier is played by tight junctions, which are formed by proteins, mainly claudins and occludins, connecting adjacent intestinal epithelial cells and delimiting intestinal lumen from lamina propria [51]. Since maintaining the integrity of tight junctions requires ATP energy [46], in case of its deficiency, intestinal permeability increases, which leads to the penetration of bacterial antigens through the epithelial layer from the intestinal lumen into lamina propria, where intestinal immune cells are concentrated. The inflow of antigens also causes increased proliferation and inflow of immune cells, leading to the inflammatory response activation. It was shown that mitochondrial damage could influence increased intestinal barrier permeability for pathogens [52].
Mechanism of CD Pathogenesis Based on Impaired Butyrate Oxidation and Its Deficiency
One current hypothesis is based on the observation that IBD is characterized by a state of energy deficiency with a change of metabolism in the intestinal epithelial cells [53]. Butyrate is the main energy source of colon epithelial cells, providing more than 70% of their energy demand [46]. Butyrate undergoes cleavage in the mitochondrial matrix of colonocytes in the process of β-oxidation of fatty acids [46]. Butyrate is a nutrient that is contained in food and is also formed by intestinal bacteria as a by-product of the fermentation of dietary fibers [54]. Since butyrate is the main source of energy for colonocytes, disruption of its oxidation in the mitochondria leads to a lack of energy for the important intestinal processes described above, which, in turn, leads to the development of severe gut dysbiosis and activation of the inflammatory reaction. In a study in an experimental animal model, it was shown that inflammation of the intestinal mucosa occurred when butyrate oxidation was inhibited [55]. It was also shown that pharmacological inhibition of β-oxidation of fatty acids in the intestine led to the development of colitis in mice [56].
In addition to being an energy source, butyrate is an anti-inflammatory agent with a mediated antimicrobial action. Thus, butyrate inhibits the action of NF-kB, IFNγ, and the pro-inflammatory cytokines IL2, IL6 and IL8, reduces the recruitment of macrophages and neutrophils, enhances the action of the antimicrobial protein cathelicidin, and increases the expression of mucin, which also has an antimicrobial effect [57][58][59][60]. Accordingly, if dietary butyrate is lacking, or if butyrate deficiency is caused by a decrease in the butyrate-producing species of the gut microbiome, this deterrent of the inflammatory response is lost on top of the ATP deficiency. This option is not directly related to the mitochondria, but can be considered within the framework of the mitochondrial model of CD development. A scheme of these models is depicted in Figure 1.
Immune Response to Bacterial Antigens in CD
According to the pathogenetic models described above, the key outcome leading to CD development is the initiation of an immune response to bacterial antigens, which leads to the development of chronic inflammation. Gut-associated lymphoid tissue (GALT) plays an ambivalent role in the pathology development: in the case of infection, the proximity of immune cells is a positive factor; however, in the case of IBD, it has a negative influence. The primary response to intestinal dysbiosis is the initiation of an innate immune response. Innate immunity leukocytes, represented by neutrophils, macrophages, and dendritic cells, recognize bacteria through interaction with PAMPs, which are present in most microorganisms [61]. Examples of conserved PAMPs are lipopolysaccharides (LPS), peptidoglycan, flagellin and bacterial nucleic acids [61]. The key receptors recognizing bacterial PAMPs are the TLR and NLR receptors, which are expressed not only on immune but also on intestinal epithelial cells. Therefore, enterocytes can also directly activate the inflammatory response [61]. In addition, M-cells can be engaged in the capture of bacterial antigens in the intestine. They transport antigens to Peyer's patches, GALT structural elements, where they are selected by dendritic cells or destroyed by macrophages [62].
Interaction of PAMPs with innate immunity receptors leads to activation of inflammatory cascades involving the pro-inflammatory transcription factor NF-κB and the production of IL-1β, as well as other inflammatory cytokines highly present in the intestine in CD: IL-12, IL-17, IL-18, TNF-α and IFN-γ. Activated antigen-presenting cells (APCs) produce IL-12 and IL-18 and induce polarized differentiation of CD4+ T lymphocytes along the Th1 pathway [63]. That, in turn, further increases the release of proinflammatory cytokines that stimulate APCs to produce new types of proinflammatory cytokines, such as IL-1, IL-6, and IL-8. Thus, a self-perpetuating inflammatory response is formed that leads to the development of chronic inflammation.
Models of CD Pathogenesis Based on ROS Production and Mitophagy Disorders
The by-products of electron transport chain (ETC) functioning in the mitochondria are reactive oxygen species (ROS): molecules with increased reactivity due to the presence of an unpaired electron in the outer electron shell [64]. Under normal physiological conditions, the amount of ROS generated from the total amount of oxygen consumed by cells is around 2%; however, in a pathological state, ROS release increases [65]. The increase in ROS production is associated with disturbances in the functioning of the respiratory complexes, with increased proton leakage and excessive oxygen consumption, accompanied by a further increase in ROS generation [66]. ROS molecules readily interact with various biological molecules, including proteins, lipids and nucleic acids, which leads to damaging effects [65]. Inside the mitochondria, ROS damage the phospholipids of the outer and inner mitochondrial membranes, especially cardiolipin, which is highly sensitive to ROS. Oxidative destruction of the lipids of the inner mitochondrial membrane leads to the reverse transfer of protons into the mitochondrial matrix, thereby decreasing the electrochemical gradient [46]. As a result of mitochondrial damage, various mitochondrial molecules enter the cytoplasm. Some of them are highly immunogenic and can act as internal DAMP antigens: succinate, cardiolipin, N-formyl peptides, mitochondrial DNA (mtDNA) and mitochondrial transcription factor A (TFAM), while others, such as cytochrome c, can act as signals of cell death [67]. Damaged mitochondria are normally isolated by mitophagy into membranous structures that then fuse with lysosomes for degradation by lysosomal enzymes. Mitophagy is the cellular process that selectively removes old, damaged and dysfunctional mitochondria by sequestration [68]. Mitophagy, along with other mitochondrial dynamics processes such as mitochondrial fusion, fission, and transport, plays an important role in maintaining normal cell homeostasis. Many proteins, molecular intermediates, activators and inhibitors participate in the coordination of mitophagy, so its regulation is a complex process that can be disrupted, leading to the formation of a pool of defective and dysfunctional mitochondria.
Increasing the number of defective mitochondria leads to even greater DAMP generation, which triggers an inflammatory response. Thus, mtDNA, entering the cytosol, activates the NLRP3 inflammasome, which, in turn, leads to the activation of caspase-1 and the production of the proinflammatory cytokines IL-1β and IL-18 [33,67]. Additionally, cytosolic mtDNA can also directly activate the AIM2-bound inflammasome [67]. Thus, mtDNA acts as an intracellular activator of the inflammatory reaction. In addition, inflammation activation is also possible when the immune system cells are directly affected by extracellular DAMP molecules. Thus, extracellular ATP can activate the NLRP3 inflammasome and thereby promote the secretion of IL-1β and IL-18 in macrophages [69]. An additional role of ATP is to attract neutrophils to the inflammation site [69]. Another mitochondrial DAMP, succinate, an intermediate of the tricarboxylic acid cycle, can also be secreted into the extracellular space when mitochondria are damaged. Extracellular succinate enhances antigen-dependent activation of T helper cells by interacting with the dendritic cell receptor GPR91 [70]. The subsequent development of the inflammatory response is similar to the scheme described previously, including the polarization of CD4+ T lymphocytes towards the Th1 phenotype and the formation of an irreversible reaction involving a large number of inflammatory molecules and immune cells, which leads to the establishment of a chronic inflammation state. These models are schematically depicted in Figure 2.
Models of CD Pathogenesis Based on ROS Production and Mitophagy Disorders
The by-product of electron transport chain (ETC) functioning in the mitochondria are reactive oxygen species (ROS): molecules with increased reactivity due to the presence of unpaired electron at the external electronic level [64]. Under normal physiological conditions, the amount of ROS generated from the total amount of oxygen consumed by cells is around 2%; however, in a pathological state, ROS release increases [65]. The increase in ROS production is associated with disturbances in the respiratory complexes functioning, with increased proton leakage and excessive oxygen consumption, accompanied by a further increase in ROS generation [66]. ROS molecules readily interact with various biological molecules, including proteins, lipids and nucleic acids, which leads to damaging effects [65]. Inside the mitochondria, ROS damage phospholipids of the outer and inner mitochondrial membranes, especially cardiolipin, which is highly sensitive to ROS. Oxidative destruction of the lipids of the internal mitochondrial membrane leads to the reverse transfer of protons to the mitochondrial matrix; therefore, decreasing the electrochemical gradient [46]. As a result of mitochondrial damage, various mitochondrial molecules enter the cytoplasm. Some of them are highly immunogenic and can act as internal antigens DAMPs: succinate, cardiolipin, N-formyl peptides, mitochondrial DNA (mtDNA) and mitochondrial transcription factor A (TFAM), while others, such as cytochrome c, can act as signals to cell death [67]. Damaged mitochondria are normally isolated by mitophagy into membranous structures that are then fused with lysosomes for degradation by lysosomal enzymes. Mitophagy is the cellular process that selectively removes old, damaged and dysfunctional mitochondria by sequestration [68]. Mitophagy, along with other mitochondrial dynamics processes such as mitochondrial fusion, fission, and transport, plays an important role in maintaining normal cell homeostasis. Many proteins, molecular intermediates, activators and inhibitors participate in coordination of the mitophagy process, so the regulation of mitophagy is a complex process and can be disrupted, leading to the formation of a pool of defective and dysfunctional mitochondria.
An increasing number of defective mitochondria leads to even greater DAMP generation, which triggers an inflammatory response. Thus, mtDNA entering the cytosol activates the NLRP3 inflammasome, which in turn leads to the activation of caspase-1 and the production of the proinflammatory cytokines IL-1β and IL-18 [33,67]. Additionally, cytosolic mtDNA can directly activate the AIM2 inflammasome [67]. Thus, mtDNA acts as an intracellular activator of the inflammatory reaction. Inflammation can also be activated when immune system cells are directly affected by extracellular DAMP molecules. For example, extracellular ATP can activate the NLRP3 inflammasome and thereby promote the secretion of IL-1β and IL-18 in macrophages [69]. An additional role of ATP is to attract neutrophils to the inflammation site [69]. Another mitochondrial DAMP, succinate, an intermediate of the tricarboxylic acid cycle, can also be secreted into the extracellular space when mitochondria are damaged. Extracellular succinate enhances antigen-dependent activation of T helper cells by interacting with the dendritic cell receptor GPR91 [70]. The subsequent development of the inflammatory response is similar to the scheme described previously, including the polarization of CD4+ T lymphocytes towards the Th1 phenotype and the formation of an irreversible reaction involving a large number of inflammatory molecules and immune cells, which leads to the establishment of a chronic inflammation state. These models are schematically depicted in Figure 2.
Future Directions
Mitochondria are currently considered not only as the energy "plants" of the cell, but also as important players in immunomodulation and the regulation of cellular homeostasis. The idea that mitochondrial dysfunction influences the development of chronic inflammation is not new and has been used as a hypothesis to explain the pathogenesis of various chronic diseases, fatigue and aging. However, the pathogenesis models discussed in this review are specifically focused on the development of chronic inflammation in CD. Some aspects of these models are supported by studies, but more detailed investigation is required to fully prove their feasibility, including full reproduction of these pathogenesis mechanisms in laboratory animals. Moreover, an important step will be the identification of the specific mitochondrial targets at which mitochondrial dysfunction, and the subsequent development of inflammation, begins.
As described in the previous chapters, mitochondrial dysfunction has been shown to play a prominent role in CD pathogenesis, especially in Paneth cells, where mitochondrial function is crucial to maintaining cellular functionality. Correspondingly, mitochondria-targeting therapies are being evaluated to reduce the severity of CD manifestations in model organisms. One such approach is the use of mitochondrial ROS scavengers that reduce mitochondrial damage-associated oxidative stress. One recent study tested a mitochondrial antioxidant, Mito-Tempo, on ex vivo biopsy material from CD patients. The study confirmed the presence of mitochondrial damage in ileal mucosal biopsy samples from CD patients as compared with non-IBD patients, manifested at both the phenotype and transcriptome levels. Treatment with Mito-Tempo restored the expression of altered genes to non-IBD levels, including genes involved in inflammation (IL-17/IL-23), lipid metabolism and apoptosis regulation [71]. These results are promising for the potential future use of mitochondrial antioxidants in CD treatment, but more studies are needed to validate them in animal models. One such model is the TNF ∆ARE mouse model bearing a deletion of the adenylate/uridylate-rich elements (AREs) in TNF mRNA. These animals present with inflammation-associated mitochondrial dysfunction in Paneth cells and develop spontaneous ileitis. Promoting mitochondrial respiration through dichloroacetate-mediated glycolysis inhibition was shown to improve the function of intestinal stem cells from these animals. Therefore, suppressing glycolysis to restore mitochondrial metabolic balance may be explored as one of the potential therapeutic approaches to CD [48,72].
Another recent study tested the effect of Olaparib, an inhibitor of poly(ADP-ribose) polymerase-1 (PARP-1) currently used in cancer therapy, on the manifestations of chemically induced experimental colitis in mice. PARP-1 plays an important role in inflammation development through mitochondrial dysfunction. Its inhibition reduced inflammation markers and restored intestinal barrier function in affected animals. In colon epithelial cell monolayers treated with hydrogen peroxide, Olaparib helped preserve barrier integrity and alleviated morphological changes. It also improved cellular mitochondrial function under oxidative stress conditions. Together, these results indicate the possible use of PARP-1 inhibitors to treat CD and warrant further preclinical and clinical studies [73].
Another interesting model for developing novel CD therapies is the recently described patient-specific human intestinal organoid (HIO) [74]. This model, consistent with patient-derived ileal cells, is characterized by an expression pattern of mitochondrial and extracellular matrix genes that reflects the human situation. It makes it possible to study expression patterns and morphology, and their alterations in response to different potential therapeutic agents, through RNA sequencing and immunostaining techniques. Importantly, the HIO is a patient-specific modelling approach, which can potentially be used for the development of personalized treatments. In a recent study, the effects of butyrate and eicosatetraynoic acid were tested in such a model, showing promising results [74].
As we hoped to demonstrate in this review, CD pathogenesis is complex, with several pathogenic pathways that can complement and overlap each other. Together with other inflammatory bowel diseases, CD is currently regarded as a pathologic condition associated with chronic inflammation, in which mitochondrial dysfunction plays a prominent role. Correspondingly, mitochondria-targeting therapies are being developed and evaluated for these diseases [75]. One of the great challenges of this research is the creation of reliable preclinical models that would allow the identification of reliable molecular targets for therapeutic intervention. Such models should reflect, at least to a certain degree, the complexity of the pathological processes taking place in intestinal cells and tissues. The strategy of developing anti-CD drugs aimed at the initial stages of disease pathogenesis, rather than at its final symptomatic stages involving the inflammatory reaction in the gastrointestinal tract, is especially interesting. Such an approach would help to reduce the side effects caused by anti-inflammatory drugs and increase the effectiveness of therapy. In this review, we strived to show that despite a formal understanding of the etiological factors underlying CD development, many unexplored stages of CD pathogenesis remain, the disclosure of which will help in the fight against this disease.
Conclusions
We propose four pathogenetic models for the development of CD, with the development of mitochondrial dysfunction as a central event, and one model based on the lack of butyrate, an important anti-inflammatory mediator, in the intestine. The models associated with mitochondrial dysfunction can be divided into two groups: the first group is based on disorders in the vital functions of enterocytes caused by energy deficiency, where bacterial PAMPs act as antigens that cause an inflammatory reaction; the second group is associated with increased production of ROS and impaired mitophagy, leading to the release of mitochondrial DAMPs that play the role of antigens initiating inflammation.
"Medicine",
"Biology"
] |
B Cell Receptor-induced Phosphorylation of Pyk2 and Focal Adhesion Kinase Involves Integrins and the Rap GTPases and Is Required for B Cell Spreading
Signaling by the B cell receptor (BCR) promotes integrin-mediated adhesion and cytoskeletal reorganization. This results in B cell spreading, which enhances the ability of B cells to bind antigens and become activated. Proline-rich tyrosine kinase (Pyk2) and focal adhesion kinase (FAK) are related cytoplasmic tyrosine kinases that regulate cell adhesion, cell morphology, and cell migration. In this report we show that BCR signaling and integrin signaling collaborate to induce the phosphorylation of Pyk2 and FAK on key tyrosine residues, a modification that increases the kinase activity of Pyk2 and FAK. Activation of the Rap GTPases is critical for BCR-induced integrin activation as well as for BCR- and integrin-induced reorganization of the actin cytoskeleton. We now show that Rap activation is essential for BCR-induced phosphorylation of Pyk2 and for integrin-induced phosphorylation of Pyk2 and FAK. Moreover Rap-dependent phosphorylation of Pyk2 and FAK required an intact actin cytoskeleton as well as actin dynamics, suggesting that Rap regulates Pyk2 and FAK via its effects on the actin cytoskeleton. Importantly B cell spreading induced by BCR/integrin co-stimulation or by integrin engagement was inhibited by short hairpin RNA-mediated knockdown of either Pyk2 or FAK expression and by treatment with PF-431396, a chemical inhibitor that blocks the kinase activities of both Pyk2 and FAK. Thus Pyk2 and FAK are downstream targets of the Rap GTPases that play a key role in regulating B cell morphology.
Antibodies (Abs) made by B lymphocytes play a critical role in host defense against infection. Antigen-induced signaling by the B cell receptor (BCR) initiates an activation program that leads to B cell proliferation and subsequent differentiation into Ab-producing cells. BCR clustering by antigens or by anti-immunoglobulin (anti-Ig) Abs used as surrogate antigens initiates multiple signaling pathways that control gene expression, cell survival, and proliferation pathways (1-3).
BCR signaling also promotes integrin activation (4,5), localized actin polymerization, reorganization of the actin cytoskeleton, and changes in B cell morphology (6,7), all of which may facilitate B cell activation. Integrin activation and cell spreading are critical for the activation of B cells by membrane-bound antigens. Macrophages, dendritic cells, and follicular dendritic cells can present arrays of captured antigens to B cells (8,9), and this may be one of the main ways in which B cells encounter antigens (10). BCR-induced integrin activation prolongs the interaction between the B cell and the antigen-presenting cell and also allows the B cell to spread on the surface of the antigen-presenting cell such that more BCRs can encounter and bind membrane-bound antigens (11). Subsequent contraction of the B cell membrane allows the B cells to gather the BCR-bound antigen into an immune synapse in which clustered antigen-engaged BCRs are surrounded by a ring of ligand-bound integrins. Formation of this immune synapse reduces the amount of antigen that is required for B cell activation (12,13).
Recent work has shown that B cells in lymphoid organs may contact soluble antigens by extending membrane processes into a highly organized network of lymph-filled conduits (14). These conduits are created by fibroblastic reticular cells that partially ensheathe collagen fibrils. In addition to being rich in collagen, fibronectin, and other extracellular matrix (ECM) components, the fibroblastic reticular cells that form these conduits express high levels of intercellular adhesion molecule-1, the ligand for the αLβ2 integrin (lymphocyte function-associated antigen-1 (LFA-1)) on B cells (10). Thus B cells interacting with these conduits are likely to be in contact with integrin ligands, and integrin-dependent spreading may enhance the ability of B cells to extend membrane processes into the fibroblastic reticular cell conduit.
In addition to promoting cell spreading, integrins can act as co-stimulatory receptors that enhance signaling by many receptors including the T cell receptor and the BCR (15-17). Thus signaling proteins that regulate B cell spreading and that are also targets of BCR/integrin co-stimulation may play a key role in the activation of B cells by membrane-bound antigens as well as soluble antigens that are delivered to lymphoid organs by fibroblastic reticular cell conduits.
Proline-rich tyrosine kinase (Pyk2) and focal adhesion kinase (FAK) are related non-receptor protein-tyrosine kinases that integrate signals from multiple receptors and play an important role in regulating cell adhesion, cell morphology, and cell migration in many cell types (18-20). Integrins, receptor tyrosine kinases, antigen receptors, and G protein-coupled chemokine receptors all stimulate tyrosine phosphorylation of Pyk2 and FAK, a modification that increases the enzymatic activity of these kinases and allows them to bind SH2 domain-containing signaling proteins (21). FAK, which is expressed in almost all tissues (21), is a focal adhesion component that mediates integrin-dependent cell migration (22), cell spreading, and cell adhesion (18) in adherent cells as well as co-clustering of LFA-1 with the T cell receptor in lymphocytes (23). Pyk2 is expressed mainly in hematopoietic cells, osteoclasts, and the central nervous system (24) and is critical for chemokine-induced migration of B cells, macrophages, and natural killer cells (20,25,26) as well as the spreading of osteoclasts on vitronectin (27). FAK and Pyk2 are thought to mediate overlapping but distinct functions because Pyk2 expression only partially reverses the cell adhesion and migration defects in FAK-deficient fibroblasts (28).
In B cells, clustering of the BCR, β1 integrins, or β7 integrins induces tyrosine phosphorylation of both Pyk2 and FAK (29-33). FAK is involved in the chemokine-induced adhesion of B cell progenitors (34), and Pyk2 is required for chemokine-induced migration of mature B cells (25). However, the role of these kinases in BCR- and integrin-induced B cell spreading has not been investigated, and the signaling pathways that link the BCR and integrins to tyrosine phosphorylation of Pyk2 and FAK have not been elucidated.
We have shown previously that the ability of the BCR to induce integrin activation, B cell spreading, and immune synapse formation requires activation of the Rap GTPases (6,17). In addition to binding effector proteins such as RapL and Rap1-interacting adaptor molecule (RIAM) that promote integrin activation (35-37), the active GTP-bound forms of Rap1 and Rap2 bind multiple proteins that control actin dynamics and cell morphology (38). Moreover we showed that BCR/integrin-induced phosphorylation of Pyk2 in B cells is dependent on Rap activation (17). However, this previous study did not address how Rap-GTP links the BCR and integrins to Pyk2 phosphorylation, whether Rap activation is important for FAK phosphorylation in B cells, or whether B cell spreading is regulated by Pyk2 or FAK. We now show that Pyk2 and FAK are differentially expressed and localized in B cells, that Pyk2 and FAK are important for B cell spreading, and that integrin engagement enhances BCR-induced phosphorylation of Pyk2 and FAK, a process that depends on both Rap activation and actin dynamics.
Cells-B cells were isolated from the spleens of C57BL/6 mice using the magnetic-activated cell sorting B cell isolation kit (Miltenyi Biotec, Auburn, CA) to deplete non-B cells (41). The resulting cells were >98% B cells as determined by staining with anti-CD19-fluorescein isothiocyanate (BD Pharmingen). Activated B cells were obtained by culturing splenic B cells with 25 µg/ml lipopolysaccharide (LPS; Sigma-Aldrich) plus 5 ng/ml IL-4 (R&D Systems, Minneapolis, MN) for 2-3 days. A20 cells (ATCC, Manassas, VA) were maintained as described previously (17). Bulk populations of A20 cells stably transduced with the empty pMSCVpuro vector (BD Biosciences Clontech) or with pMSCVpuro/RapGAPII have been described previously (17).
Expression of Pyk2 and FAK-For immunoblotting with Abs to Pyk2 or FAK, cells were solubilized in radioimmune precipitation assay buffer (42). For quantitative RT-PCR, RNA was prepared using the RNeasy kit with QIAshredder columns (Qiagen, Valencia, CA) and converted into cDNA using the High Capacity cDNA Archive kit (Applied Biosystems, Foster City, CA). Equivalent amounts of cDNA were combined with TaqMan Fast Universal PCR Master Mix (Applied Biosystems) plus TaqMan Gene Expression Assay primers and probes (Applied Biosystems) specific for Pyk2 (Mm00552840_m1), FAK (Mm00433209_m1), or glyceraldehyde-3-phosphate dehydrogenase (Mm99999915_g1). PCRs and quantitation were performed using an Applied Biosystems 7500 Fast Real-Time PCR system. The amount of Pyk2 or FAK mRNA was normalized to the amount of glyceraldehyde-3-phosphate dehydrogenase mRNA for each sample.
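A side note on the normalization step above: expressing Pyk2 or FAK mRNA relative to glyceraldehyde-3-phosphate dehydrogenase per sample corresponds to the standard relative-quantification (ΔΔCt) scheme for TaqMan assays. The sketch below is illustrative only; the Ct values are hypothetical, the assumption of ~100% PCR efficiency is ours, and the paper does not state which quantification algorithm the instrument software applied.

```python
# Minimal sketch of relative mRNA quantification by the 2^(-ddCt) (Livak) method.
# All Ct values below are hypothetical; the study normalized Pyk2/FAK mRNA
# to GAPDH mRNA within each sample.

def relative_expression(ct_target, ct_ref_gene, ct_target_cal, ct_ref_gene_cal):
    """Fold-change of a target mRNA vs. a calibrator sample,
    normalized to a reference gene (here GAPDH) in each sample."""
    d_ct_sample = ct_target - ct_ref_gene             # normalize within sample
    d_ct_calibrator = ct_target_cal - ct_ref_gene_cal # normalize within calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)                            # assumes ~100% PCR efficiency

# Hypothetical example: FAK mRNA in activated vs. resting B cells
fold = relative_expression(ct_target=24.1, ct_ref_gene=18.0,
                           ct_target_cal=27.5, ct_ref_gene_cal=18.2)
print(f"FAK mRNA fold-change (activated/resting): {fold:.1f}")  # ~9.2
```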
RT-PCR Analysis of Pyk2 mRNA Splicing-PCR primers (5′-GTGGCCTCTCCTGAGTGTGT-3′ and 5′-GATCTTCTCTGCCTCCCAGA-3′) that flank the alternatively spliced exon of the mouse Pyk2 gene were used to amplify cDNA from resting and activated mouse B cells. These primers amplify a 738-bp fragment from cDNA generated from unspliced Pyk2 mRNA and a 612-bp fragment from cDNA from the hematopoietic cell-specific Pyk2 isoform in which a 126-bp exon is deleted. PCR products were separated on 2% agarose gels and visualized with SYBR Safe DNA gel stain.
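Because the alternatively spliced exon is 126 bp, the two expected amplicons differ by exactly that amount (738 − 126 = 612 bp). As a purely illustrative sketch, band-to-isoform assignment can be expressed as follows; the 10-bp size tolerance is our guess at typical agarose-gel sizing error, not a value from the paper.

```python
# Expected RT-PCR amplicon sizes for the two Pyk2 mRNA isoforms.
FULL_AMPLICON_BP = 738    # cDNA from unspliced Pyk2 mRNA
SPLICED_EXON_BP = 126     # exon deleted in the hematopoietic isoform

SPLICED_AMPLICON_BP = FULL_AMPLICON_BP - SPLICED_EXON_BP
assert SPLICED_AMPLICON_BP == 612  # matches the reported spliced-isoform band

def classify_band(size_bp, tolerance_bp=10):
    """Assign an observed gel band to a Pyk2 isoform."""
    if abs(size_bp - FULL_AMPLICON_BP) <= tolerance_bp:
        return "unspliced Pyk2"
    if abs(size_bp - SPLICED_AMPLICON_BP) <= tolerance_bp:
        return "spliced (hematopoietic) Pyk2"
    return "unassigned"

print(classify_band(615))  # -> spliced (hematopoietic) Pyk2
```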
Immunofluorescence-Cells were fixed with 3% paraformaldehyde for 20 min and then permeabilized with phosphate-buffered saline plus 0.1% Tween 20 for 45 min. After blocking with phosphate-buffered saline containing 2% bovine serum albumin for 10 min, the cells were stained with goat Abs to Pyk2 or FAK for 45 min followed by Alexa Fluor 488-conjugated donkey anti-goat IgG (Molecular Probes-Invitrogen) for 30 min. Where indicated, cells were also stained with rat monoclonal Abs to LFA-1 or VLA-4 followed by Alexa Fluor 568-conjugated donkey anti-rat IgG (Molecular Probes-Invitrogen). The cells were washed and adhered to poly-L-lysine-coated coverslips, which were treated with Prolong Gold antifade reagent containing 4′,6-diamidino-2-phenylindole (Molecular Probes-Invitrogen) and mounted onto glass slides. Images were collected using an Olympus IX81/Fluoview1000 confocal microscope and processed using Olympus Fluoview 1.6 software.
Phosphorylation of Pyk2, FAK, Erk, Akt, and Paxillin-A20 cells or splenic B cells (1.5 × 10^7) in 1 ml of modified HEPES-buffered saline (41) were stimulated with anti-Ig Abs either while in suspension or 30 min after being added to wells of 6-well tissue culture plates coated with a collagen/fibronectin ECM (17,43). This ECM was generated by sequentially coating the wells with a 2% gelatin solution and then fetal calf serum. To initiate integrin signaling, cells were added to wells that had been coated with Abs to LFA-1 or VLA-4 as described previously (6). Reactions were terminated by adding 0.25 ml of cold 5× lysis buffer (17). After 10 min on ice, insoluble material was removed by centrifugation. Where indicated, aliquots of cell lysate were removed to assess total protein tyrosine phosphorylation or the phosphorylation of Erk, Akt, and paxillin. Pyk2 and FAK were immunoprecipitated from cell lysates as described previously (34,41).
Short Hairpin RNA (shRNA)-mediated Knockdown of Pyk2 and FAK Expression in A20 Cells-pGIPZ lentiviral vectors encoding GFP as well as microRNA-adapted shRNAs (shRNAmirs) specific for murine Pyk2 (catalogue number V2LMM_21947) or FAK (catalogue number V2LMM_37327) were purchased from Open Biosystems (Huntsville, AL). Lentiviruses were generated by transfecting 293T cells with the appropriate lentiviral vector (7.5 µg) together with 12.5 µg of pCMV-δR8.91 and 2 µg of pCMV-VSV-G-M5 (44,45). Viral supernatants were collected 24 and 48 h after transfection and filtered through a 0.45-µm filter. A20 cells (6 × 10^5) were added to wells of a 6-well dish containing 3 ml of viral supernatant and then centrifuged at 2000 rpm for 1 h at 21°C. Cells were cultured with 5 µg/ml puromycin to select for transduced cells.
Cell Spreading-Tissue culture plates were coated overnight at 4°C with a rat anti-mouse LFA-1 monoclonal Ab (6) or with fibronectin (R&D Systems) and then blocked with phosphate-buffered saline containing 2% bovine serum albumin for 1 h. A20 cells (10^5 cells in 0.5 ml of RPMI 1640 medium with 2% fetal calf serum and 50 µM 2-mercaptoethanol) were pretreated with DMSO or PF-431396 for 45 min, added to the coated wells, and incubated at 37°C. Cells scored as spread were phase dark and had an elongated or irregular shape with obvious membrane processes.
Rap Activation-Rap activation assays were performed as described previously (17). A GST-RalGDS fusion protein was used to selectively precipitate the active GTP-bound form of Rap, which was detected by immunoblotting with a Rap1 Ab (Santa Cruz Biotechnology).
RESULTS
Expression and Localization of Pyk2 and FAK in B Cells-Because Pyk2 and FAK regulate cell morphology in many cell types, we asked whether both of these kinases were expressed in mature B cells from mouse spleen. Immunoblotting showed that resting B cells from mouse spleen expressed high levels of Pyk2 but only low levels of FAK (Fig. 1A). We also asked whether B cell activation altered the expression of Pyk2 or FAK because activated, but not resting, primary B cells undergo dramatic spreading when plated on integrin ligands or on immobilized Abs to CD44, CD23, or the BCR (46-48). Activating splenic B cells with LPS plus IL-4 for 2 days resulted in a 4-5-fold decrease in Pyk2 protein levels and a 6-fold increase in FAK levels (Fig. 1A). This likely reflects transcriptional regulation because a similar down-regulation of Pyk2 mRNA and up-regulation of FAK mRNA occurred upon B cell activation (Fig. 1B). A number of murine (WEHI-231, BAL17, and A20) and human B lymphoma cell lines (Ramos, Daudi, and Raji) expressed both Pyk2 and FAK (Fig. 1A and data not shown), consistent with these cells representing transformed versions of activated B cells. We also observed that Pyk2 from LPS/IL-4-activated B cells ran as a doublet on SDS-PAGE gels (Fig. 1, A and C). The higher molecular weight form of Pyk2 may be the unspliced form that has been reported to be highly expressed in brain but not in the spleen (49). Indeed RT-PCR showed that both the spliced and unspliced forms of Pyk2 mRNA were present in activated B cells, whereas only the spliced form was present in resting B cells (Fig. 1D). Both isoforms of the Pyk2 protein were tyrosine-phosphorylated upon BCR clustering in activated splenic B cells (Fig. 1C).
Confocal microscopy showed that Pyk2 and FAK had distinct subcellular localizations in B cells (Fig. 1E). In both resting and activated murine splenic B cells, Pyk2 was uniformly distributed in the cytoplasm with a diffuse pattern. In contrast, FAK was present in punctate structures in both resting and activated splenic B cells (Fig. 1E) as well as the A20 B cell line (data not shown). Activated splenic B cells had more FAK-containing puncta than resting splenic B cells and overall higher levels of FAK, consistent with the immunoblotting data. These punctate FAK-containing structures also contained LFA-1 and to some extent VLA-4 (α4β1 integrin) (Fig. 1F), suggesting that FAK associates constitutively with these integrins in B cells.
Adhesion to ECM Enhances BCR-induced Tyrosine Phosphorylation of Pyk2 and FAK-To examine the role of Pyk2 and FAK in BCR and integrin signaling in B cells, we used the A20 B lymphoma cell line, which expresses both Pyk2 and FAK. Consistent with the idea that integrins can act as co-stimulatory receptors that enhance BCR signaling, we showed previously that BCR-induced tyrosine phosphorylation of Pyk2 is substantially greater when A20 cells are plated on a collagen/fibronectin ECM that contains integrin ligands than when the cells are stimulated in suspension (17) (see also Fig. 2A). As was the case for Pyk2, BCR-induced tyrosine phosphorylation of FAK was also substantially increased when A20 cells were plated on ECM (Fig. 2A). The binding of integrins to ECM ligands did not cause an overall enhancement of BCR signaling but selectively augmented BCR-induced tyrosine phosphorylation of FAK and Pyk2. BCR-induced serine/threonine phosphorylation of Erk, Akt, and the cytoskeleton-associated adaptor protein paxillin was not enhanced by integrin engagement (Fig. 2, B and C). The selective targeting of Pyk2 and FAK by BCR/integrin co-stimulation suggests that these kinases may be important for integrin-dependent B cell responses.
BCR/Integrin-induced B Cell Spreading Involves Pyk2 and FAK-A20 cells spread dramatically when they are plated on fibronectin and then stimulated with anti-Ig Abs (6,17). In this scenario BCR signaling activates β1 integrins (e.g. VLA-4), which bind to the fibronectin, and the combined BCR/integrin signaling leads to cell spreading. A20 cells also spread when plated on immobilized Abs that cluster the LFA-1 integrin, adopting a morphology similar to that of anti-Ig-activated A20 cells spreading on intercellular adhesion molecule-1, the physiological ligand for LFA-1 (6). This indicates that integrin signaling is sufficient to induce B cell spreading. Because integrin signaling selectively enhances the ability of the BCR to induce tyrosine phosphorylation of Pyk2 and FAK (Fig. 2) and can independently induce the phosphorylation of these kinases (see Fig. 5), we asked whether Pyk2 or FAK played a role in B cell spreading.
To test this, we used RNA interference to reduce the expression of Pyk2 or FAK in A20 cells. We established stable bulk populations of A20 cells containing the GFP-encoding pGIPZ lentiviral vector or derivatives of this vector that also encode shRNAs specific for either Pyk2 or FAK. The resulting cell populations were >95% GFP+ (Fig. 3A), and immunoblotting showed that the Pyk2 shRNA reduced the expression of Pyk2 by 83% without affecting FAK levels, whereas the FAK shRNA reduced the expression of FAK by 67% without affecting Pyk2 levels (Fig. 3B). Knocking down the expression of either Pyk2 or FAK caused a 30-40% reduction in the number of A20 cells that developed a spread, elongated morphology when plated on fibronectin and then stimulated with anti-Ig Abs (Fig. 3C). The same was true when A20 cells were plated on immobilized anti-LFA-1 Abs (Fig. 3C). Thus Pyk2 and FAK both contribute to BCR/integrin- and integrin-induced B cell spreading in A20 cells.
BCR/Integrin-induced Tyrosine Phosphorylation of Pyk2 and FAK Depends on Activation of the Rap GTPases-Because activation of the Rap GTPases is critical for BCR- and integrin-induced B cell spreading (6,17) and Pyk2 and FAK contribute to this process, we hypothesized that Rap activation would be important for BCR-induced tyrosine phosphorylation of Pyk2 and FAK. The phosphorylation of Pyk2 and FAK on conserved tyrosine residues increases their kinase activity (21,50). The initial event in receptor-induced activation of these kinases is phosphorylation of Pyk2 on Tyr402 or FAK on Tyr397. This is thought to occur via dimerization and transphosphorylation (51). Src family kinases can then bind via their SH2 domain to the phosphorylated Pyk2 Tyr402 or FAK Tyr397 and phosphorylate Pyk2 at Tyr579/Tyr580 or FAK at Tyr576/Tyr577. Phosphorylation of Pyk2 and FAK on these activation loop residues is required for maximal activity of these kinases toward substrates (21). We showed previously that Rap activation is required for BCR/integrin-induced phosphorylation of Pyk2 on Tyr579/Tyr580 (17). However, it was not known whether this reflected a role for Rap activation in Tyr402 phosphorylation or the Src family kinase-mediated phosphorylation of Tyr579/Tyr580.
Moreover the role of Rap activation in BCR/integrin-induced FAK phosphorylation had not been assessed.
To address these questions, we blocked Rap activation in A20 cells by expressing the Rap-specific GTPase-activating protein, RapGAPII (52). RapGAPII converts the Rap1 and Rap2 GTPases to their inactive GDP-bound state, and RapGAPII expression has been widely used to assess the role of Rap activation (53,54). We have shown that RapGAPII expression completely blocks anti-Ig-, chemokine-, and phorbol ester-induced Rap activation in A20 cells without inhibiting other signaling reactions such as phosphorylation of mitogen-activated protein kinases or Akt (17,42).
Preventing Rap activation via RapGAPII expression significantly inhibited tyrosine phosphorylation of Pyk2 on Tyr402 when A20 cells were plated on ECM and stimulated with soluble anti-Ig antibodies (Fig. 4A). This corresponded with inhibition of total Pyk2 tyrosine phosphorylation as assessed using anti-Tyr(P) Abs (Fig. 4A). Thus BCR/integrin-induced phosphorylation of Pyk2 on Tyr402, the first step in Pyk2 activation, is dependent on Rap activation. The same was true for FAK. The use of phosphorylation site-specific Abs showed that blocking Rap activation significantly inhibited BCR/integrin-induced phosphorylation of FAK on Tyr397 (Fig. 4B) as well as the subsequent phosphorylation of FAK on Tyr576/Tyr577 (Fig. 4C). Consistent with this, the total tyrosine phosphorylation of FAK as detected using the 4G10 anti-Tyr(P) Ab was also inhibited when Rap activation was blocked (Fig. 4B). Thus during BCR/integrin co-stimulation, Rap activation is critical for the initial step in the activation of Pyk2 and FAK, phosphorylation of Pyk2 on Tyr402 and FAK on Tyr397. As a consequence Rap activation is also required for the subsequent Src family kinase-mediated phosphorylation of the activation loop tyrosine residues of Pyk2 and FAK.
The requirement for Rap activation in BCR/integrin co-stimulation-induced phosphorylation of Pyk2 and FAK could reflect a role for Rap activation in one or more of the following processes: coupling BCR signaling pathways to the phosphorylation of Pyk2 and FAK, activating integrins such that ligand binding initiates outside-in integrin signaling, or coupling integrin signaling pathways to the phosphorylation of Pyk2 and FAK. We have shown previously that Rap activation is essential for the BCR to stimulate integrin activation (17). Therefore we now investigated whether Rap activation was also an essential component of the signaling pathways that link the BCR and integrins to the phosphorylation of Pyk2 and FAK. Because integrin engagement greatly enhances BCR-induced phosphorylation of Pyk2 and FAK, we first tested the hypothesis that integrin signaling induces Pyk2 and FAK phosphorylation in a Rap-dependent manner.
FIGURE 2. Adhesion of B cells to ECM selectively enhances BCR-induced tyrosine phosphorylation of Pyk2 and FAK. A20 cells were kept in suspension or plated on collagen/fibronectin ECM for 30 min before being stimulated with 20 µg/ml soluble anti-IgG for the indicated times. For unstimulated controls (−), A20 cells were kept in suspension or plated on collagen/fibronectin ECM for 30 min and then left unstimulated for an additional 45 min before being lysed. A, immunoprecipitated (ippt) Pyk2 and FAK were analyzed by immunoblotting with the 4G10 anti-Tyr(P) (P-Tyr) Ab. The blots were then reprobed with Abs to Pyk2 or FAK. A mock stimulation of cells with phosphate-buffered saline for 15 or 30 min did not increase phosphorylation of Pyk2 and FAK compared with cells left unstimulated for the entire duration of the experiment (supplemental Fig. 2). B, cell lysates were immunoblotted with Abs against the phosphorylated forms of Erk (P-Erk) or Akt (P-Akt) and then reprobed with Abs against total Erk or Akt. C, cell lysates were immunoblotted with a paxillin Ab. Serine/threonine phosphorylation of paxillin is indicated by a bandshift on SDS-PAGE gels and was dependent on the activity of the Erk and GSK-3 kinases (data not shown) as in T cells and macrophages (67,68). For each panel, similar results were obtained in three experiments. αIgG, anti-IgG Ab.

Rap Activation Is Important for Integrin-induced Phosphorylation of Pyk2 and FAK-To initiate integrin signaling without stimulating the cells through the BCR, we plated A20 cells on wells coated with Abs against the LFA-1 or VLA-4 integrins. We have shown previously that the ability of A20 cells to spread on immobilized anti-integrin Abs or on immobilized intercellular adhesion molecule-1 is dependent on Rap activation (6). Moreover Ab-induced clustering of LFA-1 activates Rap1 in A20 cells (6). Fig. 5A shows that plating A20 cells on wells coated with Abs to LFA-1 or VLA-4 resulted in increased Pyk2 phosphorylation compared with cells plated on wells coated with an isotype-matched control monoclonal Ab against CD40. Both LFA-1- and VLA-4-induced Pyk2 phosphorylation was substantially reduced in the RapGAPII-expressing A20 cells in which Rap activation was blocked (Fig. 5A). Similarly FAK phosphorylation, which was increased 3-4-fold by clustering VLA-4 and to a lesser extent by clustering LFA-1, was significantly reduced when Rap activation was blocked (Fig. 5B). Thus Rap activation is required for integrin signaling to induce tyrosine phosphorylation of Pyk2 and FAK.
FIGURE 4. BCR/integrin-induced tyrosine phosphorylation of Pyk2 and FAK depends on activation of the Rap GTPases. Vector control and RapGAPII-expressing A20 cells were cultured for 30 min in wells coated with collagen/fibronectin ECM before being stimulated with 20 µg/ml anti-IgG for the indicated times. For unstimulated controls (−), A20 cells were plated on collagen/fibronectin ECM for 30 min and then left unstimulated for another 30 min before being lysed. A, anti-Pyk2 immunoprecipitates (ippt) were probed with an Ab against Pyk2 that is phosphorylated on Tyr402 (pY402) or with the 4G10 anti-Tyr(P) (P-Tyr) Ab before being reprobed with an anti-Pyk2 Ab. B, anti-FAK immunoprecipitates were probed with an Ab that recognizes FAK that is phosphorylated on Tyr397 (pY397) or with the 4G10 anti-Tyr(P) Ab before being reprobed with an anti-FAK Ab. C, anti-FAK immunoprecipitates were probed sequentially with Abs that recognize FAK that is phosphorylated on either Tyr576 or Tyr577 before being reprobed with an anti-FAK Ab. The relative levels of Pyk2 and FAK phosphorylation were determined by quantifying band intensities using ImageJ, normalizing the values to the total amount of Pyk2 or FAK in the same lane, and expressing the values (mean ± S.E. for three experiments) relative to the Pyk2 or FAK phosphorylation levels in unstimulated vector control cells (=1). *, p < 0.05 by Student's one-tailed paired t test.
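The quantification workflow in this legend (per-lane normalization of each phospho-band to total Pyk2 or FAK, scaling relative to the unstimulated vector control, and a one-tailed paired t test across three experiments) is straightforward to express in code. The sketch below uses hypothetical band intensities in place of the actual ImageJ densitometry values and simplifies the design to a single stimulated condition per cell line; it assumes numpy and scipy are available.

```python
# Sketch of the legend's quantification: per-lane normalization, scaling to the
# vector control, and a one-tailed paired t test. Intensities are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical band intensities from three independent experiments:
phospho_vector = np.array([1200.0, 980.0, 1410.0])   # p-Pyk2, vector control cells
total_vector   = np.array([3000.0, 2800.0, 3100.0])  # total Pyk2, same lanes
phospho_rapgap = np.array([520.0, 430.0, 600.0])     # p-Pyk2, RapGAPII cells
total_rapgap   = np.array([2900.0, 2750.0, 3050.0])  # total Pyk2, same lanes

norm_vector = phospho_vector / total_vector          # normalize within each lane
norm_rapgap = phospho_rapgap / total_rapgap

# Express relative to the vector control mean (set to 1, as in the legend)
rel_vector = norm_vector / norm_vector.mean()
rel_rapgap = norm_rapgap / norm_vector.mean()

# One-tailed paired t test (H1: phosphorylation is lower in RapGAPII cells)
t_stat, p_two_sided = stats.ttest_rel(rel_rapgap, rel_vector)
p_one_sided = p_two_sided / 2 if t_stat < 0 else 1 - p_two_sided / 2
print(f"RapGAPII relative phosphorylation: {rel_rapgap.mean():.2f} "
      f"± {stats.sem(rel_rapgap):.2f} (S.E.), one-tailed p = {p_one_sided:.3f}")
```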
Rap Activation Is Important for BCR-induced Phosphorylation of Pyk2 but Not for BCR-induced Phosphorylation of FAK-When A20 cells were stimulated with anti-Ig Abs while in suspension, BCR clustering induced tyrosine phosphorylation of both Pyk2 and FAK although to a much lesser extent than when the cells were plated on ECM (Fig. 2). Because integrin engagement is likely to be minimal when the cells are in suspension, this may reflect integrin-independent BCR signaling events. Therefore we asked whether Rap activation was important for integrin-independent phosphorylation of Pyk2 and FAK by the BCR. When we kept vector control and RapGAPII-expressing A20 cells in suspension and stimulated them with soluble anti-Ig Abs, blocking Rap activation completely abrogated BCR-induced phosphorylation of Pyk2 at Tyr402 and Tyr579/Tyr580 (Fig. 6A). In contrast, blocking Rap activation did not impair the ability of the BCR to increase tyrosine phosphorylation of FAK, as judged using the 4G10 anti-Tyr(P) Ab, or more specifically phosphorylation of FAK at Tyr397 (Fig. 6B). Thus both BCR-induced (Fig. 6A) and integrin-induced (Fig. 5A) Pyk2 phosphorylation required Rap activation, whereas integrin-induced FAK phosphorylation was dependent on Rap activation (Fig. 5B), but BCR-induced FAK phosphorylation was Rap-independent (Fig. 6B).

The Role of Rap Activation in the Phosphorylation of Pyk2 and FAK Corresponds to a Requirement for Actin Dynamics-Because Rap activation is required for maximal BCR-induced increases in polymerized F-actin in A20 cells (17), we hypothesized that Rap might regulate the phosphorylation of Pyk2 and FAK via its ability to promote actin polymerization or stabilize actin filaments. To test this, we pretreated A20 cells with latrunculin A, a drug that prevents the addition of actin monomers to existing actin filaments, thereby leading to a loss of F-actin. Confocal microscopy showed that a 30-min treatment with latrunculin A led to a nearly complete loss of F-actin in A20 cells (data not shown). In the presence of latrunculin A, anti-Ig-induced phosphorylation of Pyk2 at Tyr402 and Tyr579/Tyr580 was almost completely blocked both when the cells were stimulated in suspension and when they were stimulated while on ECM (Fig. 7A). Similar results were obtained using cytochalasin D (data not shown), another drug that leads to the loss of F-actin. An intact actin cytoskeleton was not required for other BCR signaling events such as phosphorylation of Erk or Akt (Fig. 7B). Importantly latrunculin A did not block BCR-induced Rap1 activation (Fig. 7C), consistent with the idea that F-actin acts downstream of Rap activation to promote Pyk2 phosphorylation.
For FAK phosphorylation, the requirement for an intact actin cytoskeleton paralleled the requirement for Rap activation. When A20 cells were stimulated with anti-Ig while in suspension, BCR-induced FAK phosphorylation was unaffected by blocking Rap activation (Fig. 6B) or by disrupting the actin cytoskeleton with latrunculin A (Fig. 7D). In contrast, when the cells were stimulated while on ECM, BCR/integrin-induced FAK phosphorylation was significantly reduced by disrupting the actin cytoskeleton with latrunculin A (Fig. 7D) and by blocking Rap activation (Fig. 4B). Thus a Rap- and F-actin-dependent pathway links integrins, but not the BCR, to FAK phosphorylation.
FIGURE 6. Rap activation is important for BCR-induced tyrosine phosphorylation of Pyk2 but not FAK. Vector control and RapGAPII-expressing A20 cells were stimulated in suspension with 20 µg/ml anti-IgG for the indicated times. For unstimulated controls (−), A20 cells were left in suspension for 30 min without being stimulated. A, anti-Pyk2 immunoprecipitates (ippt) were probed with either the 4G10 anti-Tyr(P) (P-Tyr) Ab, an Ab against Pyk2 that is phosphorylated on Tyr402, or an Ab against Pyk2 that is phosphorylated on Tyr579/Tyr580. The blots were then stripped and reprobed with a Pyk2 Ab. B, anti-FAK immunoprecipitates were probed with either the 4G10 anti-Tyr(P) Ab or an Ab against FAK that is phosphorylated on Tyr397. The blots were then stripped and reprobed with a FAK Ab. Band intensities were normalized to the amount of total Pyk2 or FAK for each sample and then expressed as the relative phosphorylation (mean ± S.E. for three experiments) compared with that for unstimulated vector control cells (=1). *, p < 0.05 by Student's one-tailed paired t test. The values for FAK phosphorylation in vector and RapGAPII-expressing cells were not significantly different by this test.

The ability of latrunculin A and cytochalasin D to block BCR-induced Pyk2 phosphorylation could reflect a requirement for actin filaments, which may act as signaling platforms, or a requirement for the dynamic assembly and disassembly of actin filaments. To distinguish these possibilities, we used jasplakinolide, a drug that prevents actin filament disassembly (55). When A20 cells were stimulated in suspension, jasplakinolide treatment completely inhibited BCR-induced phosphorylation of Pyk2 while having no effect on BCR-induced Rap1 activation (Fig. 7E). Thus both disrupting actin filaments and stabilizing actin filaments inhibited the Rap-dependent phosphorylation of Pyk2 by the BCR. In contrast, BCR-induced phosphorylation of FAK in cells that were kept in suspension did not require Rap activation and was unaffected by either latrunculin A (Fig. 7D) or jasplakinolide (Fig. 7E).
B Cell Spreading Requires Pyk2/FAK Kinase Activity-We have shown that Rap activation is important for BCR/integrin-induced tyrosine phosphorylation of Pyk2 and FAK (Figs. 4 and 5) and for BCR- and integrin-induced B cell spreading (6,17). This suggests that activated Rap may promote B cell spreading at least in part by facilitating the phosphorylation-dependent activation of Pyk2 and FAK. Indeed knocking down the expression of either Pyk2 or FAK reduced B cell spreading (Fig. 3C). To specifically address the role of Pyk2 and FAK kinase activity in BCR/integrin-induced B cell spreading, we used PF-431396, a potent and highly selective pyrimidine-based inhibitor of both Pyk2 and FAK (40). Consistent with the idea that the tyrosine phosphorylation of Pyk2 and FAK involves an initial autophosphorylation or transphosphorylation step, treating A20 cells with PF-431396 blocked anti-Ig-induced tyrosine phosphorylation of Pyk2 and FAK when the cells were stimulated in suspension (Fig. 8A) and when they were stimulated on ECM (Fig. 8B). The phosphorylation of Pyk2 and FAK induced by clustering LFA-1 with plate-bound Abs was also inhibited by PF-431396 (Fig. 8B). PF-431396 treatment was not cytotoxic as judged by 7-amino-actinomycin D staining (data not shown) and did not reduce the ability of the BCR to stimulate Erk phosphorylation or overall protein tyrosine phosphorylation (Fig. 8A), which is dependent on the activation of both Src family kinases and the Syk tyrosine kinase. Thus, PF-431396 appeared to selectively inhibit BCR-induced tyrosine phosphorylation of Pyk2 and FAK. Importantly this correlated with a significant inhibition of A20 cell spreading. PF-431396 treatment significantly reduced the number of A20 cells that developed an elongated, spread morphology after being stimulated with anti-Ig Abs while on fibronectin (Fig. 8C). The spreading of A20 cells plated on immobilized anti-LFA-1 Abs was also significantly reduced by PF-431396 treatment (Fig. 8D). Thus the kinase activity of Pyk2 and/or FAK is required for both BCR/integrin- and integrin-induced B cell spreading.
DISCUSSION
The binding of antigens by B cells often occurs in the context of integrin engagement. Integrin-dependent cell spreading enhances the ability of B cells to contact antigens, and integrin signaling may synergize with BCR signaling to promote both B cell spreading and activation. The Pyk2 and FAK kinases are key regulators of cell morphology, and in this report we showed that the kinase activities of Pyk2 and FAK are important for BCR/integrin-induced B cell spreading. Moreover we showed that integrins enhance the ability of the BCR to phosphorylate Pyk2 and FAK on their auto/transphosphorylation sites, the initial step in the activation of these kinases. Finally we showed that both Rap activation and actin dynamics were critical for BCR/integrin-induced phosphorylation of Pyk2 and FAK.

FIGURE 7. Rap-dependent phosphorylation of Pyk2 and FAK requires actin dynamics. A20 cells in suspension or plated on ECM were pretreated with 10 µM latrunculin A or an equivalent volume of DMSO for 30 min before being stimulated with 20 µg/ml soluble anti-IgG for the indicated times. For unstimulated controls (−), A20 cells were kept in suspension or plated on collagen/fibronectin ECM for 30 min and then left unstimulated for another 30 min before being lysed. A, anti-Pyk2 immunoprecipitates (ippt) were sequentially probed with an Ab that recognizes Pyk2 that is phosphorylated at Tyr402, an Ab that recognizes Pyk2 that is phosphorylated at Tyr579/Tyr580, the 4G10 anti-Tyr(P) (P-Tyr) Ab, and an anti-Pyk2 Ab. B, cell lysates were assayed for phosphorylation of Akt (P-Akt) and Erk (P-Erk) as in Fig. 2B. C, a GST-RalGDS fusion protein was used to selectively precipitate the active GTP-bound form of Rap1, which was detected by immunoblotting with a Rap1 Ab.
We had shown previously that integrin engagement enhances BCR-induced Pyk2 phosphorylation (17), and we have now shown that the same is true for FAK phosphorylation. Moreover by clustering integrins with Abs, we showed that integrin signaling was sufficient to induce tyrosine phosphorylation of Pyk2 and FAK in B cells. Thus signaling by antigen-clustered BCR complexes and ligand-bound integrins can have additive effects on the phosphorylation of Pyk2 and FAK. This highlights the ability of integrins to act as co-stimulatory receptors that collaborate with lymphocyte antigen receptors. Pyk2 and FAK appeared to be selective targets of the BCR/integrin collaboration as integrin engagement did not enhance BCR-induced phosphorylation of other signaling proteins such as Erk, Akt, and paxillin.
We also showed that activation of the Rap GTPases was critical for BCR/integrin signaling to induce the phosphorylation of Pyk2 and FAK on their auto/transphosphorylation sites as well as tyrosine residues in their activation loops. Although Rap-GTP likely contributes to Pyk2 and FAK phosphorylation by activating integrins on B cells (17), we found that activated Rap also acts downstream of the BCR to promote Pyk2 phosphorylation and downstream of integrins to promote the phosphorylation of Pyk2 and FAK. The active GTP-bound form of Rap binds multiple effector proteins that promote actin polymerization and the stabilization of F-actin polymers (38). Many of the downstream consequences of Rap activation may therefore reflect its role in reorganization of the actin cytoskeleton. Indeed we found that the Rap-dependent steps in Pyk2 and FAK phosphorylation were also blocked by actin-disrupting drugs. This suggests that Rap-GTP promotes Pyk2 and FAK phosphorylation via its ability to remodel the actin cytoskeleton. Rap1 activation was not dependent on actin dynamics, suggesting that the requirement for actin remodeling lies downstream of Rap activation.
Although Pyk2 phosphorylation has been shown to require an intact actin cytoskeleton in a number of cell types (21), how this contributes to Pyk2 phosphorylation is not clear. Phosphorylation of Pyk2 on Tyr402 may involve Pyk2 dimerization and subsequent transphosphorylation (51). Rap-dependent actin polymerization could create a cytoskeletal platform that promotes Pyk2 dimerization. However, we found that treating B cells with the actin-stabilizing agent jasplakinolide also prevented tyrosine phosphorylation of Pyk2, indicating that polymerized F-actin is not sufficient to support receptor-induced Pyk2 phosphorylation. Dynamic remodeling of the actin cytoskeleton may be required for efficient Pyk2 dimerization. Alternatively Pyk2-dependent phosphorylation in vivo may require cycles of actin polymerization and depolymerization that regulate either the kinase activity of Pyk2 or the accessibility of its catalytic site.
In contrast to Pyk2, Rap activation and actin dynamics were required for integrin-induced FAK phosphorylation but not for BCR-induced FAK phosphorylation in A20 B lymphoma cells. For integrin-induced FAK phosphorylation, Rap activation was required for the initial step in FAK activation, phosphorylation of Tyr397, an event that is initiated by transphosphorylation and that can be amplified by Src family kinases (56). How Rap activation and F-actin contribute to integrin-induced FAK Tyr397 phosphorylation is not clear. Our microscopy data suggest that FAK constitutively co-localizes with integrins in B cells. Rap activation and actin polymerization could therefore contribute to the recruitment and/or stabilization of other proteins that regulate FAK Tyr397 phosphorylation. In adherent cells that form focal adhesions, integrin activation results in the recruitment of talin to the integrin α and β chain cytoplasmic domains (57). This allows FAK to interact with paxillin and undergo autophosphorylation. At the same time, activation of Src family kinases by protein-tyrosine phosphatase α increases the phosphorylation of FAK at Tyr397. Further work is required to determine whether Rap and F-actin promote integrin-dependent FAK phosphorylation by regulating these steps in B cells. Interestingly Rap activation and F-actin were not required for BCR-induced phosphorylation of FAK when the cells were in suspension, a situation in which there is minimal integrin engagement. FAK has been reported to associate constitutively with the Src family kinase Lyn and with the BCR in WEHI-231 B lymphoma cells (33). FAK phosphorylation could therefore be a proximal Rap-independent signaling event that is initiated by the BCR.
A key finding was that Pyk2 and FAK are important for B cell spreading that is initiated by BCR/integrin co-stimulation or by integrin clustering. This is consistent with Pyk2 and FAK being downstream targets of Rap because blocking Rap activation also prevents B cell spreading (6,17). Knocking down the expression of either Pyk2 or FAK in A20 B lymphoma cells reduced the ability of these cells to undergo cell spreading, whereas PF-431396, a dual specificity inhibitor of the kinase activities of both Pyk2 and FAK, substantially inhibited A20 cell spreading. This suggests that both Pyk2 and FAK contribute to the ability of A20 B lymphoma cells to undergo cell spreading. Moreover the use of PF-431396 showed that the kinase activities of Pyk2 and FAK were critical for B cell spreading.
Although it is not known how Pyk2 and FAK promote B cell spreading, these kinases may coordinate the activation of Rac, Cdc42, and RhoA, GTPases that control cytoskeletal organization. In T lymphocytes, Pyk2 binds Vav (50), an exchange factor that activates Rac. Both Pyk2 and FAK can interact with the RhoA activator p190RhoGEF (58), and in fibroblasts Pyk2 associates with Wrch1, a Cdc42-like GTPase that promotes the formation of filopodia (59). Pyk2 and FAK can also bind the p85 subunit of phosphoinositide 3-kinase following integrin ligation (60,61). Phosphatidylinositol 3,4,5-trisphosphate produced by phosphoinositide 3-kinase activates Vav and promotes Rac-dependent actin polymerization and cytoskeletal rearrangement. Pyk2 and FAK can also bind and phosphorylate the scaffolding proteins p130 Cas and paxillin, which can then recruit the Rac activators DOCK180 and PAK-interacting exchange factor (PIX), leading to Rac-dependent membrane ruffling (61).
An interesting observation was that when B cells were activated with LPS plus IL-4, Pyk2 levels decreased, but FAK levels increased significantly. B cells activated in this manner resemble antigen-activated germinal center (GC) B cells, which proliferate within lymphoid organ follicles and undergo somatic hypermutation of their Ig genes. These GC B cells then compete for limiting amounts of antigen that are displayed on the surface of follicular dendritic cells, which provide the B cells with survival signals. GC B cells interacting with follicular dendritic cells in vivo exhibit a spread morphology with multiple membrane processes (62,63). This presumably increases their ability to detect antigens on the surface of the follicular dendritic cell. The activation-induced increase in FAK expression may reflect a switch from the motile phenotype of a circulating B cell to the more adhesive phenotype of an activated GC B cell. FAK expression and activation are associated with sustained adhesion, at least in B cell progenitors (34). A number of adhesion molecules including the α6 integrin are up-regulated in activated GC B cells (64,65), and gene expression profiling has shown that FAK mRNA levels are elevated in GC B cells (66). Thus, the increased expression of FAK after B cell activation may be part of a proadhesion gene expression program in which FAK promotes integrin-dependent adhesion and cell spreading, which facilitate BCR-antigen interactions that provide survival signals for GC B cells. Similarly the change in Pyk2 mRNA splicing in activated B cells may allow Pyk2 to interact with additional proteins that control cell adhesion or cytoskeletal reorganization. In summary, we have shown that Pyk2 and FAK are downstream targets of the Rap GTPases that play an important role in B cell spreading, a process that contributes to B cell activation.
"Biology"
] |
Aesthetic and Occlusal Rehabilitation Using a Telescopic Denture.
Rehabilitating the occlusion of a patient with multiple missing posterior teeth may be challenging, especially when the remaining teeth are malaligned with loss of occlusal vertical dimension. A telescopic denture can be an excellent treatment alternative. In this case, the patient requested an aesthetic maxillary denture with no visible metal clasps when smiling. Hence, two telescopic crowns were placed on the anterior abutment teeth serving as the retentive components of the maxillary cobalt-chromium removable partial denture. Additional retention was obtained from the posterior abutment teeth. The patient was satisfied with the final restored occlusion and appearance.
Introduction
There are numerous treatment options available for patients who require replacement of multiple missing teeth. In cases where only a few malpositioned teeth remain in the arch, removable partial dentures (RPDs) or implant-supported prostheses are usually the alternatives offered [1]. An RPD is a cost-effective and acceptable treatment modality for replacing long edentulous spans. A telescopic denture uses the existing abutment teeth as retainers, where these additional attachments serve to increase the retention and stability of the prosthesis [2]. A telescopic denture is defined as "an overdenture which is a dental prosthesis that covers and is partially supported by natural teeth, natural tooth roots, and/or dental implants" [3]. The term telescopic denture refers to a type of prosthesis that includes double crowns as retainers or attachments. These retainers consist of two crowns: a primary, or inner, crown, which is cemented to the abutment, and a secondary, or outer, crown, which is attached to the denture. Many other names are used to describe similar types of prostheses, such as a hybrid removable denture, an overlay prosthesis, and the Marburg double crown system [4]. The purpose of this article is to present a clinical case in which a telescopic denture was fabricated on the maxillary arch to improve aesthetics and mastication. A short review of the laboratory aspects is discussed as well.
Case Presentation
A systemically healthy, 51-year-old male requested a set of dentures to replace his missing teeth. He had had multiple teeth extracted over the past six years and claimed that they were non-restorable. He had never had any form of replacement during his period of edentulism. He had difficulties in chewing, as only one upper tooth was in contact with the opposing teeth. He wished to have a set of dentures that could improve his chewing ability and provide satisfactory aesthetics without any visible metal wires or clasps. Extraoral examination revealed asymmetrical lips with a lack of lip support (Figure 1). The existing maxillary teeth were 17, 13, 11, and 26; the existing mandibular teeth were 31, 41, 42, and 43. Initial intraoral views and the dental panoramic radiograph are presented in Figure 2 and Figure 3, respectively. The vertical dimension of occlusion (VDO) was collapsed, with a freeway space of 6 mm. The only occluding teeth were 11 with 41 and 42. Teeth 13 and 11 were diagnosed with asymptomatic irreversible pulpitis with asymptomatic apical periodontitis and were indicated for non-surgical endodontic therapy. Secondary caries without pulpal involvement was noted on tooth 26, and the tooth was eventually restored with a milled crown. During the provisionalization phase, interim acrylic maxillary and mandibular dentures were issued to restore and test the increased vertical dimension (Figure 4). A diagnostic wax-up denture was used as a guide during the preparation so as to achieve adequate tooth reduction (Figures 5a-5b). Later, telescopic crowns with parallel mesial, distal, and labial surfaces were placed on teeth 13 and 11 (cobalt-chromium) (Figure 5c). The final impression for the telescopic denture was taken with light-body and regular-body polyvinylsiloxane impression material. The telescopic cobalt-chromium framework was tried in with satisfactory retention and stability. The maxillo-mandibular relationship (MMR) was recorded in centric relation (Figure 6). Acrylic teeth were set up and tried in to assess the occlusion and aesthetics. Bilateral group function occlusion was achieved upon right and left excursion, with even contacts on the anterior prosthetic teeth during protrusive movements. A putty index was fabricated over the labial surface of the arranged acrylic teeth, acting as a template to ensure similar teeth arrangement after porcelain placement. The porcelain was layered over the area of teeth 11, 12, and 13 of the telescopic denture (Figure 7) using A3, A2, and transparent incisal feldspathic powder (IPS InLine®, Ivoclar Vivadent, Schaan, Liechtenstein) and fired in the porcelain furnace (Programat P500, Ivoclar Vivadent). After final glazing, the acrylic teeth were re-arranged following the putty index, and the denture was processed accordingly. The maxillary denture was delivered thereafter, and the patient was satisfied with both the aesthetic and functional outcome of the rehabilitation (Figure 8). A follow-up appointment revealed satisfactory oral hygiene and prosthesis maintenance. The mandibular denture was kept in acrylic, as the patient was keen on implant placement in the near future when he had sufficient funds. The pre- and postoperative six-month comparison is shown in Figure 9.
Discussion
A telescopic denture is indicated when a few unfavorably distributed abutment teeth remain within the arch [5]. In this case, both anterior abutment teeth, 11 and 13, were labially tilted in a Class II relationship. Prescribing crowns could improve the angulation of the abutment teeth, but clasp placement would still be mandatory on these teeth to provide adequate retention and resistance for the RPD. In addition, the anterior abutment teeth were both extensively carious and required non-surgical root canal therapy. Hence, they could undergo further tooth reduction to accommodate both the primary telescopic coping and the secondary telescopic denture without risking their vitality [6]. With inner copings designed parallel to the proximal surfaces of the posterior teeth, a single path of insertion was achieved. Unlike extra-coronal precision attachments, these telescopic abutments were easily accessible, allowing effective home care and oral hygiene maintenance [7]. The position of the upper posterior abutments was compatible with the design of the partial denture; they were therefore not used as telescopic abutments. In this case, a bilateral group function occlusal scheme was adopted so as to distribute loads evenly during left and right excursive movements. In addition, uniform occlusal contacts play an important role, as they will oppose the future implant-supported prosthesis.
Many double-crown systems have been reported in the literature. The first telescopic crown was patented by Dr. J. B. Beers in 1873 and was later refined by Langer (1980), who categorized such crowns into three systems [8]. Cylindrical inner crowns provided remarkable retention and aesthetics in the marginal area. However, they were difficult to fabricate, and the constant friction led to an increased wear rate [9]. Conical crowns with 6° tapering were widely used, as they were less harmful to the abutment teeth and supporting tissues.
However, they were not as retentive as cylindrical crowns. Another telescopic coping described in the literature is the resilient crown, in which only the cervical half conforms to the cylindrical shape. The authors claimed that this design harmonized with tissue elasticity, distributed occlusal forces better, and hence increased the survival rate of the abutment teeth [10]. However, in a retrospective study, the survival of double-crown-retained RPDs favored the cylindrical design over conical and resilient crowns, with a 90% success rate after seven years [11]; only 78.5% of conical and resilient crowns survived. Hence, the majority of the inner coping surfaces in this case were made parallel to the determined path of insertion to provide the necessary retention.
Retention of the telescopic denture also relies heavily on the frictional surfaces. The components used for the inner crowns and the secondary denture should have high shear strength and resistance to wear. An in-vitro study reported that telescopic crowns made of a non-precious metal offered better retentive forces than those made of high-noble metal or zirconia [12]: cobalt-chromium inner telescopic crowns paired with a cobalt-chromium RPD provided retention forces as high as 12.5 N, compared with gold inner crowns (7.4-9.6 N). Another study, comparing telescopic dentures with RPDs retained via precision attachments and with RPDs retained by conventional Akers clasps, found that telescopic dentures produced significantly more homogeneous occlusal force distribution among the abutments than the other two groups. It was therefore concluded that telescopic dentures provide optimum support to the edentulous ridge and can prevent unwanted torque forces on the abutment teeth [13].
Porcelain layering over the cobalt-chromium framework demonstrated high fracture strength and excellent aesthetics but may be difficult to repair if a complication such as porcelain chipping occurs. Shade matching of the porcelain build-up with the adjacent acrylic teeth was also challenging in this case. A suggested alternative is the use of composite as the veneering material over the framework, but its fracture resistance and wear rate are questionable, as no long-term evidence is available for this method [14].
The survival of telescopic-retained RPDs (T-RPD) was 100% after 5 years [15]. No statistically significant difference was found between conventional RPDs (94.5%) and T-RPDs, but complications arising from conventional RPDs were more difficult to rectify, with higher rates of periodontitis and caries. Loss of cementation of the primary crowns was the most common complication in T-RPDs, and it is easily handled clinically. In cases where abutment teeth serving as telescopic retainers are lost or extracted, the denture can still function as usual without compromising occlusion or aesthetics; the intaglio surface of the RPD over the lost abutment can simply be filled with composite.
Conclusions
A telescopic denture can be considered a viable treatment option for patients with unevenly distributed and/or malaligned abutment teeth within the arch. These RPDs can readily address the aesthetic and retention problems commonly seen with conventional RPDs. In addition, long-term maintenance of oral hygiene is relatively simple compared with RPDs utilizing precision attachment systems.
Additional Information Disclosures
Human subjects: Consent was obtained from all participants in this study.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2,441.6 | 2020-03-01T00:00:00.000 | ["Medicine", "Materials Science"] |
Weighting and imputation comparison in small area estimation
In this paper, different methods of nonresponse adjustment for the totals of small area domains are examined. To improve the quality of the estimates, a linear model with random parameters at the domain level is used. The empirical results are based on Monte Carlo simulations with repeated samples drawn from a finite population constructed from the Lithuanian survey on short-term statistics on services.
Introduction
In real surveys, we are faced with nonresponse. Nonresponse not only means less efficient estimates because of the reduced sample size, but also prevents standard complete-data methods from being applied directly to the data. There are several methods to correct for the consequences of unit nonresponse [1,6], and they are examined in this paper for small areas. The focus is on small areas because there are relatively few impartial comparisons of nonresponse treatments there [4]. For small area estimation it is also important to choose the right estimator and model [2,3]. Previous research [4] showed that the linear model with random parameters at the domain level is a good choice, but which estimator to use is still an open question. That is why two different estimators (the generalized regression estimator [8] and the empirical best linear unbiased predictor [5]) are investigated in this paper.
Population
Let $U = \{1, 2, \ldots, k, \ldots, N\}$ denote a finite population with $N$ units. This population is divided into $D$ nonoverlapping domains $U_d$, $d = 1, \ldots, D$, consisting of $N_d$ units. A sample $s$ of size $n$ is selected from the population $U$, $s = \{s_1, s_2, \ldots, s_n\} \subset U$. Each unit $k$ has an inclusion probability $\pi_k = P(k \in s)$ and a sampling weight $w_k = \pi_k^{-1}$. For various reasons there are missing units in the sample $s$. Let the response probability of each unit be $\kappa_k = P(k \in s^{(r)} \mid k \in s)$, where $s^{(r)} \subset s$ is the responding sample.
Let us denote by $y$ a study variable, whose values $y_k$ are known only for the elements of the response sample $s^{(r)}$, and by $x = (x_1, x_2, \ldots, x_J)'$ a vector of auxiliary variables, whose values $x_k$ are known for all units in $U$. Let $t_d = \sum_{k \in U_d} y_k$ be a domain total, the parameter of interest. It is assumed that the number of elements in each domain $U_d$, $d = 1, \ldots, D$, is known, but the domains are not used in the sample design. This means that the sample part in each domain, $s \cap U_d$, has a random size.
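Under full response, the design-weighted (Horvitz-Thompson) estimator of a domain total simply sums the weighted study values over the sampled part of each domain. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def ht_domain_totals(y, weights, domain, D):
    """Horvitz-Thompson estimates of the domain totals t_d = sum_{k in U_d} y_k:
    t_hat_d = sum over sampled units in domain d of w_k * y_k."""
    totals = np.zeros(D)
    for d in range(D):
        mask = domain == d            # sampled units belonging to domain d
        totals[d] = np.sum(weights[mask] * y[mask])
    return totals

# y, weights, domain are arrays over the sample s; domain codes run 0..D-1.
```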
Model and estimators
Let the values $y_1, \ldots, y_N$ of the study variable $y$ be realizations of independent random variables $Y_1, \ldots, Y_N$, which satisfy the following general linear model [7]:
$$Y_k = \beta_0 + \sum_{j=1}^{J} \beta_j x_{jk} + \sum_{d=1}^{D} u_d I_{dk} + \varepsilon_k, \qquad k = 1, \ldots, N. \qquad (1)$$
Here $\beta_0$ and $\beta_j$, $j = 1, \ldots, J$, are regression coefficients, $u_d$, $d = 1, \ldots, D$, are random parameters related to the corresponding domains, and $I_{dk}$, $d = 1, \ldots, D$, $k = 1, \ldots, N$, are domain indicators ($I_{dk} = 1$ if $k \in U_d$ and $I_{dk} = 0$ otherwise). The errors $\varepsilon_k$ and the random parameters $u_d$ are assumed to be independent and identically distributed Gaussian random variables with mean 0 and variances $\sigma^2$ and $\sigma_1^2$, respectively. The model parameters are estimated by the restricted maximum likelihood (REML) method with incorporated weights [7], and the predicted values $\hat{y}_k = \hat{\beta}_0 + \sum_{d=1}^{D} \hat{u}_d I_{dk} + \sum_{j=1}^{J} \hat{\beta}_j x_{jk}$ are computed for all $k \in U$. These predicted values are used in two different domain total estimators: the generalized regression (GREG) estimator (2) [8] and the empirical best linear unbiased predictor (EBLUP) (3) [5]. In the case of a significant nonresponse rate, these estimators should be corrected.
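The fitting step can be sketched with statsmodels' MixedLM, which performs REML estimation of a random-intercept model. This is a plain REML fit; the weighted REML of [7] is not reproduced here, and the names are illustrative:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_domain_model(y, X, domain):
    """Fit Y_k = b0 + x_k'b + u_d + e_k by REML with a random intercept per
    domain, then return predicted values yhat_k = b0_hat + x_k'b_hat + u_d_hat."""
    exog = sm.add_constant(X)                    # prepend the intercept column
    model = sm.MixedLM(y, exog, groups=domain)   # random intercept u_d by default
    res = model.fit(reml=True)                   # REML estimation
    # One random-intercept estimate per domain label:
    u = pd.Series({g: re.iloc[0] for g, re in res.random_effects.items()})
    yhat = exog @ res.fe_params + u.loc[domain].to_numpy()
    return res, yhat
```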
Methods for nonresponse adjustment
Weighting and imputation are the two main methods used to correct for bias due to nonresponse and to make efficient use of data.
Weighting
With the weighting method, the original inclusion probabilities $\pi_k$ are deflated by the response probabilities $\kappa_k$, and new sampling weights $w_k = (\pi_k \kappa_k)^{-1}$, $k \in U$, are obtained. The true response probability is never known in practice, so there are several methods to estimate it. One of them is the weighting-class method, where the response sample $s^{(r)}$ and the sample $s$ are divided into $G$ mutually exclusive and homogeneous (with respect to the response rate) groups $s_g^{(r)}$ and $s_g$, $g = 1, \ldots, G$, with the same response probability estimate for each unit in the same group:
$$\hat{\kappa}_k = \frac{\sum_{l \in s_g^{(r)}} w_l}{\sum_{l \in s_g} w_l}, \qquad k \in s_g.$$
Another method for estimating the response probability is to apply a logistic regression model [1]:
$$\hat{\kappa}_k = \frac{\exp(x_k' \hat{B})}{1 + \exp(x_k' \hat{B})}.$$
Here $\hat{B}$ is the maximum likelihood estimator of the parameters of the logistic regression model based on the data $(z_k, x_k)$, $k \in s$, where $z_k = 1$ if $k \in s^{(r)}$ and $z_k = 0$ otherwise. When weighting methods for nonresponse adjustment are applied in the estimation of the domain total, estimators (2) and (3) are corrected by replacing the sampling weights $w_k$ with $\hat{w}_k = (\pi_k \hat{\kappa}_k)^{-1}$, $k \in s$, not only in equations (2) and (3), but also in the calculation of the parameters of model (1).
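A minimal sketch of the weighting-class adjustment, assuming the weighted within-group response rate written above; names are illustrative:

```python
import numpy as np

def weighting_class_adjust(w, responded, group):
    """Estimate response probabilities within weighting classes as the
    design-weighted response rate of each class, then return the
    nonresponse-adjusted weights w_hat = w / kappa_hat."""
    kappa = np.empty_like(w, dtype=float)
    for g in np.unique(group):
        m = group == g
        kappa[m] = w[m & responded].sum() / w[m].sum()
    return w / kappa  # downstream, only entries with responded == True are used
```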
Imputation
Another method to adjust for nonresponse is to impute a value for the missing unit. There are many imputation methods, which can be divided into three main groups: 1) Logical (deductive) imputation. It is part of the editing process and is used when a reliable, explicit solution exists under appropriate assumptions.
2) Real donor imputation. Here the imputed value is borrowed from another respondent. The most common real donor imputations are nearest neighbor and random donor imputation. For nearest neighbor imputation, a missing value $y_k$ is imputed by choosing the value $y_l$ that corresponds to the value $x_l$ closest to $x_k$. The closest value is determined by the distance between any two units, $d_{kl} = \sum_{j=1}^{J} (x_{kj} - x_{lj})^2$, $k \in s \setminus s^{(r)}$, $l \in s^{(r)}$ (see the sketch after this list). For random donor imputation, the data are divided into homogeneous groups by a suitable method and the donors are chosen randomly within these groups.
3) Model-based imputation. Here the imputed value is calculated using a model whose coefficients are estimated from the response sample $s^{(r)}$. The most common method is regression imputation.
Imputation methods can also be classified as single imputation (when one value is imputed in place of the missing one) or multiple imputation. Multiple imputation produces several imputed datasets, and the mean over the imputed datasets is used instead of the missing value.
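As referenced in item 2) above, nearest neighbor donor imputation can be sketched with the squared-distance criterion defined there (a minimal NumPy version; names are illustrative):

```python
import numpy as np

def nearest_neighbor_impute(x, y, responded):
    """Impute each missing y_k with the y_l of the respondent whose auxiliary
    vector x_l minimizes d_kl = sum_j (x_kj - x_lj)^2."""
    donors = np.where(responded)[0]
    y_star = y.astype(float).copy()
    for k in np.where(~responded)[0]:
        d = np.sum((x[donors] - x[k]) ** 2, axis=1)   # squared distances to donors
        y_star[k] = y[donors[np.argmin(d)]]
    return y_star
```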
Let us denote by $y^*$ a new variable whose values $y_k^*$ are equal to $y_k$ if $k \in s^{(r)}$, or to $y_k^{imp}$ if $k \in U \setminus s^{(r)}$. Here $y_k^{imp}$ can be a single value, if single imputation is used, or the mean over the imputed datasets, if multiple imputation is used. Then estimators (2) and (3) can be written with $y^*$ in place of $y$.
Simulation study
For the simulation experiment, a real population from Statistics Lithuania is used, taken from the quarterly survey on short-term statistics on services. The population includes $N = 1660$ enterprises that completed the questionnaire in the first quarter of 2008. Each record contains the following variables: region of residence, income for the first quarter of 2008, number of employees in the same quarter, value-added tax (VAT), and the classification of economic activities in the European Community (NACE). Income is chosen as the study variable $y$. Let $y_k$ denote the value of $y$ for the $k$th enterprise, $k = 1, \ldots, N$. The parameter of interest is the total income in each region (the domain total $t_d$). There are $D = 14$ regions of interest. To improve the quality of the estimators, 7 auxiliary variables were used: the number of employees ($x_1$), VAT ($x_2$), and indicators of NACE ($x_3$-$x_7$). 1000 independent samples of 80 elements each are drawn from the population by simple random sampling without replacement (SRS).
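The repeated-sampling setup can be sketched as follows (a minimal version; the seed is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)          # illustrative seed
N, n, REPS = 1660, 80, 1000
samples = [rng.choice(N, size=n, replace=False) for _ in range(REPS)]
# Under SRS every unit has inclusion probability pi_k = n/N,
# so each sampled unit carries the design weight w_k = N/n = 20.75.
```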
The GREG and EBLUP estimators are used to estimate the domain totals. Each estimate is calculated several times using different methods of nonresponse adjustment. These variants are denoted by adding two letters, LL ∈ {WC, LR, RD, NN, CR, DR}, and a number, R ∈ {0, 1, 2}, at the end of the estimator's name (GREG-LLR or EBLUP-LLR). The meaning of these abbreviations is described below. The weighting-class method (WC) and the logistic regression model (LR) are applied to estimate the response probability. The performance of different imputation methods (random donor (RD), nearest neighbor (NN), regression imputation using the common model (CR) [2], and regression imputation using the model with domain intercepts (DR) [2]) is also investigated. For the weighting-class, random donor, and nearest neighbor methods, units are grouped by the number of employees and the NACE indicator. For the logistic regression model, the auxiliary vector $x$ with values $x_k = (1, x_{1k}, x_{2k}, x_{3k}, x_{5k})'$ is used. For regression imputation, the mean of 5 imputed datasets is applied. For the common model, the auxiliary vector $x$ with values $x_k = (1, x_{1k}, x_{2k}, x_{3k}, x_{4k}, x_{5k}, x_{6k})'$ is used. For the model with domain intercepts, the auxiliary vector $x$ with values $x_k = (I_{1k}, \ldots, I_{Dk}, x_{1k}, x_{2k}, x_{3k}, x_{4k}, x_{5k}, x_{6k})'$ is applied. Here $I_{dk} = 1$ if unit $k$ belongs to domain $d$ and $I_{dk} = 0$ otherwise, $d = 1, \ldots, D$.
Each of these nonresponse adjustment methods is applied to two populations constructed from the real population (denoted by R = 0) by generating different response rates. Response rates of 89% and 79% are generated for the first (R = 1) and second (R = 2) populations, respectively. These rates reflect the response rate in the actual survey (where the response rate depends on the region, the number of employees, and NACE).
Conclusions
The results in the tables show that the real donor imputations (nearest neighbor and random donor) are the worst methods, since they increase the bias and MRRMSE more than the other methods. The weighting methods (weighting-class and logistic regression) yield the best results in small area estimation with nonresponse. Unlike the other nonresponse adjustment methods, they depend far less on the nonresponse rate. | 2,633.4 | 2010-12-21T00:00:00.000 | ["Economics"] |
Advances in lithium niobate thin-film lasers and amplifiers: a review
Abstract. Lithium niobate (LN) thin film has received much attention as an integrated photonic platform due to its rich photoelectric properties, on the basis of which various functional photonic devices, such as electro-optic modulators and nonlinear wavelength converters, have been demonstrated with impressive performance. As an important part of the integrated photonic system, the long-awaited lasers and amplifiers on the LN thin-film platform have recently seen a series of breakthroughs and important progress. In this review, the research progress of lasers and amplifiers realized on LN thin-film platforms is surveyed comprehensively. Specifically, the progress on optically pumped lasers and amplifiers based on rare-earth-ion doping of LN thin films is introduced, and some important parameters and current limitations are discussed. In addition, the implementation schemes and research progress of electrically pumped lasers and amplifiers on LN thin-film platforms are summarized, and the advantages and disadvantages of optically and electrically pumped LN thin-film light sources are analyzed. Finally, applications of LN thin-film lasers and amplifiers together with other on-chip functional devices are envisaged.
Introduction
Lithium niobate (LN) exhibits rich physical effects, such as the electro-optic, nonlinear, photorefractive, piezoelectric, and pyroelectric effects, along with a wide transparency window (0.35 to 5 μm), and has therefore attracted extensive attention since the 1960s. In the early stages, research on integrated LN photonic devices was mainly based on titanium-diffused or proton-exchanged LN waveguides. These waveguides have large mode sizes (~10 μm) and weak refractive index contrast (~0.1), limiting the performance of integrated devices and the development of large-scale integration. Fortunately, LN on insulator (LNOI) prepared by the smart-cut process has been successfully developed over the past two decades, giving LNOI optical waveguides small mode sizes (~1 μm²) and high refractive index contrast (~0.7) and revolutionizing research on integrated photonics.1-3 Recently, micro-nano fabrication processes such as electron beam lithography with argon ion beam etching (EBL-Ar⁺ etching),4 photolithography-assisted chemo-mechanical etching (PLACE),5 and ultraviolet lithography (UVL) with Ar⁺ etching6 have been developed. Microcavities on the LNOI platform with high quality factors (10⁷ to 10⁸)7,8 and waveguides with low transmission loss (0.027 dB/cm)9,10 have been successfully demonstrated. In addition, owing to the excellent electro-optic coefficient of LN and the strong overlap between the electric field and the optical mode afforded by the LNOI platform, electro-optic modulators operating with CMOS-compatible driving voltages and 3-dB bandwidths up to 100 GHz have been realized on LNOI.11,12 Their overall performance exceeds or is comparable to counterparts on other mature integrated photonics platforms.13,14 Meanwhile, efficient periodically poled lithium niobate (PPLN) wavelength converters15-18 and optical frequency combs19-21 have also been demonstrated on this platform. With the tremendous advances in LNOI passive devices and applications, LNOI-based photonics is regarded as an ideal platform for realizing multifunctional integrated photonic circuits.22-26 On the other hand, active optical devices, such as lasers and amplifiers, on the LNOI platform have also long been awaited as an essential part of integrated photonics. Due to its inherent indirect bandgap structure, it is difficult for LN to achieve electroluminescence. A simple and feasible scheme is to dope rare-earth ions (REIs) into LN as a gain medium to realize light sources and amplifiers under optical pumping. In addition, lasers and amplifiers for LNOI integrated photonics can also be realized by hybrid integration of commercial semiconductor lasers or amplifiers, or by heterogeneous integration of III-V gain materials with an electrical pumping scheme. This paper reviews the recent research on lasers and amplifiers developed on the LNOI platform. Figure 1 shows the research road map for LNOI light sources and amplifiers, which also outlines the structure of this paper. In Sec. 2, the research progress of lasers and amplifiers based on REI-doped LN is introduced. Specifically, the common methods of REI doping in LN crystals and the spectral characteristics of doped crystals are discussed first. Subsequently, the important parameters for characterizing microlasers are discussed.
Then, work on multimode microdisk lasers, multimode microring lasers, single-mode lasers, and amplifiers on REI-doped LN thin films is presented, together with the performance challenges and potential improvement schemes for REI-doped LNOI lasers and amplifiers. In Sec. 3, the electrically pumped III-V lasers and amplifiers on the LNOI platform are introduced, as well as applications such as the laser transmitter and the tunable Pockels laser enabled by the electro-optic effect of LN. The advantages and challenges of LNOI III-V lasers are then analyzed in comparison with REI-doped lasers. In Sec. 4, the application prospects of LNOI-based lasers and amplifiers combined with other LNOI functional devices, such as sensing, broadband optical communication, and frequency conversion, are explored. Finally, in Sec. 5, the whole review is briefly summarized, and future research and development of LNOI-based lasers and amplifiers are envisaged.
Optically Pumped Lasers and Amplifiers
The indirect bandgap structure of the LN crystal makes it challenging to realize electrically pumped luminescence. Nevertheless, photoluminescence based on REI doping is a simple and effective alternative that is widely favored by researchers. For example, various REI-doped waveguide lasers and amplifiers based on bulk LN crystals have been demonstrated successfully.35-38 Combined with the advantages of strong mode localization and low transmission loss, active devices based on REI-doped LNOI platforms are expected to achieve better performance. In this section, the recent progress in REI doping, spectroscopic analysis, microlasers, and amplifiers based on REI-doped LNOI platforms is summarized.
Rare-Earth Ion Doping and Spectroscopic Analysis
Roughly speaking, there are three main ways to dope REIs into LN crystals. The first is to add an REI oxide during growth of the LN crystal by the Czochralski method, obtaining an LN single crystal with a uniform ion concentration,39,40 as shown in Figs. 2(a) and 2(d). The second method is thermal diffusion, mainly through vacuum deposition of an REI layer followed by selective doping through high-temperature diffusion,37 as shown in Fig. 2(b). In thermal diffusion doping, the depth profile of the REIs evolves from a complementary-error-function (erfc)-like to a semi-Gaussian distribution [Fig. 2(e)].41 Due to the low diffusion rate of REIs, the diffusion temperature must be close to the Curie temperature of LN, generally as high as 1100°C, and a diffusion time of up to 150 h is required; the diffusion time depends on the crystal phase of the LN substrate. The third method is to dope LN crystals with REIs by ion implantation,42 as shown in Fig. 2(c). At room temperature, ions are accelerated to MeV energies by a Van de Graaff accelerator or a similar electrostatic accelerator and implanted into the LN crystal. The ion concentration displays a nearly Gaussian distribution [Fig. 2(f)], and high-temperature annealing above 1000°C is needed to eliminate the defects caused by implantation and restore the quality of the single-crystal LN.
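For orientation, the as-implanted depth profile mentioned above is often approximated to first order by a Gaussian centered at the projected range. The sketch below uses this textbook model with illustrative range and straggle values that are not taken from the cited works:

```python
import numpy as np

def implant_profile(depth_nm, dose_cm2, Rp_nm, dRp_nm):
    """First-order model of an as-implanted profile: a Gaussian centered at the
    projected range Rp with straggle dRp, normalized to integrate to the dose."""
    depth, Rp, dRp = depth_nm * 1e-7, Rp_nm * 1e-7, dRp_nm * 1e-7  # nm -> cm
    return dose_cm2 / (np.sqrt(2 * np.pi) * dRp) * np.exp(
        -((depth - Rp) ** 2) / (2 * dRp ** 2))

# Illustrative numbers only: a 1.14e14 ions/cm^2 fluence with hypothetical
# Rp ~ 100 nm and dRp ~ 30 nm gives a peak concentration of ~1.5e19 ions/cm^3.
peak = implant_profile(100.0, 1.14e14, 100.0, 30.0)
```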
The above three doping methods are relatively mature technologies after long development and are expected to reach industrialization, but each has its advantages and disadvantages. For example, compared with diffusion and ion-implantation doping, crystal-growth doping can achieve a high doping concentration and a more uniform ion distribution, and is thus promising for realizing high-power and low-transmission-loss lasers and amplifiers. In contrast, for thermal diffusion and ion implantation, the maximum concentrations of doped erbium ions are ~0.5%43 and ~0.20%,44 respectively (molar fractions; hereinafter, percentages referring to doping concentrations denote molar fractions unless specified otherwise), and the ion distribution is erfc-like or Gaussian, which places certain restrictions on applications that require high doping concentrations, such as amplifiers. However, thermal diffusion and ion implantation are capable of local doping, so the doping region can be controlled flexibly, avoiding additional loss in the passive devices of integrated optical chips; they are therefore ideal schemes for building locally doped optical gain chips.
In recent years, drawing on the incorporation of REIs into bulk LN crystals, many research groups have studied REI-doped LNOI integrated photonics.45,46 For example, Dutta et al. first prepared a 300-nm-thick LN thin film from a 0.1% thulium-doped X-cut LN bulk crystal by the smart-cut process, and then fabricated a grating coupling structure and a single-mode waveguide by an EBL dry-etching process, as shown in Fig. 3(a).47 To explore the optical properties of thulium ions in the thin-film waveguide, the absorption spectra, emission spectra, and fluorescence lifetime were measured at a low temperature of 3.6 K. Compared with a thulium-doped bulk crystal, the thulium ions in the smart-cut thin film displayed virtually identical optical properties, indicating that the smart-cut process preserves the optical properties of REIs in thin films well. Notably, the preparation of REI-doped LNOI is compatible with wafer-scale integration, paving the way to on-chip active photonic systems and applications. Furthermore, an atomic frequency comb memory was realized in thulium-doped LNOI waveguides by the same doping and fabrication process, with a storage spectral bandwidth of up to 100 MHz and an optical storage time of up to 250 ns.53 Recently, Wang et al. explored the optical coherence of an erbium-doped smart-cut LN thin film prepared from bulk erbium-doped LN. Experimentally, an optical coherence time of 180 μs was obtained by fitting the exponential decay of the echo signal strength as a function of time delay.54 The coherence time, also referred to as the coherence lifetime, reflects the homogeneous broadening linewidth of the REI spectrum and thus indicates suitability for quantum information processing.55,56 The obtained coherence time is comparable to the bulk-crystal value, indicating that the erbium-doped smart-cut LN thin-film platform shows promise for on-chip quantum storage. At the same time, Rüter et al. characterized the spectral properties of a neodymium-doped LN thin film fabricated from an LN substrate diffusion-doped with neodymium ions before the smart-cut process, demonstrating an opportunity to realize active gain areas with locally varying doping concentrations.57 In addition, Wang et al. also studied ion-implantation doping of a prefabricated LNOI microcavity. First, a microring coupled with a waveguide was fabricated by the EBL-Ar⁺ etching process. Then, erbium ions were doped by ion implantation with an implantation energy of 350 keV and a fluence of 1.14 × 10¹⁴ ions/cm².48 The optical properties could be partially recovered, and the average Q value of the microring cavity was 5 × 10⁵ after post-implantation annealing at 550°C for 5 h. A scanning electron microscope (SEM) image of the fabricated devices and the simulated ion density distribution are shown in Fig. 3(b). The fluorescence lifetime of the erbium ions, defined as the time constant of the exponential decay of ions from the excited level via spontaneous emission or non-radiative relaxation and thus reflecting the local environment of the ions, was measured as 3.2 ms at low temperature. This fluorescence lifetime is longer than the 2 ms observed in bulk LN crystals, reflecting possible differences in the local environment of the erbium ions, the refractive index, and the Li/Nb ratio of the material compared with doped bulk LN.42
At the same time, resonance-enhanced fluorescence decay caused by coupling between the ions and the cavity was observed; this is referred to as the Purcell effect58,59 and is expected to enable high-efficiency light emitters.60 The enhancement factor, defined as the Purcell factor, can be expressed as 3Qλ³/(4π²V), where Q and V are the quality factor and mode volume of the microcavity resonance mode, respectively.59 In this work, the average Purcell factor was calculated as 3.8. In the same period, Pak et al. adopted a similar method to incorporate ytterbium ions into fabricated LNOI microring resonators and centimeter-long waveguides.49 Figure 3(c) shows a schematic diagram of the ytterbium-doped LNOI waveguide structure and the simulated distribution of implanted ytterbium ions, with a peak concentration of 0.0002%. The doped device was annealed at 500°C for 8 h under a nitrogen atmosphere to heal the lattice damage caused by ion implantation. The loaded Q of the microring was measured as 2 × 10⁵ at 908 nm. Photoluminescence characterization showed that the lifetime of ytterbium ions under resonant pumping is slightly shortened compared with nonresonant pumping, which is attributed to Purcell-enhanced emission with a Purcell factor of 0.45. At the same time, Xia et al. directly doped ytterbium ions into an X-cut LN thin film with a thickness of 470 nm by ion implantation.50 After post-annealing at a slightly higher temperature of 650°C, no apparent film damage was observed. Then, ytterbium-doped microcavities with a radius of 7 μm and a Q of ~2.4 × 10⁵ were fabricated by EBL and a chemo-mechanical etching protocol. A layer of indium tin oxide was deposited on the microcavity as an electrode to tune its resonant frequency electrically. The device schematic is shown in Fig. 3(d). Due to the ion-cavity coupling, a shortening of the ytterbium ions' lifetime was demonstrated, with a Purcell factor of 10.24. The coupling between the REIs and the microcavity based on electro-optic tuning can be controlled with a 5-μs switching speed over a 160-GHz range. In addition, the detection of a single ytterbium ion was carried out based on electro-optic dynamic tuning, providing a platform for generating a deterministic single-photon source.
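The Purcell factor expression quoted above is easy to evaluate. The sketch below reproduces the order of magnitude of the reported average factor of 3.8 with a hypothetical mode volume (the actual V of the device is not given here), taking the wavelength in the medium per the usual convention:

```python
import numpy as np

def purcell_factor(q_factor, wavelength_um, mode_volume_um3):
    """Purcell factor F_P = 3*Q*lambda^3 / (4*pi^2*V), as quoted in the text."""
    return 3.0 * q_factor * wavelength_um**3 / (4.0 * np.pi**2 * mode_volume_um3)

# Illustrative only: Q = 5e5 (the annealed microring above), a 1531-nm emission
# wavelength divided by an assumed index n ~ 2.21, and a hypothetical mode
# volume of ~3300 um^3 chosen purely to show the order of magnitude.
print(purcell_factor(5e5, 1.531 / 2.21, 3300.0))   # ~3.8
```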
In addition, Yang et al. hybrid-integrated erbium-doped yttrium orthosilicate with a concentration of 50 ppm (parts per million) onto an LNOI microring by flip-chip bonding, as shown in Fig. 3(e).51 The fluorescence lifetime of the erbium ions was measured as 11.5 ms, consistent with the bulk-material value of 11.4 ms. At the same time, resonance broadening caused by ion-cavity coupling was also observed, and the coupling intensity factor was assessed to be 0.36. Moreover, an erbium-ion-implanted LN crystal directly integrated on a silicon photonic chip was reported by Jiang et al., as shown in Fig. 3(f); the optical properties of the erbium ions in the integrated structure were investigated, and a modification of the photoluminescent emission was observed.52 Limited by the differential thermal expansion of the layers in the LNOI wafer, the tolerable annealing temperature (~500°C) of the LNOI wafer is far below the ~1100°C required for thermal-diffusion and ion-implantation doping of REIs into bulk LN crystals. Therefore, incorporating REIs into the LNOI platform by ion implantation after the smart-cut process significantly limits the doping concentration and ion distribution, and naturally the optical properties of the REIs. Fortunately, incorporating REIs into the LNOI platform before the smart-cut process is feasible; this method preserves the desirable optical properties of the bulk crystal and is compatible with scalable planar fabrication. Xu et al. studied the refractive index, erbium-ion spectrum, and other material properties of an erbium-doped LN film prepared by the smart-cut process and found them close to those of erbium-doped bulk LN, indicating that high-quality REI-doped LN films can be obtained by the smart-cut process.61 This provides a path to on-chip LNOI lasers and amplifiers by choosing an appropriate doping concentration for the LN bulk sliced into thin film. Accordingly, a number of groups have recently focused on realizing LNOI lasers and amplifiers based on REI-doped LN thin films.
Whispering Gallery Microcavity Lasers Based on REI-Doped LNOI
A laser has three elements: the pump source, the gain medium, and the resonator. The gain medium and pump wavelength are determined by the choice of REI. The resonant cavity of a traditional laser is mainly composed of two or more mirrors. Compared with traditional resonators, whispering gallery mode (WGM) microcavities with circular structures can confine light by continuous total internal reflection for a long time within an ultrasmall mode volume, leading to strong light-matter interactions. Benefiting from a high quality factor (Q) and a small mode volume (V), WGM microcavities are regarded as an ideal platform for realizing ultralow-threshold lasers with a small footprint and narrow linewidth.62
Important parameters for characterizing microlasers
Before introducing the research progress of microlasers based on REI-doped LNOI, we discuss some important parameters for characterizing microlaser performance.
Lasing threshold. The lasing threshold is the pump power at which the gain provided by the gain medium just equals the loss of the laser cavity. The pump threshold power (P_th) of a WGM microcavity laser can be expressed approximately in the form of Eq. (1),66 in which n_l and Q_l refer to the effective refractive index and quality factor of the laser signal mode, λ_l and λ_p indicate the signal and pump wavelengths, V_p is the mode volume of the pump mode, η is the pump efficiency, h and c refer to the Planck constant and the speed of light in vacuum, and σ_em and τ correspond to the stimulated emission cross section and fluorescence lifetime of the gain ions, respectively. As can be seen from Eq. (1), for a given gain medium, the effective ways to reduce the laser threshold are to improve the pump efficiency, increase the Q of the resonator, and reduce the mode volume, which is also a significant advantage of conducting laser research with WGM resonators. Specifically, a high Q can be obtained by improving the fabrication process. For example, a maximum Q of an LNOI WGM cavity up to 10⁸ has been demonstrated based on ion-free preparation of the LN thin film, which is close to the upper limit set by the intrinsic absorption of bulk LN and shows great potential for ultralow-threshold lasers.67 The mode volume can be reduced by reducing the size or thickness of the resonant cavity. However, too small a diameter or thickness may increase the radiation loss of the cavity, so the balance between Q and V must be considered in practice to maximize Q/V. In addition, optimizing the pump efficiency involves factors such as the coupling between the tapered fiber (or waveguide) and the resonator, the overlap of the pump and signal modes in the gain medium, and the absorption of the pump by the gain medium.
Conversion efficiency and maximum laser output power. The conversion efficiency of a laser is the rate of change of the generated signal power with respect to the pump power above the lasing threshold, reflecting how efficiently pump light is converted into signal light during laser operation. The maximum laser output power is the highest power the laser can deliver, which indicates the power level available for subsequent work. To optimize these two parameters, signal extraction must also be optimized; that is, high pump efficiency and high signal extraction efficiency should be ensured simultaneously. However, because the pump and signal lie in different bands, the coupling conditions between a tapered fiber or straight waveguide and the resonator differ for the two. In experiments, the maximum laser output power is generally observed when the pump light is in the overcoupled regime. To realize high conversion efficiency and a low threshold, it is often necessary to design broadband coupling that serves both efficient pumping and signal extraction.68 In addition, unlike when optimizing the threshold, a large resonator size or cavity length is required to accumulate gain and obtain intense laser output.
Laser linewidth. The laser linewidth usually refers to the full width at half maximum of the signal mode in the laser spectrum and is an important parameter reflecting the coherence and noise of the laser. According to the Haken-Lax-Scully formula,69 the linewidth of a laser operating above threshold is set by the Planck constant h, the emitted frequency ν and power P of the laser, and the linewidth Δν of the cavity mode, scaling in proportion to hν(Δν)²/P. Increasing the resonator quality factor and the output power is therefore an effective way to reduce the laser linewidth. A discussion of other factors affecting laser performance, such as doping concentration, can be found in a previous review of WGM microcavity lasers.63 The research progress of multimode microdisk lasers, multimode microring lasers, and single-mode microlasers based on the REI-doped LNOI platform is introduced in the following sections.
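The quantum-limited linewidth scaling described above can be sketched numerically. The prefactor below assumes the conventional Schawlow-Townes form, which may differ by a constant from the Haken-Lax-Scully expression cited in the text:

```python
import numpy as np

H = 6.62607015e-34  # Planck constant, J*s
C = 2.99792458e8    # speed of light, m/s

def quantum_limited_linewidth(wavelength_m, cavity_linewidth_hz, power_w):
    """Schawlow-Townes-type estimate Dnu ~ pi*h*nu*(Dnu_c)^2 / P: the linewidth
    shrinks quadratically with the cavity linewidth (i.e., with higher Q) and
    inversely with the output power, as stated in the text."""
    nu = C / wavelength_m
    return np.pi * H * nu * cavity_linewidth_hz**2 / power_w

# Illustrative numbers: a 1550-nm mode with Q = 1e6 (cavity linewidth ~193 MHz)
# at 2 uW of output power gives a quantum limit of roughly 7.5 kHz.
dnu = quantum_limited_linewidth(1.55e-6, C / 1.55e-6 / 1e6, 2e-6)
```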
Multimode microdisk and microring lasers
Wang et al. reported an erbium-doped LNOI laser based on a 200-μm-diameter microdisk fabricated on an erbium-doped chip with a doping concentration of 1% by the PLACE process.70 The Q factor was measured to be 1.8 × 10⁶ at 1563 nm by scanning the transmission spectrum. Under a 976-nm laser pump, a laser signal in the 1560-nm band and the accompanying strong green upconversion fluorescence were observed, as shown in Fig. 4(a). The threshold of the laser signal was below 400 μW, and the conversion efficiency was deduced as 1.92 × 10⁻⁴ by fitting the signal power data. In addition, a blueshift (redshift) of the signal wavelength at lower (higher) pump power, at a rate of -17.03 pm/mW (10.58 pm/mW), was observed. A possible explanation is that the photorefractive and thermo-optic effects of LN coexist: the photorefractive effect dominates at low pump power, while the thermo-optic effect dominates at high power. Subsequently, Liu et al. fabricated a 150-μm-diameter microdisk by focused-ion-beam milling on an erbium-doped LNOI wafer with a doping concentration of 1%.27 Laser emission in the 1550-nm band with a linewidth of ~0.1 nm was observed under laser pumping at 974 and 1460 nm, respectively. Due to the thermal effect of the LN microdisk, a redshift of the emission wavelength with increasing pump power was observed for both pump bands, with better thermal stability at the 1460-nm pump. The threshold for the 974-nm pump was measured as 2.99 mW, with a conversion efficiency of 4.12 × 10⁻⁶. At the same time, Luo et al. reported the batch preparation of erbium-doped LNOI microdisk lasers using UVL-Ar⁺ etching and an additional chemo-mechanical polishing (CMP) step.71 The threshold and conversion efficiency of the dominant signal mode in the 1530-nm band were deduced as 292 μW and 6.5 × 10⁻⁷, respectively, as shown in Fig. 4(c).
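The thresholds and conversion efficiencies quoted throughout this section are typically obtained by a linear fit of signal power versus pump power above threshold. A minimal sketch (assuming the fit is restricted to above-threshold points; names are illustrative):

```python
import numpy as np

def fit_threshold(pump_uw, signal_uw):
    """Fit the above-threshold portion of signal power vs. pump power with a
    line: the slope is the conversion (slope) efficiency and the x-intercept
    the lasing threshold."""
    slope, intercept = np.polyfit(pump_uw, signal_uw, 1)
    return -intercept / slope, slope   # (P_th in uW, efficiency)
```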
Beyond the 1550-nm band, microlasers operating at other wavelengths have many unique applications. For example, owing to the negligible water absorption in the 1060-nm band, ytterbium-ion emission can be applied to biosensing. In addition, compared with erbium, ytterbium has a simple energy-level structure and a higher absorption cross section in the 980-nm band, giving it substantial potential to improve the output power and conversion efficiency of microlasers. Zhou et al. first reported a microdisk laser based on an ytterbium-doped LNOI chip.72 With a continuous-wave pump at 984 nm, lasing signals in the 1030- and 1060-nm bands were observed over the increasing pump power range, as shown in Fig. 4(d). A threshold of 103 μW and a conversion efficiency of 0.53% for the collected signal were derived by linearly fitting the signal power at different pump powers. Compared with erbium-doped LNOI microlasers, the conversion efficiency was significantly improved due to the high quantum efficiency of ytterbium ions.74-76 Meanwhile, owing to the strong nonlinearity of LN, second-harmonic generation (SHG) of the pump laser and sum-frequency generation (SFG) between the pump and the emitted signal were also observed, as shown in Fig. 4(e).
Subsequently, Luo et al. also reported 1060-nm-band microdisk lasers with high conversion efficiency.73 Based on an ytterbium-doped LNOI wafer with a doping concentration of 1.5%, microdisk cavities were fabricated in batches using UVL-Ar⁺ etching and the CMP process. The loaded Q factors at 970.20 and 1502.68 nm were measured as 5.56 × 10⁴ and 4.0 × 10⁶, respectively. With a 980-nm-band laser pump, the emission signal in the 1060-nm band was detected by an optical spectrum analyzer (OSA). A blueshift of the signal mode wavelength at a rate of 7.4 pm/μW with increasing pump power was observed, owing to the photorefractive effect of the LN crystal. The power and linewidth of the dominant signal mode at different pump powers are shown in Fig. 4(f). An S-shaped curve of the pump-power-dependent signal power was observed, indicating that the collected signal is indeed a lasing signal. The threshold and conversion efficiency were deduced as 21.19 μW and 1.36%, respectively. Benefiting from the high doping concentration and high Q factors of the fabricated microdisk, as well as effective signal extraction, this conversion efficiency is the highest reported for REI-doped LNOI microlasers. This work significantly improved microlaser performance and shows the potential of the LNOI platform for biosensing applications. The REI-doped LNOI lasers described above are based on microdisk cavities, which are mainly pumped and monitored by a fiber taper and thus suffer from unstable coupling and inconvenient further integration with other on-chip functional devices. Microring cavities coupled with an on-chip waveguide can overcome these limitations. Furthermore, microring cavities usually have a smaller mode volume than a microdisk cavity of the same radius, which means a higher optical power density in the cavity at the same pump power, leading to a lower laser threshold.
Luo et al. prepared a microring cavity coupled with a waveguide using the EBL-Ar⁺ etching technique on a Z-cut erbium-doped LNOI wafer with a doping concentration of 0.1%.77 The loaded Q at 1531.8 nm was measured as 4.27 × 10⁵ (intrinsic Q ~ 4.84 × 10⁵), corresponding to a waveguide loss of 0.86 dB/cm. Under a 980-nm-band continuous-wave pump, lasing was realized in the 1530-nm band. The lasing threshold was estimated to be ~20 μW, and the conversion efficiency was deduced as 6.61 × 10⁻⁷ by linearly fitting the signal mode power data, as shown in Fig. 5(a). Benefiting from the small mode volume and high optical power density of the microring structure, the achieved threshold is an order of magnitude lower than that of the erbium-doped microdisk laser. To date, 20 μW is the lowest lasing threshold among reported REI-doped LNOI microlasers.
At the same time, Yin et al. fabricated a Z-cut erbium-doped LNOI microring cavity with 1% doping concentration and an undoped LNOI waveguide by the PLACE technique, and then vertically coupled the LNOI microring with the waveguide structure, as shown in the inset of Fig. 5(b).78 With a 980-nm laser pump, a broadband lasing signal in the 1550-nm band was observed at different pump powers. A lasing threshold of 3 mW was deduced by fitting the signal power data. By applying an external voltage to the electrodes integrated with the racetrack microcavity, the wavelength of the mode around 1533 nm was electrically tuned over a range of 0.2 nm with an electro-optic tuning coefficient of 0.33 pm/V, as shown in Fig. 5(c).
In addition, an integrated ytterbium-doped LNOI microring laser working in the 1060-nm band was recently demonstrated by Luo et al.79 Similar to the reported erbium-doped microring laser,77 the microring was fabricated using the EBL-Ar⁺ etching technique on an ytterbium-doped LNOI wafer with a 1.5% doping concentration. In experiments, multiple peaks in the range of 1056 to 1066 nm were observed when the pump wavelength was tuned into a microring resonance mode, as shown in Fig. 5(d). The maximum signal power was up to 6.44 μW at 1060.49 nm, a great improvement over the previously reported erbium-doped LNOI microlasers. The signal power and linewidth of the dominant mode were recorded with increasing pump power, as shown in Fig. 5(e). By fitting the linearly increasing portion of the signal power, the threshold and conversion efficiency were deduced as 59.32 μW and 5.45 × 10⁻⁴, respectively. Higher conversion efficiency is expected from improving the pump and signal extraction efficiencies by introducing a pulley waveguide coupling design. Owing to the integrated nature and stable performance of microring cavities, these microring lasers may find more practical applications.
Single-mode lasers
Due to the broadband gain of REIs, the REI-doped LNOI microcavity lasers introduced above generally operate in a multimode state, which is subject to spurious signals, random fluctuations, and instabilities, limiting their application scenarios. Therefore, single-mode lasers featuring monochromaticity, high stability, and controllable output wavelength have attracted much attention for practical applications such as optical communication and optical sensing. At present, there are four main ways to realize a single-mode laser. (1) Decreasing the size of the cavity to enlarge the free spectral range (FSR) and ensure only one resonant mode within the gain band.
(2) Designing narrowband distributed Bragg reflector (DBR) or distributed feedback (DFB) structures in the resonant cavity to achieve mode selection. (3) Cascading two or more cavities to realize mode selection via the Vernier effect. The specific mechanism is that subcavities with different FSRs (sizes) are coupled together; in such a system, a "supermode" forms in the coupled cavities where the resonances overlap, while other modes resonate in only one cavity and dissipate in the other, suffering greater loss. Supermodes with enlarged FSR and higher Q can therefore be selected, and combined with gain competition between the modes in the gain band, single-mode operation of the laser can be realized. (4) Spatially selective pumping to suppress the gain of high-order modes, or controlling the mode loss, to achieve single-mode lasing. Among these, reducing the resonator size increases the radiation loss of the light field in the cavity and thus the threshold power density of the laser, so this scheme is less commonly used. The following sections introduce the research progress of single-mode lasers based on the REI-doped LNOI platform. First, Gao et al. fabricated coupled microdisks, also referred to as a photonic molecule, on erbium-doped LNOI with 1% doping concentration using the PLACE technique.80 The coupled microdisks, with diameters of 29.8 and 23.1 μm, are separated by a gap of 0.48 μm. Under a 977.7-nm laser pump, single-mode laser emission at 1550.5 nm with a threshold of ~200 μW was realized using the inverse Vernier effect, as shown in the spectra in Fig. 6(a). It is worth noting that the pump was resonant in both microdisks, while the signal was mainly localized in the small microdisk during single-mode operation. In addition, a minimum signal linewidth of 348 kHz was characterized by measuring the laser frequency and phase noise with a Michelson interferometer composed of a fiber coupler.
Subsequently, to improve the integration and scalability of REI-doped LNOI single-mode lasers, Zhang et al. designed a microring photonic molecule with radii of 85 and 100 μm to achieve a single-mode laser, as shown in the inset of Fig. 6(b).28 The photonic molecule was fabricated by the EBL-Ar⁺ etching process on an X-cut erbium-doped LNOI chip with 0.1% doping concentration. In experiments, double resonance at 1531.6 nm for the two microring cavities was confirmed by the transmission spectra. Moreover, the FSR of the supermodes of the photonic molecule was enlarged to 11 nm in the 1550-nm band due to the Vernier effect. Non-supermodes were also observed in the transmission spectrum; they have a lower coupling depth due to greater loss and thus a higher lasing threshold. As a result, single-mode lasing was achieved in the supermodes. Figure 6(b) shows the emission spectra collected at a pump power of ~900 μW, with a side-mode suppression ratio (SMSR) of up to 26.3 dB. The threshold was estimated as ~200 μW by analyzing the signal power as a function of pump power.
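The enlarged supermode FSR can be estimated from the standard Vernier-effect formula. With an assumed group index of ~2.2 (illustrative, not taken from Ref. 28), the 85- and 100-μm rings give a value close to the reported 11 nm:

```python
import numpy as np

def ring_fsr_nm(radius_um, n_group, wavelength_nm=1550.0):
    """FSR of a ring resonator: FSR = lambda^2 / (n_g * L), with L = 2*pi*R."""
    circumference_nm = 2 * np.pi * radius_um * 1e3
    return wavelength_nm**2 / (n_group * circumference_nm)

def vernier_fsr_nm(fsr1, fsr2):
    """Supermode spacing of two coupled rings with nearly equal FSRs:
    FSR_eff = FSR1 * FSR2 / |FSR1 - FSR2| (standard Vernier estimate)."""
    return fsr1 * fsr2 / abs(fsr1 - fsr2)

f1, f2 = ring_fsr_nm(85, 2.2), ring_fsr_nm(100, 2.2)  # ~2.05 and ~1.74 nm
print(vernier_fsr_nm(f1, f2))                          # ~11.6 nm
```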
Soon afterward, to relax the tunability requirement on the pump light, Liu et al. demonstrated an erbium-doped LNOI single-mode laser based on a photonic molecule composed of a microdisk and a microring, as shown in the inset of Fig. 6(c).81 In this device, single-mode laser emission with an SMSR of 31.4 dB in the range of 1520 to 1570 nm was observed with a 974-nm laser-diode pump. Figure 6(c) displays the collected single-mode laser power as a function of pump power; the threshold and slope efficiency were deduced as 1.31 mW and 4.41 × 10⁻⁵, respectively. Moreover, the dependence of the output power and wavelength on temperature was investigated. With increasing temperature, an increase in output power and a redshift of the signal wavelength were observed, which may be due to a change in the cavity-mode resonance condition caused by the thermo-optic effect of LN.
During the same period, Xiao et al. designed a single-frequency erbium-doped LNOI laser based on a coupling structure composed of a short microring cavity with a diameter of 200 μm and a long cavity with a length of 1.2 cm.82 Figure 6(d) depicts the schematic diagram of the coupling structure and the operating principle of the single-frequency laser. The FSRs of the two cavities are F_a ~ 200 GHz and F_b ~ 10 GHz, respectively. Based on the Vernier effect, only a frequency that lies within the gain bandwidth and resonates in both cavities can oscillate. With a 1484-nm laser pump, single-frequency laser emission near 1531 nm with an SMSR of 31 dB was observed, as shown in Fig. 6(e). The linewidth of the single-frequency laser was measured as 1.2 MHz by a self-heterodyne method. A threshold of 13.54 mW and a slope efficiency of 1.45 × 10⁻⁴ were measured as well. The relatively high threshold may be due to the use of a broadband source (0.5 nm) as the pump.
Controlling the mode loss is also a popular way to achieve single-mode lasing, owing to the compact device size. For example, Li et al. demonstrated a single-mode laser based on a single microring resonator by regulating the mode loss.83 The microring resonator with a pulley waveguide was fabricated on a Z-cut 1% erbium-doped LNOI wafer using the EBL-Ar⁺ etching process. According to the simulation analysis, the four modes supported by the 2-μm-wide microring (TE00, TE10, TM00, and TM10) are shown in Fig. 7(a). Compared with the TE00 mode, the other modes (TE10, TM00, and TM10) have larger mode areas and overlap with the rough sidewall, and thus suffer higher scattering loss. As a result, the gain of all modes except TE00 was effectively suppressed. In the experiment, the designed single microring realized single-mode lasing at ~1531 nm with a 35.5-dB SMSR under a 1484-nm pump, assisted by gain competition, as shown in Fig. 7(b). The lasing signal power reached 2.1 μW, and a threshold of 14.5 mW and a conversion efficiency of 1.20 × 10⁻⁴ were inferred by linearly fitting the signal power data. In addition, a wavelength shift with increasing pump power caused by the photorefractive effect was observed. The linewidth of the single-frequency laser was measured as 0.9 MHz by a self-heterodyne approach.
Further, Lin et al. realized a single-frequency, ultranarrow-linewidth laser in a single microdisk by taking advantage of polygon modes with high quality factors and a sparse mode distribution.84 The microdisk, with a diameter of ~29.8 μm, was prepared on a Z-cut 1% erbium-doped LNOI wafer. In experiments, polygon modes for both the pump (~968 nm) and the signal (1550-nm band) were excited by adjusting the coupled tapered fiber to a proper position, as shown in the inset of Fig. 7(c). Because the signal polygon mode has a large FSR (11.5 nm) and overlaps spatially with the pump polygon mode, the gain of the conventional high-density WGMs was effectively suppressed. Accordingly, single-frequency lasing with a threshold of ~25 μW and a maximum SMSR of 37 dB was achieved within the gain band of the erbium ions, as shown in Fig. 7(c). An output power of up to 2 μW was obtained at a pump power of 20 mW. The microlaser linewidth was assessed to be as low as 322 Hz by heterodyning two separately pumped single-mode microlasers. Moreover, a microelectrode with a radius of 5 μm was fabricated on the microdisk to investigate wavelength tuning via the strong electro-optic effect of LN. As shown in Fig. 7(d), a linear tuning efficiency of 0.5 pm/V was achieved as the applied voltage was swept from -300 to 300 V. The demonstrated ultranarrow-linewidth microlaser should facilitate highly coherent applications on the LNOI integrated platform.
In addition, Liang et al. demonstrated a single-frequency microlaser based on an erbium-doped LNOI microring shaped from quarter Bezier curves.85 Signal spectra in the wavelength range of 1500 to 1600 nm showing single-frequency lasing were recorded with increasing pump power from a 976-nm laser, as shown in Fig. 7(e). The single-frequency lasing was probably due to mode-dependent loss and gain competition, as inferred by comparing the transmission spectrum of the microring with the amplified spontaneous emission (ASE) spectrum in the waveguide. By a similar mechanism, Zhu et al. observed single-frequency lasing with an SMSR of 29.12 dB in an electro-optically tunable erbium-doped LNOI microdisk, as shown in Fig. 7(f).86 In addition, the wavelength of the lasing mode was tuned continuously over a 45-pm range by applying voltages from -200 to 200 V.
With the continuous attention and efforts of researchers, microdisk, microring, and microdisk-microring coupled lasers have been realized on the REI-doped LNOI platform, and the operation of these lasers has been improved from multimode to single mode. Table 1 summarizes the main performance parameters of the REI-doped LNOI microlasers reported so far. Based on the current results, we discuss below the main limitations and potential solutions for the development of REI-doped LNOI lasers.
Low output power (μW-level) and conversion efficiency. At present, the conversion efficiency and output power of the reported lasers are still at a relatively low microwatt level in either multimode or single-mode operation, which hinders further integration of the laser with other functional devices. We believe there are several ways to improve the output power of lasers. (i) Increase the scale of the resonator. The reported works are based on WGM microcavities with micrometer-scale radii, which limits the optical gain available from such a small gain volume. Therefore, expanding the scale of the gain resonator is a promising route to higher laser output power. For example, high laser output power based on large-sized microdisk cavities 87 and long waveguides with Sagnac loop reflectors 88 has been reported recently; compared with previous REI-doped LNOI microlasers, the output power has been improved by an order of magnitude. It should be noted that increasing the cavity length will naturally bring about multimode resonance, which poses a challenge to single-mode operation of the laser. The mode screening mechanisms for realizing single-mode operation described above need to be considered in laser design. At the same time, optimizing the coupling between the bus waveguide (or tapered fiber) and the resonant cavity for both pump and signal bands, for example by introducing a bending waveguide coupling design 68 to ensure effective pumping and effective extraction of signal light, can also improve the output power and conversion efficiency of the laser. In addition, a waveguide integrated with a Bragg grating resonance structure can effectively increase the cavity length while ensuring single-mode operation. DBR and DFB lasers with high output power have been demonstrated on silicon-based integrated photonics platforms, 89-91 which can provide a reference for improving laser performance.
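To make the cavity-size trade-off concrete, the sketch below evaluates the ring-resonator free spectral range FSR = c/(n_g L_rt) for a few radii, under an assumed group index of 2.2 for an LNOI waveguide; the exact numbers are illustrative only.

```python
import math

C = 299_792_458.0   # speed of light, m/s
N_G = 2.2           # assumed group index of an LNOI waveguide

def fsr_ghz(round_trip_m: float) -> float:
    """Free spectral range of a ring: FSR = c / (n_g * L_rt)."""
    return C / (N_G * round_trip_m) / 1e9

for radius_um in (100, 1_000, 10_000):
    l_rt = 2 * math.pi * radius_um * 1e-6   # round-trip length, m
    print(f"R = {radius_um:>6} um -> FSR ~ {fsr_ghz(l_rt):7.2f} GHz")

# Larger cavities have smaller FSRs, so more longitudinal modes fall
# within the erbium gain band and single-mode operation becomes harder.
```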
(ii) Introduce a cladding pumping scheme. Referring to the development of fiber lasers, a cladding-pump design can effectively improve the pump efficiency of high-power diodes while ensuring single-mode operation of the signal light, which has played a key role in developing high-power fiber lasers. 92,93 Similarly, on the REI-doped LN gain platform, coating the gain device with appropriate cladding layers, such as silicon dioxide or silicon nitride, can also increase the pump power delivered to the gain structure. Unlike an optical fiber, the waveguide is not circularly symmetric, so careful design may be needed to ensure adequate overlap between the pump light and the gain ions. (iii) Co-dope with ytterbium ions. For erbium-doped fiber lasers working in the communication band, ytterbium co-doping is often used to increase the laser output power. 94,95 This is because ytterbium ions, as a sensitizer, can effectively transfer energy to erbium ions, improving their luminous efficiency. At the same time, for the LN thin-film platform, erbium and ytterbium ions can be conveniently co-doped during the incorporation of REIs, so this route is expected to improve the output power and conversion efficiency of erbium-doped LN thin-film lasers.
Optical pumping scheme. Because REIs serve as the gain medium, these lasers must be optically pumped, which limits their use in out-of-laboratory applications such as gas detection and biosensing. A practical solution is to integrate a commercial semiconductor laser as the pump source for the LNOI laser. For example, an electrically pumped laser and an REI-doped chip can be hybrid-integrated by flip-chip technology, which is expected to effectively improve the portability of the laser. Zhou et al. reported the first electrically pumped REI-doped LNOI laser by butt-coupling a laser diode chip with an erbium-doped LN gain chip. 96 In addition, electrically pumped REI-doped LN lasers can also be realized by heterogeneously integrating III-V materials on REI-doped LN chips to construct the pump laser. 97 It should be mentioned that, compared with electrically pumped lasers realized by direct hybrid or heterogeneous integration, REI-doped LNOI lasers benefit from the long excited-state lifetime of REIs, giving low signal noise and narrow linewidth, which is advantageous for applications such as optical coherent communication and quantum optics.
Wafer global doping. As mentioned in the previous introduction, LN thin film cannot tolerate the high-temperature conditions required by thermal diffusion or ion implantation to achieve high-concentration doping. Therefore, the REIs in the currently reported microlasers are mainly incorporated during the growth of bulk LN before ion slicing. However, one side effect of this doping method is that REIs are distributed over the entire LN thin-film wafer, which brings additional absorption loss and refractive index changes to passive devices integrated on the same chip, degrading their performance. There are several ways to realize local doping. (i) Before the LN thin film is formed by ion slicing, bulk LN wafers can be doped locally with REIs by thermal diffusion or ion implantation. Rüter et al. prepared a neodymium-doped LN thin film by thermal diffusion doping before ion slicing, confirming the feasibility of local doping by this method. 57 At the same time, it should be noted that incorporating REIs by thermal diffusion or ion implantation may pose a challenge to the quality of the sliced thin film. For example, Xu et al. reported that the thermal diffusion doping process increases the roughness and causes a slight deformation at the diffusion surface of the erbium-doped LN wafer. 61 An additional CMP step before ion slicing is expected to improve this surface quality. (ii) Integrate REI-doped LN thin film with undoped LN thin film by butt coupling to construct active-passive LN thin-film devices. For example, Zhou et al. reported tiling erbium-doped LN film and undoped LN film with ultraviolet curing adhesive and then preparing a monolithically integrated amplifier using a single continuous photolithography process on the active-passive chip. 98 A limitation of this method is that, because two different chips are integrated, their thicknesses may differ, resulting in additional loss at the optical interface between the active and passive chips. At the same time, the scalability and stability of large-scale integration are limited by the splicing of two independent chips. (iii) Hybridize REI gain materials on LN thin film. Another promising scheme for localized incorporation of REIs is to locally deposit materials containing REI gain media on LN thin film. For example, REI-doped TeO2 and REI-doped Al2O3 have been widely used in active integrated photonics for their high rare-earth solubility, large emission cross sections, and wafer-level deposition techniques. 99,100 One consideration in this design is that sufficient overlap between the patterned LN layer and the gain material is required to obtain large gain or power output, for example, by depositing the gain material into microtrenches to improve the gain performance of the device. 101,102 In addition, introducing an REI-doped loaded-waveguide design can simplify the fabrication of active devices based on LNOI. For example, silicon nitride, featuring low transmission loss, a refractive index close to that of LN, and CMOS compatibility, has been successfully applied on the LNOI platform. 103,104 Some other aspects still need to be explored: locking of the pump mode and encapsulation of the coupling region deserve experimental study to obtain high output power and stable laser operation during high-power pumping.
Moreover, the wavelength tunability of the reported single-mode lasers is also limited. The thermo-optic effect of LN can be utilized to achieve broadband wavelength tuning by integrating microheaters on the coupled resonators. 105 Furthermore, combined with the outstanding electro-optical properties of LN, the REI-doped LNOI laser provides a promising platform for fundamental physics research, such as PT-symmetry breaking, with natural advantages. 106,107
Amplifiers Based on REI-Doped LNOI
In addition to lasers, amplifiers can provide gain for on-chip signals and are therefore in broad demand for on-chip optical communication, nonlinear frequency conversion, and other applications. High-gain waveguide amplifiers based on silicon nitride and integrated silicon platforms have proven successful. 108,109 However, due to the weak optical confinement and nonuniform distribution of REIs in titanium-diffused channel waveguides, the gain performance of waveguide amplifiers based on bulk LN is generally low (<3 dB/cm). 110 Fortunately, as mentioned above, waveguides prepared from REI-doped LNOI can obtain strong mode localization and high, uniformly distributed doping concentration, which lays the foundation for large-scale integrated high-gain LNOI waveguide amplifiers. Consequently, research on REI-doped LNOI amplifiers has also attracted the attention of researchers. Zhou et al. demonstrated the first waveguide amplifier on a Z-cut 600-nm-thick erbium-doped LNOI chip with 1% doping concentration. 111 The 3.6-cm-long waveguide amplifier, with a spiral design to reduce the overall device size, was fabricated by the PLACE process. The signal gain at 1530 nm under different pump powers was measured with a fixed on-chip signal power of 19.64 nW using a bi-directional pumping scheme. Then, accounting for the propagation loss calibration, the internal net gain at the signal wavelength was obtained from G (dB) = 10 log10(P_on/P_off) − αL, where P_on and P_off are the measured signal powers in the pump-on and pump-off states and α and L denote the transmission loss coefficient at the signal wavelength and the corresponding waveguide length, respectively. As shown in Fig. 8(a), the internal net gain exhibits a rapid rise (small-signal gain) followed by a gradual approach to gain saturation as the pump power grows. A maximum internal net gain of 18 dB was achieved at a pump power of ∼40 mW. At the same time, strong green upconversion fluorescence was observed in the waveguide while the amplifier was operating, as shown in the inset of Fig. 8(a); this was investigated by Jia et al., who found a dual-color upconversion luminescence emission. 116 The signal gain at different wavelengths was measured and is consistent with the signal fluorescence spectrum, as shown in Fig. 8(b). In addition, the polarization dependence of the amplifier gain was studied under different polarization states of both pump and signal modes. The maximum gain was obtained when pump and signal modes both had TE polarization. This is probably because the TE modes in the Z-cut erbium-doped waveguide have larger absorption and emission cross sections for the 980-nm-band pump and the 1550-nm-band signal, respectively. The low transmission loss for TE polarization also contributes to the high gain. Additionally, the same group recently demonstrated a four-channel erbium-doped waveguide amplifier with a net gain of ∼8 dB at 1530 nm based on a monolithically integrated active-passive LNOI chip. 98 At the same time, Chen et al. fabricated a compact 5-mm-long waveguide amplifier using the EBL-Ar+ etching and CMP process on an erbium-doped LNOI with 0.5% doping concentration. 112 Under a 980-nm pump, the measured signal spectra for pump powers increasing from 0 to 21 mW are shown in Fig. 8(c). The maximum internal gain at 1531.6 nm is about 5.2 dB, corresponding to a net gain per unit length of >10 dB/cm.
The internal conversion efficiency was deduced to be up to 0.2% from η = 100% × (P_on − P_off)/P_pump, where P_pump is the pump power.
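The two figures of merit just defined are simple enough to compute directly; the sketch below implements them, with all input values hypothetical (chosen only to be of the same order as those reported above).

```python
import math

def internal_net_gain_db(p_on_w, p_off_w, alpha_db_per_cm, length_cm):
    """Internal net gain: the on/off signal ratio in dB minus the
    calibrated propagation loss alpha * L at the signal wavelength."""
    return 10 * math.log10(p_on_w / p_off_w) - alpha_db_per_cm * length_cm

def conversion_efficiency_pct(p_on_w, p_off_w, p_pump_w):
    """eta = 100% * (P_on - P_off) / P_pump."""
    return 100.0 * (p_on_w - p_off_w) / p_pump_w

# Hypothetical measurement on a 3.6-cm waveguide with 0.1 dB/cm loss:
print(internal_net_gain_db(2.0e-6, 25e-9, 0.1, 3.6))      # ~18.7 dB
print(conversion_efficiency_pct(2.0e-6, 25e-9, 40e-3))    # ~0.005 %
```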
In addition, they also demonstrated a 4.3-mm-long erbium-doped LNOI waveguide amplifier with a signal enhancement factor of 5.4 dB/cm at a low pump power of 3 mW. 61 Subsequently, Luo et al. reported on-chip erbium-doped LNOI waveguide amplifiers based on a similar, simplified fabrication process without the Cr film deposition and CMP steps. 113 The amplifiers consist of a compact straight waveguide with a length of 5 mm. The erbium doping concentration is 0.1%, much lower than in previous reports. Under a 974.3-nm pump, the net internal gain at 1531.5 nm with a fixed signal power of 5 nW was investigated as a function of pump power. A maximum net internal gain of 5.5 dB was obtained at a higher pump power of ∼64 mW. In addition, the gain dependence on the signal power at 1531.5 nm with a fixed pump power (23 mW) was measured and is shown in Fig. 8(d). Moreover, a maximum internal net gain of 15 dB at −65 dBm signal power was achieved. The internal net gain per unit length is up to 30 dB/cm, the highest value for the reported REI-doped LNOI waveguide amplifiers. It should be mentioned that the optimal gain is obtained at a weak signal power (−65 dBm), so the amplifier may be more suitable for small-signal amplification, sensing, and other related fields.
To enhance the integration density of devices, Yan et al. also fabricated a 1% erbium-doped LNOI waveguide amplifier with a spiral design using the EBL-Ar+ etching process. 29 The waveguide has a total length of 5.3 mm with a footprint of ∼0.06 mm². To characterize the net gain of the spiral waveguide amplifier, two lasers operating at 974 nm were used for bi-directional pumping. A maximum net gain of 8.3 dB at 1530 nm was obtained, equivalent to a net gain per unit length of ∼15.6 dB/cm. At the same time, the net internal gain in the 1520 to 1570 nm range was characterized, showing that a net gain of over 3 dB could be obtained across most of this wavelength range.
In addition, inspired by the design of double-cladding fiber, Liang et al. fabricated ∼10-cm-long tantalum pentoxide (Ta2O5)-clad erbium-doped LNOI waveguide amplifiers by depositing a ∼1-μm-thick layer of Ta2O5 on top of the LN waveguide core, as shown in the inset of Fig. 8(e). 114 As part of the optical power in the LN core is drawn into the cladding waveguide, the detrimental absorption of pump and signal power by quenching ions is reduced. The optical gain of the Ta2O5-clad waveguide amplifiers was measured and shown to be superior to that of amplifiers without cladding, with a maximum net gain of more than 20 dB at 1532 nm, as shown in Fig. 8(e). Recently, ytterbium-doped LNOI waveguide amplifiers fabricated by the PLACE process were reported by the same group. 115 The maximum net gain at 1060 nm for a 4-cm-long waveguide [inset of Fig. 8(f)] pumped by a 976-nm laser was measured at about 5 dB, as shown in Fig. 8(f).
Also, influencing factors such as pump wavelength, pumping scheme, and waveguide length were carefully investigated by Cai et al. on an erbium-doped LNOI waveguide amplifier with a concentration of 0.72 × 10²⁰ cm⁻³. 117 An internal net gain of 16 dB at 1531.6 nm with a saturation power of −8.84 dBm was achieved on a 2.5-cm-long waveguide, as shown in Fig. 9(a). The amplifier noise figure mainly originates from spontaneous emission and increases with the signal power. Experimentally, a minimal noise figure of 4.49 dB at −50 dBm signal power, and near 6 dB over signal powers from −45 to −28 dBm, was observed, as shown in Fig. 9(b). In addition, the power conversion efficiency, which reflects the pump effectiveness, was characterized and decreases with increasing signal power due to the reduction of the obtained internal gain, as shown in Fig. 9(c). Moreover, although the absorption coefficient of erbium ions in the 980-nm band is higher than in the 1480-nm band, 118 the amplifier exhibits a higher gain under the 1484-nm pump than under the 980-nm pump, as shown in Fig. 9(d), mainly because the 1484-nm pump has a better overlap factor with the signal and a lower transmission loss. The gain performance for different waveguide lengths was also studied, as shown in Fig. 9(e); an optimal length of 2.58 cm, close to the simulated value of 2.68 cm, was observed. Additionally, the pumping scheme was studied in this work. Specifically, for forward pumping, the pump power is high at the front end and low at the back end, while the signal power is the opposite. Thus, for an amplifier with a long waveguide or high doping concentration, by the time the signal light reaches the back end, the remaining pump power is insufficient to provide sufficient gain, degrading the overall gain performance. Although backward pumping can avoid this power distribution mismatch and yield a greater gain, it introduces higher noise. Therefore, bi-directional pumping is often used to balance gain and noise performance. As shown in Fig. 9(f), better gain and noise performance was obtained with the bi-directional pumping scheme than with forward or backward pumping.
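The pump-scheme trade-off can be illustrated with a deliberately crude toy model (not the authors' simulation): launched pump power decays exponentially along the waveguide, the local gain is pump-limited and saturated by the growing signal, and the signal is integrated along the length. All parameter values below are assumptions chosen only for illustration.

```python
import numpy as np

L, N = 2.5, 2500                     # waveguide length (cm), grid points
z, dz = np.linspace(0.0, L, N, retstep=True)
A_P = 1.2                            # assumed pump absorption, 1/cm
G_MAX, A_S = 4.0, 0.1                # assumed max gain, background loss (1/cm)
P_SAT_P, P_SAT_S = 5e-3, 1e-4        # assumed saturation powers, W
P_PUMP, P_IN = 40e-3, 1e-6           # launched pump and input signal, W

def total_gain_db(pump_profile):
    """Integrate the signal along z with pump-limited, signal-saturated
    local gain minus background loss."""
    ps = P_IN
    for pp in pump_profile:
        g = G_MAX * pp / (pp + P_SAT_P)        # pump-limited inversion
        g_eff = g / (1 + ps / P_SAT_S) - A_S   # signal saturation + loss
        ps *= np.exp(g_eff * dz)
    return 10 * np.log10(ps / P_IN)

forward = P_PUMP * np.exp(-A_P * z)            # pump launched at z = 0
backward = P_PUMP * np.exp(-A_P * (L - z))     # pump launched at z = L
bidir = 0.5 * (forward + backward)             # half the pump at each end

for name, prof in [("forward", forward), ("backward", backward),
                   ("bi-directional", bidir)]:
    print(f"{name:>14}: {total_gain_db(prof):5.1f} dB")
```

In such a model, backward and bi-directional pumping place more pump power where the amplified signal is strongest, mirroring the qualitative behavior described above; the noise, which favors the bi-directional scheme in practice, is not modeled.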
As can be seen from the above introduction, amplifiers with straight, spiral, and oxide-clad waveguide structures have been successfully verified on REI-doped LNOI platforms, and the pumping wavelengths and pumping schemes of amplifiers have also been explored. Table 2 summarizes the key parameters of the REI-doped LNOI waveguide amplifiers reported so far. However, several problems still need to be resolved in the development of amplifiers, as discussed below.
High output power
Although a maximum net gain of 20 dB was realized on a 10-cm-long waveguide, this high gain was obtained at a low input signal power (−22 dBm), which leads to low amplifier output power and limits the application scenarios to a certain extent. To further improve the gain performance, especially for high-signal-power amplification, the most important problem to solve is reducing the propagation loss of the waveguide, which limits the usable REI-doped waveguide length and, therefore, the attainable output power level. 120 For example, a promising reference work is that of Liu et al., who achieved a signal output power of 145 mW with a signal input power of 2.61 mW on a 0.21-m-long erbium-doped silicon nitride waveguide with a doping concentration of ∼3.25 × 10²⁰ cm⁻³. 121 The propagation loss of this silicon nitride platform is <5 dB/m, far lower than the reported values for LNOI erbium-doped amplifiers (an optimal value of 16 dB/m). 111 Therefore, higher output power is expected on longer REI-doped waveguides as the processing technology improves. Moreover, the REI doping concentration and the overlap factor between the optical mode and the REIs can be further optimized for more significant gain. In addition, for the amplification of high signal power, the coupling between the on-chip waveguide and the off-chip pump is also of particular concern: the long REI-doped waveguide and high input signal power place higher demands on the on-chip pump power. It is therefore necessary to introduce a high-efficiency coupler in the pump band or adopt a multichannel pumping scheme to meet the pump power requirement and avoid degradation of amplifier performance. In addition, a high erbium doping concentration is usually required for a short waveguide amplifier to obtain high gain. However, high doping concentration causes detrimental effects such as cooperative upconversion and concentration quenching, degrading the gain performance. Introducing ytterbium ions, which have a similar absorption band near 980 nm, is expected to achieve high amplifier gain over shorter waveguide lengths by reducing these detrimental effects through energy transfer, 110,122 which can be explored in research on erbium-doped LNOI amplifiers. 36
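A quick loss-budget comparison shows why propagation loss dominates this trade-off; the sketch below accumulates the passive background loss over a few candidate lengths at the two loss figures quoted above (the lengths are arbitrary examples).

```python
# Passive background loss accumulated over a given amplifier length at
# the two propagation-loss figures quoted in the text.
for loss_db_per_m, label in [(5.0, "Er:Si3N4 platform"),
                             (16.0, "best reported Er:LNOI")]:
    for length_m in (0.05, 0.21, 0.50):
        total_db = loss_db_per_m * length_m
        print(f"{label:>22}, L = {length_m:4.2f} m: "
              f"{total_db:5.1f} dB of passive loss")

# The REI gain must first repay this loss before any net gain remains,
# which is why lower propagation loss unlocks longer gain waveguides.
```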
L-band amplification
At present, the erbium-doped LNOI waveguide amplifiers demonstrated so far work in the C-band (1530 to 1565 nm); there is no research report on L-band (1565 to 1625 nm) LNOI amplifiers, although expanding the bandwidth is very important for on-chip communication. Because the emission cross section of erbium ions in LN at the L-band is much lower than at the C-band, the gain must be accumulated over a longer waveguide than in a C-band amplifier to achieve L-band amplification. Referring to research on the L-band erbium-doped fiber amplifier (EDFA), 123 there are several methods to improve the L-band gain of an erbium-doped LNOI waveguide amplifier. (i) Since the signal to be amplified is in the L-band, the ASE in the C-band can also be used for pumping to improve the gain; for example, a C-band high-reflectivity grating structure can be introduced at the input end of the erbium-doped waveguide to recycle the unused backward ASE. (ii) An auxiliary pump light working at 1550 to 1560 nm can be introduced to enhance the L-band gain.
Gain flattening amplification
Another problem that needs attention is the gain flatness of the erbium-doped LNOI amplifier, which is particularly important in optical communications based on on-chip wavelength division multiplexing. Because erbium gain is not flat over a broad band, many studies based on EDFA systems have sought flat gain. The basic idea is to introduce a filter whose transmission loss varies with wavelength in the same way as the gain does, thereby obtaining broadband, flat gain. For example, Mach-Zehnder filters, 124 acousto-optic filters, 125 and long-period fiber gratings 126 have been used in EDFAs to achieve wide flat-band gain. In addition, a dual-core fiber design was demonstrated to obtain an ultrawide-band gain-flattened EDFA by regulating the coupling between parallel transmission fibers. 127 With the excellent electro-optic and acousto-optic properties of LN and the mature micro-nano processing technology of LNOI, the design experience from EDFA systems can be conveniently transferred to the LNOI platform and will hopefully achieve better gain flatness.
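The basic idea of a gain-flattening filter reduces to one line of arithmetic: attenuate each wavelength by its gain excess over the flattest usable level. The sketch below applies this to a hypothetical, erbium-like gain profile; the numbers are invented.

```python
import numpy as np

# Hypothetical erbium-like gain profile across the C-band (dB).
wavelength_nm = np.linspace(1530, 1565, 8)
gain_db = np.array([16.0, 12.5, 10.8, 10.2, 10.5, 11.2, 10.9, 10.1])

# Gain-flattening filter target: attenuate each wavelength by its gain
# excess over the band minimum, so (gain - filter loss) is flat.
filter_loss_db = gain_db - gain_db.min()

for w, g, f in zip(wavelength_nm, gain_db, filter_loss_db):
    print(f"{w:7.1f} nm: gain {g:4.1f} dB, filter {f:4.1f} dB, "
          f"net {g - f:4.1f} dB")   # net is flat at 10.1 dB
```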
Some other limitations
The reported amplifiers mainly operate in the communication band. There is less research on amplifiers in other bands, which can be effectively addressed by doping with different REIs, such as neodymium and thulium. 128 In addition, the problems of optical pumping and local doping of REIs also remain, for which the previous discussion on REI-doped microlasers applies.
Electrically Pumped III-V Lasers and Amplifiers on LNOI Platform
In addition to REI doping, introducing III-V materials as the gain medium is another common way to realize integrated light sources and amplifiers. Because lasers and amplifiers based on III-V materials can work under direct electrical pumping, they offer high gain efficiency and portability, and this has become the mainstream route to integrated photonic gain devices. At present, the ways to integrate an electrically pumped light source based on III-V gain materials with an LNOI platform can be roughly divided into hybrid integration, heterogeneous integration, and microtransfer printing, as described below. [129][130][131][132][133][134] Hybrid integration is one of the most mature integration technologies: it assembles several fully processed chips, such as III-V semiconductor devices and passive integrated photonic devices, into a single multifunctional device at the final packaging stage. Moreover, discrete devices can be tested and optimized before integration to ensure the yield of the integrated products. Based on currently commercially mature, small-size III-V gain chips with high power and efficiency (such as those from Freedom Photonics LLC), namely, electrically pumped laser or semiconductor optical amplifier chips, two hybrid integration methods are mainly used to realize integrated active devices: inter-chip and flip-chip. (i) Inter-chip hybrid integration. The active chip is assembled adjacent to the passive integrated external-cavity chip, and gain is introduced through butt coupling. For example, Fig. 10(a) shows an off-chip reflective semiconductor optical amplifier placed next to a low-loss silicon nitride microring external-cavity chip to realize an electrically pumped, compact, narrow-linewidth integrated laser. 135 To improve the optical coupling efficiency between the gain chip and the external-cavity chip, the optical waveguides of the two chips must be aligned, and a spot-size converter design is usually introduced at the optical interface of the passive chip. (ii) Flip-chip hybrid integration. The active chip is assembled on a recess with solder bumps on the passive chip by a pick-and-place method. A schematic diagram of a hybrid silicon photonic flip-chip laser is shown in Fig. 10(b); a vertical alignment accuracy of ±10 nm was achieved by adjusting the etching thickness of the pedestals. 136 Compared with inter-chip integration, the flip-chip process has a higher degree of integration, and the stability and thermal management of devices are also improved. However, during assembly, both hybrid integration methods require high alignment accuracy, especially the flip-chip process, which demands accuracy in both horizontal and vertical directions, and they can integrate only a single chip at a time, which slows the integration process and raises the cost of integrated chips, making mass production with hybrid integration challenging.
Heterogeneous integration is a technology that bonds III-V thin-film wafers or dies to the top of a processed base wafer at an intermediate stage, as shown in the left panel of Fig. 10(c). 134 The unprocessed III-V material is then fabricated into III-V gain devices, such as laser or amplifier arrays, by a wafer-scale lithography-etching process. Because the III-V devices are defined by lithography, ultrahigh alignment accuracy between the light source and the passive device is realized, with adiabatic coupling at the interfaces between waveguide structures in different layers ensuring effective coupling. The right part of Fig. 10(c) shows the schematic diagram of a narrow-linewidth III-V/Si/Si3N4 laser based on wafer bonding techniques. 137 Compared with hybrid integration, heterogeneous integration has the advantages of wafer-level productivity and lithography-defined alignment accuracy, which effectively improves the integration density, alleviates the dependence on high-precision alignment tools, and reduces cost. However, heterogeneous integration does not allow the prepared III-V or passive devices to be tested at intermediate stages, resulting in a lower product yield, which can be alleviated as the process matures.
Microtransfer printing is another popular technique that allows III-V devices to be pretested before integration and enables large-scale parallel integration, elegantly combining the advantages of flip-chip hybrid integration and heterogeneous integration. As shown in the left panel of Fig. 10(d), unlike the wafer bonding technique, the laser or amplifier devices are prefabricated on the native III-V wafer before transfer to the processed wafer. In the microtransfer printing process, a polydimethylsiloxane (PDMS) stamp is used to pick up the prefabricated active devices (referred to as coupons) together with an underlying release layer and transfer them, singly or in arrays, onto the passive devices. 129 Alignment between the coupons and the passive devices is realized by digital pattern recognition based on markers defined on the III-V wafer and base wafer during prefabrication. At present, an alignment accuracy of ±1.5 μm (3σ) for transferred arrays and <1 μm for single coupons is realized by state-of-the-art microtransfer printing tools, which is still inferior to the wafer bonding approach. 139 The right part of Fig. 10(d) shows a schematic diagram of a low-noise III-V-on-silicon-nitride mode-locked comb laser demonstrated by microtransfer printing. 138 Compared to wafer bonding, microtransfer printing can achieve high-yield integration and does not require adjusting the back-end process flow, owing to the parallel processing of III-V and passive devices and the short transfer cycle. In addition, microtransfer printing can reuse the expensive III-V native substrate, which reduces cost to a certain extent. On the other hand, the wafer bonding approach has high alignment accuracy and thus offers great advantages in optical coupling efficiency between active and passive devices, as well as in scalability. A more comprehensive introduction to and discussion of the three basic integration approaches can be found in recent reviews. 129,132 Based on these primary integration schemes for III-V gain devices, electrically pumped III-V lasers and amplifiers on the LNOI platform have also been developed, as presented in this section.
Hybrid Integration III-V Lasers
Han et al. first integrated an electrically pumped III-V laser with a passive LNOI chip. 30 To realize broadband tuning and single-mode lasing, the LNOI chip incorporates a Vernier filter consisting of two cascaded microring resonators and a distributed Bragg reflector with a Gaussian apodization profile, as shown in Fig. 11(a). The hybrid laser has a threshold current of 100 mA, corresponding to a threshold current density of 2.5 kA/cm², and a maximum on-chip power of 2.5 mW. With the assistance of the incorporated Vernier filter, the SMSR of the hybrid single-mode laser reached 60 dB at 1325.5 nm. In addition, broadband wavelength tuning of the lasing signal was performed by applying voltages to the thermo-optic heater integrated with the microring. As shown in Fig. 11(b), superimposed spectra of coarse tuning over a 36.4-nm range with a tuning efficiency of 0.42 nm/mW were observed. Furthermore, fine tuning over a wavelength range of 0.5 nm was achieved by applying voltages to both thermo-optic heaters. Combined with the excellent electro-optical performance of the LNOI platform, this hybrid laser shows great potential for realizing a high-speed optical transmitter by further integrating an LNOI modulator. Subsequently, an O-band LN/III-V transmitter circuit with a wavelength-tuning range of over 40 nm was demonstrated by hybrid-integrating an RSOA with an LNOI photonic integrated circuit. 142 Additionally, Shams-Ansari et al. developed a fully integrated high-power laser on a passive LNOI chip by flip-chip bonding a DFB laser. 33 The LNOI chip was fabricated on an X-cut 600-nm-thick LNOI wafer by the EBL-Ar+ etching process. The SiO2 buffer layer thickness was chosen to be 4.7 μm to match the optical mode height between the DFB laser and the LNOI waveguide. In addition, to ensure maximum overlap of the optical modes between the DFB laser and the chip waveguide, a horn coupler with an output taper width of 8 μm was introduced in the butt-coupling region, as shown in Fig. 11(c). The waveguide width was designed to be 800 nm to ensure single-mode operation. The light-current (L-I) curve of the hybrid laser was measured under an electrical pump applied by a source meter, as shown in Fig. 11(d). Thanks to the high output power of the DFB laser and the optimized coupling between the DFB laser and the LNOI chip, the on-chip optical power of the hybrid laser was up to 60 mW at 1.0 A at room temperature, the highest reported value for LNOI integrated photonics platforms. Meanwhile, a single-mode spectrum was observed, and the linewidth was characterized as below 1 MHz using a delayed self-heterodyne technique. Furthermore, to explore applications of the hybrid LNOI laser in telecommunications, a high-power laser transmitter was assembled by integrating an LNOI electro-optic modulator with the DFB laser. The demonstrated hybrid integrated laser transmitter effectively promotes the application of LNOI systems in long-haul telecommunication, data center interconnection, and other related fields.
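The broadband tuning of such a two-ring Vernier filter follows from a simple relation: resonance coincidences of two combs with slightly different FSRs repeat only every FSR1·FSR2/|FSR1 − FSR2|. The sketch below evaluates this for assumed ring FSRs of the right order for the tuning range quoted above.

```python
# Vernier effect of two cascaded rings with slightly different FSRs:
# the joint resonance (comb coincidence) repeats only every
# FSR_vernier = FSR1 * FSR2 / |FSR1 - FSR2|.
fsr1_nm, fsr2_nm = 1.60, 1.53        # assumed individual ring FSRs
fsr_vernier_nm = fsr1_nm * fsr2_nm / abs(fsr1_nm - fsr2_nm)
print(f"Vernier FSR ~ {fsr_vernier_nm:.1f} nm")   # ~35 nm

# Shifting one comb by a fraction of its own FSR hops the coincidence
# by a whole ring FSR, which is how thermo-optic tuning of one ring
# yields the tens-of-nanometer coarse tuning range quoted above.
```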
In addition, Kippenberg's group reported a heterogeneous Si3N4-LiNbO3 integrated platform via direct wafer bonding, as shown in Fig. 11(e). 140 Combining the ultralow loss of the silicon nitride photonic platform with the large Pockels coefficient of LN, the integrated platform realized an ultrafast tunable laser featuring an intrinsic linewidth of 3 kHz and a frequency-tuning rate of 12 PHz/s by butt-coupling an InP-based DFB diode laser. Moreover, a proof-of-concept frequency-modulated continuous-wave (FMCW) lidar ranging experiment was performed with this hybrid integrated laser, achieving a resolution of 15 cm. More recently, this group also demonstrated a frequency-agile LNOI laser by hybrid-integrating a DFB laser with LNOI photonic integrated circuits. 141 The maximum measured power in the output fiber of the hybrid laser is 1 mW, with a 60-dB SMSR when lasing at 1556.3 nm. Compared to the free-running DFB laser, the frequency noise of the hybrid laser is suppressed by more than 20 dB by exploiting laser self-injection locking, as shown in Fig. 11(f). Moreover, the ultrafast frequency actuation of the laser was explored using the Pockels effect of LN, with a 7 × 10¹¹ Hz/s tuning rate and CMOS-compatible driving voltage in the self-injection-locked state.
Recently, Li et al. demonstrated an electrically pumped Pockels laser by hybrid-integrating a III-V RSOA with an LNOI external-cavity structure. 34 As shown in Fig. 11(g), the LNOI chip was designed as a Vernier mirror structure consisting of two racetrack resonators. To enrich the functionality of the laser and expand its application scenarios, a microheater, an electro-optic modulator, a phase shifter, and a PPLN wavelength converter were incorporated into the LNOI external-cavity chip. At the same time, the racetracks and bus waveguides were designed to support only the quasi-TE mode to exploit the significant Pockels effect of LN. Lasing at 1581.12 nm with a threshold current of 80 mA was observed in experiments. The on-chip power was measured as ∼3.7 mW at 200 mA and could be further improved to 5.5 mW by adjusting the Vernier mirror conditions. The laser operated in a single-mode state with a linewidth of 11.3 kHz and an SMSR greater than 50 dB. In addition, coarse wavelength tuning over a 20-nm range was realized by thermo-optic tuning of a Vernier ring through the incorporated microheater. To explore the ultrafast frequency-tuning capability, a high-speed triangular-waveform driving signal was applied to the phase shifter, and the laser frequency-tuning efficiency and rate under different modulation frequencies were investigated. The tuning efficiency remained at a constant level of 0.26 to 0.34 GHz/V over a broad range of modulation frequencies; the frequency-tuning rate increased linearly with modulation speed, and a value of 2 × 10¹⁸ Hz/s was realized at a modulation frequency of 600 MHz, as shown in Fig. 11(h). In addition to pure frequency modulation, fast on-off switching of the lasing mode and switching between two lasing modes separated by one Vernier FSR were also realized by applying a square-wave signal to the driving electrodes of the racetrack resonator. Moreover, a two-color laser was realized via the inherent SHG of the fundamental telecom laser in the incorporated PPLN frequency-doubling structure, as shown in Fig. 11(i). Benefiting from the large Pockels effect of LN, this laser has the essential capabilities of fast tuning and reconfigurability, which hold great potential for applications in atomic physics, lidar, and microwave photonics. More recently, the same group also reported a self-injection-locked second-harmonic integrated source with an SHG power of 2 mW by integrating a DFB laser with a PPLN resonator chip. 143 Additionally, Cheng's group recently reported an electrically pumped compact laser by hybrid-integrating a 980-nm-band laser diode with an erbium-doped LN microring chip. 96 In experiments, a single-mode laser operating at 1531.7 nm with a linewidth of 0.05 nm was observed, with a threshold of 6 mW and a conversion efficiency of 3.9 × 10⁻⁵.
Heterogeneous Integration and Microtransfer Printing III-V Lasers and Amplifiers
Besides hybrid integration, de Beeck et al. reported a novel strategy for microtransfer printing of III/V gain materials on a thin-film LN platform. 32 First, rib waveguides were fabricated from a 500-nm X-cut LN thin film on a sapphire substrate by the EBL-Ar+ etching process. To realize electro-optic phase shifting and lasing-wavelength tuning, metal electrodes were introduced on the partially etched LN layer. Then, a 400-nm-thick coupon of crystalline silicon (Si) was microtransfer-printed on the LN recess as an intermediate layer and patterned. Finally, a prefabricated III-V semiconductor optical amplifier was microtransfer-printed on the Si waveguide. Schematic drawings of different cross sections of this platform are shown in the upper part of Fig. 12(a). Based on this heterogeneously integrated semiconductor optical amplifier system, multimode ring lasers and single-mode tunable lasers were investigated, using a microring resonator structure with a directional coupler and a pair of electro-optically tunable ring-resonator mirror structures, respectively, as shown at the bottom of Fig. 12(a). Experimentally, the gain performance of the III/V-on-LN amplifier was characterized first under different bias currents with a fixed input power of −22.7 dBm. A maximum gain of 11.8 dB was obtained at 1537 nm, with a 3-dB gain bandwidth up to 45 nm, for a bias current of 180 mA, as shown in Fig. 12(b). Then, the L-I curves of the multimode ring laser at different temperatures were measured, as shown in Fig. 12(c). Laser operation over the temperature range of 20°C to 60°C was investigated, and an output power of 427 μW was obtained at 20°C. Additionally, laser power oscillation caused by mode hopping was also observed. Afterward, coarse wavelength tuning of the single-mode tunable laser over a 21-nm range was achieved by applying a voltage to the electrodes of the ring-resonator mirror. In addition, a fine-tuning range of 180 pm was realized by changing the bias voltage over the ring mirrors, as shown in Fig. 12(d). Furthermore, an output power of 0.77 mW was detected for the single-mode laser, with a fundamental linewidth below 1.5 MHz. Compared with hybrid integration by butt coupling, the microtransfer printing method can integrate active devices at wafer scale, with the active devices electrically pretested on their native substrate, leading to high throughput.
More recently, Zhang et al. demonstrated heterogeneous integration of a III-V active device on an LNOI waveguide by adhesive bonding, as shown in Fig. 12(e). 144 An electroluminescence spectrum with a 3-dB bandwidth of 40 nm centered at 1600 nm was observed, as shown in Fig. 12(f). Moreover, because an InP-based III-V material was selected, the active device can also function as a photodetector. A peak responsivity of 0.38 A/W at 1540 nm with a low dark current of 9 nA at −0.5 V was measured. The co-integration of a light source and a photodetector enables a fully integrated transceiver based on an LNOI photonics platform.
Electrically pumped lasers and amplifiers are portable, which favors applications outside the laboratory, and they avoid the influence of doping on devices outside the gain region, making them a beneficial complement to optically pumped LNOI light sources. Table 3 summarizes the main parameters of the LNOI electrically pumped lasers reported so far. Hybrid integration, heterogeneous integration, and microtransfer printing each have their advantages and limitations. For example, with the help of commercially mature semiconductor single-frequency lasers or amplifiers and fabricated low-loss LNOI devices, as well as end-coupler designs and fine adjustment and alignment before final packaging, hybrid integrated electrically pumped lasers can achieve narrow linewidth and high output power. The disadvantage is that hybrid integration requires high-precision alignment tools and can integrate only a single device at a time, which increases the cost of integrated devices to some extent and is therefore more suitable for initial prototype exploration. On the other hand, heterogeneous integration and microtransfer printing can realize wafer-scale preparation and effectively deliver high throughput. However, due to the complexity of the preparation process, the whole flow is still in the research and development stage. Mismatched thermal diffusivities and refractive indices of the different film layers result in low product yield, especially for heterogeneously integrated products. In addition, the microtransfer printing process is complex and requires high alignment accuracy (<1 μm), which is challenging for general micro-nano processing platforms. 139 Moreover, the output power of such lasers is much lower than that of flip-chip hybrid integrated lasers, mainly because of the low coupling efficiency caused by limited alignment accuracy between the gain amplifier and the passive component, and because the integrated gain component is a III-V amplifier rather than a III-V laser, which limits the power available for coupling into the LN layer. A promising way forward is to transfer a prefabricated III-V FP cavity device onto the photonic platform with a butt-coupling scheme, as demonstrated on silicon, to improve the output power of the integrated laser. 145,146 Additionally, schemes realizing electrically pumped lasers through heterogeneous integration with wafer bonding have rarely been reported, leaving plenty of room for performance improvement. 147 As can be seen from the above introduction, various lasers and amplifiers based on the LNOI platform have been demonstrated by doping REIs or integrating III-V gain materials. Nevertheless, the two gain-introduction schemes have their respective advantages and disadvantages. Specifically, compared with III-V gain media, REI-doped materials have a long excited-state lifetime and smaller refractive index changes, resulting in relatively higher temperature stability, a lower noise figure, and narrower laser linewidth for REI-doped lasers and amplifiers. 148 Therefore, lasers and amplifiers based on REI-doped LNOI are more suitable for on-chip coherent communication, quantum optics, and other related applications. Another advantage of REI-doped LNOI lasers and amplifiers is that the fabrication process is simple and compatible with the CMOS process, promising scalable, low-cost mass production.
However, the output power of REI-doped LNOI lasers and amplifiers is still low (μW-level), which substantially limits applications such as nonlinear frequency conversion and soliton combs based on on-chip active light sources. To enhance the output power, the doping concentration, the resonator geometry, and the resonator size need to be carefully studied and designed. On the other hand, electrically pumped lasers based on LNOI with integrated III-V materials feature high output power, broadband tunability, and convenient operation. They can be widely used as optical transmitters in long-distance communication networks, data-center optical interconnects, and related scenarios. However, electrically pumped LNOI light sources also have shortcomings. For example, hybrid integration requires fine adjustment of the coupling between the external gain chip and the LNOI passive chip before operation; batch preparation is therefore challenging, increasing cost and preparation time. In addition, the preparation process of the heterogeneous integration scheme is complex, and the microtransfer printing scheme requires high alignment accuracy in the active-passive coupling transition.
Application Prospects of LNOI-Based Lasers and Amplifiers
With the realization of the long-awaited active light sources, a series of promising applications is expected on the REI-doped LN thin-film platform. This section discusses application fields combining on-chip light sources with other LNOI functional devices, including sensors, electro-optic modulators, frequency converters, and microcavity combs.
Sensing Based on LNOI Light Sources
WGM microcavities have high Q values and small mode volumes, which strongly enhance the interaction between light and matter; they are thus regarded as high-sensitivity platforms for sensing applications. 149,150 The main detection mechanism in WGM microcavity sensing is to monitor the mode drift, mode broadening, and mode splitting of the microcavity transmission spectrum. Compared with a passive resonator mode, a lasing mode has a narrower linewidth due to its gain, which further improves the detection sensitivity. 151
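A rough sense of the linewidth advantage can be obtained by comparing a passive resonance linewidth, set by the loaded Q, with a measured lasing linewidth; the smallest resolvable mode shift scales with this linewidth. The Q value below is an assumption, and the 322-Hz figure echoes the microlaser linewidth quoted earlier.

```python
C = 299_792_458.0                  # speed of light, m/s
wavelength_nm = 1550.0
nu_hz = C / (wavelength_nm * 1e-9) # optical carrier frequency, ~193 THz

# Passive WGM: linewidth = nu / Q, for an assumed loaded Q of 1e6.
q_loaded = 1e6
passive_linewidth_hz = nu_hz / q_loaded
print(f"passive linewidth ~ {passive_linewidth_hz / 1e6:.0f} MHz")

# Lasing mode: linewidth measured directly (see the 322-Hz microlaser).
laser_linewidth_hz = 322.0
print(f"narrowing factor ~ {passive_linewidth_hz / laser_linewidth_hz:.1e}")
# A mode shift resolvable at a fixed fraction of the linewidth is thus
# orders of magnitude smaller when tracking the lasing mode.
```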
Broadband Optical Communication
The optical transmitter is an essential building block in optical communication, and its critical component is the modulator module. Electro-optic modulators based on LNOI have been developed and show noticeable advantages in driving voltage, bandwidth, linearity, and extinction ratio. However, because LN itself emits light only with difficulty, the lack of an on-chip integrated light source, especially an electrically pumped high-power source, has been considered the main obstacle to applying LNOI modulators in optical transmitters. With the development of LN thin-film lasers, electrically pumped high-power lasers have become possible, effectively overcoming this limitation. In the future, an LN thin-film optical transmitter with low power consumption and high performance is expected to play an important role in data centers.
In addition, to further increase the capacity of optical communication, broadband tunable laser sources realized on the LNOI platform are expected to be combined with on-chip electro-optic modulators and wavelength division multiplexers 153 to construct multichannel wavelength-division-multiplexed transmitters, which would help achieve ultrabroadband optical fiber communication and reduce communication costs.
Frequency Converter Based on LNOI Active Light Source
It can be seen from Tables 1 and 3 that the reported lasers are mainly concentrated in the 1550- and 1060-nm bands. In addition, a single type of REI or semiconductor gain medium has a limited gain bandwidth (∼100 nm), which limits the application range of gain chips to a certain extent. Fortunately, LN has excellent second-order nonlinear characteristics, which can effectively expand the bandwidth of light sources. Nonlinear wavelength conversion processes based on LNOI resonators and waveguides, such as SHG, SFG, and difference frequency generation, have been demonstrated successfully and achieve better performance than traditional bulk LN devices. 154,155 In the future, combined with the excellent second-order nonlinearity of LN, the LNOI active laser can be used to obtain desired light sources flexibly. For example, a 530-nm-band light source can be generated through the cascaded second-order nonlinear process of SHG-SFG by combining a C-band LNOI laser with a PPLN structure designed for phase matching, which effectively alleviates the low luminous efficiency of gain materials in the band referred to as the "green gap".
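The wavelengths in this cascaded scheme follow directly from energy conservation; the sketch below works through the arithmetic for an assumed C-band pump wavelength of 1560 nm.

```python
def shg(wl_nm: float) -> float:
    """Second-harmonic generation halves the wavelength."""
    return wl_nm / 2

def sfg(wl1_nm: float, wl2_nm: float) -> float:
    """Sum-frequency generation: 1/wl3 = 1/wl1 + 1/wl2."""
    return 1.0 / (1.0 / wl1_nm + 1.0 / wl2_nm)

pump = 1560.0                        # assumed C-band laser wavelength, nm
second_harmonic = shg(pump)          # 780.0 nm
green = sfg(pump, second_harmonic)   # cascaded SHG-SFG output
print(f"{pump} nm -> SHG {second_harmonic} nm -> SFG {green:.0f} nm")
# 1560 nm -> 780 nm -> 520 nm, i.e., light inside the "green gap".
```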
Chip-Based Microcombs
Optical frequency combs have attracted wide attention due to their applications in optical clocks, metrology, and spectroscopy. In recent years especially, optical combs generated in WGM microresonators, referred to as microcombs, have offered low power consumption, high repetition rate, and on-chip integration compatibility, endowing the optical frequency comb with a new generation mechanism and application range. LN has excellent third-order nonlinear and electro-optical effects. Soliton microcombs 19 and broadband electro-optic frequency combs based on the LN platform have been demonstrated, which shows the feasibility of generating optical microcombs on the LN thin-film platform. In addition, based on the unique electro-optic effect of LN, the generation and modulation of an optical microcomb can be realized simultaneously on a monolithic chip, which can expand the application field of microcombs to, for example, programmable pulse shaping and coherent microwave processing. 156,157 With the development of LN thin-film laser sources, combining a laser with a high-quality Kerr microcavity is a promising way to realize a miniaturized, integrated microcomb system, enhancing portability and expanding application scenarios such as parallel coherent lidar. Some research routes can follow the development of silicon nitride-based integrated optical microcombs. For example, electrically pumped soliton frequency combs have been generated by hybrid-integrating semiconductor amplifiers or lasers with passive silicon nitride chips. [158][159][160] Furthermore, laser soliton microcombs based on a wafer-scale fabrication process have been reported by heterogeneously integrating InP semiconductor lasers on an ultralow-loss silicon nitride platform. 161 On the other hand, on-chip mode-locked lasers are another way to realize chip-level optical frequency combs. Mode-locked lasers based on introducing a saturable absorber into a chip-based laser have been demonstrated on the silicon nitride integrated platform, 138,162 which can also provide a reference for research on mode-locked lasers on an LNOI active light source platform. In addition, multilongitudinal-mode laser output based on REI-doped LNOI microcavities is also expected to enable mode-locked lasers by introducing a saturable absorber or active regulation, such as electro-optic modulation, to lock the longitudinal mode signals. 163 Early on, the electro-optic effect of LN was applied to realize mode-locked lasers in weakly guiding erbium-doped LN waveguides. 164
Conclusion
In this review, the current research progress on lasers and amplifiers based on LN thin-film platforms was reviewed comprehensively. Specifically, in the section on optically pumped lasers and amplifiers realized by REI doping, several mainstream ways of introducing REIs into LN were introduced, and their advantages and disadvantages were discussed. The fluorescence spectrum research on REIs based on the LNOI platform was then introduced, and vital parameters of the WGM microlasers, such as threshold and conversion efficiency, were analyzed. On this basis, the research progress on microdisks, microrings, and single-mode lasers on the REI-doped LNOI platform was introduced, and research on REI-doped LNOI amplifiers was also reviewed, along with the limitations of the current optically pumped lasers and amplifiers and measures to improve them. On the other hand, in the section on LNOI electrically pumped III-V lasers and amplifiers, several mainstream mechanisms for introducing III-V gain materials into current integrated photonics platforms, namely, hybrid integration, heterogeneous integration, and microtransfer printing, were introduced in detail, and the research progress on LNOI electrically pumped III-V lasers and amplifiers was reviewed. On this basis, the restrictions on and improvement schemes for current electrically pumped lasers and amplifiers were explored in depth, and the advantages and disadvantages of the two routes to LNOI lasers and amplifiers were discussed. Finally, application scenarios combining an LNOI-based light source with other excellent LN thin-film devices, such as sensing, frequency conversion, and on-chip optical communication, were envisioned. In addition to the rapid development of various photoelectric devices on LN thin-film platforms, the realization of on-chip light sources will undoubtedly bring the LN thin-film platform to a high degree of integration. At the same time, detectors based on Si, 165 black phosphorus, 166,167 and superconducting nanowires 168,169 have been proven on the LNOI platform, demonstrating the feasibility of integrated photodetectors. Furthermore, the experience with heterogeneously integrated III-V photodetectors in Si-based photonics can also be transferred to the LN thin-film platform and is expected to achieve high detector bandwidth. [170][171][172] Figure 13 shows an envisaged schematic diagram of several integrated multifunctional photonic circuits on an LN thin-film platform, including integrated lasers, amplifiers, frequency converters, electro-optic modulators, photodetectors, and other key devices. These integrated photonic chips will benefit optical communication, laser radar, particle sensing, information processing, and more. In the future, highly integrated versatile LNOI chips are expected to move out of the laboratory and lead to more practical applications.
Fang Bo received his BS (also BE) and PhD degrees from Nankai University in 2002 and 2007, respectively. Currently, he is working as a professor at Nankai University. From 2013 to 2014, he was working as a visiting scholar at Washington University in St. Louis. His research interests include micro-/nano-optics, quantum optics, and nonlinear optics, in particular, fabrication and nonlinear effects of on-chip lithium niobate resonators.
Yongfa Kong received his BS and MS degrees from Nankai University and his PhD degree from the School of Material Science and Engineering of Tianjin University. Currently, he is working as a professor of physics at Nankai University in China. He has worked at the School of Physics of Nankai University since 1999, following postdoctoral appointments at the Photonics Center of Nankai University. His research interests are diverse and cover the physics and devices of nonlinear optical and photonic materials.

Guoquan Zhang received his bachelor's degree in 1993, his master's degree in condensed matter physics in 1995, and his PhD in condensed matter physics from Nankai University, Tianjin, China. Currently, he is working as a professor at Nankai University. His research interests include nonlinear optics and quantum optics.
Jingjun Xu received his BS degree in solid-state physics and PhD in condensed matter physics from Nankai University in 1988 and 1993, respectively. Currently, he is working as a professor at the School of Physics at Nankai University. He is the founding director of the Ministry of Education Key Laboratory of Weak-Light Nonlinear Photonics. His research interests include nonlinear photonic materials and physics and their application in information technology.
"Physics"
] |
Aspilota ajara sp. n. (Hymenoptera, Braconidae, Alysiinae), the first species of the genus Aspilota Foerster from caves
Aspilota ajara sp. n., a new species of the A. miraculosa (fasciatae) species group with a very short upper mandibular tooth, was collected in a cave on La Palma, Canary Islands, Spain. This is the first Aspilota species known to occur in caves, as well as the first record of Aspilota for the Canary Islands. The new species is described, illustrated, and compared with related taxa.
Introduction
The Alysiinae is an extremely diverse subfamily of parasitoids of the family Braconidae (Dolphin and Quicke 2001), with about 2,300 described species (Yu et al. 2012), which are divided into two large and morphologically diverse tribes, Alysiini and Dacnusini (Shenefelt 1974). Members of the tribe Alysiini are parasitoids of Diptera-Cyclorrhapha, usually inhabiting humid substrates (Wharton 1984; Yu et al. 2012). Dacnusini are almost exclusively specialized on leaf- and stem-mining flies, mainly from the families Agromyzidae, Chloropidae and Ephydridae (Belokobylskij 2005; Peris-Felipo et al. 2014).
The genus Aspilota Foerster, 1863 is one of the largest taxa of the Aspilota genus group (Alysiini), with approximately 250 species described from almost all zoogeographical regions. Species of Aspilota are well defined by the presence of a large paraclypeal fovea connecting with the inner margin of the eye and the presence of fore wing vein cuqu1 (2-SR) (van Achterberg 1988; Peris-Felipo and Belokobylskij 2016a).
A new species, Aspilota ajara sp. n., from the A. miraculosa (fasciatae) species group (characterized by the small size of the upper tooth of the mandible) is described and illustrated in this paper. This is the first record of an Aspilota species collected in caves and also the first record of the genus Aspilota for the Canary Islands.
Area of study
La Palma Island (Canary Islands) has a subtropical climate, with an annual average temperature of 24.2°C (winters of 20-22°C and summers of 25-28°C) and a low annual average rainfall of 135 mm (AEMET 2016). The "Llano de Los Caños" cave is located in Villa de Mazo (La Palma, Canary Islands, Spain), close to La Sabina and Tirimaga (Fig. 1). The cave is an almost linear tube with a length of 1,200 m and several short branches. It has a single entrance on the mountain of La Horqueta (1365 m; UTM 28RBS262646). Recently, a new section called "Galerias de los Zapadores" was opened (Fernández et al. 2015). However, our area of study belongs to the classic section ("Tramo clásico").
Methodology
Sampling was carried out at four points (E) located in the main cave in complete darkness during 1995 (Fig. 2). At each point, four pitfall traps were placed at the beginning of each annual season and were checked two weeks later. Automobile antifreeze liquid was used as preservative and pieces of cheese were used as bait (García and González 1998).
The first sampling point (E1) was located 20 m from the entrance in earthy-sandy soil. The second (E2) was placed 70 m from the entrance and has an earthy substrate as well as a 30-cm crack which divides the cave into two. The third point (E3) was situated 90 m from the entrance, in earthy substrate. The last sampling point (E4) was located 190 m from the entrance in a substrate built up from demolition debris (García and González 1998). Climatic conditions and distance from the cave entrance are given in Table 1.
Type specimens are deposited in the following collections: holotype in the Natural History Museum of Tenerife (Tenerife, Canary Islands, Spain; MNHT); paratypes in the Entomological Collection at the University of Valencia (Valencia, Spain; ENV), Museo Nacional de Ciencias Naturales (Madrid, Spain; MNCN), Entomological Collection at the University of La Laguna (La Laguna, Canary Islands, Spain; CULL), Natural History Museum of Tenerife (Tenerife, Canary Islands, Spain; MNHT), Zoological Institute RAS (St Petersburg, Russia; ZISP), and in the private collection of Rafael García Becerra (La Palma, Canary Islands, Spain; RGB).
Description. Female (holotype). Head. In dorsal view twice as wide as its median length, 1.3 times as wide as mesoscutum, with rounded temples behind eyes. Head at level of temples (dorsal view) as wide as at level of eyes. Eye in lateral view 1.6 times as high as wide and 0.9 times as wide as temple medially; in dorsal view about as wide as temple. POL 1.5 times OD; OOL 4.7 times OD. Face 1.7 times as wide as high; inner margins of eyes subparallel. Clypeus slightly curved ventrally, 2.3 times as wide as high. Mandible weakly widened towards apex, 1.4 times as long as its maximum width. Upper tooth of mandible distinctly shorter than middle and lower teeth, developed as a rounded lobe; middle tooth long, narrow and pointed; lower tooth longer than upper tooth, wide, rounded apically. Antenna thick, 19-segmented, 1.1 times as long as body. Scape 2.1 times as long as pedicel. First flagellar segment 3.2 times as long as its apical width, 1.1 times as long as second segment; second segment 3.0 times as long as its maximum width; third to ninth segments 2.8 times, 10th to 14th segments 2.6 times, 15th segment 2.2 times, 16th segment 2.5 times, and 17th (apical) segment 2.75 times as long as their maximum widths, respectively.
Mesosoma in lateral view about 1.2 times as long as high. Mesoscutum 1.1 times as long as its maximum width. Notauli mainly absent on horizontal surface of mesoscutum. Mesoscutal pit absent. Prescutellar depression smooth, only with median carinae. Precoxal suture present, not reaching anterior and posterior margins of mesopleuron. Posterior mesopleural furrow crenulate in upper part and smooth below. Propodeum sculptured, with pentagonal areola. Propodeal spiracle small.
Legs. Hind femur 4.7 times as long as its maximum width. Hind tibia slightly widened towards apex, 12.0 times as long as its maximum subapical width, as long as hind tarsus. First segment of hind tarsus 2.6 times as long as second segment.
Metasoma. Distinctly compressed. First tergite smooth medially, weakly rugulose laterally, widened towards apex, 2.3 times as long as its apical width. Ovipositor 1.3 times as long as first tergite, distinctly shorter than metasoma, 0.8 times as long as hind femur.
Colour. Body reddish brown, metasoma paler. Antenna mainly pale brown, four basal segments yellow. Mandible and legs yellow. Wings hyaline.
Etymology. The name is derived from the Canary dialect word "ájara", meaning "be fortunate", referring to the difficulty of finding this genus in caves.
Comparative diagnosis. This new species is similar to A. insolita (Tobias, 1962) (U.K., Ireland, Denmark, Spain, Hungary, former Czechoslovakia, European part of Russia, Iran: Peris-Felipo et al. 2016), as they share the sculptured, pentagonal areola on the propodeum, the eye in lateral view 0.9-1.0 times as wide as the temple medially, the mandible 1.4 times as long as its maximum width, and the sixth flagellar segment 2.6-2.8 times as long as its maximum width.
Aspilota ajara sp. n. differs from A. insolita in having the head in dorsal view twice as wide as its median length (1.8 times in A. insolita), head in dorsal view 1.3 times as wide as mesoscutum (1.6 times in A. insolita), face 1.7 times as wide as high (1.9 times in A. insolita), clypeus 2.3 times as wide as high (1.6 times in A. insolita), head at level of eyes in dorsal view about as wide as head at level of temples (1.2 times in A. insolita), first flagellar segment 3.2 times as long as its maximum width (4.7-5.3 times in A. insolita), and hind femur 4.7 times as long as its maximum width (4.0-4.1 times in A. insolita). In Belokobylskij's key (Belokobylskij and Tobias 2007), A. ajara sp. n. runs to the Eastern Palaearctic A. tshirikovi Belokobylskij, 2007 (Russian Far East and Japan), but differs in having the lower mandibular tooth long (short in A. tshirikovi), middle and apical antennal segments slender and long (thick and short in A. tshirikovi), face 1.7 times as wide as high (1.2-1.4 times in A. tshirikovi), and paraclypeal fovea wide (rather narrow in A. tshirikovi).
Remarks. Specimens were found in all traps, but mainly at sampling points E2 and E3. One specimen was captured in each of March and June, five in September, and 21 in December. Unfortunately, it is not possible to report precise collection data for sampling points and dates because the notes with this complete information were destroyed in a flood. The following Diptera were sampled in the same traps: Calliphora vicina Robineau-Desvoidy, 1830 (Calliphoridae), Megaselia sp. (Phoridae) and Aptilotus martini Wheeler & Marshall, 1989 (Sphaeroceridae) (García and González 1998). However, it is impossible to establish any biological relationships between them.
Discussion
Subterranean ecosystems have always interested people, and there has been great scientific interest in cave fauna, as witnessed by the significant number of animal species found and described from these peculiar localities. However, only nine species of Braconidae, belonging to the genera Aleiodes Wesmael, 1838 (Rogadinae), Apanteles Foerster, 1863 (Microgastrinae), Aulosaphes Muesebeck, 1935 (Lysiterminae), Dinotrema Foerster, 1863 (Alysiinae), Ontsira Cameron, 1900 and Spathius Nees, 1819 (Doryctinae), have been cataloged from subterranean environments (Peris-Felipo and Belokobylskij 2016b). The description of Aspilota ajara sp. n. provides the first record of Aspilota for the cave biota.
It is possible that most braconids collected in subterranean environments (caves, galleries or chasms) found their way there accidentally, while searching for host refuges (Peris-Felipo and Belokobylskij 2016b). However, the hosts of Alysiini (Alysiinae), which also include sarcophagous and necrophagous Diptera (Calliphoridae, Muscidae, Sarcophagidae, and Phoridae), are common and distinct elements of the cave fauna, and we suggest that they have acquired stable parasitoid faunas under these peculiar subterranean conditions. Interestingly, no braconid parasitoids known from subterranean environments show any outstanding morphological characters (including colour) associated with subterranean life. Either these insects have penetrated these environments relatively recently, with insufficient time for major morphological adaptations, or they maintain regular contact with areas outside caves. To conclude, further studies on caves are recommended in order to improve our knowledge of these still largely unknown parasitoids.
Figure 1. Location of the studied cave in the Canary Islands (Spain) and on La Horqueta Mountain.
Figure 2. Map of the "Llano de Los Caños" cave with sampling points E1-E4 in red.
Figure 3. Aspilota ajara sp. n. (female). A Habitus, lateral view B Head and mesosoma, lateral view C Mandible D Antenna E Face, front view F Head and mesonotum, dorsal view. | 2,333.8 | 2016-10-28T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Galectin-3 leads to attenuation of apoptosis through Bax heterodimerization in human thyroid carcinoma cells.
Cancer cells survive by escaping normal apoptosis, and the blocks in apoptosis that keep cancer cells alive are promising candidates for targeted therapy. Galectin-3 (Gal-3), a member of the lectin family, is involved in cell growth, adhesion, proliferation and apoptosis. The role of Gal-3 in apoptosis of thyroid carcinoma cells remains elusive. Here, we report that Gal-3 heterodimerizes with Bax, mediated by the carbohydrate recognition domain (CRD) of Gal-3, conferring an anti-apoptotic phenotype. The Gal-3/Bax interaction was suppressed by an antagonist of Gal-3, which in turn sensitized cells to apoptosis. The data presented here highlight that Gal-3 is involved in the anti-apoptotic behavior of thyroid carcinoma cells. Thus, targeting Gal-3 may lead to an improved therapeutic modality for thyroid cancer.
INTRODUCTION
Thyroid cancer accounts for ~90% of all endocrine malignancies, and papillary thyroid carcinoma (PTC) is the most common subtype of this disease [1]. Most patients with PTC have a favorable prognosis, but a subset of patients suffers from recurrent disease that is refractory to surgical resection, radioactive iodine ablation and chemotherapeutic drugs. The ultimate goal of cancer research is to learn how to make cancer cells selectively die.
Since a hallmark of cancer is the dysregulation of programmed cell death, cancer cells survive through the evasion of apoptosis. Resistance to apoptosis can also cause resistance to conventional cytotoxic therapy [2]. Thus, cellular apoptotic pathways are attractive candidates for targeted therapies. Altered expression of B cell lymphoma-2 (Bcl-2) family proteins, which govern the commitment to programmed cell death, is one suggested mechanism of resistance to apoptosis. The Bcl-2 family proteins, harboring the Bcl-2 homology (BH) domains, serve as pro- and anti-apoptotic executors [3]. Many subsequent mechanistic studies led to the current model in which anti-apoptotic family members sequester the BH3 domain of the pro-apoptotic proteins (Bax/Bak) in the mitochondria and release them in response to diverse intrinsic and/or extracellular stress [4]. Ultimately, Bax/Bak oligomerizes, leading to mitochondrial outer membrane permeabilization (MOMP), in which cytochrome c is released into the cytoplasm with the subsequent release of apoptogenic factors that together execute DNA fragmentation [5,6].
The galectins are a family of mammalian β-galactoside binding proteins that share a highly conserved carbohydrate recognition domain (CRD). To date, 15 galectin members have been identified; they are associated with dysregulation of cancer cell growth, including defective apoptosis and cell-cycle alterations in carcinogenesis [7,8]. Galectin-3 (Gal-3) is the only member of the chimera-type galectin subgroup and contains a single CRD similar to other galectins, as well as a small N-terminal part and a collagen-like sequence. Gal-3 is broadly expressed in many tumor cells and is a good diagnostic marker for differentiated thyroid cancer [9]. It is observed both inside and outside cells, and exerts multifunctional roles in transformation, survival and anti-apoptosis [10]. The mechanisms by which Gal-3 regulates these processes have been elucidated through cues from Gal-3-interacting proteins such as oncogenic Ras and through activation of pro-survival signaling, including the phosphatidylinositol 3-kinase (PI3K) and mitogen-activated protein kinase (MAPK) pathways. In its anti-apoptotic role, Gal-3 has been shown to translocate from the cytosol or the nucleus to the mitochondria in response to apoptotic stimuli. Supporting this, a truncated protein and a serine mutant of Gal-3 showed increased sensitivity to apoptotic stimuli. With respect to a binding motif, Gal-3 has a unique asparagine-tryptophan-glycine-arginine (NWGR) motif, known as a conserved motif of the Bcl-2 family, and its substitution mutant loses apoptosis-resistant properties in overexpression studies [11,12]. The NWGR motif in Bcl-2 is also significant for Bcl-2/Bax heterodimerization, which prevents Bax oligomerization [13]. However, how this motif of Gal-3 is conducive to apoptotic resistance is still unclear.
In this study we examine the anti-apoptotic role of Gal-3 in thyroid carcinoma cells and demonstrate that Gal-3 interacts with the pro-apoptotic Bax of the canonical apoptotic pathway, leading to inhibition of apoptosis.
Gal-3 expression contributes to cell growth and cell death in thyroid carcinoma cells in response to DXR treatment
Initially, we examined the profile of Gal-3 expression in thyroid carcinoma cell lines before addressing the roles of Gal-3. We observed Gal-3 overexpression in FTC-133 cells but not in TPC1 cells compared with normal thyroid cells (Nthy-ori 3-1) (Figure 1A). Thus we used FTC-133 cells for stable clones with Gal-3 knockdown and TPC1 cells for overexpression of Gal-3. Consistent with observations in other cancer types such as breast and prostate, Gal-3 knockdown FTC-133 cells grew more slowly than control cells (Figure 1B), indicative of a role of Gal-3 in favor of thyroid cancer growth. In addition, doxorubicin (DXR) treatment was more effective against Gal-3 knockdown FTC-133 cells than against the vector control. Supporting this, Gal-3-overexpressing TPC1 cells treated with DXR showed higher viability than vector control cells (Figure 1C). To examine the expression pattern of Gal-3 following anticancer drug treatment, TPC1 cells were exposed to cis-diammineplatinum dichloride (CDDP) and DXR, which cause DNA double-strand breaks and induce apoptosis of cancer cells. As shown in Figure 1D, after treatment with CDDP or DXR, we detected increased expression of Gal-3 protein as well as of the pro-apoptotic protein Bax. Gal-3 expression increased, but Bax expression remained at the level of the untreated condition, when cells were treated with 1 µM DXR for 24 hours. When we treated the cells with 1 µM DXR, we observed much more cell death than with the other treatments. Based on these findings, we inferred that cells treated with 1 µM DXR increased Bax expression much earlier, and the ensuing cell death might then mask the difference in Bax expression (lanes 4 and 6 in Figure 1D). Bcl-2, an anti-apoptotic protein, was not detected in TPC1 cells. Bcl-xl did not show a significant change with CDDP treatment and decreased with DXR treatment. Furthermore, we investigated the kinetics of Gal-3 and Bax after treatment with DXR (Figure 1E); this analysis showed that Gal-3 and Bax increased gradually in response to DXR. The induction of Bax by DXR in TPC1 cells indicated the induction of apoptosis. Intriguingly, it is likely that anti-apoptotic Gal-3 was induced in order to protect cells in response to severe apoptotic stimuli.
Figure 1. A, Western blot analysis shows Gal-3 protein expression in Nthy-ori 3-1, TPC1 and FTC-133 cells. β-actin was used as the loading control. B, Gal-3 knockdown led to decreased cell growth of FTC-133 cells. Cell growth was analyzed by MTT assay. The value of day 1 was set as 1. **P < 0.01 vs vector. Points in the MTT assay represent the mean of three independent experiments; bars, SE. C, FTC-133 and TPC1 stable cells were established as in Materials and Methods. They were either untreated or treated with 1 µM DXR for 24 hours. Cell viability was determined by MTT. The value of untreated cells was set as 1. **P < 0.01, *P < 0.05. Columns represent the mean of three independent experiments; bars, SE. D, TPC1 cells were exposed to the indicated concentrations of CDDP or DXR. Cell lysates were prepared and processed for western blot assay 24 hours after treatment. E, TPC1 cells were treated with 0.5 µM DXR. Kinetic analyses of the indicated proteins were done at the times indicated. Analysis was normalized to β-actin. Points represent the mean of two independent experiments; bars, SE.
Figure 2 (caption, beginning truncated). [...] si-Gal-3 for 24 hours, and then either untreated or treated with 1 µM DXR for 24 hours. The release of cytochrome c was determined in the cytoplasmic fraction of TPC1 cells at the times indicated. The quantification of band intensity was performed using ImageJ software, and the value of si-Control at 0 hours was set as 1. B and C, TPC1 cells were treated as in A. The indicated protein expression levels were determined. β-actin was used as the loading control. D, TPC1 cells were pretreated with 1% GCS-100/MCP for 3 hours, and then either left untreated or treated with 1 µM DXR for 24 hours. (Upper) Cell lysates were isolated, and PARP cleavage levels or cleaved caspase-3 expression were determined. (Lower) Bax oligomerization assay. Cells were cross-linked with 1,6-bismaleimidohexane (BMH) and immunoblotted with polyclonal anti-Bax antibody. NS means non-specific band. E, TPC1 cells were transfected with the indicated siRNA for 24 hours, and then either untreated (white columns) or treated with 0.5 µM DXR (black columns) for 24 hours. Western blot analysis shows Gal-3 and Bax protein expression. Cell viability was analyzed by MTT assay. The value of untreated control cells was set as 1. The symbols a, b, c and d indicate the black columns of lanes 2, 4, 6 and 8 in the graph, respectively. **P < 0.01, *P < 0.05. Columns represent the mean of three independent experiments; bars, SE.
Gal-3 contributes to anti-apoptosis in the intrinsic apoptotic pathway
To further explore how Gal-3 is linked to apoptotic pathways, we first performed suppression studies in which DXR-induced apoptosis was expected to be enhanced following siRNA-mediated knockdown of Gal-3. We first analyzed whether Gal-3 was involved in the intrinsic apoptotic pathway, which is characterized by permeabilization of the mitochondria and release of cytochrome c into the cytoplasm. In cells in which Gal-3 was knocked down, cytoplasmic cytochrome c increased at 5 hours after DXR treatment, whereas in control cells it increased at 10 hours (Figure 2A). Consistently, following the release of cytochrome c, activation of caspase-3 and poly (ADP-ribose) polymerase (PARP) cleavage were increased in Gal-3 knockdown cells compared with control cells (Figure 2B), indicating that DXR-induced apoptosis is enhanced by siRNA against Gal-3 and that the anti-apoptotic function of Gal-3 participates in the intrinsic apoptotic pathway. In thyroid cancer cells, Gal-3 was reported to induce the PI3K-Akt pathway, which acts as pro-survival signaling and inhibits pro-apoptotic sensors such as BID [14]. The extrinsic apoptotic pathway is induced by death receptors such as tumor necrosis factor receptor 1 (TNFR1) and Fas/CD95. Ligands bind to these receptors, which form the death-inducing signaling complex (DISC), leading to initiation of the caspase cascade through caspase-8 [4]. To examine whether Gal-3 affects the extrinsic or an alternative apoptotic pathway, we analyzed caspase-8, p-ERK and p-Akt and did not find significant differences in the expression of these factors under Gal-3 knockdown (Figure 2C), pointing to an anti-apoptotic role of Gal-3 in the intrinsic apoptotic pathway. Next, to further delineate the role of Gal-3 in the intrinsic pathway, we examined Bax oligomerization in response to apoptotic stimulus [15]. We used GCS-100/modified citrus pectin (MCP), which exerts an inhibitory effect by targeting the CRD of Gal-3 [16,17]. The backbone of MCP is a galacturonic acid, and it acts as an antagonist. Although the question of whether MCP binds to other galectins is reasonable, as other galectins may be involved in apoptosis [18,19], none have been linked to the Bax-mediated intrinsic pathway.
Figure 3 (caption, beginning truncated). [...] cells were treated with 0.5 µM DXR for 24 hours. Cell lysates were immunoprecipitated with rabbit IgG, polyclonal anti-Bax, or polyclonal anti-Gal-3 antibody. The immunoprecipitates and input lysates were analyzed by immunoblotting with the indicated antibodies. Input lysates indicate lysates used for immunoprecipitation from TPC1 cells and were used as positive control. C, TPC1 cells were pretreated with 1% GCS-100/MCP for 3 hours, and then either left untreated or treated with 1 µM DXR for 24 hours. Cell lysates were immunoprecipitated with polyclonal anti-Gal-3 antibody. The immunoprecipitates and input lysates were analyzed by immunoblotting with the indicated antibodies. D, Co-localization of Gal-3 and Bax in TPC1 cells treated with 0.5 µM DXR for 24 hours. TPC1 cells were immunofluorescently labelled with anti-Gal-3 (red) and anti-Bax (green) antibodies and Hoechst 33258 (nuclear stain, blue). Scale bar represents 50 µm. E, Prediction of the interaction of the Gal-3 carbohydrate recognition domain (CRD) with Bax. The references for the structures of the Gal-3 CRD and Bax are indicated in the Materials and Methods section. In silico docking was performed using the ClusPro 2.0 server (http://cluspro.bu.edu/login.php). Asn means asparagine.
As a result, Bax oligomers, along with PARP cleavage and activation of caspase-3, were induced in DXR-treated cells but not in untreated cells. Interestingly, treatment with GCS-100/MCP and DXR together enhanced Bax oligomerization (Figure 2D). These results indicate that endogenous Gal-3 protects against DXR-induced apoptosis through the CRD of Gal-3, and that Gal-3 directly or indirectly inhibits Bax oligomerization. The cell viability assay showed a slower growth rate in Gal-3 knockdown cells (b) after DXR treatment than in control cells (a). However, double-knockdown cells treated with siRNA against both Bax and Gal-3 plus DXR (d) did not exhibit reduced cell viability compared with knockdown of Bax alone (c) (Figure 2E). This indicated that low levels of Gal-3 facilitated DXR-induced apoptosis through the intrinsic pathway, and that the anti-apoptotic effect of Gal-3 was mediated by Bax.
Gal-3 binds to Bax through CRD in response to apoptotic stimulus
Since Gal-3 affected Bax oligomerization, we addressed the possibility that Gal-3 directly interacts with Bax. As shown in Figure 3A, a co-immunoprecipitation assay showed that the anti-Bax immunoprecipitates contained endogenous Gal-3 after DXR treatment. Consistent with these results, the reciprocal experiment revealed that endogenous Bax was co-immunoprecipitated with Gal-3, but not without DXR (Figure 3B). How DXR treatment enhances the interaction of the two proteins remains poorly understood. Because GCS-100/MCP enhanced DXR-induced apoptosis, we next tested whether the interaction of Gal-3 with Bax was mediated by the CRD of Gal-3. In cells treated with GCS-100/MCP and DXR together, the interaction of Gal-3 with Bax was suppressed significantly (Figure 3C), indicating that Gal-3 binds to Bax through the CRD. In addition, we performed an immunofluorescence study to determine whether Gal-3 co-localized with Bax. Co-localization of Gal-3 and Bax was observed in the cytoplasm of cells treated with DXR, suggesting that they are physically proximal (Figure 3D). Furthermore, taking advantage of the available structures of the Gal-3 CRD and Bax, we performed in silico docking analysis using the ClusPro 2.0 server to predict their physical interactions. The analysis suggested that the Gal-3 CRD binds within the BH1 domain of Bax, containing asparagines 104 and 106 of the NWGR motif (Figure 3E), whereas Bcl-2 anti-apoptotic proteins bind the BH3 domain of Bax [5].
NWGR motif of Gal-3 CRD is crucial for interaction with Bax
As it has been suggested that the NWGR motif in the CRD of Gal-3 is pivotal to the anti-apoptotic function of Gal-3 [11,12], we assessed the significance of the NWGR motif for the Gal-3/Bax interaction. We constructed mutant Gal-3 (glycine 182 to alanine; G182A) using site-directed mutagenesis (Figure 4A). A co-immunoprecipitation study in transiently transfected 293T cells revealed that mutant Gal-3 G182A bound weakly to Bax, suggesting a deficiency in Gal-3 functionality (Figure 4B). Furthermore, we established two stable clones transfected with mutant Gal-3 and differentially selected in TPC1 cells in order to confirm the characteristics of the Gal-3 mutant in the apoptotic signaling pathway (Figure 4C). We examined whether mutant Gal-3 affected PARP cleavage and activation of caspase-3 in stable clones under DXR treatment. PARP cleavage and activation of caspase-3 were significantly decreased in WT cells compared with VC cells or mutant Gal-3 clones (Figure 4D), indicating that WT overexpression suppressed PARP cleavage and activation of caspase-3, whereas vector and mutant clones behaved similarly. Consistent with the reduced functionality of mutant Gal-3 in the apoptotic pathway, mutant clones demonstrated a reduced anti-apoptotic capacity in the cell viability test (Figure 4E). These data showed that overexpression of Gal-3 led to attenuation of apoptosis in TPC1 cells, and that an amino acid substitution in the NWGR motif of Gal-3 abrogated this attenuation.
Anti-apoptotic role of Gal-3 through Bax is suppressed by Gal-3 inhibitor in cancer cells
Finally, we wondered whether our finding of the Gal-3/Bax interaction in thyroid carcinoma cells extends to other cancer cells [20-22], and examined cell viability with the Gal-3 inhibitor under apoptotic stimulus. The results showed that treatment with the Gal-3 inhibitor sensitized cells to DXR-induced cell death regardless of cancer cell type (Figure 5A). These data indicate that the Gal-3 inhibitor confers sensitivity to DXR-induced apoptosis, suggesting that the mechanism of the anti-apoptotic role of Gal-3 through Bax may apply broadly to human cancers.
DISCUSSION
In thyroid cancer, mutant genes such as BRAF and RAS constitutively activate aberrant cell signaling pathways that control apoptosis through MAPK signaling [23]. The MAPK signaling pathway can function as a pro- or anti-apoptotic factor by switching its substrate proteins, including p53 and Bcl-2 family proteins, on or off [24]. Despite the different etiologies of thyroid carcinoma, the cancer cells commonly show anti-apoptotic characteristics that favor their survival. The mechanisms of apoptotic resistance remain to be elucidated in thyroid cancer. Thus, we addressed the contribution of Gal-3 to apoptosis in thyroid carcinoma, based on the clinical association of Gal-3 with malignant transformation of thyroid carcinoma [25,26].
Earlier studies have shown that intracellular Gal-3 imparts resistance to apoptosis in breast, bladder and prostate cancer cells [27,28], whereas exogenously added recombinant Gal-3 also enhances cancer progression [29]. In most reports on the association of Gal-3 with resistance to drug-induced apoptosis in several types of cancer cells, the anti-apoptotic function of Gal-3 is documented through promising molecular findings. These include phosphorylation of Gal-3 and its nuclear export to the mitochondria [30-32], suppression of the TNFR1-induced extrinsic apoptotic pathway [33], and activation of alternative pro-survival pathways such as the PI3K pathway [14,34]. Beyond these indirect lines of evidence, we demonstrate in this study that Gal-3 plays a direct role in the modulation of intrinsic apoptosis in thyroid carcinoma cells (Figure 2).
Although the interactions of pro-apoptotic Bax/Bak with anti-apoptotic Bcl-2 family members are well characterized through protein crystallography [3], it is intriguing that cytosolic Gal-3 interacts with Bax, inhibiting Bax oligomerization in place of the mitochondrial membrane-spanning Bcl-2 family. The question here is whether Gal-3 acts as (i) a modulator of the Bcl-2/Bax complex or (ii) a sole regulator of Bax, independent of anti-apoptotic Bcl-2 members. For the former mechanism, we addressed the possibility that Gal-3 is involved in the Bcl-2/Bax heterodimer, because anti-apoptotic Bcl-2 normally sequesters pro-apoptotic Bax at the mitochondria. In silico docking analysis also suggested that the Gal-3 CRD can bind to the BH1 domain of Bax (Figure 3E) instead of BH3, which mediates Bcl-2/Bax heterodimerization [35]. However, we did not obtain conclusive data on a triple complex (Gal-3, Bax and Bcl-2) when co-immunoprecipitation assays were performed (data not shown). Since the Bcl-2 family consists of many members expressed in a cell-type-specific fashion [4], further studies will be needed to define the role of Gal-3 using constructs of these Bcl-2 members.
Our data support that cell death was partially overcome by Gal-3 overexpression when cells were exposed to an apoptotic drug (Figure 4). This opens the possibility that Gal-3-overexpressing cancer cells have a survival advantage. Given that Gal-3 overexpression is clinically associated with human thyroid carcinoma and various other cancers, targeting Gal-3 could be considered as an improved therapeutic modality for cancers. Although thyroid cancer cells remain to be explored with a combination of a typical Bcl-2 inhibitor and a Gal-3 inhibitor, targeting Gal-3 might be effective in improving the efficacy of Bcl-2-targeted therapy [36].
Cells
Human thyroid carcinoma TPC1 cells were obtained from the University of Colorado Cancer Center Cell Bank (Denver, CO). FTC-133 cells were obtained from the University of California Cell Culture Core Facility (San Francisco, CA). Human thyroid Nthy-ori 3-1 cells were purchased from Sigma-Aldrich. 293 cells, HeLa cells, HT1080 cells and PC3 cells were purchased from the American Type Culture Collection. These cell lines have been tested and authenticated by the suppliers. FTC-133, Nthy-ori 3-1, 293, HeLa, HT1080 and PC3 cells were cultured in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum (FBS), and the other cells were cultured in RPMI 1640 supplemented with 10% FBS. To maintain stable clones, 200 µg/ml G418 and 10 µg/ml blasticidin (Invitrogen) were added to the culture media for FTC-133 and TPC1 transfectants, respectively.
siRNA against Gal-3 and control siRNA (Santa Cruz Biotechnology) were transfected into each cell line using Lipofectamine RNAiMAX reagent (Invitrogen) according to the manufacturer's instructions.
The Gal-3-V5 expression plasmid was constructed in pcDNA6/V5 harboring human full-length Gal-3 and confirmed by sequence analysis. 293 cells were seeded at 50% confluence per well in plates overnight and transfected transiently for 48 hours using Lipofectamine LTX and Plus Reagent (Invitrogen) according to the manufacturer's instructions. TPC1 stable clones were selected with 10 µg/ml blasticidin.
RNA extraction and Reverse Transcription-PCR (RT-PCR)
Total RNA was extracted with Trizol reagent (Invitrogen). For the RT reaction, 2 µg of total RNA was used with the First-Strand cDNA Synthesis Kit (GE Healthcare, Piscataway, NJ) according to the manufacturer's instructions. In the PCR reaction, 1 µl of the resultant cDNA from the RT reaction was used as the template.
Preparation of cytoplasmic extracts
Cells were washed with phosphate buffered saline (PBS) and centrifuged at 500 g for 5 minutes. The supernatant was discarded and the cell pellet was used for extraction of the cytoplasmic fractions using cytoplasmic extraction reagents (Pierce Biotechnology, Rockford, IL) according to the manufacturer's instructions.
Western blot assay
Cells were lysed in buffer (50 mM Tris-HCl pH 7.4, 1% NP-40, 0.5% Na-deoxycholate, 0.1% SDS, 150 mM NaCl, 2 mM EDTA, 50 mM NaF and 0.2 mM Na3VO4) containing protease inhibitors (Roche Applied Science, Nutley, NJ). After BCA protein assay (Pierce Biotechnology), equal amounts of protein were separated on SDS-polyacrylamide gels and transferred to polyvinylidene fluoride membranes (Millipore, Bedford, MA). Membranes were blocked in 0.1% casein/Tris-buffered saline (TBS) for 1 hour, incubated with appropriate primary antibodies overnight at 4°C, then incubated with secondary antibodies conjugated with IRDye 800 (Rockland Immunochemicals, Gilbertsville, PA) or Alexa Fluor 680 (Invitrogen) for 1 hour at room temperature. Membranes were washed three times with TBS containing 0.1% Tween 20 at 5-minute intervals and were visualized using an Odyssey Infrared Imaging System. Relative protein levels were quantified using ImageJ software (National Institutes of Health) and normalized to β-actin. Each experiment was repeated at least twice.
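For readers reproducing the densitometry arithmetic outside ImageJ, a minimal Python sketch of the β-actin normalization is given below; the band intensities are hypothetical placeholders, and the actual measurements in this study were made with ImageJ as described above.

```python
# Minimal sketch of the densitometry normalization described above.
# Band intensities are hypothetical placeholders; in practice they come
# from ImageJ gel-analysis measurements of the target and beta-actin bands.

def normalize_to_actin(target_bands, actin_bands):
    """Divide each target band intensity by its lane's beta-actin intensity."""
    return [t / a for t, a in zip(target_bands, actin_bands)]

gal3 = [1520.0, 2310.0, 2890.0]   # e.g. 0 h, 5 h, 10 h after DXR (hypothetical)
actin = [1980.0, 2010.0, 1955.0]  # loading control for the same lanes

normalized = normalize_to_actin(gal3, actin)
# Express relative to the first (e.g. untreated) lane, as in the figure legends
relative = [v / normalized[0] for v in normalized]
print(relative)
```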
Immunofluorescence
Cells were fixed with 4% paraformaldehyde in PBS for 15 minutes, permeabilized with 0.25% Triton X-100 for 10 minutes, blocked with 1% bovine serum albumin (BSA) in PBS for 30 minutes, and incubated with the indicated antibodies overnight, then incubated with tetramethylrhodamine isothiocyanate (TRITC)-conjugated antibody and fluorescein isothiocyanate (FITC)-conjugated antibody (Sigma-Aldrich) for 1 hour in the dark. Nuclei were stained with 2.5 µg/ml Hoechst 33258 (Invitrogen) for 5 minutes. Pictures were taken using the same parameters with an OLYMPUS BX40 microscope (Melville, NY) and CellSens Dimension Imaging Software (Olympus).
Co-Immunoprecipitation assay
Cells were lysed in the previously described buffer containing 1% CHAPS and protease inhibitors (Roche Applied Science). After BCA protein assay (Pierce Biotechnology), cell lysates containing equal amounts of protein were incubated with appropriate antibodies overnight and with 15 µl of protein G Sepharose (GE Healthcare) for 1 hour at 4°C. The beads were washed three times and boiled in 2x sample buffer. The supernatant was subjected to SDS-polyacrylamide gel electrophoresis and immunoblotted with appropriate antibodies. Each experiment was repeated at least twice.
MTT assay
Briefly, cells were seeded in 24-well plates. At the time of assay, 0.1 mg/ml MTT in basic medium was added to each well and incubated for 1 hour. After removing the MTT, dimethyl sulfoxide was added and mixed vigorously. Absorbance was measured at 485 nm. All experiments were carried out in quadruplicate and repeated twice. Statistical analysis was done using the paired Student t-test. P < 0.05 was regarded as significant.
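The statistical step is a standard paired comparison; a minimal Python sketch using SciPy is shown below, with hypothetical absorbance values standing in for one quadruplicate experiment.

```python
# Minimal sketch of the paired Student t-test used for the MTT data.
# Absorbance values are hypothetical; each pair is one of the quadruplicate
# wells measured for control vs. treated cells in the same experiment.
from scipy import stats

control = [0.82, 0.79, 0.85, 0.81]  # untreated wells (hypothetical)
treated = [0.51, 0.47, 0.55, 0.50]  # DXR-treated wells (hypothetical)

t_stat, p_value = stats.ttest_rel(control, treated)
print(f"t = {t_stat:.3f}, P = {p_value:.4f}")
if p_value < 0.05:
    print("difference regarded as significant (P < 0.05)")
```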
Preparation of GCS-100/MCP
Citrus pectin (CP) was purchased from Sigma Chemicals. Temperature modification of CP was performed as follows: a CP solution (1.3%) was autoclaved for 1 hour, cooled to room temperature, and centrifuged at 10,000 g for 10 minutes. The collected supernatant was precipitated with 2 volumes of absolute ethanol and frozen at -20°C for 2 hours. After centrifuging at 10,000 g for 10 minutes again, the supernatant was discarded and the pellet was saved. The pellet was suspended in acetone, filtered, and dried on Whatman filters. MCP was dissolved in deionized distilled water.
Detection of Bax oligomerization
For analysis of Bax oligomerization in cells, washed cell pellets were resuspended in PBS with freshly prepared 1,6-bismaleimidohexane (Thermo Scientific) at a final concentration of 5 mM and incubated with rotation for 30 minutes at room temperature. The cells were pelleted, dissolved in buffer, incubated on ice for 5 minutes, and centrifuged at 16,000 g for 10 minutes. Supernatants were analyzed by immunoblotting with polyclonal anti-Bax antibodies after BCA protein assay (Pierce Biotechnology) [37]. | 5,702.2 | 2014-09-16T00:00:00.000 | [
"Biology"
] |
Prospecting for Zoonotic Pathogens by Using Targeted DNA Enrichment
More than 60 zoonoses are linked to small mammals, including some of the most devastating pathogens in human history. Millions of museum-archived tissues are available to elucidate the natural history of those pathogens. Our goal was to maximize the value of museum collections for pathogen-based research by using targeted sequence capture. We generated a probe panel that includes 39,916 80-bp RNA probes targeting 32 pathogen groups, including bacteria, helminths, fungi, and protozoans. Laboratory-generated mock-control samples showed that we are capable of enriching targeted loci from pathogen DNA 2,882-6,746-fold. We identified bacterial species in museum-archived samples, including Bartonella, a known zoonotic agent. These results show that probe-based enrichment of pathogens is a highly customizable and efficient method for identifying pathogens in museum-archived tissues.
Panel Development
We developed a set of biotinylated probes for UCE-based, targeted sequencing of 32 pathogen groups (Table 1). Given the large evolutionary distances covered by the various pathogens, we generated sets of probes targeting more discrete taxonomic groups (e.g., Nematoda, Yersinia). For bacterial pathogens, probes were designed to capture all species within a genus or species group. For eukaryotic pathogens, probes were designed to be effective at taxonomic ranks ranging from species group to class. The taxonomic rank varied among eukaryotic pathogens based on the following criteria: 1) the number of available genomes and 2) sequence diversity, because this affected the number of probes needed. Table 1 provides information on the pathogen group, targeted zoonotic agent, and zoonoses.
For each group, we used the Phyluce package v1.7.0 (1,2) to generate probes targeting ≈49 loci, as described below. First, we identified orthologous loci between a focal pathogen and the remaining species in the pathogen group. Focal taxa were chosen based on their assembly contiguity or prominence as a zoonotic agent. To do this, we downloaded a genome for each species in the pathogen group; accession numbers for these assemblies are provided in Table 2. Next, we simulated 25x read coverage for each genome using the ART v2016.06.05 read simulator (3) with the following options: art_illumina -paired -len 100 -fcov 25 -mflen 200 -sdev 150 -ir 0.0 -ir2 0.0 -dr 0.0 -dr2 0.0 -qs 100 -qs2 100 -na. Simulated reads from all query taxa were mapped back to the focal taxon with bbmap v38.93 (4), allowing up to 10% sequence divergence (minid = 0.9). Unmapped or multimapping reads were removed using Bedtools v2.9.2 (5) and phyluce_probe_strip_masked_loci_from_set (filter_mask 25%). The remaining reads were merged to generate a BED file containing orthologous regions between the query and focal taxa.
Then, we identified orthologous loci among all taxa within the pathogen group using phyluce_probe_query_multi_merge_table. Next, we filtered each set of loci to retain only those shared among 33% of taxa in the pathogen group using phyluce_probe_query_multi_merge_table. We extracted 160 bp from each locus and generated an initial set of in silico probes directly from the focal genome using phyluce_probe_get_genome_sequences_from_bed and phyluce_probe_get_tiled_probes.
Additional options for probe design included generating two probes per locus (-two_probes) that overlapped in the middle (-overlap-middle). Focal probes with repetitive regions or skewed GC content (<30% or >70%) were removed. Next, the probes from the focal taxa were mapped back to each genome in the pathogen group with phyluce_probe_run_multiple_lastzs_sqlite. We used the -identity option to limit searches to a maximum divergence of 30%. Using these results, we extracted 120-bp loci from the probed regions in each representative genome using phyluce_probe_slice_sequence_from_genomes. Theoretically, this dataset should contain orthologous 120-bp sequences from most taxa in each pathogen group. We verified this with phyluce_probe_get_multi_fasta_table, which provides a table showing the number of taxa identified at each locus. We used this information to identify the 100 loci capable of capturing the most taxa from each pathogen group. Next, we generated two 80-bp probes from each of the 100-bp and 120-bp loci. We used phyluce_probe_easy_lastz to compare the probes to themselves and removed any possible duplicates. Then we reduced the probe set even further by clustering probes based on sequence identity with cd-hit-est v4.8.1 (6): we identified sequence clusters with >95% similarity and retained only 1 probe per cluster. Finally, we recalculated the number of probes needed to capture each locus.
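To illustrate the GC and duplicate filters just described, here is a minimal Python sketch; the 30-70% GC window is taken from the text, while the homopolymer check and the in-memory probe set are hypothetical stand-ins for the pipeline's actual repeat masking and FASTA input. Note that exact-duplicate removal here only approximates the cd-hit-est clustering at >95% identity described above.

```python
# Minimal sketch of the probe-filtering step described above.
# The GC window (30-70%) follows the text; the probe sequences and the
# homopolymer-run check are hypothetical stand-ins for the real pipeline.

def gc_fraction(seq: str) -> float:
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def has_long_homopolymer(seq: str, max_run: int = 8) -> bool:
    """Crude low-complexity proxy: reject runs of one base longer than max_run."""
    run, prev = 1, ""
    for base in seq.upper():
        run = run + 1 if base == prev else 1
        if run > max_run:
            return True
        prev = base
    return False

def keep_probe(seq: str) -> bool:
    return 0.30 <= gc_fraction(seq) <= 0.70 and not has_long_homopolymer(seq)

probes = {"probe_1": "ATGC" * 20, "probe_2": "A" * 80}  # hypothetical 80-mers
kept = {name: s for name, s in probes.items() if keep_probe(s)}

# Remove exact duplicates (a rough stand-in for identity clustering)
kept_unique, seen = {}, set()
for name, s in kept.items():
    if s not in seen:
        seen.add(s)
        kept_unique[name] = s
print(f"retained {len(kept_unique)} of {len(probes)} probes")
```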
The preceding steps were repeated for each pathogen group shown in Table 1. To generate a final panel, we selected 49 loci per pathogen group in a way that minimized the number of probes needed. In some cases, we needed to generate two sets of probes to adequately represent target pathogens. For example, Kinetoplastea contains two pathogens of interest, Trypanosoma and Leishmania. The baits designed for Leishmania were able to target all 49 loci in most of the Kinetoplastea but only 23 loci in Trypanosoma. We therefore generated a second set of 617 Trypanosoma-specific baits to augment the kinetoplastid baits and ensure that Trypanosoma parasites were adequately represented in the final panel. Likewise, we doubled the number of baits used to capture loci from the Bacillus cereus group to effectively capture B. cereus and B. anthracis. The probe set was quality checked by Arbor Biosciences. This included comparing the probe set to mammal genomes with blastn v2.12.0 (7) and checking for low-complexity sequences. Any probes that failed quality control were replaced before synthesis.
Library Preparation
Standard DNA sequencing libraries were generated from 500 ng of DNA per sample. Individual samples with similar DNA concentrations were combined into pools of 4-16 samples, and the total volume was reduced to 7 µL with a SpeedVac vacuum concentrator.
Next, we used the high-sensitivity protocol of myBaits v5 (Daicel Arbor Biosciences) to enrich target pathogen loci from the host/pathogen control and museum-archived samples. We used two rounds of enrichment for each pool of samples, with a probe concentration of 100 ng/µL; each round ran for 24 hours at 65°C. After washing away unbound DNA, each library was amplified with a 15-cycle PCR step and quantified using qPCR. Finally, the pools of 4-16 samples were combined into an equimolar pool for sequencing. All sequencing reactions were run on single lanes of an Illumina HiSeq 2500.
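The equimolar pooling step reduces to simple arithmetic on the qPCR molarities; the following Python sketch illustrates it with hypothetical pool names, concentrations, and final volume.

```python
# Minimal sketch of equimolar pooling from qPCR-quantified libraries.
# Concentrations (nM) and the final pool volume are hypothetical values;
# the arithmetic mirrors the pooling step described above.

pools_nM = {"pool_A": 12.4, "pool_B": 8.1, "pool_C": 15.7}  # qPCR results
final_volume_uL = 30.0

# Each pool contributes a volume inversely proportional to its molarity,
# so every pool supplies the same number of library molecules.
weights = {name: 1.0 / c for name, c in pools_nM.items()}
total = sum(weights.values())
for name, w in weights.items():
    vol = final_volume_uL * w / total
    print(f"{name}: add {vol:.2f} uL")
```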
Bioinformatic Analyses
All analyses were performed on a single compute node with 48 processors and limited to 100 GB of RAM. Bioinformatic steps were documented in a series of BASH shell scripts or Jupyter notebooks; these files, along with conda environments, are available at github.com/nealplatt/pathogen_probes and archived. The basic structure of the bioinformatic analyses is shown in Figure 3. In general, we used Kraken2 v2.1.2 (8) to assign a taxonomic ID to each read, the Phyluce v1.7.1 (1,2) pipeline to identify, assemble, and align loci, and RAxML-NG v1.0.1 to generate phylogenies for each pathogen group of interest.
First, we used Trimmomatic v0.39 (9) to trim and quality-filter low-quality bases and Illumina adapters. Then, we used Kraken2 v2.1.1 (8) to compare each read from our samples to a reduced dataset of target loci using a -conf cutoff of 0.2. We compared our reads to a reduced dataset of target loci to minimize the computational expense of these comparisons. To generate the reduced database of bait-targeted loci, we downloaded one representative or reference genome for every species in RefSeq v212 (10) with genome_updater.sh v0.5.1 (https://github.com/pirovc/genome_updater). Then we used BBMap v38.96 (4) to map all the baits to each genome and kept the 10 best sites that mapped with ≥85% sequence identity.
Next, we extracted these hits along with 1,000 bp upstream and downstream. These sequences were combined into a single fasta file that should contain the major mapping locations for our baits.
Once reads were classified, we identified genera that were known pathogens or that were present in at least one sample with more than 1,000 reads. Next, we extracted reads from the relevant family with KrakenTools v1.2 (https://github.com/jenniferlu717/KrakenTools/). These reads were then assembled (Figure 3, panel B) with the SPAdes genome assembler v3.14.1 (11) using the phyluce_assembly_assemblo_spades wrapper script. We filtered out low-quality contigs based on size (<100 bp) and median coverage (<10x) as calculated by the SPAdes genome assembler. Next, we filtered even further by removing individuals with fewer than 2 contigs.
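As a minimal sketch of the contig filter, the Python snippet below applies the same size and coverage cutoffs by parsing SPAdes-style FASTA headers (e.g., NODE_1_length_5231_cov_13.6); the input file name is hypothetical.

```python
# Minimal sketch of the SPAdes contig filter described above.
# SPAdes FASTA headers look like "NODE_1_length_5231_cov_13.6", so the
# size and coverage cutoffs (<100 bp, <10x) can be applied by parsing
# headers alone; "contigs.fasta" is a hypothetical input path.

def parse_spades_header(header: str):
    """Extract (length, coverage) from a SPAdes-style FASTA header."""
    parts = header.lstrip(">").strip().split("_")
    length = int(parts[parts.index("length") + 1])
    cov = float(parts[parts.index("cov") + 1])
    return length, cov

kept = []
with open("contigs.fasta") as fh:
    keep = False
    for line in fh:
        if line.startswith(">"):
            length, cov = parse_spades_header(line)
            keep = length >= 100 and cov >= 10.0
            if keep:
                kept.append(line)
        elif keep:
            kept.append(line)

print(f"retained {sum(1 for l in kept if l.startswith('>'))} contigs")
```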
While we were assembling and filtering contigs, we isolated target loci from species with available genome assemblies. We used genome_updater.sh v0.5.1 (https://github.com/pirovc/genome_updater) to download one (-A 1) reference or representative (-c reference genome, representative genome) genome from either RefSeq or GenBank (-d refseq,genbank) for each pathogen group. We also included at least one individual from an outgroup genus to root downstream analyses. These genomes were converted to twoBit format with faToTwoBit. Next, we used phyluce_probe_run_multiple_lastzs_sqlite to compare probes from the pathogen group to the genome assemblies with an identity cutoff of 85% (-identity 0.85).
These loci, plus 1 kb of flanking sequence (-flank 1000), were extracted from the genomes using phyluce_probe_slice_sequence_from_genomes. After extraction, the sliced loci were identified and counted using phyluce_assembly_match_contigs_to_probes (-min-identity 90) and phyluce_assembly_get_match_counts. Next, we combined the loci generated from our samples with those from representative and reference genomes and aligned them with phyluce_align_seqcap_align. The resulting alignments were trimmed with Gblocks v0.91b (12) and phyluce_align_get_gblocks_trimmed_alignments_from_untrimmed. We then counted the number of taxa per locus alignment (phyluce_align_get_taxon_locus_counts_in_alignments) and removed taxa with fewer than 2 loci (phyluce_align_extract_taxa_from_alignments). Then we removed any loci containing fewer than half of the expected number of taxa with phyluce_align_get_only_loci_with_min_taxa and concatenated the remaining loci into a single phylip alignment (phyluce_align_concatenate_alignments).
We used RAxML-NG v1.0.1 (13) to generate a maximum-likelihood phylogenetic tree from the concatenated alignment. We ran 100 parsimony tree searches and then another 1,000 replicates using the GTR + G substitution model. Branches with less than 50% support were collapsed with the Newick editor of the Newick Utilities v1.6 (14) (nw_ed <input_tree_file> 'i and b <= 50'). These steps were then repeated for the other pathogen groups identified in the samples.
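A minimal Python sketch of the branch-collapsing step is shown below; it uses the ete3 toolkit instead of the Newick Utilities used in the paper, and the input file name is hypothetical.

```python
# Minimal sketch of collapsing poorly supported branches (<=50%).
# The paper uses Newick Utilities' nw_ed; this sketch does the same job
# with the ete3 toolkit. "best_tree.nwk" is a hypothetical input path.
from ete3 import Tree

tree = Tree("best_tree.nwk")  # support values parsed from internal nodes

# Deleting an internal node in ete3 reattaches its children to its
# parent, turning each weakly supported bipartition into a polytomy.
to_collapse = [n for n in tree.traverse()
               if not n.is_leaf() and not n.is_root() and n.support <= 50]
for node in to_collapse:
    node.delete()

tree.write(outfile="best_tree.collapsed.nwk")
```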
Host Identification
We verified museum identifications by comparing reads to a second Kraken2 v2.1.2 (8) database containing mammalian mitochondrial genomes. To do this, we downloaded all available mammalian mitochondrial genomes (n = 1,651) from https://www.ncbi.nlm.nih.gov/genome/organelle/ (last accessed 3 November 2022). We then created a custom database and compared each of our samples using Kraken2 with no confidence cutoff. The Kraken2 classifications were filtered by removing any samples with fewer than 50 classified reads and any single-read, generic classifications. | 2,396.8 | 2023-08-01T00:00:00.000 | [
"Biology"
] |
The Kodaira dimension of some moduli spaces of elliptic K3 surfaces
We study the moduli spaces of elliptic K3 surfaces of Picard number at least 3, i.e. $U\oplus \langle -2k \rangle$-polarized K3 surfaces. Such moduli spaces are proved to be of general type for $k\geq 220$. The proof relies on the low-weight cusp form trick developed by Gritsenko, Hulek and Sankaran. Furthermore, explicit geometric constructions of some elliptic K3 surfaces lead to the unirationality of these moduli spaces for $k\leq 11$ and for 21 other isolated values up to $k=100$.
Introduction
Moduli spaces of complex K3 surfaces are a fundamental topic of interest in algebraic geometry. One of the first geometric properties one wants to understand is their Kodaira dimension. Towards this direction, the seminal work [GHS07b] of Gritsenko, Hulek and Sankaran proved that the moduli space F 2d of polarized K3 surfaces of degree 2d is of general type for d > 61 and for other smaller values of d. It is then natural to address the general question about the Kodaira dimension of moduli spaces of lattice polarized K3 surfaces. We are interested in studying a particular class of such surfaces, namely elliptic K3 surfaces of Picard number at least 3.
A K3 surface $X$ is called elliptic if it admits a fibration $X \to \mathbb{P}^1$ in curves of genus one together with a section. The classes of the fiber and the zero section in the Néron-Severi group generate a lattice isomorphic to the hyperbolic plane $U$, and they span the whole Néron-Severi group if the elliptic K3 surface is very general. The geometry of elliptic surfaces can be studied via their realization as Weierstrass fibrations. Using this description, Miranda [Mir81] constructed the moduli space of elliptic K3 surfaces and showed its unirationality as a by-product. Later, Lejarraga [Lej93] proved that this space is actually rational. We want to study the divisors of the moduli space of elliptic K3 surfaces which parametrize the surfaces whose Néron-Severi groups contain $U \oplus \langle -2k \rangle$ primitively. These are the moduli spaces $M_{2k}$ of $U \oplus \langle -2k \rangle$-polarized K3 surfaces. Geometrically, we are considering elliptic K3 surfaces admitting an extra class in the Néron-Severi group: if $k = 1$, it comes from a reducible fiber of the elliptic fibration, while if $k \geq 2$ it is represented by an extra section, intersecting the zero section in $k - 2$ points with multiplicity (cf. Remark 5.6).
In the present article, we aim at computing the Kodaira dimension of the moduli spaces $M_{2k}$. The Torelli theorem for K3 surfaces (see [PS72]) allows the moduli spaces $M_{2k}$ to be realized as quotients of bounded hermitian symmetric domains $\Omega_{L_{2k}}$ of type IV and dimension 17 by the stable orthogonal groups $\widetilde{O}^+(L_{2k})$, where the lattice $L_{2k}$ is the orthogonal complement of $U \oplus \langle -2k \rangle$ in the K3 lattice $\Lambda_{K3} := 3U \oplus 2E_8(-1)$. Via this description, one can apply the low-weight cusp form trick (Theorem 2.1) developed in [GHS07b]. This tool provides a sufficient condition for an orthogonal modular variety to be of general type: namely, one has to find a non-zero cusp form on $\Omega^\bullet_{L_{2k}}$ of weight strictly less than the dimension of $\Omega_{L_{2k}}$ vanishing along the ramification divisor of the projection $\Omega_{L_{2k}} \to \widetilde{O}^+(L_{2k}) \backslash \Omega_{L_{2k}}$. In our case, to construct a suitable cusp form, we use the quasi-pullback method (Theorem 2.3) to pull back the Borcherds form $\Phi_{12}$ along the inclusion $\Omega^\bullet_{L_{2k}} \hookrightarrow \Omega^\bullet_{L_{2,26}}$ induced by a lattice embedding $L_{2k} \hookrightarrow L_{2,26}$. Here, $L_{2,26}$ denotes the unique (up to isometry) even unimodular lattice of signature (2, 26). The lattice embedding $L_{2k} \hookrightarrow L_{2,26}$ determines the number $N(L_{2k})$ of effective roots in $L_{2k}^\perp$. If $N(L_{2k})$ is positive, the embedding governs the weight $12 + N(L_{2k})$ of the cusp form. Therefore the whole proof of Theorem 0.1 boils down to finding the values of $k$ for which there exists a suitable primitive embedding $L_{2k} \hookrightarrow L_{2,26}$ whose orthogonal complement contains at least 2 and at most 8 roots (cf. Problem 4.1).
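To make the root-count window explicit, here is a short summary of the standard weight bookkeeping from [GHS07b]; the display below is our sketch of that bookkeeping (with effective roots counted in $\pm$ pairs), not a quotation from the paper.

```latex
% Sketch of the weight bookkeeping behind "at least 2 and at most 8 roots".
% Each pair of roots \pm r in the orthogonal complement raises the weight
% of the quasi-pullback by one (GHS07b).
\[
  \operatorname{wt}\bigl(\Phi|_{L_{2k}}\bigr) = 12 + N(L_{2k}),
  \qquad
  N(L_{2k}) = \tfrac{1}{2}\,\#\{\, r \in L_{2k}^{\perp} \subset L_{2,26} : r^{2} = -2 \,\}.
\]
% Cusp form:   N(L_{2k}) > 0, i.e. at least one pair (>= 2 roots).
% Low weight:  12 + N(L_{2k}) < \dim \Omega_{L_{2k}} = 17,
%              i.e. N(L_{2k}) <= 4, i.e. at most 4 pairs (<= 8 roots).
```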
In the second part of the article we give a geometric construction of all $U \oplus \langle -2k \rangle$-polarized K3 surfaces as double covers of the Hirzebruch surface $\mathbb{F}_4$ branched over a suitable smooth curve admitting a rational curve intersecting the branch locus with even multiplicities. We then review that, for $k \geq 4$ even, all $U \oplus \langle -2k \rangle$-polarized K3 surfaces admit a structure as hyperelliptic quartic K3 surfaces, i.e. double covers of $\mathbb{P}^1 \times \mathbb{P}^1$ branched over a curve of bidegree (4, 4). Finally, we recall the realization of elliptic K3 surfaces as Weierstrass fibrations. These geometric constructions lead to the unirationality results of Theorem 0.2. In Section §1 we review the general construction of the moduli spaces of lattice polarized K3 surfaces as orthogonal modular varieties, and we give a description of the moduli spaces $M_{2k}$, which are the main object of study in this article. In Section §2 we describe the method used in proving Theorem 0.1, namely the low-weight cusp form trick (Theorem 2.1); the desired form is cooked up as a quasi-pullback of the Borcherds form $\Phi_{12}$ (Theorem 2.3). Section §3 is devoted to the proof of Proposition 3.1: we study some special reflections in the stable orthogonal group $\widetilde{O}^+(L_{2k})$, which is then used to impose the vanishing of the quasi-pullback $\Phi|_{L_{2k}}$ of the Borcherds form along the ramification divisor of the quotient map $\Omega_{L_{2k}} \to M_{2k}$. In Section §4 we tackle Problem 4.1 of finding primitive embeddings $L_{2k} \hookrightarrow L_{2,26}$ with at least 2 and at most 8 orthogonal roots. First, we prove that for any $k \geq 4900$ such an embedding exists; then we perform a computer analysis (see Algorithm 4.6) to find explicit embeddings for the remaining values of $k$. In Section §5 we review the classical constructions of elliptic K3 surfaces as double covers of $\mathbb{P}^1 \times \mathbb{P}^1$ and $\mathbb{F}_4$, and as Weierstrass fibrations. Finally, in Section §6 explicit geometric constructions, like the ones presented in Section §5, are used to prove Theorem 0.2.
Conventions. Throughout the article we will always work over C. We have used the software Magma to implement Algorithm 4.6.
Acknowledgments. We would like to thank our PhD advisors Klaus Hulek and Matthias Schütt for many useful discussions and for reading an early draft of this manuscript. The first author acknowledges partial support from the DFG grant Hu 337/7-1.
Moduli spaces of lattice polarized K3 surfaces
In this section we review the construction of the moduli spaces of lattice polarized K3 surfaces. An excellent reference to this subject is [Dol96].
First we recall some basic notions of lattice theory. Let $L$ be an integral lattice of signature $(2, n)$. Let $\Omega_L$ be one of the two connected components of
\[
  \{ [x] \in \mathbb{P}(L \otimes \mathbb{C}) \;:\; (x, x) = 0,\; (x, \bar{x}) > 0 \}.
\]
It is a hermitian symmetric domain of type IV and dimension $n$. We denote by $O^+(L)$ the index two subgroup of the orthogonal group $O(L)$ preserving $\Omega_L$. If $\Gamma < O^+(L)$ is of finite index, we denote by $F_L(\Gamma)$ the quotient $\Gamma \backslash \Omega_L$. By a result of Baily and Borel [BB66], $F_L(\Gamma)$ is a quasi-projective variety of dimension $n$.
For every non-degenerate integral lattice $L$ we denote by $L^\vee := \mathrm{Hom}(L, \mathbb{Z})$ its dual lattice. If $L$ is even, the finite group $A_L := L^\vee / L$ is endowed with a quadratic form $q_L$ with values in $\mathbb{Q}/2\mathbb{Z}$, induced by the quadratic form on $L$. We define the stable orthogonal group $\widetilde{O}(L)$ as the kernel of the natural map $O(L) \to O(A_L)$. For a K3 surface $X$, the space $H^{2,0}(X)$ is spanned by a non-degenerate holomorphic 2-form $\omega_X$. The cohomology group $H^2(X, \mathbb{Z})$ is naturally endowed with a unimodular intersection pairing, making it isomorphic to the K3 lattice $\Lambda_{K3} := 3U \oplus 2E_8(-1)$, where $U$ is the hyperbolic plane and $E_8(-1)$ is the unique (up to isometry) even unimodular negative definite lattice of rank 8. In particular the signature of $H^2(X, \mathbb{Z})$ is $(3, 19)$.
Fix an even integral lattice $M$ of signature $(1, t)$ with $t \geq 0$. An $M$-polarized K3 surface is a pair $(X, j)$, where $X$ is a K3 surface and $j : M \hookrightarrow \mathrm{NS}(X)$ is a primitive embedding. Let $M^{\perp}_{\Lambda_{K3}}$ be the orthogonal complement of $M$ in $\Lambda_{K3}$. It is an even integral lattice of signature $(2, 19 - t)$.
By the Torelli theorem [PS72] (see also [Dol96, Corollary 3.2]), the moduli space of $M$-polarized K3 surfaces can be identified with the quotient of a classical hermitian symmetric domain of type IV and dimension $19 - t$ by an arithmetic group. More precisely, the 2-form $\omega_X$ of an $M$-polarized K3 surface $X$ determines a point in the period domain $\Omega_{M^{\perp}}$. In the following, we will study the moduli spaces of $M$-polarized K3 surfaces with $M = U \oplus \langle -2k \rangle$, i.e. elliptic K3 surfaces of Picard rank at least 3. Since the embedding $U \oplus \langle -2k \rangle \hookrightarrow \Lambda_{K3}$ is unique up to isometry by [Nik79, Theorem 1.14.4], we get the isomorphism $M_{2k} \cong \widetilde{O}^+(L_{2k}) \backslash \Omega_{L_{2k}}$. As discussed above, this quotient variety is the moduli space of $U \oplus \langle -2k \rangle$-polarized K3 surfaces. Notice that all these surfaces are elliptic, since they contain a copy of the hyperbolic plane $U$.
Low-weight cusp form trick
The computation of the Kodaira dimension of modular orthogonal varieties relies on the low-weight cusp form trick developed by Gritsenko, Hulek and Sankaran [GHS07b]. In order to describe it, we need a little theory of modular forms on orthogonal groups.
Let L be an integral even lattice of signature (2, n), and let Ω•_L denote the affine cone over Ω_L ⊆ P(L ⊗ C). A modular form of weight k and character χ : Γ → C^× for a finite index subgroup Γ < O⁺(L) is a holomorphic function F : Ω•_L → C such that F(tZ) = t^{−k}F(Z) for every t ∈ C^× and F(gZ) = χ(g)F(Z) for every g ∈ Γ. A modular form is a cusp form if it vanishes at every cusp. We denote the vector spaces of modular forms and cusp forms of weight k and character χ for Γ by M_k(Γ, χ) and S_k(Γ, χ) respectively.
Theorem 2.1. [GHS07b, Theorem 1.1] Let L be an integral lattice of signature (2, n) with n ≥ 9, and let Γ < O⁺(L) be a subgroup of finite index. The modular variety F_L(Γ) is of general type if there exists a nonzero cusp form F ∈ S_k(Γ, χ) of weight k < n and character χ that vanishes along the ramification divisor of the projection π : Ω_L → F_L(Γ) and vanishes with order at least 1 at infinity. If S_n(Γ, det) ≠ 0, then the Kodaira dimension of F_L(Γ) is non-negative.
2.1. Ramification divisor. First, we need to describe the ramification divisor of the orthogonal projection, which turns out to be the union of rational quadratic divisors associated to reflective vectors. For any v ∈ L ⊗ Q such that v² < 0 we define the rational quadratic divisor
$$\Omega_v(L) := \{ [x] \in \Omega_L : (x, v) = 0 \} \cong \Omega_{v^{\perp}_L},$$
where v^⊥_L is an even integral lattice of signature (2, n − 1). The reflection with respect to the hyperplane defined by a non-isotropic vector r ∈ L is given by
$$\sigma_r : l \longmapsto l - \frac{2(l, r)}{r^2}\, r.$$
If r is primitive and σ_r ∈ O(L), then we say that r is a reflective vector. We notice that r is always reflective if r² = ±2, and in this case we call it a root.
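For instance (a standard verification, our addition): if r² = −2, then for every l ∈ L
$$\sigma_r(l) = l - \frac{2(l, r)}{-2}\, r = l + (l, r)\, r \in L,$$
so σ_r maps L to itself; since it is an isometry of L ⊗ Q fixing the hyperplane r^⊥ and sending r to −r, it lies in O(L). The same computation with r² = 2 gives σ_r(l) = l − (l, r) r.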
If v ∈ L ∨ and v 2 < 0, the divisor Ω v (L) is called a reflective divisor if σ v ∈ O(L).
Given a primitive embedding of lattices L ↪ L_{2,26}, with L of signature (2, n), we define
$$R_{L_{2,26}}(L) := \{ r \in L_{2,26} : r^2 = -2, \ (r, L) = 0 \}, \qquad N(L) := \tfrac{1}{2}\, \# R_{L_{2,26}}(L).$$
To construct a modular form for some subgroup of O⁺(L), one might try to pull back Φ_12 along the closed immersion Ω•_L ↪ Ω•_{L_{2,26}}. However, for any r ∈ R_{L_{2,26}}(L) one has Ω•_L ⊂ Ω•_{r^⊥} and hence Φ_12 vanishes identically on Ω•_L. The method of the quasi-pullback, first developed by Gritsenko, Hulek, and Sankaran [GHS07b], deals with this issue by dividing out by appropriate linear factors (Theorem 2.3): the quasi-pullback
$$\Phi|_L := \left. \frac{\Phi_{12}(Z)}{\prod_{r} (Z, r)} \right|_{\Omega^{\bullet}_L} \in M_{12 + N(L)}\bigl(\widetilde{O}^{+}(L), \det\bigr)$$
is non-zero, where in the product over r we fix a finite system of representatives in R_{L_{2,26}}(L)/±1. The modular form Φ|_L vanishes only on rational quadratic divisors of type Ω_v(L), where v ∈ L^∨ is the orthogonal projection to L^∨ of a (−2)-root r ∈ L_{2,26}. Moreover, if N(L) > 0, then Φ|_L is a cusp form.
We want to apply the low-weight cusp form trick and Theorem 2.3 to the orthogonal variety M_2k = Õ⁺(L_2k)\Ω_{L_2k}, isomorphic to the moduli space of U ⊕ ⟨−2k⟩-polarized K3 surfaces.
For any l ∈ L we define its divisibility div(l) to be the unique m > 0 such that (l, L) = mZ or, equivalently, the unique m > 0 such that l/m ∈ L^∨ is primitive. Since r² = (r, r) lies in the ideal (r, L) = div(r)Z, div(r) divides r². Moreover, if r is reflective, the number 2(l, r)/r² must be an integer for every l ∈ L, so r² divides 2(l, r) for all l ∈ L, i.e. r² | 2 div(r). Summing up, div(r) | r² | 2 div(r).
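A minimal computational sketch of this definition (our illustration; the function and names are ours, not from the paper): the divisibility of l is the gcd of its pairings with a basis of L.

    from math import gcd          # gcd with several arguments needs Python 3.9+
    import numpy as np

    def divisibility(l, gram):
        # div(l): positive generator of the ideal (l, L) = {(l, x) : x in L},
        # i.e. the gcd of the pairings of l with the basis vectors of L.
        pairings = np.asarray(gram) @ np.asarray(l)
        return gcd(*(int(abs(p)) for p in pairings))

    # Example: the hyperbolic plane U with basis e, f and l = 2e + 3f.
    U = [[0, 1], [1, 0]]
    l = [2, 3]                    # l^2 = 2*2*3 = 12
    print(divisibility(l, U))     # -> gcd(3, 2) = 1, and indeed 1 | 12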
Proof. Similar to [GHS07b, Proposition 3.2, Corollary 3.4]. Now σ_r ∈ O⁺(L ⊗ R) if and only if r² < 0 (see [GHS07a]). Recall that an integral lattice T is called 2-elementary if A_T is an elementary abelian 2-group.
Proposition 3.3. Let r ∈ L_2k be primitive with r² = −2k and div(r) ∈ {k, 2k}. Then L_r := r^⊥_{L_2k} is a 2-elementary lattice of signature (2, 16) and determinant 4.

Proof. We have the following well-known formula for det(L_r) (see for instance [GHS07b, Equation 20]):
$$|\det(L_r)| = \frac{|\det(L_{2k}) \cdot r^2|}{\operatorname{div}(r)^2}.$$
Since L_2k has signature (2, 17) and r² < 0, we have that L_r has signature (2, 16). Therefore det(L_r) cannot be 1, because there are no even unimodular lattices of signature (2, 16) (see [Nik79, Theorem 0.2.1]). This shows that div(r) = k, and hence |det(L_r)| = 4. Moreover, the reflection σ_r acts as −id on the discriminant group A_{L_2k} (see [GHS07b, Corollary 3.4]). Now we can extend −σ_r ∈ O(L_2k) to an element σ̃_r ∈ O(Λ_K3) by defining σ̃_r|_{U ⊕ ⟨−2k⟩} = id on the orthogonal complement of L_2k ↪ Λ_K3. Put S_r := (L_r)^⊥_{Λ_K3}. It is easy to realize that σ̃_r|_{L_r} = −id and σ̃_r|_{S_r} = id.

Proposition 3.4. Given any embedding L_2k ↪ L_{2,26}, let r ∈ L_2k be a primitive reflective vector with r² = −2k, and consider L_r = r^⊥_{L_2k} as above. Under the chosen embedding, the orthogonal complement (L_r)^⊥_{L_{2,26}} is isomorphic to either E_8(−1) ⊕ 2A_1(−1) or D_10(−1).

Proof. Since L_{2,26} is unimodular, the discriminant groups of L_r and (L_r)^⊥_{L_{2,26}} are isometric up to a sign. The previous proposition thus implies that (L_r)^⊥_{L_{2,26}} is a 2-elementary, negative definite lattice of rank 10 and determinant 4. By [Nik79, Proposition 1.8.1], any 2-elementary discriminant form is isometric to a direct sum of the finite quadratic forms represented by the 2-elementary lattices A_1, A_1(−1), U(2), D_4. Since (L_r)^⊥_{L_{2,26}} has signature ≡ −2 (mod 8) and determinant 4, it is immediate to realize that its discriminant form must be isometric to the discriminant form of 2A_1(−1). Now we notice that the lattice E_8(−1) ⊕ 2A_1(−1) is a 2-elementary, negative definite lattice of rank 10 with the desired discriminant form. Finally it is enough to compute the genus of E_8(−1) ⊕ 2A_1(−1). A quick check with Magma yields that the whole genus consists of E_8(−1) ⊕ 2A_1(−1) and D_10(−1). Alternatively, one can use the Siegel mass formula [CS88] and check the mass of the quadratic form associated to the lattice E_8(−1) ⊕ 2A_1(−1).
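Concretely, the determinant formula in the proof of Proposition 3.3 gives (our worked instance, using |det(L_2k)| = 2k):
$$|\det(L_r)| = \frac{2k \cdot 2k}{\operatorname{div}(r)^2} = \begin{cases} 1, & \operatorname{div}(r) = 2k, \\ 4, & \operatorname{div}(r) = k, \end{cases}$$
and the first case is excluded in the proof above.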
A straightforward check shows that D_10(−1) is in the genus of E_8(−1) ⊕ 2A_1(−1), and the corresponding mass equality confirms that these two classes exhaust the genus. Now we are ready to prove Proposition 3.1.
Lattice engineering
By the previous discussion, we have transformed our original question of determining the Kodaira dimension of M_2k into the following:

Problem 4.1. For which 2k > 0 does there exist a primitive vector l ∈ U ⊕ E_8(−1) of norm l² = 2k such that l is orthogonal to at least 2 and at most 8 roots?
We want to find a lower bound for the values 2k answering Problem 4.1 positively (see Proposition 4.5). Since U ⊕ E 8 (−1) contains infinitely many roots, we want to start by reducing to the more manageable case of E 8 (−1), whose number of roots is finite.
For simplicity we define
$$R(l) := \{ r \in U \oplus E_8(-1) : r^2 = -2, \ (r, l) = 0 \}.$$
The following is a slight generalization of [TV19, Lemma 4.1, 4.3]. In other words, if l = αe + βf + v ∈ U ⊕ E_8(−1) is a vector of norm 2k satisfying the assumptions of the previous lemma (Lemma 4.2, whose hypotheses constrain the admissible α and β), then the roots of U ⊕ E_8(−1) orthogonal to l are roots of E_8(−1). Therefore the set R(l) coincides with the set of roots in v^⊥_{E_8(−1)}. The following lemma, inspired by [GHS07b, Theorem 7.1], controls the number of roots of E_8(−1) orthogonal to v; here E_7^{(a)} denotes the copy of E_7 orthogonal to a fixed root a ∈ E_8. For its proof, suppose that every v ∈ E_7^{(a)} with v² = 2n is orthogonal to at least 10 roots in E_8, including ±a. By [GHS07b, Lemma 7.2] we know that every such v is contained in a union (4) of suitable sublattices of types E_6 and D_6. Denote by n(v) the number of components in the union (4) containing v. Then we have counted the vector v exactly n(v) times in the sum 28 N_{E_6}(2n) + 63 N_{D_6}(2n).
We distinguish three cases.
(i)–(iii) A case-by-case analysis of the pairings v · c for c ∈ X_114 \ {±a} shows that, under our assumption that every v ∈ E_7^{(a)} with v² = 2n is orthogonal to at least 10 roots, any such v is contained in at least 2 sets of the union (4), i.e. n(v) ≥ 2.
By Lemma 4.3, we immediately obtain the claim.
We are now ready to answer Problem 4.1.

Proposition 4.5. For every k ≥ 4900 there exists a primitive vector l ∈ U ⊕ E_8(−1) with l² = 2k and 2 ≤ |R(l)| ≤ 8.

Proof. Pick k > 0 and consider l = αe + βf + v, where l² = 2k and v² = −2n, so that αβ = n + k. Suppose that there exist α and β satisfying the hypotheses of Lemma 4.2 such that n = αβ − k ≥ 952. Then Proposition 4.4 implies that we can find a v ∈ E_8(−1) with v² = −2n such that v^⊥_{E_8(−1)} contains at least 2 and at most 8 roots. Moreover Lemma 4.2 assures that the roots of U ⊕ E_8(−1) orthogonal to l = αe + βf + v are contained in E_8(−1), so that l^⊥_{U ⊕ E_8(−1)} also contains at least 2 and at most 8 roots. Therefore the existence of such α, β is sufficient for the existence of l ∈ U ⊕ E_8(−1) with 2 ≤ |R(l)| ≤ 8. Now let k ≥ 4900 = 70²; in this range one checks directly that α and β satisfying the hypotheses of Lemma 4.2 with αβ − k ≥ 952 can always be chosen.

In order to deal with the remaining cases, we perform an exhaustive computer analysis. More precisely, for each k < 4900 we search for a primitive vector l ∈ U ⊕ E_8(−1) with l² = 2k and 2 ≤ |R(l)| ≤ 8 (or 2 ≤ |R(l)| ≤ 10, if we want to prove that M_2k has non-negative Kodaira dimension). We have implemented the following algorithm in Magma.
• Construct the list Lst of all vectors v ∈ E_8(−1) with norm |v²| ≤ 2 · 4900 orthogonal to at most 4 of the 8 effective roots of a given basis of E_8(−1).
• For each candidate vector l = αe + βf + v with v ∈ Lst and l² = 2k: if the minimum norm of the lattice l^⊥ is −2 and l^⊥ contains at most 8 (or 10) roots, return the vector l (cf. the illustrative sketch below).
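The root-counting at the heart of this search can be illustrated in a few lines. The following sketch (ours, in Python rather than Magma; the names and the example vector are our own) enumerates the 240 roots of E_8 in the even-coordinate model and counts those orthogonal to a given v, which under Lemma 4.2 computes |R(l)| for l = αe + βf + v.

    from itertools import combinations, product
    from fractions import Fraction

    def e8_roots():
        # The 240 roots of E8: (a) +-e_i +- e_j for i < j (112 vectors),
        # (b) all (+-1/2, ..., +-1/2) with an even number of minus signs (128).
        roots = []
        for i, j in combinations(range(8), 2):
            for si, sj in product((1, -1), repeat=2):
                r = [0] * 8
                r[i], r[j] = si, sj
                roots.append(tuple(r))
        half = Fraction(1, 2)
        for signs in product((1, -1), repeat=8):
            if signs.count(-1) % 2 == 0:
                roots.append(tuple(s * half for s in signs))
        return roots

    ROOTS = e8_roots()
    assert len(ROOTS) == 240

    def orthogonal_root_count(v):
        # Number of E8 roots orthogonal to v; up to the sign of the form,
        # this is the number of roots in the complement of v inside E8(-1).
        return sum(1 for r in ROOTS if sum(a * b for a, b in zip(r, v)) == 0)

    # Sanity check: the roots orthogonal to a root form an E7 system (126 roots).
    print(orthogonal_root_count((1, 1, 0, 0, 0, 0, 0, 0)))   # -> 126

A full search would additionally loop over α, β, v with 2αβ + v² = 2k and test the primitivity of l, as in the algorithm above.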
This search, exhaustive in the range specified by the algorithm, shows that a vector l ∈ U ⊕ E_8(−1) with l² = 2k < 2 · 4900 and 2 ≤ |R(l)| ≤ 8 exists precisely for the values of k in the list (5). The interested reader can find the list of such vectors in the arXiv distribution of this article. We also attach Magma code to verify that such vectors actually have the desired properties.
We are now ready to prove Theorem 0.1.
Proof of Theorem 0.1. Proposition 4.5 combined with the previous search assures that there exists a primitive l ∈ U ⊕ E_8(−1) of norm l² = 2k with 2 ≤ |R(l)| ≤ 8 if k ≥ 4900 or k belongs to the list (5), in particular for any k ≥ 220. Such an l ∈ U ⊕ E_8(−1) determines an embedding L_2k ↪ L_{2,26} with the property that 1 ≤ N(L_2k) ≤ 4, where N(L_2k) is the number of effective roots in the orthogonal complement (L_2k)^⊥_{L_{2,26}}. Hence Theorem 2.3 provides a non-zero cusp form Φ|_{L_2k} of weight 12 + N(L_2k) ≤ 12 + 4 < 17 = dim(M_2k), which vanishes along the ramification divisor of π : Ω_{L_2k} → M_2k in view of Proposition 3.1; here l^⊥ does not contain E_8(−1), otherwise l would be orthogonal to at least 240 roots. Then the low-weight cusp form trick (Theorem 2.1) ensures that M_2k is of general type.
An analogous argument shows that M 2k has non-negative Kodaira dimension if k belongs to the list (6), in particular for any k ≥ 176.
Geometric constructions
In this section we recall three well-known geometric constructions of K3 surfaces. Namely, double covers of the quadric surface P 1 × P 1 (see §5.1) and of the Hirzebruch surface F 4 (see §5.2) branched over suitable curves define lattice polarized K3 surfaces with respect to the lattices U(2) and U respectively. Furthermore, every elliptic K3 surface can be reconstructed from its Weierstrass fibration (see §5.3).
5.1. Double covers of P¹ × P¹. Let F_0 := P¹ × P¹ be the smooth quadric surface in P³. Its Picard group is generated by the classes of the two pencils ℓ_1, ℓ_2 of lines, hence Pic(F_0) endowed with the intersection form on F_0 is isomorphic to the hyperbolic plane U. The canonical bundle is K_{F_0} = O_{F_0}(−2, −2). Now let π : X → F_0 be the double cover branched over a smooth curve B ∈ |−2K_{F_0}| = |O_{F_0}(4, 4)|. Then X is a smooth K3 surface. The pullbacks E_i = π*ℓ_i for i = 1, 2 are smooth elliptic curves, and E_1E_2 = 2ℓ_1ℓ_2 = 2, so that we obtain an embedding
$$U(2) \cong \begin{pmatrix} 0 & 2 \\ 2 & 0 \end{pmatrix} \hookrightarrow NS(X).$$
This embedding is primitive, and NS(X) = U(2) for a very general branch divisor B.
Assume now that there exists a smooth rational curve C ∈ |O F 0 (1, d)| for d ≥ 0 intersecting B with even multiplicities. For instance, C can be simply tangent to B in exactly 2d + 2 points. Then we have the following (cf. [Fes18, Proposition 5.1]): Lemma 5.1. Let ν : X → Y be a double cover of smooth projective surfaces branched over a smooth curve B, and assume that there exists a smooth rational curve C ⊆ Y intersecting B with even multiplicities. Then the pullback ν * C splits into two disjoint irreducible components, both isomorphic to C.
Proof. Let D := ν^{−1}(C) ⊆ X. The double cover ν induces a double cover ν̄ : D → C, which is unbranched: indeed, the branch locus of ν̄ coincides with the set b(C) := {x ∈ C | mult_x(C, B) ≡ 1 (mod 2)} = ∅. Since C ≅ P¹ is simply connected, its unique unbranched double cover is the disjoint union of two smooth rational curves isomorphic to C.
In the case Y = F_0 as above, the pullback D = π*C = D_1 + D_2 splits into the union of two irreducible components D_1, D_2 ≅ P¹. Since D_1 is smooth rational, we have D_1² = −2, and moreover D_1E_1 = 1, D_1E_2 = d. This implies that there exists an embedding (not necessarily primitive)
$$\begin{pmatrix} 0 & 2 & 1 \\ 2 & 0 & d \\ 1 & d & -2 \end{pmatrix} \hookrightarrow NS(X),$$
the Gram matrix being taken with respect to E_1, E_2, D_1. If instead the branch divisor B is not smooth, but has simple singularities, the double cover π : X → F_0 is a K3 surface with isolated simple singularities. Therefore the desingularization X̃ → X is a smooth K3 surface, since simple singularities do not affect adjunction.
The following result is well known, but we include its proof for the sake of completeness.
Proposition 5.2. Let X be an elliptic K3 surface with NS(X) ≅ U ⊕ ⟨−2k⟩ for some k ≥ 1. Then X can be realized as a double cover of F_0 if and only if k is even and k ≥ 4.
Proof. If X is a double cover of F_0, the pullback map induces a primitive embedding U(2) ↪ NS(X). It is then easy to notice that any even lattice of rank 3 containing U(2) primitively must have discriminant divisible by 4, so we conclude that k = ½|det(NS(X))| is even. Conversely, assume that NS(X) = U ⊕ ⟨−2k⟩ for a certain k ≥ 4 even. Then as above we have an isomorphism
$$NS(X) \cong \begin{pmatrix} 0 & 2 & 1 \\ 2 & 0 & d \\ 1 & d & -2 \end{pmatrix}$$
for d = ½(k − 4) ≥ 0, so there are two genus one fibrations |E_1|, |E_2| : X → P¹ induced by the two elements E_1, E_2 of the previous basis of square zero. We can now consider the surjective map π = (|E_1|, |E_2|) : X → F_0. It is a morphism of degree 2, since the preimage of any point of F_0 consists of the two points of intersection of two elliptic curves in |E_1| and |E_2|, as E_1E_2 = 2. Consider the branch divisor B; if B is smooth, then π is a double cover, as claimed. Assume by contradiction that B is singular. B must have simple singularities, since otherwise the canonical divisor of X would be strictly negative. Thus X is the desingularization of the double cover of F_0 branched over B, and therefore NS(X) contains the class of a smooth rational curve orthogonal to U. This is however absurd, since rk NS(X) = 3 and NS(X) ≇ U ⊕ A_1(−1).
It only remains to deal with the case k = 2, so consider a K3 surface X with NS(X) = U ⊕ ⟨−4⟩. If by contradiction X is a double cover of F_0, then NS(X) contains U(2) primitively, so that
$$NS(X) \cong \begin{pmatrix} 0 & 2 & a \\ 2 & 0 & b \\ a & b & -2c \end{pmatrix}$$
for a, b, c ∈ Z, c ≥ 1. Say that the previous isomorphism is given by the choice of a basis {E_1, E_2, D}. The determinant of NS(X) is 4, and this forces ab + 2c = 1. Thus a, b are odd, and without loss of generality a < 0, b > 0. Now choose n ≥ 0 such that a + 2n = 1 and consider the divisor D + nE_2. It is effective by Riemann-Roch, since (D + nE_2)² = b − 1 ≥ −2 and D + nE_2 has intersection 1 ≥ 0 with the nef divisor E_1. Moreover (D + nE_2)E_1 = 1 means that D + nE_2 coincides with mE_1 + S for a certain m ≥ 0 and a section S of the elliptic pencil |E_1|. In other words, NS(X) is generated by the three elements E_1, E_2, S. However the intersection form of X with respect to this basis is
$$\begin{pmatrix} 0 & 2 & 1 \\ 2 & 0 & \alpha \\ 1 & \alpha & -2 \end{pmatrix},$$
and this matrix has determinant 4 only if α = −1, which is a contradiction, as E_2 is nef and S is effective.
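For completeness, the determinant computation behind the last step (our worked detail, expanding along the first row):
$$\det \begin{pmatrix} 0 & 2 & 1 \\ 2 & 0 & \alpha \\ 1 & \alpha & -2 \end{pmatrix} = -2\bigl(2 \cdot (-2) - \alpha \cdot 1\bigr) + 1 \cdot (2\alpha - 0) = 8 + 2\alpha + 2\alpha = 8 + 4\alpha,$$
so the determinant equals 4 exactly when α = −1; but then E_2 · S = −1 < 0, impossible for a nef E_2 and an effective S.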
Remark 5.3. Let X be a K3 surface with NS(X) = U ⊕ ⟨−2k⟩ for a certain k ≥ 4 even. Then an argument as above shows that a basis of NS(X) is given by {E_1, E_2, D_1}, where d = ½(k − 4), π = (|E_1|, |E_2|) : X → F_0 is the double cover branched over a (4, 4)-curve B, and C = π(D_1) is a smooth (1, d)-curve meeting B with even multiplicities.

5.2. Double covers of F_4. Consider the Hirzebruch surface F_4 := P(O_{P¹} ⊕ O_{P¹}(4)). We denote by p : F_4 → P¹ the P¹-bundle structure. We have that Pic(F_4) = Z⟨f, s⟩, where f is the class of a fiber F of the projection p, while s is the class of the unique curve S ⊆ F_4 with negative self-intersection. The intersection form on Pic(F_4) with respect to this basis is
$$\begin{pmatrix} 0 & 1 \\ 1 & -4 \end{pmatrix}.$$
The canonical bundle of F_4 is given by K_{F_4} = −2s − 6f, and F_4 is the desingularization of the quartic cone C_4 ⊆ P⁵ over the rational normal curve C = Im(|O_{P¹}(4)|) ⊆ P⁴. Now consider the double cover π : X → F_4 branched over a curve B ∈ |−2K_{F_4}| = |4s + 12f|. The linear system |4s + 12f| has a fixed part, given by the curve S, and a moving part |3s + 12f|. Assume that B splits as the sum S + B_0, where B_0 ∈ |3s + 12f| is a smooth irreducible curve disjoint from S, as s(3s + 12f) = 0. Then the surface X is a smooth K3 surface. The pullback E = π*F is a smooth elliptic curve, since the restricted double cover E → F is branched over (4s + 12f)f = 4 points. Moreover π is totally ramified over S ⊆ B, so π*S = 2C, where C = π^{−1}(S) ≅ P¹ is a smooth rational curve. Since EC = ½(π*F)(π*S) = FS = 1, we have a primitive embedding
$$\begin{pmatrix} 0 & 1 \\ 1 & -2 \end{pmatrix} \cong U \hookrightarrow NS(X).$$
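As a quick consistency check (our computation, using the intersection form above):
$$K_{F_4}^2 = (-2s - 6f)^2 = 4s^2 + 24\,sf + 36f^2 = 4 \cdot (-4) + 24 \cdot 1 + 0 = 8,$$
the expected value of K² for a Hirzebruch surface; moreover −2K_{F_4} = 4s + 12f is precisely the branch class used here.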
For a general branch divisor B, we simply have NS(X) ≅ U. Consider the linear system |s + 2kf| for k ≥ 2. Its general member D is a smooth rational curve meeting F in 1 point, S in 2k − 4 points and B in (s + 2kf)(4s + 12f) = 8k − 4 points. Assume further that the curve D intersects the branch divisor B with even multiplicities. Then Lemma 5.1 assures that the pullback π*D = D_1 + D_2 splits into two disjoint components D_1, D_2 ≅ P¹. This implies that there exists an embedding (not necessarily primitive)
$$\begin{pmatrix} 0 & 1 & 1 \\ 1 & -2 & k-2 \\ 1 & k-2 & -2 \end{pmatrix} \hookrightarrow NS(X)$$
with respect to the classes E, C, D_1, since D_1E = ½(π*D)(π*F) = DF = 1 and D_1C = ¼(π*D)(π*S) = ½DS = k − 2.

Proposition 5.4. Every elliptic K3 surface X is the desingularization of a double cover of the Hirzebruch surface F_4.
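The Gram matrix above has the right determinant to be a copy of U ⊕ ⟨−2k⟩ (our verification, expanding along the first row):
$$\det \begin{pmatrix} 0 & 1 & 1 \\ 1 & -2 & k-2 \\ 1 & k-2 & -2 \end{pmatrix} = -1\bigl(-2 - (k-2)\bigr) + 1\bigl((k-2) + 2\bigr) = k + k = 2k.$$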
Proof. Assume that U ↪ NS(X), and denote by E, C the smooth curves in X generating U such that E² = 0, C² = −2. Consider the linear system |4E + 2C|. By [Huy16, Corollary 8.1.6] the divisor 4E + 2C is nef, as it has non-negative intersection with every smooth rational curve. Moreover 4E + 2C has intersection 0 with the curve C. Since (4E + 2C)² = 8 and dim |4E + 2C| = 5, ψ = ϕ_{|4E+2C|} : X → P⁵ is a morphism onto a surface Y ⊆ P⁵ contracting C. As C is a smooth (−2)-curve, Y is singular. Now the elliptic curve E has intersection (4E + 2C)E = 2 with 4E + 2C, so ψ has degree 2 by [Sai74, Theorem 5.2]. This implies that deg(Y) = 4, so Y ⊆ P⁵ is a singular surface of minimal degree, thus Y is the quartic cone C_4 (see [del87]). Therefore ψ factors through the minimal resolution of C_4, which is F_4, giving a morphism π : X → F_4 of degree 2. Now we can repeat the argument in the proof of Proposition 5.2, obtaining that X is the desingularization of a double cover of F_4.
Remark 5.5. Every K3 surface X with NS(X) = U ⊕ ⟨−2k⟩ for a certain k ≥ 2 can be obtained as a double cover π : X → F_4 branched over a smooth curve B ∈ |4s + 12f| admitting a rational curve D ∈ |s + 2kf| intersecting B with even multiplicities.
If instead X is a K3 surface with NS(X) = U ⊕ ⟨−2⟩, then it is the desingularization of the double cover of F_4 branched over a curve B with a unique singularity of type A_1.

5.3. Weierstrass fibrations. Let X be a smooth K3 surface. Recall that X is said to be elliptic if it admits an elliptic fibration, i.e. a morphism π : X → P¹ whose general fiber is a curve of genus one, together with a distinguished section. The Néron-Severi group of an elliptic K3 surface contains primitively a copy of the hyperbolic plane U, spanned by the classes of the fiber and the zero section of the elliptic fibration.
Let X be a smooth elliptic K3 surface. By [Mir89, Section §II.3] X is the desingularization of a Weierstrass fibration π′ : Y → P¹, where Y is defined by an equation
$$Y^2 Z = X^3 + A\,XZ^2 + B\,Z^3 \tag{7}$$
in P(O_{P¹}(4) ⊕ O_{P¹}(6) ⊕ O_{P¹}), with A ∈ H⁰(O_{P¹}(8)) and B ∈ H⁰(O_{P¹}(12)) minimal and with ∆ = 4A³ + 27B² not identically zero. Conversely, every such Weierstrass fibration desingularizes to a smooth elliptic K3 surface. We will usually restrict to the chart {Z ≠ 0} over the affine base A¹_t ⊆ P¹, where the equation (7) becomes
$$y^2 = x^3 + A(t)\,x + B(t) \tag{8}$$
with A and B polynomials in t of degree at most 8 and 12 respectively. Notice that this is the equation of the generic fiber of the Weierstrass fibration, which is an elliptic curve over C(t). Under this identification, sections of the fibration π (or π′) correspond to C(t)-rational points of equation (8). In particular the distinguished zero section is located at the point at infinity S_0 = (0 : 1 : 0). Moreover we will write S = (u(t), v(t)) to denote the section S of π corresponding to the C(t)-rational point (u(t), v(t)) of equation (8). By the above description, u, v ∈ C(t) are rational functions of degree at most 4, 6 respectively.
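A small symbolic sketch of these conditions (ours; the coefficients below are hypothetical and the minimality condition is not checked):

    import sympy as sp

    t = sp.symbols('t')

    def looks_like_k3_weierstrass(A, B):
        # deg A <= 8, deg B <= 12 and Delta = 4A^3 + 27B^2 not identically zero.
        # (Minimality of the equation is not tested here.)
        delta = sp.expand(4 * A**3 + 27 * B**2)
        return sp.degree(A, t) <= 8 and sp.degree(B, t) <= 12 and delta != 0

    def is_section(u, v, A, B):
        # Does (u(t), v(t)) satisfy y^2 = x^3 + A x + B identically in t?
        return sp.simplify(v**2 - (u**3 + A * u + B)) == 0

    A = t**8 + 1          # hypothetical coefficients, not from the paper
    B = t**2
    print(looks_like_k3_weierstrass(A, B))        # True
    print(is_section(sp.Integer(0), t, A, B))     # (x, y) = (0, t) is a section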
Remark 5.6. Let X be a U ⊕ ⟨−2k⟩-polarized K3 surface. If k ≥ 2, the given elliptic fibration on X admits an extra section S such that SS_0 = k − 2. This follows from the isomorphism of lattices
$$U \oplus \langle -2k \rangle \cong \begin{pmatrix} 0 & 1 & 1 \\ 1 & -2 & k-2 \\ 1 & k-2 & -2 \end{pmatrix},$$
the Gram matrix being taken with respect to the classes of the fiber, the zero section S_0 and S (this matrix has determinant 2k). Conversely, if X is an elliptic K3 surface and S is an extra section with SS_0 = k − 2, then there exists an embedding U ⊕ ⟨−2k⟩ ↪ NS(X). This embedding is not necessarily primitive. However, it is primitive if the lattice U ⊕ ⟨−2k⟩ has no non-trivial overlattices (for instance if 2k is square-free, cf. [Nik79, Proposition 1.4.1]).

Unirationality of M_2k for small k

The aim of this section is to prove Theorem 0.2, i.e. the unirationality of M_2k for k ≤ 11 and k ∈ {13, 16, 17, 19, 21, 25, 26, 29, 31, 34, 36, 37, 39, 41, 43, 49, 59, 61, 64, 73, 100}. For some of the cases we will use the geometric constructions of Section §5. For the others, we will find projective models of U ⊕ ⟨−2k⟩-polarized K3 surfaces given by (quasi-)polarizations of degree ≤ 8. More precisely, the strategy will consist in finding Z-bases of U ⊕ ⟨−2k⟩ given by the (quasi-)polarization and (−2)-curves of small degree.
6.1. k = 1. The variety M_2 is the moduli space of U ⊕ ⟨−2⟩-polarized K3 surfaces. If X is a general K3 surface in M_2, then X is the desingularization of an elliptic K3 surface Y with an A_1 singularity. Hence X admits an elliptic fibration with a unique reducible fiber, consisting of two irreducible smooth rational curves. A quick inspection of the Kodaira fibers [Mir89, Table I.4.1] yields that this reducible fiber can be either of type I_2 (two smooth rational curves meeting transversely at two distinct points) or III (two smooth rational curves simply tangent at one point). This depends on whether the A_1 singularity on Y belongs to a nodal or a cuspidal rational curve respectively. After moving the singular fiber to t = 0, Y can be written as a Weierstrass equation
$$y^2 = x^3 + a(t)\,x^2 + b(t)\,x + c(t)$$
satisfying t | b(t) and t² | c(t). Up to a change of coordinates in x, this equation is equivalent to the one in (8). Conversely, a general such Weierstrass equation desingularizes to an elliptic K3 surface with an I_2 or a III fiber at t = 0. From this description we can define a dominant rational map P_2 ⇢ M_2 sending the polynomials (a, b, c) to the isomorphism class of the desingularization of the corresponding Weierstrass equation. Since P_2, the parameter space of such triples, is an affine space, M_2 is unirational.
6.2. k = 2. A U ⊕ ⟨−4⟩-polarized K3 surface X is an elliptic K3 surface admitting a section S disjoint from the zero section S_0 of the given elliptic fibration by Remark 5.6. Let the fibration be given by a Weierstrass equation (9) depending on polynomials d, e, v of degree at most 4, 8, 6 respectively. Conversely, a general Weierstrass equation as in (9) defines an elliptic K3 surface containing the disjoint sections S_0 = (0 : 1 : 0), S = (0, 0), and therefore a U ⊕ ⟨−4⟩-polarized K3 surface. This implies that there exists a dominant rational map P_4 ⇢ M_4. P_4 is an affine space, thus M_4 is unirational.

6.3. k = 3. Let X be the desingularization of a double cover of P² branched over a sextic B with an A_2 singularity. Then X is a K3 surface with ⟨2⟩ ⊕ A_2(−1) ≅ U ⊕ ⟨−6⟩ ↪ NS(X).
Since 6 is square-free, the previous embedding is primitive. Conversely, if X is a K3 surface with NS(X) = ⟨2⟩ ⊕ A_2(−1), the linear system associated to the first element of the basis induces a morphism X → P² of degree 2 contracting the two (−2)-curves, thus X is the desingularization of a double cover of P² branched over a sextic with an A_2 singularity. Up to a projective transformation, we can assume that the sextic B ⊆ P² has an A_2 singularity at P = (0 : 0 : 1) ∈ P², and that the unique line of P² meeting B in P with multiplicity 3 is V(x_0). This forces B to be given by an equation f(x_0, x_1, x_2) ∈ H⁰(O_{P²}(6)) with zero coefficients for the terms x_2^6, x_0x_2^5, x_1x_2^5, x_0x_1x_2^4, x_1^2x_2^4. We denote by P_6 the linear subspace of H⁰(O_{P²}(6)) parametrizing the polynomials with this vanishing of the coefficients. Therefore there exists a dominant rational map P_6 ⇢ M_6.
P 6 is an affine space, hence M 6 is unirational.
6.5. k = 5. Let X be the desingularization of a double cover of P² branched over a sextic B with a simple node and admitting a tritangent line. Then X is a K3 surface with
$$\begin{pmatrix} 2 & 1 & 0 \\ 1 & -2 & 0 \\ 0 & 0 & -2 \end{pmatrix} \cong U \oplus \langle -10 \rangle \hookrightarrow NS(X)$$
(see Lemma 5.1). Since 10 is square-free, the previous embedding is primitive. Conversely, let X be a K3 surface with NS(X) isometric to the previous lattice, with basis {H, L, C}. The linear system |H| induces a morphism X → P² of degree 2 contracting C. Let Y → P² denote the double cover obtained by contracting C. Then Y has a singular point of type A_1, so the branch locus B ⊆ P² has a node. Moreover L is mapped onto a line of P² meeting B with even multiplicities, so generically it will be a tritangent of B. Now, up to a projective transformation, we can assume that the tritangent line is given by V(x_0), so that B is given by an equation of the form
$$f = x_0\,g(x_0, x_1, x_2) + (h_1 h_2 h_3)^2, \tag{10}$$
where g ∈ H⁰(O_{P²}(5)) and the h_i are linear forms in x_1, x_2. We can also assume that the node of B is located at P = (1 : 0 : 0). This forces the coefficients of g of the terms x_0^5, x_0^4x_1, x_0^4x_2 to be zero. We denote by Q_10 the linear subspace of H⁰(O_{P²}(5)) parametrizing the polynomials with this vanishing of the coefficients. Then there exists a dominant rational map P_10 ⇢ M_10 sending (g, h_1, h_2, h_3) to the isomorphism class of the double cover of P² branched over the sextic f defined as in equation (10). As P_10 is an affine space, M_10 is unirational.
6.6. k = 6. By Proposition 5.2 and Remark 5.3, a general such K3 surface is the double cover of F_0 branched over a (4, 4)-curve B admitting a smooth (1, 1)-curve C intersecting B in 4 points with multiplicity 2. Up to an automorphism of F_0 we can assume that C = V(x_0y_1 − x_1y_0); moreover we can assume that B doesn't pass through the point ((0 : 1), (0 : 1)) ∈ C, so that the intersection B ∩ C is contained in the chart U = {x_0 ≠ 0, y_0 ≠ 0}, with coordinates (1 : u), (1 : v). Say that B is given by the equation Σ α_ijkl x_0^i x_1^j y_0^k y_1^l = 0, the sum running over i + j = k + l = 4; restricting to C, i.e. setting v = u, yields the polynomial g(u) = Σ_η β_η u^η, where β_η = Σ_{j+l=η} α_ijkl and β_8 = α_0404 = 1. Now g(u) has 4 double roots at u = ε_1, ε_2, ε_3, ε_4 if and only if g(u) = Π_{i=1}^{4} (u − ε_i)². The choice of ε_1, ε_2, ε_3, ε_4 uniquely determines the coefficients β_η for η ≤ 7, which in turn uniquely determine 8 of the α_ijkl. The other 17 coefficients α_ijkl are free parameters so, if we denote them by α′_1, ..., α′_17, there exists a dominant rational map P_12 ⇢ M_12, where P_12 is the affine space of parameters (ε_1, ..., ε_4, α′_1, ..., α′_17). P_12 is an affine space, so M_12 is unirational.

6.7. k = 7. Let X′ ⊆ P³ be a quartic surface containing a line L and with an A_1 singularity P located on the line L. Then the desingularization X of X′ is a smooth K3 surface with
$$\begin{pmatrix} 4 & 1 & 0 \\ 1 & -2 & 1 \\ 0 & 1 & -2 \end{pmatrix} \cong U \oplus \langle -14 \rangle \hookrightarrow NS(X).$$
The embedding is primitive, since 14 is square-free. Conversely, let X be a K3 surface with NS(X) isometric to the previous lattice, with basis {H, L, N}. The unique (−2)-class in H^⊥ is ±N. The linear system |H| induces a map ϕ : X → P³ contracting N. ϕ is an embedding outside of N by [Sai74, Theorem 5.2], since there is no isotropic vector E with EH = 2. The image ϕ(X) ⊆ P³ is a quartic containing a line ϕ(L) (since LH = 1) and a singular point ϕ(N) of type A_1 located on the line. Now, if X′ ⊆ P³ is a quartic as above, containing the line L = V(x_0, x_1) and with an A_1 singularity at P = (0 : 0 : 0 : 1), it is given by an equation f = x_0 g + x_1 h for cubic forms g, h ∈ H⁰(O_{P³}(3)) with g(P) = h(P) = 0. For a general choice of g and h, the singularity at P is of type A_1. This shows that there exists a dominant rational map P_14 ⇢ M_14, where P_14 is the affine space of such pairs (g, h). P_14 is an affine space, thus M_14 is unirational.
6.8. k = 8. By Proposition 5.2 and Remark 5.3, a general such K3 surface is the double cover of F_0 branched over a (4, 4)-curve B admitting a smooth (1, 2)-curve C intersecting B in 6 points with multiplicity 2. Fix F_0 = V(x_0x_3 − x_1x_2) ⊆ P³; then, up to automorphism of F_0, we can assume that C is the twisted cubic curve {(u³ : u²v : uv² : v³) | (u : v) ∈ P¹}. Suppose that B doesn't pass through the point (0 : 0 : 0 : 1) ∈ C, so that the intersection B ∩ C is contained in the chart U = {x_0 ≠ 0} ⊆ P³ with coordinates (1 : u : v : w). An argument as in the case k = 6 then shows that M_16 is unirational.
6.9. k = 9. Let Q ⊆ P⁴ be a quadric containing a plane π ⊆ P⁴. We assume that Q is the cone over a smooth quadric in P³, so that it has a unique singular point, the vertex P. Let K ⊆ P⁴ be a smooth cubic containing a conic C and a line L, with C, L ⊆ π and C ∩ L consisting of two points. If P ∉ K and X = Q ∩ K is a complete intersection, then X is a smooth K3 surface. By construction X contains C and L, so that
$$\begin{pmatrix} 6 & 1 & 2 \\ 1 & -2 & 2 \\ 2 & 2 & -2 \end{pmatrix} \cong U \oplus \langle -18 \rangle \hookrightarrow NS(X).$$
The isomorphism follows from the fact that the elliptic fibration induced by E := H − L − C has a section S_0 := 3H − 4L − 2C. Moreover the previous embedding is primitive. If it weren't, its saturation in NS(X) would be the only non-trivial overlattice of U ⊕ ⟨−18⟩, which is U ⊕ ⟨−2⟩. This is however impossible, since it is easy to check that |E| has infinitely many sections, while any elliptic fibration on a K3 surface with Néron-Severi lattice U ⊕ ⟨−2⟩ has only one section.
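As a sanity check (our computation with the Gram matrix above):
$$E^2 = H^2 + L^2 + C^2 - 2HL - 2HC + 2LC = 6 - 2 - 2 - 2 - 4 + 4 = 0,$$
$$E \cdot S_0 = 3H^2 + 4L^2 + 2C^2 - 7HL - 5HC + 6LC = 18 - 8 - 4 - 7 - 10 + 12 = 1,$$
so E is the class of a fiber and S_0 is indeed a section; one also checks that S_0² = −2, as required for a smooth rational curve on a K3 surface.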
Conversely, let X be a K3 surface with NS(X) isometric to the previous lattice, with basis {H, L, C}. The divisor H is ample, and actually very ample by [Sai74, Theorem 5.2], since there is no isotropic vector E with EH = 2. The image ϕ(X) is the complete intersection of a quadric Q and a cubic K. Moreover ϕ(L) and ϕ(C) are a line and a conic respectively. Since ϕ(L) and ϕ(C) meet at two points, their union is contained in a plane π ⊆ P⁴. The quadric Q contains ϕ(L) ∪ ϕ(C) if and only if it contains the plane π, so Q must be singular. Generically Q will be the cone over a smooth quadric in P³. Now fix the plane π = V(x_3, x_4) ⊆ P⁴. Up to an automorphism of π, we can assume that L = V(x_2, x_3, x_4) and C = V(x_2² − x_0x_1, x_3, x_4) = {(u² : v² : uv : 0 : 0) | (u : v) ∈ P¹}. Q contains the plane π if and only if it is given by an equation
$$f_2 = x_3 l_1 + x_4 l_2 \tag{11}$$
for some linear forms l_1, l_2 ∈ H⁰(O_{P⁴}(1)). The cubic K contains the line L if and only if it is given by an equation
$$f_3 = x_2 q_1 + x_3 q_2 + x_4 q_3 \tag{12}$$
for some quadratic forms q_1, q_2, q_3 ∈ H⁰(O_{P⁴}(2)). Moreover K contains the conic C if and only if f_3(u², v², uv, 0, 0) vanishes identically as a polynomial in (u : v). This imposes linear conditions on the coefficients of q_1, q_2, q_3. We denote by Q_18 the set of triples of quadratic forms q_1, q_2, q_3 satisfying such linear conditions. Then we have a dominant rational map sending the quadric Q = V(f_2) and the cubic K = V(f_3), defined as in (11) and (12), to the isomorphism class of the (smooth complete) intersection Q ∩ K. We conclude that M_18 is unirational.
6.10. k = 10. Let X ⊆ P³ be a smooth quartic surface containing two disjoint lines L_1, L_2. Then X is a K3 surface with
$$\begin{pmatrix} 4 & 1 & 1 \\ 1 & -2 & 0 \\ 1 & 0 & -2 \end{pmatrix} \cong U \oplus \langle -20 \rangle \hookrightarrow NS(X).$$
Since U ⊕ ⟨−20⟩ has no non-trivial overlattices (cf. [Nik79, Proposition 1.4.1]), the previous embedding is primitive. A geometric way to see that X is elliptic is to take the pencil of hyperplanes containing L_1. The linear system |H − L_1| consists of planar cubic curves, and defines a genus one fibration on X. Since (H − L_1)L_2 = 1, L_2 is a section of this fibration. Conversely, let X be a K3 surface with NS(X) isometric to the previous lattice, and denote by {H, L_1, L_2} the corresponding basis. Since there are no isotropic vectors E with EH = 2, [Sai74, Theorem 5.2] shows that the linear system |H| induces an embedding ϕ : X ↪ P³, sending L_1 and L_2 onto disjoint lines contained in the quartic ϕ(X). Now, up to automorphism of P³, we can fix L_1 = V(x_0, x_1) and L_2 = V(x_2, x_3). A quartic surface X = V(f) ⊆ P³ contains L_1 and L_2 if and only if the coefficients of f of the terms only in x_0, x_1 and only in x_2, x_3 are zero. It follows that M_20 is unirational.

6.11. k = 11. Consider a projective subspace π ⊆ P⁵ of dimension 3, a twisted cubic C_3 ⊆ π and a conic C_2 ⊆ π with C_2 ∩ C_3 consisting of three points. Let X = Q_1 ∩ Q_2 ∩ Q_3 ⊆ P⁵ be a smooth complete intersection of three smooth quadrics containing the union C_2 ∪ C_3. Then X is a smooth K3 surface with
$$\begin{pmatrix} 8 & 2 & 3 \\ 2 & -2 & 3 \\ 3 & 3 & -2 \end{pmatrix} \cong U \oplus \langle -22 \rangle \hookrightarrow NS(X).$$
Conversely, let X be a K3 surface with NS(X) isometric to the previous lattice, with basis {H, C_2, C_3}. H is very ample by [Sai74, Theorem 5.2], since there is no isotropic vector E with EH = 2, thus the linear system |H| induces an embedding ϕ : X ↪ P⁵. The image ϕ(X) is a smooth complete intersection of three quadrics Q_1, Q_2, Q_3 containing the conic ϕ(C_2) and the twisted cubic ϕ(C_3). Now fix the projective subspace π = V(x_4, x_5) ⊆ P⁵ of dimension 3, and consider C_3 = {(u³ : u²v : uv² : v³ : 0 : 0) | (u : v) ∈ P¹}. If P is any plane contained in π, the intersection P ∩ C_3 consists of three points. Moreover, the set of conics in P containing P ∩ C_3 is a linear subspace of H⁰(O_P(2)). A quadric Q = V(f) contains C_3 if and only if f(u³, u²v, uv², v³, 0, 0) vanishes identically as a polynomial in (u : v), and this imposes linear conditions on the coefficients of f. Similarly, imposing that Q contains any conic C_2 = V(g) ⊆ P forces other linear conditions on the coefficients of f. This shows that the incidence variety P_22 of such configurations is a projective bundle over the variety Z of pairs (P, C_2), where P ⊆ π is a plane and C_2 ⊆ P is a conic containing P ∩ C_3. By the discussion above, Z is a projective bundle over |O_π(1)| ≅ P³, so Z (and thus P_22) is rational. There exists a dominant rational map P_22 ⇢ M_22. We conclude that M_22 is unirational.
6.12. k = 13. Let X ⊆ P³ be a smooth quartic surface containing a line L and a smooth conic C, with L and C disjoint. Then X is a K3 surface with
$$\begin{pmatrix} 4 & 2 & 1 \\ 2 & -2 & 0 \\ 1 & 0 & -2 \end{pmatrix} \cong U \oplus \langle -26 \rangle \hookrightarrow NS(X).$$
Conversely, let X be a K3 surface with NS(X) isometric to the previous lattice, and denote by {H, C, L} the corresponding basis. H is very ample by [Sai74, Theorem 5.2], since there is no isotropic vector E with EH = 2. Therefore the linear system |H| induces an embedding ϕ : X ↪ P³, sending C and L onto a smooth conic and a line contained in the quartic ϕ(X), with ϕ(C) ∩ ϕ(L) = ∅. Now, up to automorphism of P³, we can fix C = V(x_0, x_1² − x_2x_3). If L ∈ Gr(1, 3) is any line disjoint from C, a quartic surface X = V(f) ⊆ P³ contains L if and only if the coefficients of the terms of f satisfy some linear conditions. Moreover X contains C if and only if f(0, uv, u², v²) vanishes identically as a polynomial in (u : v). This also imposes linear conditions on the coefficients of f. Therefore the incidence variety P_26 := {(L, f) : V(f) ⊇ L ∪ C} is a projective bundle over the rational variety Gr(1, 3), hence P_26 is rational. We have a dominant rational map P_26 ⇢ M_26 sending the pair (L, f) to the isomorphism class of the quartic surface X = V(f). We conclude that M_26 is unirational.

6.13. k = 16. We consider smooth quartics X ⊆ P³ containing a twisted cubic C and a line L meeting at two points. Then X is a smooth K3 surface with
$$\begin{pmatrix} 4 & 3 & 1 \\ 3 & -2 & 2 \\ 1 & 2 & -2 \end{pmatrix} \cong U \oplus \langle -32 \rangle \hookrightarrow NS(X).$$
An easy check shows that the embedding is not primitive if and only if the class C + L is divisible in NS(X). The divisor C + L has square zero and is reduced, thus it is primitive in NS(X), and hence the embedding is primitive. Conversely, an argument as above using [Sai74, Theorem 5.2] shows that every K3 surface X with NS(X) isometric to the previous lattice is such a quartic surface. Now, up to automorphism of P³, we can fix C = {(u³ : u²v : uv² : v³) | (u : v) ∈ P¹}; a quartic X = V(f) contains C if and only if f(u³, u²v, uv², v³) vanishes identically as a polynomial in (u : v). This imposes linear conditions on the coefficients of f. Moreover, if P_1, P_2 ∈ C are any points on C, and L = P_1P_2 ⊆ P³ is the line through them, X contains L if and only if the coefficients of f satisfy other linear conditions. This shows that the incidence variety P_32 := {(P_1, P_2, f) : V(f) ⊇ C ∪ P_1P_2} is a projective bundle over Sym²(C) ≅ P², thus it is rational. There exists a dominant rational map P_32 ⇢ M_32 sending (P_1, P_2, f) to the isomorphism class of the quartic surface defined by f. We conclude that M_32 is unirational.
6.14. k = 17. We consider a quartic surface X′ ⊆ P³ containing a twisted cubic curve C and a singular point P of type A_1 not lying on C. Its desingularization X is a smooth K3 surface with
$$\begin{pmatrix} 4 & 3 & 0 \\ 3 & -2 & 0 \\ 0 & 0 & -2 \end{pmatrix} \cong U \oplus \langle -34 \rangle \hookrightarrow NS(X).$$
Conversely, an argument as above using [Sai74, Theorem 5.2] shows that every K3 surface X with NS(X) isometric to the previous lattice is the desingularization of such a quartic surface.
We fix C = {(u³ : u²v : uv² : v³) | (u : v) ∈ P¹}. As above, a quartic X = V(f) ⊆ P³ contains C if and only if the coefficients of f satisfy some linear conditions. Moreover, if P ∈ P³ is any point not in C, imposing that X has a singularity at P forces other linear conditions on the coefficients of f. Therefore the variety P_34 := {(P, f) ∈ P³ × |O_{P³}(4)| : V(f) is singular at P and V(f) ⊇ C} is a projective bundle over P³, hence it is rational. There exists a dominant rational map P_34 ⇢ M_34 sending the pair (P, f) to the isomorphism class of the desingularization of the quartic surface defined by f. We conclude that M_34 is unirational.
6.15. k = 19. We consider smooth quartics X ⊆ P³ containing a twisted cubic C and a line L meeting C at one point. Then X is a smooth K3 surface with
$$\begin{pmatrix} 4 & 3 & 1 \\ 3 & -2 & 1 \\ 1 & 1 & -2 \end{pmatrix} \cong U \oplus \langle -38 \rangle \hookrightarrow NS(X).$$
Conversely, an argument as above using [Sai74, Theorem 5.2] shows that every K3 surface X with NS(X) isometric to the previous lattice is such a quartic surface.
We fix the twisted cubic curve C = {(u³ : u²v : uv² : v³) | (u : v) ∈ P¹}. We consider the incidence variety P_38 := {(P, L, f) ∈ C × Gr(1, 3) × |O_{P³}(4)| : P ∈ L, V(f) ⊇ C ∪ L}, where Gr(1, 3) is the Grassmannian of lines in P³. An argument as in the case k = 17 shows that P_38 is a projective bundle over the variety {(P, L) : P ∈ C, P ∈ L}, which in turn is a P²-bundle over C ≅ P¹. This shows that P_38 is rational. There exists a dominant rational map P_38 ⇢ M_38 sending the triple (P, L, f) to the isomorphism class of the quartic surface defined by f. We conclude that M_38 is unirational.
6.17. k = 25. Let X = Q ∩ K ⊆ P⁴ be a smooth complete intersection of a quadric Q and a cubic K containing a conic C_2 and a twisted cubic C_3 with C_2 ∩ C_3 = ∅, so that U ⊕ ⟨−50⟩ ↪ NS(X). An argument as in the case k = 9 shows that the embedding is primitive. Conversely, let X be a K3 surface with NS(X) isometric to the previous lattice, with basis {H, C_2, C_3}. H is very ample by [Sai74, Theorem 5.2], since there is no isotropic vector E with EH = 2. Therefore X is a smooth complete intersection of a quadric and a cubic containing a conic C_2 and a twisted cubic C_3 with C_2 ∩ C_3 = ∅.
6.18. k = 26. Let X ⊆ P³ be a smooth quartic surface containing two disjoint twisted cubics C_1, C_2, so that
$$\begin{pmatrix} 4 & 3 & 3 \\ 3 & -2 & 0 \\ 3 & 0 & -2 \end{pmatrix} \cong U \oplus \langle -52 \rangle \hookrightarrow NS(X).$$
The embedding is primitive, as U ⊕ ⟨−52⟩ has no non-trivial overlattices (cf. [Nik79, Proposition 1.4.1]). An argument as above using [Sai74, Theorem 5.2] shows that a general K3 surface in M_52 is such a quartic surface.
We fix C_1 = {(u³ : u²v : uv² : v³) | (u : v) ∈ P¹}. The variety T = SL(4, C)/SL(2, C) of twisted cubics in P³ is rational by [PS85]. An argument as in the case k = 25 shows that the incidence variety P_52 := {(C_2, f) : V(f) ⊇ C_1 ∪ C_2} is a projective bundle over T, hence it is rational. We conclude that M_52 is unirational.

6.19. k = 29. Let X = Q ∩ K ⊆ P⁴ be a smooth complete intersection of a quadric Q and a cubic K containing a rational normal curve C of degree 4 and a line L with C ∩ L = ∅. Then X is a smooth K3 surface with
$$\begin{pmatrix} 6 & 4 & 1 \\ 4 & -2 & 0 \\ 1 & 0 & -2 \end{pmatrix} \cong U \oplus \langle -58 \rangle \hookrightarrow NS(X).$$
An argument as above using [Sai74, Theorem 5.2] shows that a general K3 surface in M 58 is such a sextic surface.
6.20. k = 37. Let X = Q ∩ K ⊆ P⁴ be a smooth complete intersection of a quadric Q and a cubic K containing a rational normal curve C_1 of degree 4 and a conic C_2 intersecting transversely at one point. Then X is a smooth K3 surface with
$$\begin{pmatrix} 6 & 4 & 2 \\ 4 & -2 & 1 \\ 2 & 1 & -2 \end{pmatrix} \cong U \oplus \langle -74 \rangle \hookrightarrow NS(X).$$
An argument as above using [Sai74, Theorem 5.2] shows that a general K3 surface in M 74 can be embedded into P 4 as such a complete intersection.
We will closely follow the approach for the case k = 11. Let X ⊆ P⁵ be a smooth complete intersection of three quadrics containing two rational normal curves C_1, C_2 with the appropriate degrees and intersection number, depending on the case above. We denote by H the hyperplane class. The embedding of the lattices above in NS(X) is always primitive. The only non-trivial cases are k = 36, 49, 64, 100, since the other lattices have no non-trivial overlattices (cf. [Nik79, Proposition 1.4.1]). For k = 49 we use the argument of the case k = 9. For the others, it is enough to notice that the embedding is not primitive if and only if the class C_1 + C_2 is divisible in NS(X). However C_1 + C_2 has square 0 and is reduced on X, hence it is primitive in NS(X). Therefore X is a U ⊕ ⟨−2k⟩-polarized K3 surface.
Conversely, a general K3 surface X with NS(X) isometric to one of the previous lattices is the smooth complete intersection of three quadrics in P 5 containing rational normal curves with suitable degrees and intersection number. This follows from [Sai74, Theorem 5.2], by showing that H is very ample in each case.
We now study the geometry of such K3 surfaces to prove the unirationality of the corresponding moduli spaces. For each case, we will denote by C 1 , C 2 two rational normal curves of degree d ∈ {1, . . . , 5}, e ∈ {3, 4, 5} respectively, intersecting at n ∈ {0, . . . , 3} points. Up to automorphism of P 5 we fix the curve C 2 which spans a linear subspace π 2 ⊆ P 5 of dimension e. First we choose a set of points P 1 , . . . , P n ∈ C 2 , which will be the points of intersection of C 1 and C 2 . Then we choose another linear subspace π 1 ⊆ P 5 of dimension d ≥ n containing P 1 , . . . , P n , and a rational normal curve C 1 ⊆ π 1 of degree d, passing through P 1 , . . . , P n .
We will need the following: Lemma 6.1. The parameter space C d,n of rational normal curves in P d of degree d passing through 0 ≤ n ≤ 3 points of P d in general position is unirational.
Proof. A rational normal curve of degree d is the image of a morphism ϕ_A : P¹ → P^d obtained by composing the standard degree-d Veronese embedding with an automorphism A ∈ PGL_{d+1}(C). This shows that the parameter space C_{d,0} of rational normal curves is unirational, as there exists a dominant rational map from the rational variety PGL_{d+1}(C) to C_{d,0} sending A to ϕ_A(P¹). Let P_1, ..., P_n ∈ P^d be n points in general position. The curve C = ϕ_A(P¹) passes through P_1, ..., P_n if and only if there exist R_1, ..., R_n ∈ P¹ mapped to P_1, ..., P_n under ϕ_A. Since n ≤ 3, we can suppose that {R_1, ..., R_n} is a subset of {(1 : 0), (0 : 1), (1 : 1)} up to automorphism of P¹.
In order to prove the unirationality of M_2k we show that P′_2k is unirational. The variety Sym^n(C_2) ≅ P^n is rational, and Z_2 is also rational, since it is a Gr(d − n, 5 − n)-bundle over Sym^n(C_2). Then Z_1 is a bundle over Z_2 with fibers isomorphic to the variety C_{d,n}, which is unirational by Lemma 6.1. Finally, an argument as in the case k = 25 shows that P′_2k is a projective bundle over Z_1, due to the rationality of C_1 and C_2. In conclusion, it follows that P′_2k is unirational. Therefore M_2k is unirational, since p_1 and p_2 are dominant rational maps.
"Mathematics"
] |
Cystine and theanine: amino acids as oral immunomodulative nutrients
The decreases in the glutathione (GSH) level in the mouse spleen and liver after immune stimulation are suppressed by the oral administration of cystine and theanine (CT). GSH is considered to be important for the control of immune responses. Antibody production in mice after infection is enhanced by the oral administration of CT. In humans, too, the oral administration of CT has been confirmed to enhance antibody production after vaccination against influenza and to reduce the incidence of the common cold. On the other hand, the GSH level is reduced by intense exercise and surgery. In clinical studies of body-builders and long-distance runners, the intake of CT suppressed excessive inflammatory reactions and a decline in immune functions after intense training. Surgery, like intense exercise, induces excessive inflammatory reactions. In mice, the preoperative administration of CT suppressed excessive inflammatory reactions associated with surgery and promoted postoperative recovery. Moreover, in clinical studies of gastrectomized patients, CT intake suppressed excessive postoperative inflammatory reactions and induced early recovery. If infection is regarded as an invasive stress, CT intake is considered to exhibit an immunomodulatory effect by suppressing the decrease in GSH due to invasive stress. The clarification of their detailed action mechanisms and their application as medical or functional foods is anticipated.
Introduction
Cystine is a sulfur-containing amino acid consisting of 2 cysteine molecules connected by an S-S bond. This sulfur-containing amino acid is one of the precursors of glutathione (GSH), which is vital for antioxidant reactions in the body, and its supply is considered to be a rate-limiting factor in GSH synthesis (Grimble 2006; Rimaniol et al. 2001). GSH has been shown to play an important role in the regulation of immune functions as well as in antioxidant reactions in the body, and is known to decrease when the body is exposed to stress such as intense exercise and surgery (Droge and Holm 1997; Luo et al. 1996; Margonis et al. 2007). Theanine (γ-glutamylethylamide) is an amino acid abundant in green tea and is known to be absorbed after oral ingestion through the small intestine and hydrolyzed into glutamate and ethylamine in the intestine and liver (Asatoor 1966; Bukowski et al. 1999). Indeed, the blood level of glutamate was reported to increase significantly after the intake of green tea or a capsule containing theanine (Scheid et al. 2012).
An experiment using human peripheral blood macrophages (Mϕ) showed that the intracellular GSH content was dose-dependently increased by treatment of Mϕ with cystine, and was increased further by the addition of glutamate (Rimaniol et al. 2001). This report suggests that the GSH content is increased additively or synergistically by the simultaneous intake of cystine and glutamate. However, most of the glutamate orally ingested is known to be metabolized in the small intestine and not to enter the circulation (Windmueller and Spaeth 1975). As mentioned above, theanine, a glutamate derivative, is metabolized into glutamate and ethylamine in the intestine and liver after oral intake. Therefore, theanine is considered to function as a donor that supplies glutamate to the body (Figure 1). In fact, in an experiment in which mice were orally administered cystine and theanine, alone or in combination, before immune stimulation, the groups given either agent alone showed no significant increase in the liver GSH level after immune stimulation compared with the control group; in the group given both agents, however, the liver GSH level was significantly increased, and antigen-specific immunoglobulin (Ig) G production in blood was also significantly augmented (Kurihara et al. 2007). Furthermore, in an experiment using influenza-infected aged mice, a combination of cystine and theanine (CT) was confirmed to increase GSH synthesis and enhance resistance to infection (Takagi et al. 2010). From these results of animal experiments (not yet confirmed in human trials), the oral administration of CT is considered to increase GSH synthesis and reinforce immune functions. We, therefore, performed studies using several models to clarify the usefulness of CT for strengthening immune functions in humans. These studies are summarized and reviewed below.
Immune response-improving effect after vaccination against influenza in older people
Vaccination against influenza is important in older people for preventing exacerbation of the course of, and reducing the mortality due to, the disease (Jefferson et al. 2005; Mullooly et al. 1994; Nordin et al. 2001). However, as immune functions decline with aging, enhancement of the effectiveness of vaccination is considered important in older people (McElhaney et al. 1990; Remarque et al. 1996; Vu et al. 2002). Therefore, a clinical study was performed to confirm the effectiveness of CT intake at vaccination against influenza in 67 users of a nursing home affiliated with a hospital (mean age: 77 years) (Miyagawa et al. 2008). The 67 institutionalized elderly people, who were given sufficient explanation about the clinical study and consented to participate in it, were randomly divided into placebo and CT groups, and given either a placebo or CT once a day for 2 weeks prior to vaccination. The effectiveness of vaccination was analyzed by examining the antibody level (HI titer) in blood 1 month after vaccination. The rate of seroconversion, which is the percentage of people who acquired an antibody level effective for the prevention of infection by vaccination among those who did not show an effective antibody level before vaccination, was higher in the CT group than in the placebo group regardless of the vaccine type (type A (H1N1), type A (H3N2), or type B), but the differences were not significant (Miyagawa et al. 2008). Therefore, we performed analysis by stratification using nutritional parameters in blood reportedly related to an aging-associated decline in the immune function (Hara et al. 2005), and observed a marked increase in the rate of seroconversion, particularly for type A (H1N1), in the groups with subaverage blood total protein and hemoglobin levels, i.e., those in a relatively poor nutritional state (Miyagawa et al. 2008). These results suggest that CT enhances the effectiveness of vaccination against influenza in older people with aging-associated declines in immune functions, and that this effect is more notable in older people with a poorer nutritional state. However, more trials in older people are needed, especially in those with malnutrition, to ascertain the effectiveness of CT intake after vaccination against influenza in the elderly.

Figure 1. Working hypothesis regarding the immunomodulative actions of cystine and theanine. After the oral administration of cystine and theanine, cystine is incorporated into antigen-presenting cells (APCs: monocytes, Mϕ, or dendritic cells), which express the cystine transporter (xCT/4F2hc), and is reduced to cysteine. Theanine is hydrolyzed into glutamate and ethylamine, and the glutamate is incorporated into APCs. The incorporated cysteine and glutamate enhance glutathione synthesis (Rimaniol et al. 2001) and then induce immunomodulative activity. On the other hand, the ethylamine derived from theanine acts on γδ T cells (Bukowski et al. 1999).
Cold-preventing effect
The common cold is an infection usually caused by the entry of viruses into the upper airway in the dry season (Heikkinen and Jarvinen 2003). Such infections can more often be prevented naturally if the host's immune competence increases. We performed a clinical study to evaluate the effect of CT intake on vulnerability to the common cold in winter in 176 adult male volunteers (mean age: 40 years). The 176 adult males, who were given an explanation about the clinical study and consented to participate in it, were randomly divided into placebo and CT groups, administered the assigned preparation daily for 5 weeks, and answered a daily questionnaire concerning body temperature and various common cold symptoms. The responses to the questionnaire were quantified according to the literature on clinical studies of cold and influenza infection (Hayden et al. 1999; Hayden et al. 1997), conditions for the appearance of various symptoms of cold and for the occurrence of cold itself were defined using our original method, and the records during the 5-week period were analyzed. In the CT group, the frequency of fever and chills, which are cold symptoms, was significantly lower, and symptoms due to inflammation (nasal secretion, throat pain, etc.) were milder, compared with the placebo group. Moreover, the incidence of cold and the total number of days with symptoms during the investigation period were significantly reduced. From these results, CT intake (980 mg/day, containing 700 mg of cystine and 280 mg of theanine) is considered to be effective for the prevention of cold and its various symptoms due to inflammation during winter in middle-aged and elderly people. Future human trials may reveal the specific effects of CT intake on cold prevention.
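The primary endpoint here is a comparison of incidence proportions between two groups. A minimal sketch of that kind of analysis (ours, with made-up counts; the study's actual data and statistical methods may differ):

    from scipy.stats import chi2_contingency

    # Hypothetical 2x2 table: rows = CT group / placebo group,
    # columns = caught a cold / did not catch a cold.
    table = [[12, 76],
             [25, 63]]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p < 0.05 suggests a group difference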
Suppressive effect against decline in immune functions due to intense exercise
It is known that we become more vulnerable to infections, including cold, after vigorous exercise such as marathon running (Nieman 1997). Vigorous exercise induces excessive inflammatory reactions, promotes the secretion of stress hormones such as corticosterone and glucagon, and weakens the immune functions of individuals (Gleeson and Pyne 2000; Suzuki et al. 2000). Moreover, disruption of the immune system associated with vigorous exercise has been reported to lead to problems during training and the poor physical condition of athletes, such as in overtraining syndrome (MacKinnon 2000; Smith 2003). We, therefore, carried out clinical studies to evaluate the effects of CT intake on changes in immune functions before and after intense training using college body-builders as a model of resistance training and college long-distance runners as a model of endurance exercise (Kawada et al. 2010; Murakami et al. 2009). First, 15 male body-builders who provided informed consent to this clinical study were randomized to placebo and CT groups and given the test food once a day at dinner during a 2-week training period. The training intensity was doubled during the final week compared with the first week of the training period, and changes in blood natural killer (NK) cell activity were serially monitored. In the placebo group, NK cell activity decreased gradually after the beginning of training compared with the pre-training level, with a significant drop after 2 weeks. In the CT group, on the other hand, no decrease in NK cell activity associated with intense training was noted (Kawada et al. 2010). Similarly, 15 male college long-distance runners who consented were randomized to placebo and CT groups and given the test food once a day at dinner for 10 days during a regular training period. Thereafter, they participated in an 11-day summer training camp, and changes in immunological parameters in blood before and after this period were analyzed. In the placebo group, the neutrophil count and high-sensitivity CRP level, which are blood inflammation markers, increased significantly, and the number of lymphocytes, which are immunocompetent cells in blood, decreased after the camp, but no significant change in these parameters was noted in the CT group (Murakami et al. 2009). These results suggest that CT suppresses an increase in inflammatory reactions associated with the stress of sustained intense exercise such as that during a training camp, and prevents an associated decline in immune functions. Therefore, we further analyzed changes in blood immunological parameters immediately after intense endurance exercise in the long-distance runners. Similarly, 16 male college long-distance runners who consented were randomized to placebo and CT groups and given the test food once a day at dinner for 7 days of regular training and 9 days during a training camp (daily for a total of 16 days). Interval training of 1,000 m × 15 repetitions was performed as intense endurance training before breakfast on the first and last days of the camp, blood was sampled before and after the training, and changes in blood markers were analyzed. The neutrophil count and myoglobin level showed marked increases, and the lymphocyte count decreased, due to intense exercise stress on the first day of the camp, but these changes were significantly milder in the CT group than in the placebo group (Murakami et al. 2010).
On the last day of the camp, the changes in blood markers associated with intense exercise were reduced, probably due to the physical and mental effects of the training, and no particular effect of CT intake was noted at this point. The results on the last day suggest that CT exert no effect when inflammatory reactions due to exercise stress are mild. Thus, CT are considered to suppress excessive inflammatory reactions without inhibiting physiologic and necessary inflammatory reactions. From the results of these clinical studies involving athletes, CT are considered to suppress excessive inflammatory reactions induced by severe stress, such as that due to intense exercise training, and prevent a decline in immune functions.
Promotion of recovery after surgery
Postoperative anti-inflammatory/recovery-promoting effects in mouse surgery models
Surgery was selected as an invasive stress other than exercise. Generally, laparotomy, which involves more intestinal manipulation, is considered to cause more severe inflammation than endoscopic surgery (Hiki et al. 2006). Therefore, using a mouse intestinal manipulation model (Kalff et al. 1998) as a model of laparotomy, the effects of the preoperative oral administration of CT on inflammation associated with surgical stress were evaluated according to changes in the blood level of the inflammation marker interleukin (IL)-6 (Biffl et al. 1996). Compared with a sham operation group, the blood IL-6 level showed a marked increase in the intestinal manipulation group administered the vehicle (V: 0.5% methylcellulose) alone, but this increase was suppressed by preoperative CT administration (oral administration once a day for 5 days including the day of surgery), significantly so at 70 mg/kg. The GSH levels in the intestine and Peyer's patches were significantly reduced by intestinal manipulation, but these decreases were significantly inhibited by the preoperative oral CT administration. Moreover, on linear regression analysis of the GSH levels in the intestine and Peyer's patches against the blood IL-6 level, the GSH levels in the intestine and Peyer's patches both showed significant negative correlations with the blood IL-6 level. From these results, preoperative oral CT administration is considered to suppress the decrease in the intestinal GSH level and the increase in the blood IL-6 level associated with intestinal manipulation, i.e., to control excessive postoperative inflammatory reactions. Next, to clarify whether or not the suppression of excessive inflammatory reactions by CT leads to early postoperative recovery, behavioral analysis was carried out. While the spontaneous activity level recovered rapidly from Days 1 to 4 after surgery in the sham operation group, little recovery was noted in the manipulation (V) group, and the spontaneous activity level was significantly lower than in the sham operation group on Days 1 to 4. In the manipulation (CT) group, however, the spontaneous activity level recovered gradually from Days 1 to 4, and it was significantly higher than in the manipulation (V) group on Day 4. Regarding the changes in body weight and food intake from Days 1 to 4, both were significantly lower in the manipulation (V) group than in the sham operation group, but were significantly higher in the manipulation (CT) group than in the manipulation (V) group. From these results in the mouse surgery model, preoperative CT administration is considered to suppress excessive inflammatory reactions and promote postoperative recovery by preventing the decrease in the intestinal GSH level associated with surgical stress (Shibakusa et al. 2012).
Postoperative recovery-promoting effect in stomach cancer patients after distal gastrectomy On the basis of the data obtained in the mouse surgery model, we then carried out a clinical study of stomach cancer patients after distal gastrectomy at Sendai City Medical Center. Forty-three patients aged 75 years or younger who underwent elective surgery were randomized to placebo (P) and CT groups. They were given the test food for 10 days from 4 days before surgery, and the resting energy expenditure (REE), blood granulocyte and lymphocyte counts, CRP and IL-6 levels, and body temperature were measured serially. Eventually, 10 patients were excluded due to retraction of consent and intraoperative complications, and 18 patients in the P group and 15 in the CT group were evaluated. First, the IL-6 level, a blood inflammation marker, peaked immediately after surgery and decreased thereafter, but it decreased more rapidly in the CT group and became significantly lower than in the P group on Day 4 after surgery. Similarly, the CRP level, another blood inflammation marker, peaked on Day 1 after surgery and decreased thereafter, but was normalized earlier in the CT group, as with the IL-6 level, being significantly lower than in the P group on Day 7 after surgery. Also, as mentioned above, CT intake suppressed the increase in granulocytes and decrease in lymphocytes induced by intense exercise, which is an acute stress, in athletes (Murakami et al. 2009; Murakami et al. 2010). Therefore, we examined changes in the granulocyte count after gastrectomy, which is also an acute stress, compared with the preoperative value. It increased on Day 1 after surgery and decreased gradually thereafter in both the P and CT groups, but the decrease was faster in the CT group, becoming significantly lower than in the P group on Day 4 after surgery. The lymphocyte count was lowest on Day 4 after surgery and recovered thereafter, but no significant difference was noted between the 2 groups. Since the body temperature increases with an inflammatory reaction, we also analyzed postoperative changes in body temperature as differences from the preoperative level. As a result, the temperature was 2.5°C or more higher on the day of surgery than the preoperative value and decreased thereafter, but it decreased faster in the CT group than in the P group, becoming significantly lower than in the P group on Day 5 after surgery. As there are reports that inflammatory reactions and the REE increase after surgery (Kotani et al. 1996) and that the REE after burn injury, an acute stress similar to surgery and intense exercise, was reduced by nutritional therapy with an anti-inflammatory effect (early enteral nutrition) (Mochizuki et al. 1984), we also analyzed the effects of CT administration on the postoperative REE. In the P group, the REE increased to 1.15 times the preoperative level on Day 1 after surgery and decreased thereafter, but had not returned to the preoperative level even on Day 7, eventually recovering on Day 14. In the CT group, the REE did not show the increase observed in the P group on Day 1 after surgery, when it was significantly lower than in the P group, and remained lower thereafter. From these results, the perioperative administration of CT is considered to accelerate the recovery of the granulocyte count and the IL-6 and CRP levels from their postoperative increases and to prevent the postoperative increase in the REE.
Therefore, CT administration is considered to promote the postoperative recovery of gastrectomized patients (Miyachi et al. 2013). In the series of trials on intense exercise and surgery mentioned above, we observed that CT intake reduced inflammatory markers and immunosuppression, not only in athletes after intense exercise but also in surgery patients. Cystine is a nonessential amino acid, yet in a clinical trial targeting surgery patients, cystine became essential during the perioperative period (Dale et al. 1977). In addition, the synthetic pathway from methionine to cysteine remains inhibited in rats under surgical stress (Vina et al. 1992). These reports support a role for cystine in suppressing the decrease in GSH during surgical stress. In addition, theanine may act as a glutamate donor in vivo and enhance GSH synthesis in cells treated with cystine (Rimaniol et al. 2001). Based on these reports and our findings, preoperative oral CT administration is considered to have a useful anti-inflammatory effect and to help prevent immunosuppression.
Conclusion and perspectives
As mentioned in this review, CT were shown by clinical studies not only to enhance the antibody-producing ability upon infection but also to suppress excessive inflammatory reactions induced by intense exercise and surgery. Infection, as well as intense exercise and surgery, can be regarded as invasive stress. From this viewpoint, CT intake is considered to enhance the antibody-producing ability, control excessive inflammatory reactions, and, in consequence, promote early recovery by inhibiting the decrease in GSH due to invasive stress (Figure 2). However, to prove this hypothesis, more studies are needed that investigate the correlation between GSH levels and the immune response to CT intake in mice and humans. While a more detailed analysis of the action mechanism is necessary, CT with such effects are expected to be used, for example, as: (1) an oral adjuvant food on the vaccination of older people with reduced immunological functions, (2) a supplement for those who wish to maintain their physical condition throughout the year, (3) a conditioning food for expert and amateur athletes undergoing intensive training, and (4) a medical food to promote postoperative recovery for those expected to undergo surgery (Fearon et al. 2005; Wilmore and Kehlet 2001).
Figure 2
Scheme of enhanced recovery due to cystine and theanine after the indicated trauma. Trauma/stress due to exercise, surgery, or infection induces excessive inflammation and immunosuppression. It also delays recovery after trauma. Cystine and theanine suppress the decrease in the GSH level due to trauma, which may inhibit excessive inflammation and immunosuppression. As a result, cystine and theanine enhance recovery after trauma.
"Chemistry",
"Medicine"
] |
IoT Enabled Smart Lighting System for Smart Cities
The pace of urbanisation has risen tremendously during the last few decades. To provide a higher quality of life, urban dwellers will require a greater variety of improved services and applications. The term “smart city” refers to integrating contemporary digital technology in the setting of a city to improve urban services. The use of information and communication technologies in the smart city creates possibilities for new services and for connecting disparate application areas with each other. However, to keep the services in an IoT-enabled smart city running without depleting valuable energy resources, all applications have to operate with minimal energy consumption. IoT can particularly enhance a city’s lighting system, since lighting uses more energy than most other municipal systems. A smart city integrates lighting sensors and communication channels with enhanced intelligence features into a Smart Lighting System (SLS). SLS deployments are built to be autonomous and efficient so that lighting can be controlled more effectively. We cover the SLS and evaluate several IoT-enabled communication protocols in this article. Furthermore, we evaluate several use scenarios for IoT-enabled indoor and outdoor SLS and report the energy consumption in the different use cases. Our research shows that IoT-enabled smart lighting systems make energy savings possible in both indoor and outdoor settings, equivalent to a forty percent reduction in energy usage. Finally, we outline open research directions for SLS in smart cities.
Introduction
The phrase smart city is a fairly new term that has spread rapidly in the last few years. The arrival of the new paradigm has fostered cooperation among academia, industry, governments, and organisations, with citizens joining in as well. In [1], the authors describe a smart city as a well-defined geographical area in which a range of technologies such as ICT, logistics, and energy production work together to help people achieve overall wellbeing, inclusion, and participation, while also ensuring that the environment is clean and healthy. Alongside practical implementations, however, the smart city idea has been criticised for frequently being solely technology-driven and pushed only by the interests of technology firms, while the municipality and citizens have been given very little attention. As a result, a more sustainable methodology has become necessary.
Sustainability has been well established over time and enjoys widespread support. It is built on three essential elements: social well-being, environmental well-being, and financial well-being. Recently, a new definition [2,3] has been proposed for a "sustainable city." It defines these cities as those that are able to absorb the inflow of materials and energy, as well as properly dispose of waste, without overextending the city's ecosystem. In other words, if a city wants to conserve its natural resources, the amount of resources used inside the city should be equal to or less than the quantity of resources provided by the environment (e.g., soil, water, or energy resources). Finally, since the city's activities can greatly impact the environment's ability to provide resources to citizens and other members of the ecosystem, pollution levels resulting from those activities should not overwhelm that capacity. While the idea of sustainability is quite basic and obvious, it has been criticised because in certain instances it does not align with contemporary societal trends, such as the rise in digital activity.
The development of these ideas is therefore leading to a new wave of academic debate proposing a new paradigm: the Smart Sustainable City. In more depth, this paradigm strives to create a "smart city" by considering urban sustainability and smartness concurrently. Consequently, understanding how to apply such ideas will influence people's day-to-day activities. The information used to create this strategy is derived from the latest wave of technological progress, namely the increase in the number of IoT-enabled devices and entities.
The IoT is at the core of technological change and transformation in many situations and environments, creating and managing a network of linked devices that gather information about the physical world and modify their behaviour based on the ever-changing context in which they "live." As IoT innovation is introduced, smart sustainable cities will be able to enhance various elements of their urban administration, for instance, public transit, public lighting, e-governance, public safety, security, environmental monitoring, and mobility. The use of IoT technology is predicted to enable all available resources, including electricity, soil, water, people, and more, to be monitored, controlled, and managed [4,5]. Connectivity is crucial for organisations' ambitious goals; the most effective approach is to provide dependable connectivity that encourages efficient sharing of information. Because city scenarios vary widely, most deployments require heterogeneous communication technologies and network architectures, depending on the characteristics of the specific services to be implemented and on operational constraints such as the availability of a power source. Connectivity, the backbone of a smart city, enables the implementation of services of interest to people and institutions.
Literature Survey
In order to build a smart city ecosystem, technology has to be a critical component, as do factors such as social and human capital. Most cities now use bespoke systems and solutions to meet their unique requirements; however, these approaches are not appropriate for other cities across the world, and occasionally only a subset of their elements is required [6,7]. These results synthesise many of the literature's findings on smart cities and their major obstacles, problems, and open challenges. These problems may be split into four distinct categories: citizens, mobility, government, and the environment. Smart cities should take into consideration the quality of life of the inhabitants while also factoring in privacy, particularly where personal information and household-level data are concerned. This is noteworthy because people may be concerned about the introduction of new technology or see it as invasive. In addition, it is critical to focus on equality, which means that everyone in the community should benefit from improvements in smart city technology, and in particular, no metropolitan regions should be left behind [8,9]. The need for a change in government models stems from the wider idea of the smart city concept, which involves combining institutional policies with bottom-up initiatives in order to be more flexible, to strengthen community relations, and to foster collaboration and communication among various entities, preventing the formation of multiple similar initiatives that would not work together efficiently [10]. Smart mobility deployment includes the provision of a sustainable, inclusive, and efficient mobility system for both goods and people. Still another domain of smart city design that has not been thoroughly researched, and thus may hold answers to previously unanswered questions, is the incorporation of the environment into city services. For instance, sustainable resource management (such as water and energy), pollution, and the impact of urban activities can all be explored. The key areas to address in a smart city are shown in Figure 1. Interoperability is currently thought of as a potential obstacle to smart city development, as deployments are now more frequently based on private and isolated solutions [11,12]. To achieve affordable scale and maximise the outcomes, open standard-based devices must be utilised at all levels. Further coordination is also needed between systems in order to effectively coordinate data collection and analytic operations across them.
IoT Communication Protocols Used in Smart Lighting Systems
Size and scalability are important in an SLS, and a central design question is how its components interact logically: the lighting units (LUs) and the control centre (CC) should be able to exchange data via an IoT communication protocol stack. Various IoT-enabled SLS communication protocols are reviewed here. An SLS requires two kinds of communication. Long-range communication covers the links between local control units (LCUs) and the CC: an SLS is typically composed of multiple LCUs and a central CC, and after analysing LU data, local LCUs relay it to the CC; LCUs are also interconnected to exchange data among themselves. Because LCUs are spread across the city, distances to the CC range from a few hundred metres to several miles, so a long-range communication protocol is required to connect LCUs and the CC, and a variety of protocols are used for this link. Short-range communication, broadly, is communication between devices within visual range; in an SLS it covers the short distances (under 100 m) between LUs and their LCUs. Short-range protocols may be wired (e.g., DALI [13,14]) or wireless (e.g., ZigBee [15], JenNET-IP [16], 6LoWPAN [17]). This article concentrates on IoT-enabled SLS communication protocols, since IoT use in SLS is growing. In an IoT-enabled SLS, hundreds of LUs are placed around the city and grouped under LCUs, so a suitable protocol should address a large number of LUs and LCUs while maintaining battery life, low cost, low data rate, and low complexity. Short-range IoT communication may use both wired and wireless methods: wired solutions can be inexpensive where they reuse existing infrastructure, while wireless solutions avoid cables and complicated connections; wireless links are generally suggested for outdoor lights, whereas wired links are favoured indoors. 1) ZigBee: a low-power, IEEE 802.15.4-based protocol widely implemented in SLS deployments. 2) IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN): built on the Internet Protocol Suite for small IoT devices, 6LoWPAN provides versatility to SLSs through data transfer and control. Its data packets carry sensor data and control messages, and the protocol stack acknowledges each successful packet delivery. Combining wired and wireless networks may significantly reduce 6LoWPAN installation costs [10]; in this respect, 6LoWPAN outperforms other wireless protocols (for example, ZigBee and JenNET-IP). Beyond these benefits, 6LoWPAN is a basic Internet of Things application platform that connects existing sensor networks or other IoT devices through IP, allowing developers to build new applications such as temperature control and weather monitoring. 3) JenNet-IP: built on 6LoWPAN, its JIP layer provides application-level device access; stacking this layer on top of the network layer allows the system to be controlled, and applications using this protocol layer allow for richer data transmission. The JenNet protocol offers multi-hop capabilities [18,19] and is used to manage the network and safeguard outbound communications. Using JenNet-IP in an SLS allows more nodes to be connected than other IoT-enabled protocols; according to its specification, the system can handle over 1,000 LUs, allowing enormous networks to be built. In addition, JenNet-IP, an upgraded version of 6LoWPAN, offers a sophisticated application development platform, and additional protocols may be layered onto an SLS. 4) Z-Wave: a low-cost, low-power protocol comparable to ZigBee.
In contrast to a single point-to-point link, Z-Wave allows the creation of a mesh network. The range of the devices is thereby increased, allowing delivery even if an LCU fails. Z-Wave was proprietary until 2016; now that the specifications have been made public, anybody may build their own Z-Wave device. In recent years, long-range wide-area networks like LoRaWAN have emerged; the LoRaWAN platform uses LoRa radio technology and is standardised by the LoRa Alliance. Bluetooth Low Energy (BLE) is another relevant protocol: BLE is best used for one-to-one communication, such as monitoring exercise equipment, computers, and peripherals. Bluetooth mesh, an inexpensive option for simple sensor networks, can accommodate a number of network topologies and enables many-to-many communication, which is unique to it. Wired protocols: indoor lighting (in homes, schools, and businesses) is also part of a smart city's SLS. While wireless methods are generally preferred, wired protocols exist to link an SLS to a CC; DALI [21] and Power Line Communication (PLC) [20] are the wired methods considered here for transmitting SLS data across short distances. PLC: power line infrastructure is used for indoor and outdoor networking and communication. Using PLC in SLSs is intended to save costs by reusing prebuilt networks. PLC-based lighting systems include two primary hardware components: a microcontroller and a PLC modem. The PLC microcontroller receives, processes, and transmits control signals to and from the PLC modem, and the modem modulates and demodulates data prior to transmission to reduce the impact of noise and interference. A serial connection allows data transmission rates of up to 500 Mbps between LUs, and a PLC modem may also incorporate an RF transmitter. DALI: an IEC standard, DALI uses its own protocol to link lighting equipment through a bus or star network, with digital circuitry used to set up an SLS for DALI. A Manchester-coded frame connects each LU in a DALI-enabled SLS. Sensors such as motion and light sensors supply and monitor the data for control commands and the responses to those commands, and DALI-compatible devices must be connected to the two DALI terminals. DALI's 6-bit addressing limits it to 64 nodes. DALI nevertheless benefits SLSs: it allows devices from a variety of manufacturers to be controlled, and because separate procedures are not needed for different products, lighting control is more efficient and less electricity is used, saving money. However, DALI's total LU capacity of 64 makes it unsuitable for street lighting. Yuan Ma et al. have improved DALI's data transmission speed and interoperability with wireless sensor networks: NRZ and Manchester Phase Encoding (MPE) are used at 9600 baud, and a new transfer layer with sensors was created to enhance the lighting system's utility.
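To make the DALI framing concrete, the following sketch shows Manchester (biphase) encoding of a DALI-style forward frame. It is a minimal illustration, not a spec-compliant implementation: the frame layout (1 start bit, 8 address bits, 8 command bits), the half-bit polarity, and the example opcode are assumptions that would need to be checked against IEC 62386.

```python
def manchester_encode(bits, one=(0, 1), zero=(1, 0)):
    """Biphase-encode a bit sequence: each logical bit becomes two half-bits.

    Polarity assumption: a logical 1 is a low-to-high transition and a
    logical 0 a high-to-low transition (verify against the DALI standard).
    """
    out = []
    for b in bits:
        out.extend(one if b else zero)
    return out

def dali_forward_frame(address, command):
    """Build the 17 logical bits of a DALI-style forward frame: 1 start bit,
    8 address bits, 8 command bits (idle-high stop bits omitted here)."""
    bits = [1]  # start bit
    bits += [(address >> i) & 1 for i in range(7, -1, -1)]
    bits += [(command >> i) & 1 for i in range(7, -1, -1)]
    return bits

# Hypothetical example: broadcast address with an illustrative opcode.
frame = dali_forward_frame(0xFF, 0x05)
halfbits = manchester_encode(frame)
print(len(frame), "logical bits ->", len(halfbits),
      "half-bits (2400 half-bits/s at the 1200 baud DALI rate)")
```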
Elements of Smart Lighting System
Sensors are the most prevalent elements of a smart lighting system, followed by control algorithms. Lighting control systems may evaluate the time of day, light spectrum, or occupancy to decide on the final response. Algorithms may operate inside devices or systems to manage the workloads or tasks given to them; they may also be operated in the cloud, eliminating the need to transmit command messages locally [22]. Algorithms here may refer to many cutting-edge technological solutions that constantly shift colours, such as tunable lights, techniques that control colour response, real-time colour adjustments, and real-time techniques that help reduce energy use. Circadian cycles are often used to create aesthetically complex lighting patterns. The initial lighting design schematic presented in Figure 2 represents the main components of the design. Rather than following rigid input design requirements, autonomous algorithms are trained to react to user choice and gender. The biological clock that controls our circadian cycles, as well as numerous other systems, including hormone release, body temperature, and circadian awareness, has been shown over the last decade to be influenced by light. Since circadian cycles rely on luminance rather than colour correction, the spectrum of light frequencies from red to blue is the more significant factor. Because the system may affect and control many physiological characteristics, expanding spectral control also expands its influence. Museums benefit from complete spectrum management, as do horticulture, fine arts, and public gathering spaces, among other disciplines.
In a network, the physical and logical layers interact at the system device level and in the device hierarchy. Using different physical topologies, such as a ring, a star, or a combination of star and bus, increases reliability and opens up expansion options. In physical installations, traditional communication networks may be laid over cables, or wireless links may be used instead. Devices may connect to the network through wired or wireless connections.
Lighting products have been linked with IoT networking technologies to better serve a broader variety of applications. 0-10V, DALI, Digital Multiplexer (DMX), Local Area Network (LAN), and Power Line Communication (PLC) are the primary wired interfaces used for networking. Wireless technologies such as sub-GHz radio, Bluetooth, and infrared optical links are also utilised [23]. 6LoWPAN, for example, uses a physical layer focused on short range and rapid network connection.
Sensors for Smart Lighting Platforms
A wide range of working sensor technologies and communication techniques is what makes a smart lighting device excellent, as well as what keeps it inexpensive. In contemporary IoT applications, digital sensors are used to alter lighting mechanisms to aid adaptive operation [16,17]. Low-intensity light sensors and photodiodes can signal when it becomes dark. Red, green, and blue sensors are used with LED and CFL (fluorescent) lighting to detect their primary colours, producing RGB data for indoor environments and optical connections; for wireless applications, visible light communication (VLC) is the most important photodiode-based technology.
White light can be created by mixing individual red, green, and blue LEDs, as shown in Figure 3. Photoresistors and photosensitive cells react to small amounts of light and can be used to alter the luminous flux depending on user activity. Spectroscopy is a unique and creative capability that allows these devices to collect light across the visible spectrum. Since a circadian lighting device can generate any type of light spectrum at any set point, the ability to tune the Color Rendering Index (CRI) and correlated color temperature (CCT) in real time is crucial. Sensor technologies embedded in smart lighting are presented in Figure 4. As well as optimising lighting efficiency, it is known that the luminance of LED-based fixtures decreases with age, which sensors and controls can compensate for over time [20,21].
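As a small worked example of CCT tuning, the sketch below estimates CCT from RGB sensor readings. It assumes the sensor channels approximate linear sRGB primaries and uses the standard sRGB-to-XYZ matrix together with McCamy's approximation; a real colour sensor would need its own calibration matrix.

```python
def cct_from_rgb(r, g, b):
    """Estimate correlated colour temperature (K) from linear RGB readings
    via the sRGB->XYZ (D65) matrix and McCamy's approximation."""
    X = 0.4124 * r + 0.3576 * g + 0.1805 * b
    Y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    Z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    s = X + Y + Z
    x, y = X / s, Y / s
    n = (x - 0.3320) / (y - 0.1858)  # distance from McCamy's epicentre
    return -449 * n**3 + 3525 * n**2 - 6823.3 * n + 5520.33

print(round(cct_from_rgb(1.0, 1.0, 1.0)))  # sRGB white point (D65) -> ~6504 K
```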
Communication Interfaces for Smart Lighting
Manufacturers may build systems that connect with products using simplified standards, data-centre management, or automation over IPv4 and IPv6. Building on the IoT platform, the European OpenAIS project is creating a framework for diverse lighting systems. A standardised framework for the lighting interface and extendable APIs enable the lighting system to be utilised in a broad variety of building systems, independent of specific cloud services.
The answer to this interoperability problem is to standardise lighting protocols, which is helpful in situations where open ecosystems with conflicting protocols are acceptable and closed ecosystems are unfeasible. In this instance, utilising lightweight protocols such as the User Datagram Protocol (UDP) and TCP may extend compatibility.
Future Research on Smart Lighting System
As the IoT-enabled smart lighting system continues to grow, the rate of development is increasing.
This part details open issues connected to the execution of smart city initiatives and the security of SLSs. Many issues still need to be addressed to further increase the efficiency of SLS. Interoperability: connecting all of an SLS's components with an IoT-enabled protocol is essential. The Long Range Wide Area Network (LoRaWAN) makes it possible to establish low-data-rate connections over large geographical areas with many different IoT devices. Several SLSs, each using a different protocol, need to communicate in order to create a centralised lighting system. Integration across application domains: linking various application domains is essential to the concept of a Smart City. SLS deployments allow new services to be delivered to urban regions, making SLS more versatile. In conjunction with SoLS, low-cost autonomous solutions may be made available to traffic managers via smart traffic management. Additionally, weather systems using environmental sensors, including rain, temperature, and humidity sensors, may be used in SLS settings. Municipal services that use SLS may be less expensive and more efficient when combined with other applications. System security: because SLSs are centrally controlled, an attacker may target them, since compromising one gives access to other connected services. An attacker could also gain complete control over the city's lighting system, which might enable even more severe attacks. Sensors may be manipulated to alter anticipated behavioural patterns. With the implementation of future smart lighting standards, researchers will confront further difficulties. At this time, there is no trustworthy method for granting and revoking keys. A privacy mechanism may be employed to safeguard the user's privacy, but a complete security system may not be practical. Low-power and low-cost end devices are available, but security and efficiency will be sacrificed for them, while a large and rigorous security system may slow down the functioning of the system and increase installation cost. Implementation of inadequate security measures, however, may have catastrophic results.
Conclusion
Energy efficiency is a critical problem in an IoT-enabled smart city setting. This is a serious issue, given the anticipated population growth in urban regions over the next few decades. We have devoted this article to discussing IoT-enabled Smart Indoor and Outdoor Lighting Systems (SiLS, SoLS) in the context of a smart city, where energy consumption may be reduced and operations made more intelligent via the use of sensors and actuators. In terms of power consumption, connectivity, and reliable administration, a variety of Internet of Things-enabled communication protocols may be utilised to construct a successful smart lighting system. We also computed and provided the power consumption for SiLS and SoLS in a variety of use cases and situations: energy consumption in indoor and outdoor settings may be decreased by up to forty percent when IoT-enabled SLS is used instead of conventional lighting systems. Finally, we addressed the benefits of SiLS and SoLS, as well as research difficulties for those who are interested in furthering their study in these areas.
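As a back-of-the-envelope check on the forty percent figure, the sketch below compares a conventional always-on luminaire with an IoT-controlled one that dims when no occupancy is detected. All duty cycles and wattages are illustrative assumptions, not measurements from this article.

```python
# Illustrative energy model for one luminaire over a 12-hour operating night.
FULL_POWER_W = 100.0       # assumed luminaire power at full brightness
DIM_LEVEL = 0.2            # assumed dimmed output when the street is empty
OCCUPIED_FRACTION = 0.45   # assumed fraction of the night with activity
HOURS = 12.0

conventional_kwh = FULL_POWER_W * HOURS / 1000.0
smart_kwh = (FULL_POWER_W * OCCUPIED_FRACTION * HOURS
             + FULL_POWER_W * DIM_LEVEL * (1 - OCCUPIED_FRACTION) * HOURS) / 1000.0

saving = 1 - smart_kwh / conventional_kwh
print(f"conventional: {conventional_kwh:.2f} kWh, smart: {smart_kwh:.2f} kWh, "
      f"saving: {saving:.0%}")  # ~44% with these assumed values
```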
"Engineering",
"Environmental Science",
"Computer Science"
] |
Inferring chromatin accessibility during murine hematopoiesis through phylogenetic analysis
Objective Diversification of cell types and changes in epigenetic states during cell differentiation processes are important for understanding development. Recently, phylogenetic analysis using DNA methylation and histone modification information has been shown useful for inferring these processes. The purpose of this study was to examine whether chromatin accessibility data can help infer these processes in murine hematopoiesis. Results Chromatin accessibility data could partially infer the hematopoietic differentiation hierarchy. Furthermore, based on the ancestral state estimation of internal nodes, the open/closed chromatin states of differentiating progenitor cells could be predicted with a specificity of 0.86–0.99 and sensitivity of 0.29–0.72. These results suggest that the phylogenetic analysis of chromatin accessibility could offer important information on cell differentiation, particularly for organisms from which progenitor cells are difficult to obtain. Supplementary Information The online version contains supplementary material available at 10.1186/s13104-023-06507-8.
Introduction
Cell differentiation is important for understanding how multicellular organisms develop based on their genetic programs. Recent high-throughput sequencing technologies and single-cell omics have revolutionized the way it is studied. To illustrate, recent advances in single-cell RNA-seq analysis have allowed researchers to infer cellular differentiation trajectories, which could be interpreted as a proxy for cellular progression along cellular differentiation pathways [1,2]. However, this analysis requires cells in different states during the differentiation process, including stem cells and progenitor cells, to order them along pseudo-time. Considering that tissue stem cells and progenitor cells are typically rare and difficult to identify experimentally [3,4], important processes involved in intermediate progenitor states might not be known from the analysis. To address this problem, previous studies based on bulk transcriptomes have applied phylogenetic analysis; phylogenetic analysis can infer not only tree topology (corresponding to the cell differentiation hierarchy [5]) but also ancestral states (corresponding to the states of the differentiating intermediate progenitor cells). Based on phylogenetic analysis of terminally differentiated mature cells, Kin et al. inferred the expression pattern of differentiating intermediate progenitor cells [6].
Epigenomes reflect the expression status of genes and contain information not only on gene regions but also on cis-regulatory regions [7]. Thus, the transition in epigenetic states during cell differentiation can provide further insight into the underlying mechanisms of cell differentiation. Epigenomes are somatically heritable and change during cellular differentiation, showing diversity among cell types [7]. Previous studies have shown that phylogenetic analysis of epigenomic information such as DNA methylation [8][9][10] and histone modification [11][12][13] of terminally differentiated cells could be used to infer the cell differentiation hierarchy and predict the epigenomes of differentiating intermediate progenitor cells.
Nucleosomes are typically depleted in regulatory regions such as promoters and enhancers, resulting in accessible chromatin [14]. Chromatin accessibility in gene-regulatory regions dynamically changes during cellular differentiation; moreover, the cell type-specific chromatin accessibility pattern is important for establishing and maintaining cellular identity [14,15]. Chromatin accessibility not only reflects the expression status of genes [16], but also provides additional information to the transcriptome [17,18]. Indeed, chromatin accessibility could represent cell types better than gene expression patterns in mammalian hematopoiesis [19,20]. Therefore, estimating changes in chromatin accessibility during cell differentiation would be useful, especially for difficult-to-obtain progenitor cells.
The purpose of this study was to examine the feasibility of phylogenetic analysis based on genome-wide chromatin accessibility according to (1) tree topology, whether genome-wide chromatin accessibility data of differentiated cells can be used to infer the cell differentiation hierarchy, and (2) ancestral state estimation, to predict the chromatin accessibility of differentiating intermediate progenitor cells. Mammalian hematopoietic differentiation is one of the best-studied systems due to its biological and medical importance, and the hierarchical structure along the course of differentiation is well known [21]. Additionally, many experimental efforts have been undertaken to obtain epigenomes of not only terminally differentiated cells but also stem cells and progenitor cells (e.g. [22][23][24][25]). This known hierarchical structure and these epigenomes of progenitor cells can be used as a reference (correct answer) to verify the computational inference; thus, hematopoiesis provides a unique opportunity for inference. If phylogenetic analysis of epigenomes can be demonstrated for hematopoiesis, it offers the potential to explore other cell differentiation systems, such as solid tissues, which are more difficult to study.
Based on the binary information, a phylogenetic analysis was performed using neighbor-joining (NJ) [27], maximum parsimony (MP) [28], and maximum likelihood (ML) [29] methods. For the NJ method, the number of pairwise character (0/1) differences was used for calculating the distance matrix. For the MP method, characters (0/1) were treated as undirected characters (the cost of open is equal to that of closed) and an exhaustive search was performed using PAUP version 4.0b10 [30]. For the ML method, six different models (BIN, BIN + I, BIN + I + G, and BIN + I + Rn where n = 4, 8, and 12, respectively) were computed, where BIN represents binary data, I represents the ML estimates of the proportion of invariant sites, G represents the Gamma model of among-site rate heterogeneity with four categories, and Rn represents the free rate of among-site rate heterogeneity with n categories. Using the model with the lowest Akaike's Information Criterion, the best ML tree was searched using RAxML-NG version 1.1.0 [31]. For all three methods, branch support was evaluated based on 1000 bootstrap replicates.
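As a rough sketch of the distance-based step, the code below builds a pairwise-difference distance matrix from a toy binary accessibility matrix and applies neighbor joining. The cell names and matrix are made up, and the NJ implementation is assumed to come from scikit-bio; the study itself used the PAUP/RAxML-NG/ape toolchain.

```python
import numpy as np
from skbio import DistanceMatrix
from skbio.tree import nj

# Toy binary matrix: rows are cell types, columns are accessibility sites (0/1).
cells = ["LSK", "B", "TCD4", "Neu", "Ery"]
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(len(cells), 12))

# Distance = number of pairwise character (0/1) differences, as in the NJ method.
D = np.array([[np.sum(a != b) for b in X] for a in X], dtype=float)

tree = nj(DistanceMatrix(D, ids=cells))
print(tree.ascii_art())
```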
Treelikeness was assessed using δ plots [32] with the delta.plot function of the ape package in R 4.3.0. For calculating δq, LSK, B, TCD4, TCD8, and NK cells were used for the lymphoid lineage, whereas LSK, Neu, Mon, Ery, and iMK cells were used for the myeloid lineage.
An ancestral state of each site at each internal node was estimated based on MP and ML methods under the constraint of a fixed tree topology (see text). For the MP method, the ACCTRAN and DELTRAN algorithms were used to estimate the most parsimonious reconstruction [33]. For the ML method, marginal probabilities were used based on the best model described above. Ambiguous sites, estimated as "-" by RAxML-NG, and stable sites, classified as STABLE (see text) in all lineages, were removed when calculating sensitivity and specificity. For both MP and ML methods, LSK was used as an outgroup.
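The sensitivity/specificity computation itself is straightforward; the sketch below mirrors the comparison between estimated ancestral states and measured progenitor states. The predicted and observed vectors are hypothetical, and coding ambiguous sites as -1 is an encoding chosen here for illustration only.

```python
import numpy as np

def sens_spec(predicted, observed):
    """Sensitivity/specificity of predicted open (1) / closed (0) states
    against measured progenitor states, after filtering ambiguous sites."""
    predicted, observed = np.asarray(predicted), np.asarray(observed)
    keep = predicted >= 0                     # ambiguous sites coded as -1
    p, o = predicted[keep], observed[keep]
    tp = np.sum((p == 1) & (o == 1))
    tn = np.sum((p == 0) & (o == 0))
    fn = np.sum((p == 0) & (o == 1))
    fp = np.sum((p == 1) & (o == 0))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example: predicted CMP states vs. measured CMP accessibility.
sens, spec = sens_spec([1, 0, 0, 1, -1, 0], [1, 1, 0, 1, 0, 0])
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
```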
To analyze characteristics of each cCRE, 27 epigenetic states defined by Xiang et al. [20] based on epigenetic marks of six histone modifications, CTCF binding, and nuclease accessibility were downloaded from https://usevision.org/data/mm10/IDEASmouseHem2019/segmentation/. The genomic positions were compared using the GenomicRanges package in R 4.3.0. To identify DNA motifs enriched in a specific cell lineage, findMotifsGenome.pl of HOMER [34] was used with default parameters.
Results and discussion
Chromatin accessibility data of murine hematopoietic cells were obtained from the VISION project, which integrates precise and comprehensive epigenetic states and provides valuable resources for murine hematopoiesis [20,26]. The obtained accessible chromatin regions are 150-3659 bp in length and 77,695,128 bp in total. In this study, each region was treated as a site; a total of 205,019 sites were used. During hematopoiesis, hematopoietic stem cells produce lymphoid and myeloid lineages, consisting of a variety of differentiated cell types (Fig. 1A). Using these data, putative time-course changing patterns of open/closed chromatin were examined in each of eight lineages (from LSK to B, TCD4, TCD8, NK, Neu, Mon, Ery, and iMK) during hematopoiesis. Sites were classified into four categories, "STABLE," "UP," "DOWN," and "OTHER," depending on their changes along the differentiation path: STABLE sites are consistently open or closed; UP or DOWN sites gradually open or close during hematopoiesis, respectively; the rest of the sites were classified as OTHER. As a result, UP sites and DOWN sites accounted for 4.9-14% and 7.8-22%, respectively (Fig. 2). These sites could be suitable for phylogenetic analysis (see below). On the other hand, OTHER sites comprised 27%-34% in myeloid lineages (Fig. 2). This proportion of OTHER sites was larger than that previously reported for DNA methylation [10]. When considering all lineages, about half of the sites were classified as OTHER in at least one lineage (Additional file 1: Figure S1). The difference in the proportion of OTHER sites between chromatin accessibility and DNA methylation may reflect the features of each epigenome: chromatin accessibility shows a strong positive correlation with gene expression, whereas DNA methylation is relatively stable [17] and its mechanism of enzymatic maintenance is well known [35]. Note that no sites could be classified as OTHER for the lymphoid lineages because chromatin accessibility data were not available for progenitor cells, and the time course of this lineage only included one step (Fig. 1A).
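As an illustration of this classification, the sketch below implements one plausible reading of the rules, assuming that "gradually open/closed" means the binary states change monotonically along the differentiation path; the study's exact operational definition may differ.

```python
def classify_site(path_states):
    """Classify one site's open/closed (1/0) states along a differentiation
    path as STABLE, UP, DOWN, or OTHER (labels follow the text's definitions)."""
    s = list(path_states)
    if all(v == s[0] for v in s):
        return "STABLE"
    # UP: once the site opens it stays open (monotonic closed -> open).
    if s == sorted(s):
        return "UP"
    # DOWN: once the site closes it stays closed (monotonic open -> closed).
    if s == sorted(s, reverse=True):
        return "DOWN"
    return "OTHER"

for states in ([0, 0, 1, 1], [1, 1, 0, 0], [0, 1, 0, 1], [1, 1, 1, 1]):
    print(states, "->", classify_site(states))
```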
When using chromatin accessibility data to reconstruct the cell differentiation hierarchy in phylogenetic analysis, UP and DOWN sites could contain useful information. Conversely, STABLE sites contain no information, while OTHER sites may contain too many multiple changes at a site and/or homoplasious changes in multiple lineages, which sometimes hinders correct phylogenetic inference for the MP method [36] and possibly for the ML method, because incorporating an appropriate model can be difficult in these cases. Based on the chromatin accessibility data, phylogenetic analysis was performed with NJ, MP, and ML methods. Open (1) or closed (0) chromatin states were treated as binary information, and each chromatin region was treated as a site. For the ML method, the best-fit model, BIN + F0 + I + R4, was used.
When all sites were included in the analysis, the lymphocyte lineage was separated with high bootstrap values with all three methods (Fig. 1B). On the other hand, the diversification pattern of the myeloid lineage was different from the known topology (Fig. 1A). Removing the OTHER sites improved the monophyly of neutrophils and monocytes with the NJ method but not with the other methods (Fig. 1C). Furthermore, removing iMK could recover the monophyly of the myeloid lineage in the ML method and could reconstruct the known topology in the NJ method (Additional file 2: Figure S2). The iMK contained more UP sites than other cells (Fig. 2), which might cause the long branch of iMK. These results may reflect a limitation of phylogenetic analysis, which can be affected by homoplasious sites, long branches [36], and dependencies between sites [37]. Another possibility is heterogeneity in cell populations, recently revealed by new technologies [21], which implies multiple differentiation paths. In fact, megakaryocytes can be differentiated directly from stem cells [38], a finding consistent with the inferred tree (Fig. 1B and C). In addition, since red blood cells and platelets lack DNA, their progenitors (erythrocytes and immature megakaryocytes) were used in this study, which may cause some problems. When treelikeness was compared between the lymphoid and myeloid lineages, the lymphoid lineage exhibited lower δq (Additional file 3: Figure S3), which ranges from 0 (perfectly treelike) to 1 [32]. It appears that chromatin accessibility of the myeloid lineage contains less information suitable for phylogenetic analysis. Removing OTHER sites increased the treelikeness, consistent with the results of the phylogenetic analysis (Fig. 1B and C). Therefore, selecting appropriate sites is important when applying phylogenetic analysis based on genome-wide chromatin accessibility to infer cell differentiation processes.
Phylogenetic analysis also allows for estimating the ancestral states of internal nodes. Therefore, we next predicted the open/closed chromatin states of internal nodes, which correspond to differentiating progenitor cells (CMP, GMP, and MEP), and compared the predicted states of internal nodes with those of progenitor cells obtained from the VISION project [20,26]. For this analysis, the STABLE sites in all lineages were removed; thus, a total of 175,083 sites were used. The ancestral states of the internal nodes were estimated using MP (ACCTRAN and DELTRAN) and ML (best-fit BIN + F0 + I + R4 model) methods under the topological constraint of the known tree (Fig. 1A) and the consistently inferred lymphoid topology of (B, ((TCD4, TCD8), NK)) (Fig. 1B and C). Then, comparison of the chromatin states of the predicted internal nodes with those of progenitor cells was used to calculate the sensitivity and specificity (Fig. 3). For the ML method, the calculations were performed by removing ambiguously estimated sites (4188, 4026, and 4188 sites for the internal nodes corresponding to CMP, GMP, and MEP, respectively).
As a result, both MP and ML showed good specificities, between 0.86 and 0.98, depending on cell types. On the other hand, both methods showed low sensitivities for all cell types, ranging between 0.29 and 0.36, possibly due to false negatives from OTHER sites (Fig. 2). In fact, when the OTHER sites (102,498 sites) were removed, the specificity and sensitivity improved to 0.90-0.99 and 0.47-0.72, respectively. Note that even among UP and DOWN sites, some sites cannot be correctly predicted in principle. For example, stem and progenitor cell specific/unique open regions are impossible to infer by phylogenetic analysis. When DNA methylation data were used (materials and methods were described in [10]), specificities and sensitivities ranged between 0.61-0.96 and 0.72-0.92, respectively, indicating better predictability, especially for sensitivity. This difference between DNA methylation and chromatin accessibility may reflect the dynamic nature of chromatin accessibility compared with DNA methylation [17], as discussed for Fig. 2.
Finally, the biological implications of the inferences based on this phylogenetic analysis were explored in two aspects. First, epigenetic marks for each site class (STABLE/DOWN/UP/OTHER) were examined (Additional file 4: Figure S4). Xiang et al. assigned "epigenetic states, which are common combinations of epigenetic features" [20] based on six histone modifications, CTCF binding, and nuclease accessibility of mouse hematopoietic cells. Based on this information, regions overlapping with these epigenetic states were analyzed for each site class of each myeloid cell (Neu, Mon, Ery, and iMK). As a result, DOWN sites exhibited a decrease, while UP sites exhibited an increase, in epigenetic state 9, indicating high levels of nuclease accessibility, as expected, except for UP sites of erythrocytes. Interestingly, OTHER sites showed a progenitor-specific elevation of this state, suggesting progenitor-specific gene regulation. In addition, an increase in epigenetic states with active promoter and enhancer signatures, such as 12 and 21 [20], was observed in the UP sites. Second, enrichment of transcription factor binding motifs was searched for in genomic regions that showed lineage-specific changes of open/closed chromatin states, as demonstrated by Xiang et al. [20]. There were 3072, 44, and 184 sites with a change from open to closed in the lineages from LSK to CMP, from CMP to MEP, and from CMP to GMP, respectively, and 521, 129, and 53 sites with a change from closed to open in the same three lineages, respectively, where no changes were observed in other myeloid cell lineages. When DNA motifs were searched in the most prominent 3072 sites with open-to-closed changes in the lineage from LSK to CMP, 12 DNA motifs were found to be statistically enriched in these regions (Additional file 5: Figure S5). Of these, four motifs, including Runx1 and IRF1 binding motifs, are involved in lymphoid cell lineage determination [24], which is consistent with the closed states at the branching point of the myeloid cell lineage. PBX2, NF1, NF-E2, CREB, and Tlx-1 are related to hematopoietic cells (e.g. [39][40][41][42][43]). Other motifs might contain candidates for further studies.
In summary, the present phylogenetic analysis of chromatin accessibility data could partially infer the cell differentiation hierarchy of murine hematopoiesis. The epigenomes of progenitor cells could be estimated with high specificity but with low sensitivity, possibly due to the characteristics of chromatin accessibility, which is closely related to gene expression [17] and reflects diverse cell types [19,20]. Changes in chromatin accessibility during cell differentiation include important changes involved in the divergence of cell lineages. Therefore, the results presented in this study suggest that the phylogenetic analysis of chromatin accessibility may provide additional information on cell differentiation.
Limitations
This study is based on murine hematopoiesis; thus, it is unclear whether the present findings are applicable to other species and/or cell types. Based on transcriptomic data, hierarchical differentiation has been observed for many cell types other than hematopoietic cells [44]; thus, it will be interesting to see whether the approach can be applied to other cell types. In addition, a traditional hierarchical differentiation of hematopoiesis was assumed in the present study (Fig. 1A). However, this model has recently been challenged by new evidence for a continuous model of hematopoiesis [21]. These issues need to be studied further in the future.
Fig. 1
Fig. 1 Inferred phylogenetic trees of hematopoietic cells. A Cells used in this study. Circles represent differentiating progenitor cells. Tree topology is based on the known hierarchical hematopoietic differentiation [20]. Inferred phylogenetic trees for all sites, which are 205,019 sites (B), and all sites without OTHER sites, which are 102,521 sites (C). Numbers on internal branches indicate bootstrap values. LSK was used as an outgroup
Fig. 2
Fig. 2 Classification of sites based on the changing pattern of open/closed chromatin states for each lineage. Each site was classified as STABLE, OTHER, DOWN, or UP according to the time-course changes of open chromatin signals for each lineage. STABLE includes both consistently open and closed sites. UP includes changes from closed to open, while DOWN includes changes from open to closed. Other sites are classified as OTHER
Fig. 3
Fig. 3 Prediction of open chromatin regions in differentiating progenitor cells. Black bars indicate sensitivity and white bars specificity. A and D represent ACCTRAN and DELTRAN, respectively
"Biology"
] |
A Deep Autoencoder-Based Convolution Neural Network Framework for Bearing Fault Classification in Induction Motors
Fault diagnosis and classification for machines are integral to condition monitoring in the industrial sector. However, in recent times, as sensor technology and artificial intelligence have developed, data-driven fault diagnosis and classification have been more widely investigated. The data-driven approach requires good-quality features to attain good fault classification accuracy, yet domain expertise and a fair amount of labeled data are important for better features. This paper proposes a deep auto-encoder (DAE) and convolutional neural network (CNN)-based bearing fault classification model using motor current signals of an induction motor (IM). Motor current signals can be easily and non-invasively collected from the motor. However, the current signal collected from industrial sources is highly contaminated with noise; feature calculation thus becomes very challenging. The DAE is utilized for estimating the nonlinear function of the system with the normal state data, and later, the residual signal is obtained. The subsequent CNN model then successfully classified the types of faults from the residual signals. Our proposed semi-supervised approach achieved very high classification accuracy (more than 99%). The inclusion of DAE was found to not only improve the accuracy significantly but also to be potentially useful when the amount of labeled data is small. The experimental outcomes are compared with some existing works on the same dataset, and the performance of this proposed combined approach is found to be comparable with them. In terms of the classification accuracy and other evaluation parameters, the overall method can be considered as an effective approach for bearing fault classification using the motor current signal.
Introduction
Rotating machinery is among the most pervasive and substantial components of the industrial sector. Whether the system is mechanical or electro-mechanical, one or more rotating machines are involved; examples include motors, generators, turbines, gearboxes, drive trains, and automobile and aircraft engines. Due to rapid industrialization and automation, the use of complex rotating machinery has increased considerably, which increases the chance that a fault generated in any single component will lead to multiple significant faults [1]. Among the various types of rotating machinery, induction motors (IMs) are the most commonly used because of their robust design, high productivity, reliability, and low cost [2]. In general, an IM needs to operate uninterrupted over a long time and under difficult operating scenarios. The operating conditions and unfavorable environment in many cases initiate different faults and may eventually lead to undesirable downtime, huge economic losses, and, in the worst case, human casualties [3]. To avoid these unwanted situations, the fault diagnosis mechanism has emerged as an important part of prognosis and health management (PHM) techniques. Research on the fault diagnosis of rotating machinery has recently become a very popular topic, and many significant breakthroughs have been achieved because of the speedy development of artificial intelligence. Designing a robust and accurate condition monitoring system can improve fault diagnosis by reducing maintenance costs, as well as by increasing reliability, productivity, and safety. It is very challenging to carry out this type of research properly in practical industrial circumstances because of the complex, changing, and noisy environments that exist around rotating machinery, which make it almost impossible to collect noise-free and accurate signals with proper fault information [4]. Statistical reports reveal that among the different parts of an IM, faults occurring in the bearing are a common phenomenon. Specifically, for small and large machines, the rate of bearing fault occurrence is approximately 90% and 40%, respectively [5]. Bearing faults can be initiated at the time of manufacture or during the period of operation.
Signals recorded from an IM may contain fault-specific information. In general, an impulse is initiated in the bearing fault signal when a rolling element strikes the defective contact surface, and due to the damped oscillation, it generates a sharp transient response. This recurrent transient response holds the necessary information about the bearing condition [6]. If an accurate analysis of the transient response can be performed, then an incipient fault in the bearing elements can be identified at an early stage, which may avoid unpredicted downtime and make industrial operation more efficient by averting heavy monetary losses.
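The recurrence rate of these impulses follows the classical bearing-defect frequencies, which depend only on shaft speed and bearing geometry. The sketch below computes them; the geometry values in the example are illustrative (they resemble a common deep-groove ball bearing), not parameters from this paper.

```python
import math

def bearing_fault_frequencies(fr, n, d, D, phi_deg=0.0):
    """Classical bearing fault characteristic frequencies (Hz).

    fr: shaft rotational frequency (Hz); n: number of rolling elements;
    d: rolling-element diameter; D: pitch diameter (same units);
    phi_deg: contact angle in degrees.
    """
    c = (d / D) * math.cos(math.radians(phi_deg))
    return {
        "BPFO": n * fr / 2 * (1 - c),           # outer-race defect
        "BPFI": n * fr / 2 * (1 + c),           # inner-race defect
        "FTF":  fr / 2 * (1 - c),               # cage (fundamental train)
        "BSF":  D * fr / (2 * d) * (1 - c**2),  # rolling-element (ball spin)
    }

# Illustrative geometry: 9 balls, d = 7.94 mm, D = 39.04 mm, ~1797 rpm shaft.
for name, freq in bearing_fault_frequencies(fr=29.95, n=9, d=7.94, D=39.04).items():
    print(f"{name}: {freq:.1f} Hz")
```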
In general, model-based [7,8] and data-driven [9,10] techniques are considered the basic fault diagnosis mechanisms. Model-based fault diagnosis identifies faults using a small dataset, but it needs to model the system's dynamics accurately, which is difficult or impossible in highly nonlinear and uncertain conditions. Additionally, recent technology ensures the availability of sensors for collecting various types of signals from machines for use in fault monitoring systems. The availability of abundant data has made the data-driven approach very popular in the fault diagnosis field. The basic steps of the data-driven method are the acquisition of multiple state signals, extracting and selecting important features carrying fault signatures, and finally classifying faults with machine learning algorithms [11][12][13]. Among these steps, the feature selection and extraction process is laborious and time-consuming; it requires not only deep knowledge of advanced signal processing techniques but also a great understanding of the working process and fault signals of the IM system [14]. Generally, vibration [15,16], current [17], acoustic emission [18,19], electromagnetic signals [20], and thermal imaging [21] are the signal types applied to diagnose faults. Therefore, it is very important to investigate and build a relationship between the recorded signals and the corresponding types of bearing defects.
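To make the feature extraction step concrete, below is a small sketch of common time-domain features computed per signal window; the chosen feature set and the synthetic test signal are illustrative assumptions, not the features used in this paper.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def time_domain_features(x):
    """Common time-domain features used in data-driven bearing diagnosis."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x**2))
    return {
        "rms": rms,
        "peak": np.max(np.abs(x)),
        "crest_factor": np.max(np.abs(x)) / rms,
        "kurtosis": kurtosis(x),
        "skewness": skew(x),
        "std": np.std(x),
    }

# Hypothetical one-second window of a 60 Hz current signal sampled at 12 kHz.
t = np.linspace(0, 1, 12000, endpoint=False)
signal = np.sin(2 * np.pi * 60 * t) + 0.05 * np.random.randn(t.size)
print(time_domain_features(signal))
```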
The fault signatures initiated in the bearings of an IM mainly depend on the fault-specific harmonic frequencies generated by inner race, outer race, cage, or rolling element faults, the rotational speed of the rotor, and the geometrical dimensions of the bearing. Single-point fault localization and diagnosis of faults in the entire bearing can be investigated with the additional installation of vibration sensors (accelerometers) [22]. The installed sensor can be difficult to access when located in a remote place, contributing to the overall costliness of the fault diagnosis process. To deal with these difficulties, fault diagnosis with motor current signal analysis (MCSA) has been proposed as an alternative by many researchers, as it does not require any additional sensors mounted around the bearing, which makes the data acquisition system non-invasive in nature and very cost-effective. Martínez-Montes et al. applied MCSA to measure the fault severities of two different bearing elements, the cage and the rolling elements, in a 5.5 kW IM [23]. Along with bearing faults, eccentricity and broken rotor bar fault detection were also investigated by estimating the fault-related frequency and applying a matched subspace detector [24]. Techniques such as the wavelet transform and the short-time Fourier transform were also studied in [25][26][27] for fault detection from a stator current signal. Multiple signature analysis, such as the combination of MCSA and stray flux analysis, was introduced in [28], and it was found to detect mechanical faults in IMs with good precision. Furthermore, inner race faults were diagnosed by current, vibration, and stray flux signals in [29], where a 4 kW IM was involved in the experiment. From the comparative analysis, the researchers concluded that in the case of a mechanical fault, MCSA is more sensitive than stray flux, whereas in the case of misalignment and eccentricity, stray flux becomes more sensitive.
Fault diagnosis from raw sensor data becomes computationally inefficient since the size of the original signal is generally large and the whole signal stream might not be adequate for fault identification. To reduce dimensionality, different signal processing techniques are applied to extract useful features in the time, frequency, and time-frequency domains. The extracted features are then fed as input to various machine learning (ML) and deep learning (DL) methods for fault diagnosis. Machine learning approaches used for bearing fault classification include the support vector machine (SVM), K-nearest neighbors (KNN), the gradient boosting decision tree (GBDT), random forest (RF), principal component analysis, and artificial neural networks [30][31][32][33]. These approaches require a fair amount of historical fault signature data (online datasets or laboratory measurements) to train the model.
With the availability of high-volume data, the performance of DL approaches has improved. DL approaches have been successfully implemented in various research areas such as speech recognition, object detection, image classification, and fault diagnosis [34]. An overview of the strong performance of the convolutional neural network (CNN) in fault diagnosis can be found in [35]. The main difference between classical ML and DL methods is that the accuracy of a classical ML model largely depends on an appropriate feature extraction and selection approach, whereas DL inherits an automatic feature extraction ability. Although the combination of signal processing techniques and classical ML/DL algorithms can also extract valid features and classify faults very efficiently, the performance largely depends on supplying a suitable amount of training samples.
In the industrial environment, it is quite difficult to collect a huge amount of noiseless or low-noise data with correct labeling. Other subtler issues also exist, such as training and testing samples that do not exhibit the same distribution, a deficit in the amount of training data, and samples contaminated with different types of noise at different instants of the recording [36]. Generally, in the real industrial sector, the collected bearing data are not labeled during acquisition [37]; labeling is performed afterward, when its correctness cannot be fully confirmed. The lack of accurately labeled data creates challenges for the supervised learning methods most commonly applied in the field of fault diagnosis. Furthermore, the complexity among different data conditions, the time delay in acquiring raw signals, and imbalances among different fault samples are other technical challenges of the supervised fault diagnosis process [38].
For the challenges discussed above, it is sometimes beneficial to use unsupervised learning approaches. The autoencoder (AE) is considered one of the promising unsupervised methods that can effectively learn features from unlabeled data. The AE is also applied as a prominent dimensionality reduction mechanism in rotating machinery fault diagnosis systems [39,40] due to its high efficiency and ease of implementation. In fault diagnosis problems, the AE model is trained with the normal-state data only, and it learns discriminative features, so it can be considered a feature extraction mechanism for estimating the system state. Several extensions of the conventional AE exist. Among them, denoising AEs help to learn features that differentiate states in highly noisy systems [40]. Another form of AE, the sparse autoencoder (SAE), has also been implemented effectively in fault diagnosis systems by many researchers [14,41,42]. A combination of the SAE and a deep belief network (DBN) has been applied to fuse multi-sensor features for diagnosing bearing faults [43]. The variational AE (VAE) method has also been applied for bearing fault diagnosis, in combination with deep generative models [44] and by using separate latent variables for each health state [45]. In addition, fault mode identification was performed by Huang et al. using a recurrent neural network (RNN)-based VAE, where the model preserves high-dimensional data information [46]. In [47], a convolutional sparse autoencoder was designed for image recognition, where the convolutional autoencoder (CAE) was utilized for specifying the feature maps of the input. The ability of the AE to achieve good performance with a small amount of data makes it a reasonable alternative to the CNN, which requires huge amounts of data to perform well. Many researchers have combined the AE and CNN in different fields, such as fault diagnosis and image classification with limited amounts of data [40,48,49].
However, the CNN was originally designed to deal with 2-D images and large sample sizes. The one-dimensional CNN (1-D CNN) has been developed in recent times to deal with one-dimensional signals and has achieved good performance in terms of accuracy and computational time [50]. Researchers have applied the 1-D CNN to real-time fault diagnosis, where it not only achieved high performance but also eliminated the manual feature extraction phase [50][51][52][53].
In this paper, a novel bearing fault diagnosis approach is presented based on a deep autoencoder (DAE) and a 1-D CNN, using motor current signal analysis (MCSA), to address the drawbacks mentioned for supervised algorithms. Here, the DAE technique substitutes for a traditional two-state control approach in approximating the behavior of the signal. First, the DAE is trained on normal-state MCSA data and learns a latent coding, which denotes the equivalent nonlinear function of the bearing considering only the normal operating state. After model training with the normal-state data, a current signal of unknown condition is provided as input to the DAE model, and, using the latent coding learned from the normal data, an estimate of the new unknown signal is produced. Then, the residual signal is calculated as the difference between the real signal and the DAE's estimate of the unknown-state data, and its mean squared error (MSE) is computed. The residual signals generated by the DAE for different conditions act as an indication of the dissimilarities among the different faulty signals, and these discriminative features help to improve the accuracy of the fault classification approach. Finally, a 1-D CNN is designed with the residual signals as input to classify multiple types of bearing faults.
The major contributions of this work can be summarized as follows.
1. A novel data-driven approach based on a DAE and a 1-D CNN is presented using MCSA to investigate multiple fault states in the bearing of an induction motor.
2. An unsupervised DAE-based approach is introduced for the initial identification of faulty and normal-state current signals for induction motors.
3. The fault diagnosis model is evaluated on a publicly available current signal dataset, and the final findings are compared with some previous works on the same dataset.
The rest of the paper is organized as follows. The experimental setup and data collection details are provided in Section 2. The details of the data segmentation and overall structure of the proposed model are demonstrated in Section 3. The experimental results of the proposed model and comparison with existing works on the same dataset are provided in Section 4. Finally, Section 5 includes the conclusion.
Experimental Setup and Data Acquisition
The current signal of the IM used in this work was obtained from the Kat-Data Center contributed by the Mechanical Engineering research center of Paderborn University, Germany [54]. In addition to the current signal, this dataset also contains vibration signal, temperature, torque, speed, and radial load measurements for different operating conditions of the IM. The test rig was composed of an IM, a torque-measurement shaft, a bearing module, a flywheel, and, finally, a load motor (Figure 1). Here, a frequency inverter with a switching frequency of 16 kHz was used to operate the 425 W permanent magnet synchronous motor (PMSM). The models used for the PMSM and the frequency inverter are Type SD4CDu8S-009, Hanning Elektro-Werke GmbH and Co. KG, and KEB Combivert 07F5E 1D-2B0A, respectively [54].
In the data acquisition phase, a total of 32 different experimental bearings were involved: 6 normal bearings, 12 bearings with artificially induced damage, and 14 bearings damaged in accelerated lifetime tests. The artificial damage was created using three methods: drilling, manual electric engraving with a damage length of 1-4 mm, and electric discharge machining. The appropriate directions and geometrical sizes of the cracks in the bearings were assigned according to the VDI 3832 (2013) standard. For the accelerated lifetime tests, plastic deformation, pitting, and fatigue damage techniques were applied to produce inner race and outer race faults. When injecting faults into bearings, the fault measurements (bearing geometry, fault location, damage size) followed the ISO 15243 (2010) standard, which makes the overall data acquisition process more reliable. Additionally, to make the overall data acquisition process robust and acceptable, different faults with a wide range of severity levels were tested several times under the various operating conditions given in Table 1. With a current transducer of model LEM CKSR 15-NP, the current signal of two different phases was measured for each of the operating conditions mentioned above. Among the 32 different signals available in the dataset, data from 17 bearings with 3 different conditions were considered in this analysis. There are 20 measurements for each of the bearings listed in Table 2, where each instance holds 4 s of recording. The signal is passed through a 25 kHz low-pass filter and then sampled at a rate of 64 kHz. For our analysis, we divided each 4 s recording into 1 s segments, which makes the data dimension for our final analysis 1320 × 64,000 for three different classes, named normal (class 0), outer race fault (class 1), and inner race fault (class 2).
Materials and Methods
A framework of the overall methodology for classifying three different bearing conditions using the current signal of the IM is presented in Figure 2. In the beginning, a deep autoencoder (DAE) is trained only with the normal-state bearing data to generate a nonlinear function approximation of the system. After that, the residual signal is generated as the difference between the original current signal and the current signal estimated by the DAE using the nonlinear approximation learned from the normal state of the system. In this case, the anomaly detection mechanism of the autoencoder is applied to identify the deviations of the faulty signals from the original signal by generating a residual signal. In the last stage, the discriminative nature of the residual signal under different bearing conditions is considered a representation of the individual bearing states of the system and is applied as the input of the CNN to classify three different bearing conditions, one normal and two faulty, in the IM.
Bearing Fault Frequencies
Rolling element bearings (REBs) are generally used to make the rotor operation smooth by reducing friction, and they often have to operate for a long time under heavy load conditions. Because of this, the bearing fault is the most frequently occurring fault in IMs, and it requires thorough monitoring to avoid damage that can hamper the whole industrial operation. An REB contains four basic elements: the inner race, the outer race, the cage, and the rolling elements. In a bearing, the outer and inner rings are mounted on a rotating shaft, whereas the rolling elements are placed in a closed cage at equal distances from one another. Different types of faults, such as pitting or flaking, can be generated in these elements due to adverse operating conditions such as improper installation and lubrication, material fatigue, and contamination of the lubricating materials. Single-element faults, such as outer race, inner race, or roller faults, occur most often, but multiple faults can also be generated simultaneously in various elements. In this work, we consider the motor current signal for two faulty conditions (Figure 3) along with the normal bearing condition. When a fault develops during operation and the roller passes across the defect point in every rotation, a shock impulse is created having a characteristic defect frequency. The damage frequencies of the different elements can be calculated from the geometric parameters of the bearing and the rotational speed with the help of Equations (1)-(4):
Inner race fault frequency: f_inner = (N_ball / 2) · f_m · (1 + (D_ball / D_cage) · cos β) (1)
Outer race fault frequency: f_outer = (N_ball / 2) · f_m · (1 − (D_ball / D_cage) · cos β) (2)
Roller fault frequency: f_roller = (D_cage / (2 · D_ball)) · f_m · (1 − (D_ball / D_cage)² · cos² β) (3)
Cage fault frequency: f_cage = (f_m / 2) · (1 − (D_ball / D_cage) · cos β) (4)
Here, N_ball is the number of rolling elements (balls), D_ball is the ball diameter, D_cage is the cage diameter, β is the contact angle of the balls, and f_m represents the rotational frequency.
When a bearing fault occurs, a radial displacement is created between the stator and the rotor, and oscillations as well as fault frequencies are generated in the current signals because of the radial motion. The load torque and the rotating eccentricity then develop fluctuations that result in variations of the inductance values and cause amplitude, frequency, and phase modulation. The resulting stator current components due to the occurrence of bearing faults can be expressed in terms of the phase angle φ and the angular velocity ω_Ck. Here, p is the pole pair number of the operating machine, and f_bearing indicates the harmonic frequency of the current signal, which can be written as f_bearing = |f_s ± m · f_v|, where f_s and m denote the supply frequency and the harmonic index, respectively, and f_v can be either f_inner or f_outer. Therefore, by applying a frequency auto-search algorithm, an approximation of the fault frequencies becomes possible [55]. In some cases, the harmonics generated due to the bearing fault and the noise frequencies become almost similar, which later creates a problem in differentiating the actual fault frequencies [56].
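To make the use of Equations (1)-(4) concrete, the short sketch below computes the characteristic fault frequencies from the bearing geometry. It is a minimal illustration assuming the standard kinematic formulas stated above; the geometric values in the example call are placeholders, not the actual parameters of the test bearing.

```python
import math

def bearing_fault_frequencies(n_balls, d_ball, d_cage, beta_deg, f_m):
    """Characteristic defect frequencies (Hz) following Equations (1)-(4).

    n_balls : number of rolling elements
    d_ball  : ball diameter
    d_cage  : cage (pitch) diameter, same unit as d_ball
    beta_deg: contact angle of the balls in degrees
    f_m     : rotational frequency of the shaft in Hz
    """
    r = (d_ball / d_cage) * math.cos(math.radians(beta_deg))
    f_inner = 0.5 * n_balls * f_m * (1 + r)                   # Eq. (1)
    f_outer = 0.5 * n_balls * f_m * (1 - r)                   # Eq. (2)
    f_roller = (d_cage / (2 * d_ball)) * f_m * (1 - r ** 2)   # Eq. (3)
    f_cage = 0.5 * f_m * (1 - r)                              # Eq. (4)
    return f_inner, f_outer, f_roller, f_cage

# Placeholder geometry (not the test bearing): 8 balls, 1500 rpm shaft speed.
print(bearing_fault_frequencies(n_balls=8, d_ball=6.75, d_cage=28.5,
                                beta_deg=0.0, f_m=1500 / 60))
```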
The representation of the three different conditions of the current signals in the time domain is given in Figure 4, where all the signals exhibit only subtle differences if we consider the zoomed view. Envelope analysis is considered effective in analyzing the fault frequencies of different bearing fault conditions [57]. In Figure 5, the envelope spectra of the three different conditions are presented to exhibit the supply and corresponding fault frequencies. Here, the supply frequency (100 Hz) is visible for all conditions. However, for the inner and outer fault conditions, the envelope spectra of the current signal do not show a peak at the inner and outer fault frequency harmonics for all instances. The absence of fault indications in the current signal is due to the damped signal condition, the presence of noise and multiple disturbances, and the indirect transmission of the fault signatures through torque variations in the drive train. Therefore, feature extraction from the current signal becomes difficult and challenging, which makes the development of efficient feature learning approaches essential for diagnosing bearing faults with a current signal [58].
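As a rough sketch of how such an envelope spectrum can be obtained, the snippet below applies the Hilbert transform to one 1 s segment sampled at 64 kHz. The signal `x` is a synthetic stand-in, amplitude-modulated at a hypothetical 80 Hz fault frequency, rather than data from the actual dataset.

```python
import numpy as np
from scipy.signal import hilbert

fs = 64_000                   # sampling rate used in this work (Hz)
t = np.arange(fs) / fs        # one 1-s segment
# Stand-in for a measured current segment: a 100 Hz supply component whose
# amplitude is modulated at a hypothetical 80 Hz fault frequency.
x = (1 + 0.5 * np.sin(2 * np.pi * 80 * t)) * np.sin(2 * np.pi * 100 * t)

envelope = np.abs(hilbert(x))      # envelope via the analytic signal
envelope -= envelope.mean()        # remove the DC offset before the FFT
spectrum = np.abs(np.fft.rfft(envelope)) / len(envelope)
freqs = np.fft.rfftfreq(len(envelope), d=1 / fs)

# A peak near a characteristic fault frequency indicates a defect.
print(freqs[np.argmax(spectrum)])  # ~80 Hz for this synthetic signal
```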
Data Segmentation
In an industrial setup, the collection of a large-scale dataset with proper labeling is time-consuming and laborious and makes the overall system design too complex. At the same time, deep learning-based methods need high-dimensional training data to learn efficiently. In addition, when a 1-D signal is used in a convolutional neural network, the input data dimension influences the overall architecture of the model: as the input shape increases, the number of input nodes and hidden layers also increases. Such a large and deep structure may provide good performance, but it also requires a large amount of time for the model to learn, and there is a risk of overfitting. To resolve this issue and organize the data meaningfully, a resampling mechanism is applied to the different states of the current signal before the autoencoder, which prepares a sequence of frames. Each frame contains the same number of data points, collected over one revolution period. The three steps below are followed for the data segmentation before using the data as the input of the autoencoder; a small code sketch follows the list.
1. Determine the number of rotations of the bearing per second; the number of revolutions accomplished in one second (RPS) can be estimated with the formula stated below:
RPS = (rotating speed in rpm) / 60 (7)
2. Determine the time required for one complete rotation (TOR) as
TOR = 1 / RPS (8)
3. Finally, the total number of data points recorded during one revolution can be found by Equation (9):
F_frame_size = f_sampling × TOR (9)
Here, f_sampling and F_frame_size represent, respectively, the sampling frequency and the length of each frame of the resampled signal in terms of the number of data points. Thus, when the speed of the bearing rotation is 1500 rpm, the parameters in Equations (7)-(9) are calculated as RPS = 25, TOR = 0.04 s, and F_frame_size = 2560 for 1 s of data.
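A minimal sketch of this resampling step is given below, assuming a 1 s recording held in a NumPy array; the numbers follow Equations (7)-(9) for the 1500 rpm case.

```python
import numpy as np

def segment_by_revolution(signal, rpm, f_sampling=64_000):
    """Split a current signal into frames of one shaft revolution each."""
    rps = rpm / 60.0                      # Eq. (7): revolutions per second
    tor = 1.0 / rps                       # Eq. (8): time of one rotation (s)
    frame_size = int(f_sampling * tor)    # Eq. (9): samples per revolution
    n_frames = len(signal) // frame_size  # drop any trailing partial frame
    return signal[: n_frames * frame_size].reshape(n_frames, frame_size)

one_second = np.random.randn(64_000)      # stand-in for a 1-s current segment
frames = segment_by_revolution(one_second, rpm=1500)
print(frames.shape)                       # (25, 2560) at 1500 rpm
```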
Deep Autoencoder (DAE)
The autoencoder, based on a deep neural network, is considered one of the most robust unsupervised learning models of the last few decades. With an unsupervised model, it becomes possible to extract effective and discriminative features from a huge unlabeled dataset, which makes this approach widely applicable for feature extraction and dimensionality reduction [36]. Basically, an autoencoder consists of a fully connected three-layer neural network, where the encoder contains the input and hidden layers and the decoder comprises the hidden and output layers. The encoder transfers the higher-dimensional input data into a lower-dimensional feature vector, after which the decoder converts the data back to the input dimension. One of the main strengths of the deep neural network is its ability to build complex nonlinear relationships among the input data, which also helps the autoencoder to effectively reconstruct the output of the decoder. Therefore, the reconstruction error decreases over the training period, and significant features are stored in the hidden layer. Finally, the hidden layer output depicts the efficiency of the feature extraction of the designed autoencoder. Figure 6 represents the configuration of the basic autoencoder.
For n-dimensional input data samples, X = [x_1, x_2, ..., x_n], the output (activation) of the hidden layer h with dimension m (m < n) can be calculated as Equation (10):
h = f_h(W^(1) X + b^(1)) (10)
Here, W^(1), b^(1), and f_h represent the weight matrix connecting the input and hidden layers, the bias vector, and the activation function, respectively. After the decoding process, the reconstructed signal x̂ at the output layer can be expressed as:
x̂ = f_o(W^(2) h + b^(2)) (11)
Here, W^(2) and b^(2) represent the weight matrix and the bias vector of the output layer. The activation function for both the encoder and decoder parts is generally set as a sigmoid function, f(t) = 1 / (1 + e^(−t)), or any other activation function depending on the data type. The training process begins with some initial values of the weights and biases. During training, the parameters are adjusted to minimize the reconstruction error between the original input data and the reconstructed output. The reconstruction error is quantified by the mean squared error (MSE), as in Equation (12), which is applied in our analysis:
MSE = (1/n) · Σ_{i=1..n} (x_i − x̂_i)² (12)
In other cases, if the input values lie between 0 and 1, the binary cross-entropy loss is calculated as the reconstruction error with Equation (13):
L = −Σ_{i=1..n} [x_i · log(x̂_i) + (1 − x_i) · log(1 − x̂_i)] (13)
In this analysis, a deep autoencoder (one that uses more than one hidden layer) is applied to find an approximation of the normal state of the bearing; its architecture is provided in Table 3. The scaled exponential linear unit (SELU) is applied as the activation function for both the hidden and output layers of the DAE. The recorded current signal has both positive and negative values; since SELU is a non-saturating activation function, it is a good choice for this type of signal, and it also tackles the vanishing gradient problem that occurs in deep network architectures. Additionally, the self-normalizing properties of SELU help to make the training process fast by making the deep neural network converge quickly. The SELU can be defined as Equation (14):
SELU(x) = λ·x for x > 0; λ·α·(e^x − 1) for x ≤ 0 (14)
Here, the coefficient values for λ and α are set to approximately 1.05 and 1.6731, respectively, according to [59]. In this work, the optimizer used for updating the weights is adaptive moment estimation (Adam). This optimization technique has become very well known because of its ability to memorize exponentially decaying averages of prior gradients as well as prior squared gradients of the loss function [60].
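The sketch below shows what a DAE of this kind could look like in Keras, with SELU activations, the MSE objective of Equation (12), and the Adam optimizer, as described above. The layer widths are illustrative assumptions; the actual architecture is defined in Table 3, which is not reproduced here.

```python
from tensorflow.keras import layers, models

FRAME = 2560  # samples per revolution, from Equation (9)

def build_dae():
    inp = layers.Input(shape=(FRAME,))
    # Encoder; the widths here are illustrative, the real ones are in Table 3.
    h = layers.Dense(512, activation="selu")(inp)
    h = layers.Dense(128, activation="selu")(h)
    code = layers.Dense(32, activation="selu")(h)   # latent coding
    # Decoder mirrors the encoder back to the input dimension.
    h = layers.Dense(128, activation="selu")(code)
    h = layers.Dense(512, activation="selu")(h)
    out = layers.Dense(FRAME, activation="selu")(h)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")     # Eq. (12) as the objective
    return model

dae = build_dae()
# Training uses normal-state frames only, e.g.:
# dae.fit(x_normal, x_normal, epochs=500, validation_split=0.2)
```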
Generation of the Residual Signal
Generating the residual signal with the designed autoencoder and applying it as the input of the CNN is one of the most significant parts of this research. In the training phase, normal-state bearing data are fed to the DAE, and it learns the nonlinear behavior of the system. Once the training phase is over, current signals for different states of the bearing are provided as input to the DAE. In this case, all the signal instances are different from those used in the training phase. In response, the DAE provides an estimated signal for each instance of the input signal. Next, the difference between the original input signal and the estimated signal from the DAE is calculated, which is known as the residual signal. The residual signal rx(n) can be calculated as [40]:
rx(n) = x(n) − x̂(n) (15)
Here, x(n) and x̂(n) represent the raw time-domain motor current signal and the signal reconstructed by the model developed with the normal-state data, respectively.
Finally, we arranged the residual signals for the three different conditions of the motor current signal, normal bearing, outer race fault, and inner race fault, and used them as the input of the 1-D CNN in the next step to perform fault classification.
The reconstruction error will differ for signals of different bearing states. The DAE model is trained with normal bearing-state data, so when it estimates a normal signal instance, the reconstruction error will be small. However, when the DAE estimates a current signal that contains a fault signature, the reconstruction error will very likely deviate significantly from the previous case. Generally, the various types of faulty-condition signals contain different characteristics and amplitude levels, and the related statistical parameters also vary with the signal amplitude. As a result, the difference between the time-domain faulty-state signals and the signal estimated by the DAE (the residual signal) will also vary depending on which type of fault is present. For this reason, the computed residual signal can be used as a discriminative feature not only to detect the present condition of a system but also to improve the fault classification performance for IM bearings.
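A minimal sketch of the residual computation in Equation (15) follows, assuming `dae` is the trained model from the earlier sketch and `frames` holds segmented current data of an unknown state:

```python
import numpy as np

def residual_signal(dae, frames):
    """Residual rx(n) = x(n) - x_hat(n), following Equation (15)."""
    estimated = dae.predict(frames)        # x_hat from the trained DAE
    residual = frames - estimated          # discriminative residual signal
    mse = np.mean(residual ** 2, axis=1)   # per-frame reconstruction error
    return residual, mse

# Larger MSE values are expected for faulty-state frames than for normal ones.
```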
Convolution Neural Network (CNN)
A convolutional neural network (CNN) is a deep learning-based supervised algorithm that combines feature extraction and feature classification. In general, the CNN is a feed-forward deep network having full connectivity through the adjoining layers, and it performs better in comparison with other general supervised techniques. The ability to automatically learn high-dimensional features and to mitigate the overfitting problems of ML approaches makes the CNN a very effective technique in large-scale applications. The CNN is built from an input layer, multiple convolution layers, pooling layers, a fully connected layer, and an output layer. Each layer performs a distinct role, carried out automatically within the architecture [61]. The additional inclusion of optimization parameters, dropout layers, and batch normalization helps the CNN to decrease its dependency on the training data [62].
The original time-series data or the images of interest are passed from the input layer to the convolution layer. The heaviest computation occurs in the convolution layer, where a set of feature maps is generated. In each convolutional layer, a kernel having a local receptive field is used to perform convolution operations with the input data. After that, a bias term is added, and the result passes through a nonlinear activation function, such as the rectified linear unit (ReLU), to generate the output feature map, which acts as the input for the subsequent convolutional layer. ReLU is the most used activation function due to its ability to make the nonlinearity of the CNN quite high. The convolution operation can be defined as Equation (16):
X_j^l = ReLU( Σ_i X_i^(l−1) * ω_ij^l + b_j^l ) (16)
Here, the ReLU operation can be calculated as Equation (17):
ReLU(x) = max(0, x) (17)
Here, X_j^l and X_i^(l−1) represent the output and input of the convolution layer, with the i-th input feature map and the j-th output feature map. Furthermore, l indicates the layer number, ω_ij^l represents the weight matrix, b_j^l is the bias matrix, and ReLU represents the activation function.
After the convolution layer, a down-sampling layer is added to merge similar types of features, which reduces the size of the feature map and the computation time while maintaining the same invariance in the characteristic scale. The pooling layer thus reduces the data dimension without updating the weights of the parameters. It is important to set the stride parameter carefully, as it plays an important role in this layer in reducing the resolution while preserving the numerical information. Max pooling, average pooling, norm pooling, logarithmic pooling, and stochastic pooling are different pooling approaches used in CNNs. The output of the pooling layer for the j-th channel of a feature of length t can be expressed as:
P_j(n) = max( X_j(nW), ..., X_j((n + 1)W) ), 0 ≤ n ≤ t/S (18)
where X_j, W, and S represent the input, the width of the pooling window, and the stride size, respectively. After the multiple stacks of convolution and pooling layers, the outcome is transferred to the final stage of the CNN, named the fully connected layer. This layer utilizes the output of the last pooling layer to predict the classes of the provided data. The input of this layer is generated by performing a weighted summation over a one-dimensional feature matrix expanded from all feature maps. The output y_i of this layer can be expressed as:
y_i = w_i · x_(i−1) + b_i (19)
Here, x_(i−1), w_i, and b_i are the one-dimensional feature vector, the weight matrix, and the bias, respectively. For a classification problem, the output of the fully connected layer is a probability for each class or category, computed by a softmax activation function. The categorical cross-entropy loss function is used to calculate the gradients, which are utilized to update the weights in the training phase; it is expressed as follows:
L = −Σ_i Σ_k y_k^i · log(P̂_k) (20)
Here, y_k^i and P̂_k represent, respectively, the target and estimated probabilities for the i-th instance in the dataset with output class label k. The Adam optimizer used for the autoencoder is also used for training the CNN.
The architecture of the CNN model applied in this work is shown in Figure 7; it connects the input layer to two convolution and max-pooling layers and, finally, to one fully connected layer and a three-node output layer. The outline of the CNN model used in this fault classification task is presented in Table 4, where the details of the layer types, the output shapes, and the total number of parameters are included.
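A sketch of the two-layer 1-D CNN described above is shown next. The filter counts and kernel sizes are assumptions standing in for the exact values of Table 4, which is not reproduced here.

```python
from tensorflow.keras import layers, models

def build_cnn(frame_len=2560, n_classes=3):
    model = models.Sequential([
        layers.Input(shape=(frame_len, 1)),   # residual signal as input
        layers.Conv1D(32, kernel_size=16, activation="relu"),
        layers.MaxPooling1D(pool_size=4),
        layers.Conv1D(64, kernel_size=8, activation="relu"),
        layers.MaxPooling1D(pool_size=4),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),  # fully connected layer
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",  # Eq. (20)
                  metrics=["accuracy"])
    return model
```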
Fault Classification Performance Evaluation Parameters
As our proposed approach classifies bearing faults from the motor current signal, we used the evaluation parameters commonly used for classification problems, namely precision, recall, F1-score, and accuracy. These parameters can be calculated using Equations (21)-(24):
Precision = TP / (TP + FP) (21)
Recall = TP / (TP + FN) (22)
F1-score = 2 · (Precision · Recall) / (Precision + Recall) (23)
Accuracy = (TP + TN) / (TP + TN + FP + FN) (24)
Here, TP, TN, FP, and FN denote the numbers of true positive, true negative, false positive, and false negative predictions, respectively.
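These metrics can be computed directly, for example with scikit-learn; a short sketch assuming `y_true` and `y_pred` are the integer class labels of the test set:

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

def evaluate(y_true, y_pred):
    # Macro averaging treats the three bearing classes equally.
    return {
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall": recall_score(y_true, y_pred, average="macro"),
        "f1": f1_score(y_true, y_pred, average="macro"),
        "accuracy": accuracy_score(y_true, y_pred),
    }
```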
Experimental Results and Discussion
The proposed fault classification method is a pipeline of two steps: the first performs the approximation of the nonlinear function, and the second represents the decision-making process for classification. The nonlinear function approximation was performed by a DAE, which was trained only with the normal-state data of the bearing; using this model, the residual signals were obtained for one normal state and two different faulty states.
Initially, we started with one second of the current signal recording consisting of 64,000 samples and, after performing signal segmentation, obtained 2560 samples per frame. At first, only the segmented normal-condition samples of the motor current signal were considered for training the DAE. The same signal was then reconstructed with the trained model, and from the difference between these two signals, the residual signal (also known as the reconstruction error) was generated and used as the input of the CNN in the next step. In this training process of the DAE, the segmented data were divided into training and test sets with an 80:20 ratio; therefore, 2048 samples were used to train the DAE model, and the remaining 512 samples were used for testing purposes to calculate the validation error. After training the DAE model for 500 epochs, each 1 s data segment (64,000 samples) of the normal and faulty conditions was used to generate the residual signal. In the end, these residuals were applied as the input of the CNN, which served as the decision-making part in classifying the bearing conditions. Since the DAE model is trained initially with normal-condition data, the residual signal generated from normal-state data is very low in magnitude, with most values clustered around zero. Such a small-magnitude residual signal indicates a very small reconstruction error, as the original data and the data predicted by the DAE nearly resemble each other. Generally, when a fault occurs in a system, the faulty data show some deviation from the normal-state data. Hence, when the residual signal is generated for signals of a faulty condition, the amplitude of the reconstruction error also becomes high compared with the residual signal of the normal-state data. We calculated the reconstruction error as the mean squared error and obtained the values 0.104, 0.386, and 0.479 for the normal, outer race fault, and inner race fault data, respectively. The raw signal, the corresponding signal predicted by the DAE, and the residual signal for the three different conditions are presented in Figure 8a-c. Additionally, 100 sample segments of the residuals obtained for each signal type are plotted in Figure 8d to provide a clear visualization of the differences among the normal and faulty conditions in terms of the residual signal. After obtaining the residual signals for all three conditions, 80% of these residual signal samples were used to train the subsequent CNN model, and the remaining 20% were used for the testing phase. While training the CNN model, 64 samples were grouped into data batches, and the training process was continued for up to 500 epochs. We utilized the 2-layer CNN described in Section 3.5 and, by optimizing the model parameters, achieved 99.6% accuracy on the test dataset.
To validate the performance of our proposed method, we compared the result with some partially modified approaches. For the first one, we kept the CNN model unchanged and used the original segmented current signal as the input (Raw + CNN) to investigate whether the nonlinear signal approximation by the DAE plays a significant role in improving model performance. The accuracy obtained by Raw + CNN was 61.06%, which is significantly lower than that of the proposed method. We also wanted to investigate whether familiar machine learning algorithms could perform a good classification of the faults. For this purpose, a group of statistical features (SF) listed in Table 5 was extracted from the residual signal, and these features were then used with the support vector machine (SVM), random forest (RF), and k-nearest neighbor (KNN) individually. These three approaches are denoted RS + SF + SVM, RS + SF + RF, and RS + SF + KNN, respectively, in Table 6. Table 5. List of statistical features extracted from the residual signal.
The extracted statistical features are the RMS, kurtosis, variance, skewness, crest factor, and form factor. Finally, the evaluation parameters of all the mentioned approaches are listed in Table 6, and it is evident from the results that the proposed methodology, with an accuracy score of 99.6%, outperformed the other discussed methods.
To observe the repeatability of our proposed model, all the experiments were executed 100 times, and the resulting accuracy distribution is provided as box plots in Figure 9a. The confusion matrix of our proposed method is presented in Figure 9b, where only one sample is incorrectly classified by the designed CNN. From the boxplot representation, the accuracy of our proposed method did not vary significantly from the mean and median values throughout the experiments, which confirms the repeatability of the outcome. Moreover, the other approach that involves only the CNN architecture also showed a small deviation in accuracy, and no outliers were observed; its accuracy is somewhat higher than that of the ML classifiers but much lower than that of our proposed method. Therefore, the inclusion of an autoencoder-based approach with the CNN helps to achieve high accuracy where signals have nonlinear characteristics. In general, the time-domain current signal possesses non-stationary characteristics, and the statistical properties of data belonging to the same class may vary with time, which results in less discriminative features among the classes. For this reason, when the raw signal is directly set as the input of the CNN, the model fails to assign proper weights in the training phase to accomplish good accuracy on the testing samples. On the other hand, the three approaches involving characterization of the residual signal with statistical features show high deviation from the mean and median values and a large number of outlier samples, and they result in comparatively low accuracy in fault classification with the current signal.
Finally, we made a comparison with some other existing approaches in which the same dataset was used for bearing fault classification. An information fusion (IF) technique was investigated by Hoang and Kang [58], and three different classifier algorithms were applied for fault classification with the motor current signal. In their approach, they first convert the current signal into a 2-D gray image and then apply a 2-D CNN to classify the images; in their CNN structure, the number of convolution and pooling layers is four. In our analysis, by contrast, we first apply an autoencoder and later use the generated residual signal in a 1-D CNN with two convolution and two pooling layers to classify the fault. In general, the computational complexity of a CNN model is proportional to the number of layers and parameters. Due to the nature of the residual signal of the bearing states, high classification accuracy can be achieved with the help of a simple two-layer CNN. Considering both model type and complexity, our model is easy to implement and requires less time to run. Furthermore, a combination of wavelet packet decomposition (WPD) and a particle swarm optimization-based SVM was applied in [54] and achieved more than 85% accuracy in classifying the bearing fault with the current signal. Hsueh et al. applied the empirical wavelet transform to convert the 1-D current signal into 2-D grayscale images and later classified the fault with a CNN [63]. Table 7 presents the comparison of the mentioned existing works with our proposed method. From the comparisons among the experimental results and existing works, it can be concluded that the discussed data-driven method containing the DAE-CNN model can attain high accuracy in bearing fault classification with the motor current signal. In this two-step pipeline method, the DAE acts as an automatic and efficient feature learning approach, which also provides an approximation of the nonlinear behavior of the bearing system. The resulting residual signal enhances the fault diagnosis ability of the CNN by providing discriminative features according to the bearing states. The overall method does not require any additional signal processing techniques for extracting features, which makes the approach less complex and less time-consuming. To make our method more reliable and robust, in our future analysis we will consider changing the operating conditions of the IM, including varying the load and the rotating speed. Therefore, the DAE-based CNN approach can be considered an efficient and effective way to learn features and classify bearing faults by utilizing the motor current signal.
Conclusions
Due to the availability of sensor data, research has become more focused on data-driven fault diagnosis techniques. Among the different sensor data available, analysis of the motor current signal is considered a smart solution due to its low cost, easy access, and extensive technical support. In this analysis, a novel semi-supervised method is introduced to classify three different bearing fault states. The approach utilizes an unsupervised and a supervised model together. In the beginning, the time-domain current signal is segmented considering the fundamental frequency, and then the deep autoencoder (DAE) is trained with the normal-state data to estimate a function approximation of the system. After that, the residual signal is calculated from the difference between the raw signal and the signal estimated by the autoencoder for all conditions. In this step, the DAE helps to extract discriminative features from the current signal data without any labels. Lastly, a two-layer CNN is constructed with the residual signals to identify the bearing faults. The experiments were performed 100 times by randomly selecting the training/testing data sets, and the results show stable convergence with high accuracy. This method does not require any previous knowledge about the system or any additional signal processing techniques for feature engineering. Furthermore, a comparison is presented with some reference approaches as well as some recent works to test the efficiency, which indicates that the DAE-based CNN method can be an efficient approach to classify different bearing faults. As proper data labeling is quite difficult in an industrial environment, this semi-supervised learning mechanism can be a promising alternative to supervised learning approaches in fault diagnosis. In our future work, we will try to improve the autoencoder model architecture to perform a better nonlinear model approximation and to enhance the reliability and robustness of the system. In addition, a systematic hyperparameter tuning approach will be investigated for building an optimized CNN structure to make the decision-making approach more automated.
"Engineering",
"Computer Science"
] |
Lung Adenocarcinoma Transcriptomic Analysis Predicts Adenylate Kinase Signatures Contributing to Tumor Progression and Negative Patient Prognosis
The ability to detect and respond to hypoxia within a developing tumor appears to be a common feature amongst most cancers. This hypoxic response has many molecular drivers, but none as widely studied as Hypoxia-Inducible Factor 1 (HIF-1). Recent evidence suggests that HIF-1 biology within lung adenocarcinoma (LUAD) may be associated with expression levels of adenylate kinases (AKs). Using LUAD patient transcriptome data, we sought to characterize AK gene signatures related to lung cancer hallmarks, such as hypoxia and metabolic reprogramming, to identify conserved biological themes across LUAD tumor progression. Transcriptomic analysis revealed perturbation of HIF-1 targets to correlate with altered expression of most AKs, with AK4 having the strongest correlation. Enrichment analysis of LUAD tumor AK4 gene signatures predicts signatures involved in pyrimidine, and by extension, nucleotide metabolism across all LUAD tumor stages. To further discriminate potential drivers of LUAD tumor progression within AK4 gene signatures, partial least squares discriminant analysis was used at LUAD stage-stage interfaces, identifying candidate genes that may promote LUAD tumor growth or regression. Collectively, these results characterize regulatory gene networks associated with the expression of all nine human AKs that may contribute to underlying metabolic perturbations within LUAD and reveal potential mechanistic insight into the complementary role of AK4 in LUAD tumor development.
Introduction
Amongst the most common cellular characteristics of lung cancer, hypoxia within the tumor microenvironment plays a pervasive set of roles that affect primary tumor development through altered bioenergetic metabolism [1][2][3]. To date, the most widely studied molecular driver of the cellular hypoxic response is hypoxia-inducible factor-1 (HIF-1), a highly conserved transcription factor that exerts oxygen-mediated control over glucose and oxidative metabolism by modulating global transcriptome expression [4][5][6][7]. Clinically, it is difficult to diagnose tumor hypoxia. However, the incorporation of transcriptome analysis has provided a new model for diagnosis: the expression of HIF-1, and of its gene targets, has been shown to serve as a molecular biomarker of tumor hypoxia [8,9]. Therefore, it is important to understand both the progression of HIF-1 signaling throughout cancer and the associations between HIF-1 and bioenergetic-sensitive enzymes. Adenylate kinases (AKs) represent a family of bioenergetic-sensitive enzymes that are emerging contributors to cancer etiology and progression [10,11]. Nine human AK isoforms (AK1-9) are known to date. The AKs are key enzymes that buffer adenine nucleotide ratios [2ADP ←→ ATP + AMP]. Perturbed expression of AKs has been shown to modulate global energy-sensitive signaling pathways under hypoxic conditions in lung cancer cells and macrophages [12][13][14]. Collectively, understanding the global interactions between HIF-1 signaling and AK expression in lung cancer can provide new methods to detect the pathology and progression of the disease and new insights into the mechanistic interaction between HIF-1 and AK.
Alternatively, elevated reactive oxygen species (ROS) have been shown to inhibit HIF-1α destabilization, even under normoxic conditions [29]. This is thought to occur through ROS-mediated inhibition of PHDs, which interrupts oxygen-PHD interactions, effectively restricting PHD-induced hydroxylation of HIF-1α [30]. Additional HIF-1-inducing pathways have been described in a nonhypoxic setting, which involves cell supplementation with hormones and growth factors [31], or deficits in SIRT3 and JunD transcription factors [32,33]. Notwithstanding, these nonhypoxic avenues for HIF-1 signaling collectively depend on elevated ROS levels.
Modulation of AK isoform levels has also been demonstrated to impact ROS levels within the context of cancer metabolism. For example, deficits in AK2 levels are associated with increased ROS production and decreased glycolysis and ATP production [34]. Furthermore, overexpression of AK3 has been shown to augment ROS production in squamous cell carcinoma cells treated with cisplatin [35]. By contrast, within the context of colorectal cancer, AK6 has been shown to promote decreased cellular ROS via the Warburg effect [36].
Recently, the mitochondrial-localized adenylate kinase 4 (AK4) has been shown to augment intracellular ROS production to promote HIF-1α stability, and by extension HIF-1 signaling, in the context of macrophages, breast cancer, and lung adenocarcinoma (LUAD) [13,14,37]. Additionally, given that AK4 has previously been described as a relatively novel target of active HIF-1 [38], the relationship between AK4 and HIF-1 resembles a positive feedback loop, in which modulation of AK4 levels impacts cellular metabolism, ROS production, and HIF-1 regulation. Moreover, similar to HIF-1, AK4 has been shown to promote chemotherapeutic resistance in tumors and is regarded as an unfavorable prognostic marker for tumor metastasis and lung cancer patient outcomes [37,[39][40][41][42].
In this study, we use LUAD patient primary tumor transcriptome data across stages 1-4 to analyze AK co-expression signatures that reveal potential AK4-driven hallmarks of tumor development and metastasis. Co-expression analysis of all AK isoforms reveals AK4 transcript levels to correlate increasingly with a LUAD-specific hypoxia signature throughout early LUAD tumor development, leading up to metastasis. Furthermore, principal component analysis (PCA) of LUAD patient AK expression patterns reveals AK4 to carry a distinctive gene signature that, when grouped with the nearest clustering AKs, serves as an unfavorable LUAD patient prognostic marker. Finally, in order to identify conserved biological themes related to AK4 transcript expression in LUAD, a comprehensive LUAD tumor AK4 co-expression network was constructed from stage-specific tumor AK4 gene signatures, encompassing both early (stage 1/2) and late (stage 3/4) developmental milestones of LUAD tumor growth and metastasis. This interrogation across LUAD stages provides a richer analysis that is also more faithful to accepted LUAD staging than previous publications. Enrichment analysis of these signatures implicates increasingly perturbed nucleotide metabolism throughout LUAD tumor development. Moreover, the use of sparse partial least squares discriminant analysis (sPLS-DA) at LUAD tumor stage-to-stage interfaces within the AK4 co-expression network identified candidate genes that may contribute to LUAD tumor growth or regression. Additionally, LUAD patient survivorship analysis of these candidate genes largely validated the sPLS-DA results and revealed that even favorable anti-oncogenic prognostic markers could be associated with increased patient mortality when accounting for AK4 expression. Collectively, these results expand the scope of influence that AK4 exerts on pathological LUAD tumor dynamics.
Baseline Patient Sample Characteristics
This study assessed 526 patients with LUAD. With a dataset this large, one variable can be overrepresented relative to another; therefore, to better characterize the cohort, an R-based function was used to tally the number of patient samples belonging to each patient characteristic (Table 1). The characteristics used in this study were age, gender, vitals, tumor stage, sample type, and race. Of the 526 patients, 295 were aged 60 or older, 128 were under 60 years of age, and 103 had no reported age. For gender, females (n = 282) slightly outnumbered males (n = 244). For vitals, substantially more patients were alive at the time of biopsy (n = 336) than dead (n = 190). For tumor stage, the number of samples decreased as tumor stage progressed. Fifty-nine patients submitted biopsies from peritumoral tissues, which were pooled together to form a control group. Lastly, there was a dramatic difference among races, with white patients making up the majority of the TCGA-LUAD cohort.

Using a significance threshold of ≥ 2-fold change (FDR < 0.05), we found remarkable differential transcript expression across the LUAD tumor stage 1-4 transcriptomes relative to non-tumorous human lung tissue controls (Figure 1A, Supplementary Table S1). Furthermore, while ~40% of all detected differentially expressed genes were identified at all LUAD tumor stages, there were large sets of unique differentially expressed genes found exclusively at specific tumor stages or stage-stage intersections (Figure 1B). Nonetheless, we found conserved and extensive perturbations across the KEGG HIF-1 signaling pathway for HIF-1 signaling readouts at all LUAD tumor stages (Figure 1C, Supplementary Table S2). These readouts approximate upstream and downstream effectors of HIF-1 signaling; the downstream effectors, in particular, contain hypoxia response elements in their promoters, directly recruiting HIF-1 transcriptional regulation. Thus, perturbations in these readouts are consistent with elevated HIF-1 signaling through the transcriptional upregulation of well-known HIF-1-responsive genes involved in promoting increased anaerobic metabolism, such as glyceraldehyde 3-phosphate dehydrogenase (GAPDH), solute carrier family 2 member 1 (SLC2A1), hexokinase (HK), lactate dehydrogenase A (LDHA), phosphoglycerate kinase 1 (PGK1), aldolase A (ALDOA), and enolase 1 (ENO1) [43][44][45][46][47][48][49][50][51][52][53]. Likewise, we found significant transcriptional upregulation for readouts of augmented HIF-1 signaling that promotes increased oxygen delivery through erythropoiesis and angiogenesis, including erythropoietin (EPO), epidermal growth factor (EGF), and tissue inhibitor of matrix metalloproteinase 1 (TIMP-1) [54][55][56], along with increased transcript expression of pyruvate dehydrogenase kinase 1 (PDK-1), a known inhibitor of tricarboxylic acid (TCA) cycle metabolism [57].
With regard to down-regulated genes involved in angiogenesis within the canonical HIF-1 signaling pathway, angiopoietin 1 (ANGPT1), a putatively favorable serum prognostic marker for non-small cell lung cancer [58], had suppressed expression across LUAD tumor stages 1-4. Similarly, we observed down-regulated transcript levels of the ANGPT1 receptor Tie2, which itself is regarded as a favorable prognostic marker for liver and renal cancers (Human Protein Atlas). Given the combined roles of ANGPT1 and Tie2 in negatively regulating angiogenesis and vascular permeability, suppressed expression of these transcripts, in the context of upregulated EGF and TIMP-1 transcripts, predicts pro-angiogenic signaling along the HIF-1 signaling axis that runs throughout the early and late stages of LUAD tumor development. Additionally, the HIF-1-regulated vascular tone-modulating endothelin 1 (EDN1) and heme oxygenase 1 (HMOX1) transcripts were significantly suppressed throughout the entirety of LUAD tumor development. These transcript expression changes were largely consistent throughout LUAD tumor development, alongside significantly suppressed levels of intermittent hypoxia-mediating NADPH oxidase (NOX) transcripts. Notably, this suppression coincided with modest increases in the transcript levels of protein kinase C-α (PKC-α), another readout of intermittent hypoxia [59,60], that fell below our ≥ 2-fold significance threshold and occurred only at LUAD tumor stages 2 (1.44-fold increase; FDR = 0.003) and 3 (1.31-fold increase; FDR = 0.025). Thus, these transcriptional perturbations describe canonical HIF-1 signaling that resembles an intratumoral state of sustained hypoxia in response to acute or chronic, as opposed to intermittent, oxygen deficiency.
AK Levels Positively Correlate with Hypoxia Scores throughout LUAD Tumor Progression
To determine the relationship between AK expression and hypoxia in LUAD, an established hypoxia-associated gene signature was curated and normalized to tissue-specific differences. The signature genes are XPNPEP1, ANGPTL4, SLC2A1, and PFKP, selected from a previous study by Mo et al., who identified these genes as good predictors of hypoxia, poor patient outcomes, and tumor size [47]. Tissue type-specific gene expression distributions were mined and normalized using the same methods described above for the AK isoforms and the hypoxia signature (Figures S1 and 2A). We tested for differences in expression levels between the normal (N) and tumor (T) tissue types and observed that seven of the nine AKs differed significantly from the normal control group (Figure S1). The median expression of the four genes is classified as the "Hypoxia Signature." Consistent with the study by Mo et al., the tumor hypoxia signature shows more variation than the normal tissue signature, and its median expression value is significantly increased (Figure 2A). This signature was then used as a dependent variable to assess the relationship between the hypoxic response and AK expression. Linear regression modeling was used to assess this relationship at each LUAD tumor stage, including the peritumoral normal tissue type (Figure 2B). For this section, AK5 and AK6 were omitted due to their non-normal distributions. Using Spearman's correlation coefficients, we found the hypoxia signature to correlate with seven of the nine AK isoforms across multiple stages of LUAD tumor development. The highest coefficient among these interactions was an AK4-stage 4 specific interaction (R = 0.50, p = 0.0096), suggesting a biologically relevant relationship between AK4 and hypoxia. Lastly, a pair-wise comparison derived from two-way linear regression modeling between normal and stage-specific interactions was used to calculate the gene-stage specific absolute effect size between each stage and its normal tissue type control (Figure 2C). AK8 and AK9 showed no significant difference in effect between the normal and stage 4 tumor interactions with hypoxia; the same was true for AK9 at stage 1. These data provide key insights into a potential mechanistic axis by which AK isoforms are reprogrammed in LUAD.
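As a concrete illustration of the stage-wise correlation tests above, the following base R sketch computes a Spearman coefficient between a hypoxia score and AK4 expression; the `expr` data frame is a simulated placeholder, not the TCGA-LUAD input used in the study.

```r
## Illustrative sketch (not the authors' code): Spearman correlation between a
## per-sample hypoxia score and AK4 expression within one tumor stage.
## `expr` holds one row per patient sample with log2(TPM + 1)-scale values.
set.seed(1)
expr <- data.frame(
  hypoxia = rnorm(40),   # median-of-four-genes hypoxia score (toy values)
  AK4     = rnorm(40)    # AK4 transcript expression (toy values)
)
## Spearman's rank correlation, mirroring the stage-wise tests in Figure 2B
ct <- cor.test(expr$AK4, expr$hypoxia, method = "spearman")
ct$estimate   # rho (reported in the text as R)
ct$p.value    # significance of the stage-specific association
```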
Two AK Clusters Predict Poor LUAD Patient Prognosis
Seven of the nine AK isoforms correlated with a previously described hypoxia-associated gene signature. Therefore, we next sought to elucidate AK gene expression patterns within LUAD and to distinguish key differences across tissue types (normal vs. tumor). To understand the relationships among AKs in LUAD, we scaled log2(TPM + 1) values into z-scores, clustered genes along the y-axis, and clustered the 526 patient tumor samples along the x-axis. We identified two types of AK groups in LUAD: highly expressed (AK1-4/6) and lowly expressed (AK5/7-9) (Figure 3A). We also observed that AK4 has a unique expression pattern that splits the tumor samples into two major clusters. This prompted us to assess whether there were significant changes in AK4 expression, along with the other AKs, across tumor stages. To do this, patients were grouped by tumor stage, and an ANOVA determined significant differences between stages for each AK isoform (Figure S2A). Three of the nine isoforms demonstrated significant changes between tumor stages (ANOVA, p < 0.05), with AK4/7 expression appearing to increase and AK9 expression to decrease with tumor stage. To further understand correlation patterns between these and the other AKs, Pearson's correlation coefficients were calculated to assess the similarity of each pair of genes in normal and tumor tissue profiles (Figure S2B). Hierarchical clustering using Euclidean distances further classified which genes cluster together in the normal and tumor tissues (Figure 3B). The top three clusters are distinguished by color, and the results demonstrate a shift in AK expression patterns moving from the normal to the tumor sample type. To determine whether these clusters are clinically relevant, Kaplan-Meier estimates were used to measure the fraction of patients who survive LUAD as a function of the median expression of each gene cluster. The upper 75th percentile of gene expression was used for the high-expression group, and the lower 25th percentile was used for the low-expression group. Interestingly, two of the three clusters had a significant interaction with LUAD patient survivability (Figure 3C). Normalized z-score quantification (Figure 3A) suggests that AK4 has unique expression patterns in LUAD compared with the other AKs. The dendrograms demonstrate a shift in expression from the normal to the tumor tissues, and the AK1-3 and AK4-6 clusters were shown to significantly predict poor survival outcomes. Together, these data describe the direction of AK expression patterns and provide new insights into AK expression in LUAD, with a unique highlight on AK4.
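The clustering workflow described above can be sketched in a few lines of base R; the `tpm` matrix below is simulated stand-in data, whereas the real analysis used log2(TPM + 1) values from 526 patient samples.

```r
## Minimal sketch of the z-score + hierarchical clustering steps (base R only).
set.seed(1)
tpm <- matrix(rnorm(9 * 50), nrow = 9,
              dimnames = list(paste0("AK", 1:9), paste0("sample", 1:50)))
z <- t(scale(t(tpm)))          # z-score each gene (row) across samples
d_genes <- dist(z)             # Euclidean distances between gene profiles
hc <- hclust(d_genes, method = "complete")   # complete-linkage clustering
clusters <- cutree(hc, k = 3)  # three ranked gene clusters, as in Figure 3B
plot(hc)                       # dendrogram of AK expression similarity
```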
AK4 Expression Network Comprised of Perturbed Nucleotide Metabolism in LUAD
Given the previously reported biological significance of AK4 in lung cancer [13,14], along with the unique isoform-specific expression patterns revealed in Figures 2 and 3, we used an unsupervised clustering algorithm to parse AK expression in tumor tissue types (Figure 4A). When AK transcript expression was projected via PCA to describe AK-specific variance, the AK4/5/6 group, unlike the others, did not cluster. In fact, AK4 expression appeared distant from the other AK isoforms, suggesting a unique expression pattern. Therefore, we sought to characterize AK4-related co-expression networks across LUAD tumor stages 1-4. Here, we adopt a modified approach towards characterizing LUAD stage-specific AK4 gene signatures based on Jan et al.'s use of Pearson's correlation coefficients [13], with the key differences being that our approach encompasses the entirety of LUAD tumor progression using TPM-normalized transcript estimates and additionally excludes genes with no stage-specific significant difference in transcript expression relative to control non-tumorous lung tissue. Using this approach, we constructed AK4 gene signatures across LUAD stage 1-4 differentially expressed genes, effectively creating a LUAD tumor AK4 co-expression network (Figure 4B). Interestingly, this AK4 co-expression network contained many hypoxia response element-containing HIF-1 signaling readouts scattered across different LUAD tumor stages, including ALDOA (Supplementary Table S3). To characterize individual gene signatures, we additionally used each LUAD stage-specific signature as input for over-representation analysis (ORA) within two curated databases: KEGG Modules and the Broad Institute's Molecular Signature Database C2 collection, both of which contain diverse gene sets related to metabolic processes. Consistent with previous reports, ORA of these gene signatures suggests the AK4 co-expression network comprises genes related to nucleotide metabolism (Figure 4C). Indeed, further characterization of the perturbed expression of transcripts involved in nucleotide metabolism across LUAD stages 1-4 overwhelmingly reveals significant upregulation of transcripts involved in multiple facets of nucleotide metabolism, most of which deviate further from control tissue levels as LUAD tumor development continues (Figure 4D).
[Figure 3 caption residue: (A) Gene expression z-scores (−2 to +2), clustered by complete linkage with Euclidean distance; dendrograms split into two top clusters across genes (rows) and patients (columns). (B) AK hierarchical clusters in normal (left) and tumor (right) tissue, complete linkage on dissimilarity measures, with three ranked gene clusters highlighted; the x-axis represents dissimilarity between gene expression. (C) Survival analysis of the three AK clusters (log-rank test, 95% confidence intervals; 75th/25th expression percentiles split the high and low groups; HR = hazard ratio; n = number of patients analyzed).]
[Figure 4 caption residue: (B) Stage-specific AK4 gene signatures created from genes co-expressing with AK4 at a Pearson's correlation coefficient threshold of ±0.3. (C) Over-representation analyses of these AK4 gene signatures reveal that AK4 co-expresses with genes involved in nucleotide metabolism at all stages of tumor development. (D) LUAD stage 1-4 differential transcript expression of genes involved in the four REACTOME nucleotide metabolism subgroups (FDR < 0.05).]
AK4 Co-Expression Networks Identify Potential Drivers of LUAD Tumor Progression
Using these AK4 co-expression networks, we next sought to identify potential drivers of LUAD tumor progression using a staggered classification approach to predict genes that contribute to tumor growth or regression at tumor stage-to-stage interfaces (Figure 5). Here, we employed sPLS-DA on LUAD tumor stage-specific AK4 gene signatures to extract key genes capable of discriminating tumor progression or regression. sPLS-DA of merged LUAD tumor stage-to-stage AK4 signatures generated a list of gene candidates with ranked contributions to tumor advancement or regression. Across the stage 1-2, 2-3, and 3-4 interfaces, the majority of key LUAD tumor stage-discriminating genes were associated with cell cycle regulation and mitosis, DNA processing and repair, and chromatin remodeling processes. Unsurprisingly, the putative tumor suppressor gene death-associated protein kinase 2 (DAPK2) was predicted to contribute strongly to tumor regression at the LUAD tumor stage 1-2 interface (Figure 5A). Interestingly, at the stage 1-2 AK4 gene signature interface, the pyrimidine salvage gene thymidine kinase 1 (TK1) was predicted to be among the most influential for LUAD tumor progression from stage 1 to stage 2. Furthermore, the P2Y purinergic receptor 1 (P2RY1), which has been shown to serve as an unfavorable prognostic marker in renal and non-melanoma skin cancers ([61], Human Protein Atlas), is also predicted to promote LUAD tumor progression at the stage 2-3 interface (Figure 5B). Additionally, the AK4 LUAD tumor gene signatures are associated with the oncogenic protein RAD52 motif-containing 1 (RDM1) at the LUAD tumor stage 2-3 and 3-4 interfaces (Figure 5C), in line with RDM1's previously described role as a pro-oncogenic factor [62]. Notably, at the LUAD tumor stage 3-4 interface, RDM1 was predicted to contribute to LUAD tumor advancement from stage 3 to stage 4.
To independently validate the sPLS-DA results, we used the GEPIA2 web tool (http://gepia2.cancer-pku.cn/#index (accessed on 9 November 2020)) to perform LUAD patient survivorship analyses using single genes (e.g., DAPK2, TK1, etc.) and pairwise AK4-containing gene signatures (e.g., AK4-DAPK2, AK4-TK1, etc.) as inputs (Figure 5D-F). Overall, the sPLS-DA predictions aligned well with the inferred survivability of each tested gene, as indicated by log10-transformed hazard ratios. In particular, survivorship analysis of early LUAD tumor regression candidates at the stage 1-2 interface (e.g., DAPK2 and PLAC9) revealed these genes to serve as positive LUAD prognostic markers (Figure 5A,D). Additionally, at all LUAD tumor stage-to-stage interfaces, there was overwhelming agreement between the genes in the AK4 gene signatures that sPLS-DA predicted to drive LUAD tumor advancement and the survival analysis.
When pairwise AK4-containing gene signatures were used as inputs for LUAD patient survival analysis, all AK4 combinations resulted in poor patient prognosis (Figure 5D-F). Interestingly, for the few genes that sPLS-DA predicted to contribute to tumor regression at certain stage-to-stage interfaces, including DAPK2, PLAC9, ABCB9, MELTF, CTHRC1, and OCIAD2, testing LUAD patient survivability with an AK4 combination resulted in a negative prognosis. This was especially true for DAPK2 and PLAC9, which individually associated with improved patient prognosis but reversed to poor patient prognostic markers when combined with AK4 for LUAD patient survival analysis.
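For readers who want to reproduce the flavor of these survivorship checks, here is a hedged sketch using the survival package rather than the GEPIA2 web tool the authors used; the simulated `score` stands in for a single-gene or pairwise AK4-containing signature, and the quartile split mirrors the 75th/25th percentile grouping described earlier.

```r
## Hedged sketch (not the GEPIA2 backend) of a high- vs low-expression
## survival comparison like those in Figure 5D-F; all data are simulated.
library(survival)
set.seed(1)
n     <- 120
score <- rnorm(n)                        # e.g., a pairwise AK4-TK1 signature score
group <- cut(score, quantile(score, c(0, 0.25, 0.75, 1)),
             labels = c("low", "mid", "high"), include.lowest = TRUE)
d <- data.frame(time = rexp(n, 0.1),     # follow-up time (toy)
                status = rbinom(n, 1, 0.7),  # event indicator (toy)
                group)
d <- droplevels(subset(d, group != "mid"))   # keep only the extreme quartiles
fit <- coxph(Surv(time, status) ~ group, data = d)
summary(fit)$conf.int                    # hazard ratio (high vs low) with 95% CI
survdiff(Surv(time, status) ~ group, data = d)  # log-rank test
```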
Discussion
While it has long been known that limited intratumoral oxygen availability impacts tumor metastatic potential, in part through the transcription-regulatory actions of HIF-1, both spatial and temporal intratumoral hypoxia dynamics affect tumor development, chemotherapeutic resistance, and cell seeding, producing dramatically different cancer patient outcomes [63]. In particular, the relationship between chronic or acute hypoxia and lung cancer tumor metastatic potential has been specifically investigated, revealing that acute hypoxia and the concomitantly high HIF-1α stability most strongly increase tumor growth and metastasis [64]. Here, we report perturbed transcriptional regulation of hypoxia response element-containing genes within the KEGG HIF-1 signaling pathway that is largely conserved throughout LUAD tumor pathogenesis and consistent with chronic or acute, as opposed to intermittent, hypoxia (Figure 1C, Supplementary Table S2). This is evidenced by two observations. The first is that differential transcript expression of HIF-1 signaling readouts was observed as early as LUAD tumor stage 1 and persisted throughout the entirety of LUAD tumor development. In particular, we observed HIF-1 pathway perturbations to fit the profile of a chronic hypoxia response and to reflect known hallmarks of HIF-1 signaling in lung cancer, including increased anaerobic metabolism and angiogenesis and suppressed TCA cycle metabolism. The second observation relates to the significant suppression of NOX at all LUAD tumor stages (Figure 1C). In the context of intermittent hypoxia, NOX has been shown to mediate HIF-1α activation, and HIF-1 has been shown to promote the expression of NOX [65,66]. Thus, the observed profile of HIF-1 hypoxia response element-containing readouts and significantly suppressed NOX expression suggests that intermittent hypoxia is not occurring within LUAD patient tumors and that, instead, chronic or acute hypoxia features throughout the early and late stages of LUAD tumor pathology.
The median expression levels of four HIF-1 target genes were calculated as our hypoxia score. Individually, the expression of these genes is altered under anaerobic stress conditions in vitro. Here, like many studies, we show that transcription of hypoxia-associated genes is elevated in solid tumors such as lung adenocarcinomas [67]. We used this score as a marker of tumor progression at the level of metabolic response; for example, tumors with a higher propensity for growth and cell seeding often carry increased levels of a global hypoxic signature [63,64]. AK transcript expression was significantly different between normal and bulk LUAD tumor tissues, and AK expression in cancer correlated with this hypoxia score. Notably, an AK1/4 co-expressed signature was previously characterized in LUAD patients [12]. That study demonstrated opposite relationships between AK1 and AK4 expression and survival: high AK1 expression correlated with increased survival, whereas low AK4 expression correlated with increased survival. In our present study, we confirm and extend these results with two findings: (1) within the entire AK family, there are three clusters of AKs whose LUAD expression patterns closely associate (Figure 3B); (2) these clusters may be clinically relevant, as their combined expression patterns (low versus high expression within each cluster) are associated with significantly different survival probabilities (Figure 3C). Our results also recapitulate the Jan et al. study in that high expression of the AK1 cluster is correlated with increased survival, while low expression of the AK4 cluster is correlated with increased survival. We (Lanning et al.) and others (Hao et al.) have also previously published that low AK4 expression correlates with increased survival in gliomas and pancreatic cancer. We additionally report comprehensive AK signatures that parse out the associations between individual AK isoform expression and a hypoxia score throughout control lung and LUAD tumor stage 1-4 tissues (Figure 2). In doing so, we reveal a dynamic association between AK transcript expression and hypoxia in the context of LUAD tumor development, specifically highlighting an increasingly positive correlation between AK4 levels and hypoxia as LUAD tumor development continues. This finding is consistent for stages 1, 3, and 4, but variable for stage 2 (Figure 2B). This finding, which aligns with previous LUAD-related AK4 research [13,68], also describes a progressively negative prognostic role for AK4 in LUAD. Interestingly, unsupervised clustering identified AK4 as having unique expression patterns independent of the other AKs, including AK1, which motivated the further characterization of AK4 in LUAD. AK4 has also been shown to interact with HIF-1 signaling under hypoxic conditions in M1 macrophages, further supporting this interaction as a global response within the tumor microenvironment [14].
While disrupted glucose metabolism has long served as a hallmark of many cancers, including lung cancer [69,70], a systems-level focus on perturbed nucleotide metabolism in cancer and its underlying mechanisms has only recently gained broader attention. By adopting an AK4-centric approach towards understanding the link between AK4 and its widely reported role as a negative prognostic marker, we report that the AK4 co-expression network potentially explains a facet of perturbed nucleotide metabolism in LUAD related to purine and pyrimidine synthesis, catabolism, and salvage (Figure 4B-D). Using our prior knowledge of LUAD patient tumor stage, we additionally used this network, comprised of individual AK4 gene signatures, to predict genes that contribute to tumor progression or regression via sPLS-DA (Figure 5A-C). Here, validation of the sPLS-DA predictions was completed using single-gene LUAD patient survivorship analysis and by relying on prognostic information available in the literature. As a whole, the predictions made by sPLS-DA aligned well with the associated survivability of each tested gene, especially at early LUAD stage-stage interfaces such as the LUAD tumor stage 1-2 interface, which includes DAPK2 and PLAC9 (Figure 5A,D). The reason for this may be that sPLS-DA identifies key transcript determinants that drive discrimination of tumor stage interfaces, whereas survival analysis measures overall survivability. Therefore, regardless of whether a gene is predicted to promote tumor regression at later LUAD tumor stage interfaces, this predicted regression still points in the direction of a later LUAD tumor stage and is thus marked as an unfavorable prognostic marker via survival analysis. Nonetheless, we found it interesting that when testing LUAD patient survivability using an AK4 combination, all resulting survivorship hazard ratios were associated with increased mortality (Figure 5D-F). This finding was especially intriguing considering that the associated survivability of positive prognostic markers, such as DAPK2 and PLAC9, was essentially reversed when testing LUAD patient survivability using an AK4-DAPK2 or AK4-PLAC9 signature. Thus, the use of AK4 expression in gene signatures for LUAD patient survivability may reveal more about the deleterious impact that AK4 has on the dynamic LUAD pathology.
Additionally, the use of AK4 gene signatures with sPLS-DA identified the nucleotide salvage gene TK1, another negative prognostic marker of lung cancer (Figure 5A), as co-expressing with AK4 and promoting LUAD tumor progression, as previously reported [71,72]. Interestingly, both AK4 and TK1 have been described to function within a larger nucleotide-protein interaction network [73]. When looking at LUAD patient survivability, high expression of TK1 and of a TK1-AK4 signature are also associated with increased patient mortality (Figure 5D). Given that AK4 has been shown to worsen lung cancer patient outcomes by suppressing levels of activating transcription factor 3 (ATF3) [40], a transcriptional regulator of TK1 [74], our finding that TK1 promotes LUAD tumor progression within the larger AK4 co-expression network may expand the mechanistic link between AK4, ATF3, and TK1 in the context of LUAD. In particular, suppression of ATF3 levels by AK4 may inhibit ATF3 regulation of TK1, permitting up-regulation of TK1 in LUAD. Figure 5C also identified an RDM1 association with the AK4-driven signature across tumor stages. Thus, our AK4-driven signature identifies known pro-oncogenic factors, lending credibility to the signature, and also suggests previously unidentified tumor stage associations with these known pro-oncogenic factors.
Importantly, these findings are limited by the site of LUAD patient tumor biopsy retrieval and the concomitant variation that arises when accounting for intratumoral heterogeneity [75,76]. Similarly, it is worth noting that the LUAD tumor transcriptomes available for this study were overwhelmingly from Caucasian participants, who comprised a majority of the available LUAD TCGA cohort. Nonetheless, this report expands on the associations between perturbed AK isoform expression and LUAD hypoxic status, and collectively reveals potential mechanistic insight into how AK4 serves as a negative prognostic marker in LUAD tumor development.
TCGA Patient Tumor Data Curation and Consolidation
The TCGA data query was performed using the TCGAbiolinks v2.17.4 and MultiAssayExperiment v1.14-0 packages to obtain patient tumor-matched RNA-seq and counts data through the NIH-funded Genomic Data Commons domain [77,78]. The biomaRt v2.44.4 package was used to match Ensembl gene IDs to gene symbols [79].
Gene Expression Query, Normalization, and Quantification
In this study, gene expression quantification data in fragments per kilobase of transcript per million mapped reads (FPKM) were queried from the TCGA harmonized database, which aligns reads to the human reference genome 38 (hg38). All FPKM data were standardized to transcripts per million (TPM) to express gene expression as a proportion of transcripts in the total pool of RNA, as given in Equation (1):

TPM_i = (FPKM_i / Σ_j FPKM_j) × 10^6. (1)

Once the data were standardized to TPM, a log2(TPM + 1) transformation was applied to bring the data closer to a normal distribution for appropriate statistical analysis.
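A minimal sketch of this FPKM-to-TPM standardization and log transform in base R follows; the `fpkm` matrix is a toy placeholder for the genes-by-samples TCGA matrix.

```r
## Sketch of the FPKM-to-TPM standardization in Equation (1).
fpkm <- matrix(c(5, 0, 20, 10, 2, 8), nrow = 3)   # toy 3-gene, 2-sample data
tpm  <- sweep(fpkm, 2, colSums(fpkm), "/") * 1e6  # rescale each sample (column)
log_expr <- log2(tpm + 1)                         # normalization transform
colSums(tpm)                                      # each column now sums to 1e6
```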
Statistical Analysis
All statistical analyses, apart from the survival plots, were conducted locally in RStudio. All pairwise t-test comparisons, two-way ANOVAs, and Pearson's correlation analyses were computed using the ggpubr package. Groups were considered statistically different at p < 0.05. Linear regression and unsupervised clustering analyses were completed with the base R functions lm() and prcomp(). Linear modeling tested the interaction between the hypoxia signature and AK expression; AK5/6 were excluded from linear modeling because of their non-normal distributions. Principal component analysis (PCA) was performed using prcomp(), with principal components calculated separately for each AK in the tumor RNA-seq profiles. PC1 and PC2 explained 93.8% of the variation and were therefore selected for visualization. Survival plots were generated with the web-based interactive tool Gene Expression Profiling Interactive Analysis (http://gepia.cancer-pku.cn/). High and low gene expression groups were split using the upper 75th and lower 25th percentiles. Visualization of hierarchical clustering output was performed using ggdendro v0.1.22.
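The two base R calls named above can be sketched as follows; the data frame `df` and matrix `ak_mat` are simulated placeholders, and the model formula is a plausible reading of the hypoxia-by-AK interaction test rather than the authors' exact code.

```r
## Hedged sketch of the lm() and prcomp() steps on toy data.
set.seed(1)
df <- data.frame(
  hypoxia = rnorm(60),
  ak_expr = rnorm(60),
  stage   = factor(sample(c("normal", "stage1"), 60, replace = TRUE))
)
## Linear model testing the AK-by-stage interaction on the hypoxia signature
fit <- lm(hypoxia ~ ak_expr * stage, data = df)
summary(fit)

## PCA of AK expression profiles (rows = samples, columns = AK isoforms)
ak_mat <- matrix(rnorm(60 * 9), ncol = 9,
                 dimnames = list(NULL, paste0("AK", 1:9)))
pca <- prcomp(ak_mat, scale. = TRUE)
summary(pca)$importance[, 1:2]   # variance explained by PC1 and PC2
```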
Hypoxia Signature
The median expression of four genes, XPNPEP1, ANGPTL4, SLC2A1, and PFKP, was used to estimate the hypoxic status of tumor samples. These four genes make up a hypoxia-associated gene signature and were selected from among 200 hypoxia-associated genes for their ability to predict patient outcomes, tumor hypoxia, and pathological stage in lung adenocarcinoma samples, as previously described [47].
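Computing the signature is essentially a one-liner in base R, sketched below with a placeholder expression matrix.

```r
## Sketch of the hypoxia signature: per-sample median of four gene transcripts.
## `expr` is a placeholder log2(TPM + 1) matrix (signature genes x samples).
sig_genes <- c("XPNPEP1", "ANGPTL4", "SLC2A1", "PFKP")
expr <- matrix(rnorm(4 * 10), nrow = 4, dimnames = list(sig_genes, NULL))
hypoxia_score <- apply(expr[sig_genes, , drop = FALSE], 2, median)
hypoxia_score   # one hypoxia estimate per tumor sample
```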
Differential Gene Expression Analysis
TCGA participant-derived HTSeq counts were assembled into LUAD stage-specific and control collections using the cloud-based Galaxy environment (https://usegalaxy.org). Within this environment, differential gene expression for a specific LUAD stage relative to control was estimated using edgeR v3.24.1 [80]. Given the prior uncertainty in gene dispersion across the numerous tumor and control samples, we opted to use the quasi-likelihood F-test edgeR parameter to determine differential expression [81]. We additionally filtered out lowly expressed genes with fewer than 10 total counts and applied a p-value adjustment to obtain the false discovery rate (FDR) using the Benjamini and Hochberg method [82]. For curated gene ID mapping, we used the Bioconductor clusterProfiler package v3.18.1 to map Ensembl IDs against Entrez IDs and excluded non-mapped values from further analysis [83]. Entrez-mapped differential transcript expression was considered significant if transcripts had a ≥ 2-fold difference relative to control at an FDR < 0.05.
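The following sketch walks through the standard edgeR quasi-likelihood workflow consistent with this description; the count matrix and group labels are toy placeholders, and details such as the design formula are assumptions rather than the authors' exact Galaxy configuration.

```r
## Hedged sketch of the stage-vs-control edgeR quasi-likelihood workflow.
library(edgeR)
set.seed(1)
group  <- factor(c(rep("control", 4), rep("stage1", 4)))
counts <- matrix(rpois(800, lambda = 20), nrow = 100)   # toy HTSeq counts
keep   <- rowSums(counts) >= 10            # drop genes with < 10 total counts
y      <- DGEList(counts = counts[keep, ], group = group)
y      <- calcNormFactors(y)
design <- model.matrix(~ group)
y      <- estimateDisp(y, design)
fit    <- glmQLFit(y, design)              # quasi-likelihood dispersion model
qlf    <- glmQLFTest(fit, coef = 2)        # stage effect relative to control
tt     <- topTags(qlf, n = Inf)$table      # BH-adjusted FDR per gene
sig    <- subset(tt, abs(logFC) >= 1 & FDR < 0.05)   # >= 2-fold, FDR < 0.05
```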
Co-expression Network Creation and Over-Representation Analysis
AK4 gene signatures were created using an adapted form of the gene co-expression network construction guide presented by Contreras-López et al., whereby variance in TPM counts is standardized and a unit value is added to avoid zeroes [84]. Furthermore, this approach included only significantly differentially expressed genes for each LUAD stage group, using the significance threshold described above. Finally, stage-specific AK4 gene signatures were created by taking the Pearson's correlation coefficient between AK4 and all other differentially expressed genes within a specific LUAD stage group and applying a threshold of ± 0.3. Collectively, we found this approach, motivated by that previously used by Jan et al. [13], to discriminate AK4 co-expression networks by LUAD stage while additionally incorporating later LUAD tumor development through the inclusion of stage 3-4 tumor samples.
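A minimal sketch of this signature-construction step follows, assuming a samples-by-genes matrix `de_expr` already restricted to a stage's significant differentially expressed genes; both objects below are simulated placeholders.

```r
## Sketch of a stage-specific AK4 gene signature: correlate AK4 against all
## stage-significant differentially expressed genes and keep |r| >= 0.3.
set.seed(1)
de_expr <- matrix(rnorm(30 * 200), nrow = 30,
                  dimnames = list(NULL, paste0("gene", 1:200)))
ak4 <- rnorm(30)                              # AK4 expression, same samples
r <- cor(de_expr, ak4, method = "pearson")    # one coefficient per gene
signature <- rownames(r)[abs(r) >= 0.3]       # AK4 co-expression signature
length(signature)
```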
Using genes within the stage-specific AK4 co-expression networks as inputs, we performed ORA against gene sets within KEGG Modules and the Broad Institute's Molecular Signature Database C2 Collection [85,86], each of which contains curated gene sets that are functionally related to a biological process or state. All ORA were likewise performed using clusterProfiler v3.18.1.
Sparse Partial Least Squares-Discriminant Analysis
To identify candidate genes within the LUAD stage-specific AK4 co-expression networks, we used an extended sparse version of partial least squares regression analysis, termed sPLS-DA, within the mixOmics R package v6.14.0 [87][88][89]. Here, two sequential LUAD stage AK4 co-expression networks were incorporated in each sPLS-DA analysis to determine the mean contribution of AK4 gene signature components to the LUAD tumor stage. All sPLS-DA parameter inputs were estimated using mixOmics' parameter tuning functions. The optimal number of variables (i.e., genes) per component was determined using a data-driven one-sided t-test approach that evaluates changes in model performance as additional variables are incorporated into the model. This approach was validated under maximum distance via M-fold cross-validation with 50 repeats for each sPLS-DA analysis.
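A hedged sketch of this tuning-plus-fitting step with mixOmics follows; the data, the `test.keepX` grid, and `ncomp = 2` are illustrative assumptions, not the study's actual parameters.

```r
## Hedged sketch of the stage-interface sPLS-DA step using mixOmics.
library(mixOmics)
set.seed(1)
X <- matrix(rnorm(60 * 100), nrow = 60)   # AK4 signature genes (samples x genes)
Y <- factor(rep(c("stage1", "stage2"), each = 30))
## Tune the number of genes kept per component via M-fold cross-validation
tune <- tune.splsda(X, Y, ncomp = 2, validation = "Mfold", folds = 5,
                    nrepeat = 50, dist = "max.dist",
                    test.keepX = c(5, 10, 25, 50))
fit  <- splsda(X, Y, ncomp = 2, keepX = tune$choice.keepX)
## Ranked gene contributions to stage discrimination on component 1
head(selectVar(fit, comp = 1)$value)
```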
Data Availability Statement:
No data were created during the present study. The results here are in whole or part based upon data generated by the TCGA Research Network: https://www.cancer.gov/tcga (accessed on 9 November 2020). | 8,055.6 | 2021-12-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Early prediction of hemodynamic interventions in the intensive care unit using machine learning
Background Timely recognition of hemodynamic instability in critically ill patients enables increased vigilance and early treatment opportunities. We developed the Hemodynamic Stability Index (HSI), which promotes situational awareness of possible hemodynamic instability occurring at the bedside and prompts assessment for potential hemodynamic interventions. Methods We used an ensemble of decision trees to obtain a real-time risk score that predicts the initiation of hemodynamic interventions an hour into the future. We developed the model using the eICU Research Institute (eRI) database, based on adult ICU admissions from 2012 to 2016. A total of 208,375 ICU stays met the inclusion criteria, with 32,896 patients (prevalence = 18%) experiencing at least one instability event in which they received one of the interventions during their stay. Predictors included vital signs, laboratory measurements, and ventilation settings. Results HSI showed significantly better performance than single parameters such as systolic blood pressure and shock index (heart rate/systolic blood pressure) and generalized well across patient subgroups. HSI achieved an AUC of 0.82 and predicted 52% of all hemodynamic interventions with a lead time of 1 h at a specificity of 92%. In addition to predicting future hemodynamic interventions, our model provides confidence intervals and a ranked list of the clinical features that contribute to each prediction. Importantly, HSI can use a sparse set of physiologic variables and abstains from making a prediction when confidence is below an acceptable threshold. Conclusions The HSI algorithm provides a single score that summarizes hemodynamic status in real time using multiple physiologic parameters from patient monitors and electronic medical records (EMR). Importantly, HSI is designed for real-world deployment, demonstrating generalizability, strong performance under different data availability conditions, and model explanation in the form of feature importance and prediction confidence. Supplementary Information The online version contains supplementary material available at 10.1186/s13054-021-03808-x.
Introduction
Fluid resuscitation and vasoactive therapy are essential in the management of hypotensive patients to support organ perfusion [1][2][3]. Current guidelines from the 2016 Surviving Sepsis Campaign (SSC) recommend early initiation of vasopressors targeting a mean arterial pressure ≥ 65 mmHg [4]. According to the guidelines, the need for initiation of vasopressor therapy should be assessed if there is ongoing hemodynamic instability despite fluid resuscitation. Although the SSC guidelines are not precise about the appropriate time to initiate vasopressors, recent studies have demonstrated that delayed initiation of vasopressors is associated with higher mortality, fewer vasopressor-free days, and a longer time to achieve the target mean arterial pressure [5,6].
Clinical decision support systems that are designed to continuously monitor and identify patients at a high risk of developing hemodynamic instability have the potential to improve the timely recognition of the need for immediate pressure support [7,8]. Early initiation of hemodynamic interventions based on these systems can potentially help avoid complications from organ hypoperfusion and reduce mortality. Commonly used single parameter measurements including blood pressure and heart rate are easily acquired at the bedside and can be used as a risk stratification tool for detecting changes in hemodynamic parameters [9]. However, single parameter monitoring does not fully describe the entire patient state and can potentially lead to misinterpretation and underestimation of instability. Multi-parameter scoring systems using machine learning to quantify associations between physiologic variables and adverse events have been proposed as a way to accurately stratify ICU patients.
Hemodynamic interventions, including the initiation of vasopressors or inotropes, fluid administration, and blood transfusions, are markers of significant hemodynamic instability in ICU patients. In this study, we aimed to (1) develop a multiparameter risk score that stratifies patients with a high probability of receiving a hemodynamic intervention; (2) identify the important physiological parameters that contribute to the risk and quantify the confidence of the model predictions; and (3) evaluate model performance on subgroups of ICU patients and on an independent validation cohort.
Methods
We developed a machine learning model using retrospective data from ICU patients to predict the onset of hemodynamic interventions one hour into the future. The eICU Research Institute (eRI) database was used for training and validation (Pollard et al.). The full dataset comprises 3.3 million patient encounters from 364 ICUs across the USA. To ensure that charting of hemodynamic intervention data was accurate in the training and validation cohorts, we restricted our analysis to patients admitted between 2012 and 2016 to hospitals with reliable infusion and ventilation charting data. Hospitals were considered reliable if they had charted ≥ 7 infusion drug entries per patient per day, included patients with ≥ 0.75 ventilation and airway records per patient per day in the patient care plan, and had ≥ 10 entries per patient per day in the respiratory charting tables of the eRI database. We further limited our cohort to adult patients ≥ 18 years old who did not have a do-not-resuscitate (DNR) indication in the ICU. This filtering step reduced the initial dataset to 292,856 patient encounters from 54 ICUs (Fig. 1).
ICU patients were classified into stable and unstable groups. Stable patients did not receive any of the hemodynamic interventions in Table 1. Unstable patients received at least one of the interventions in Table 1 during the ICU stay, including the initiation of pressors or inotropes, administration of a significant dose of fluids in a short time period, or packed red blood cell (PRBC) transfusions. An intervention segment started when any of the intervention criteria was satisfied [10][11][12] and continued until there was a gap of more than 12 h between consecutive pressor or inotrope administrations, fluid administrations, or PRBC transfusions. The last set of physiological variables observed 1 h before an intervention was used as a positive class sample, and a random time from a stable patient was selected as the negative class sample for model training. We did not include any samples from the first 6 h of the ICU stay in either the hemodynamically stable or unstable groups during training. A stratified subsample of 20% of the eRI data was held out and reserved for model evaluation, while the remaining 80% was used to train the model. Samples were stratified so that a patient appears in only one of the train or test sets, but not both. Additionally, we validated the model trained on eRI patients on an external dataset from an independent source, namely the MIMIC III database [13]. We extracted the stable and unstable samples from MIMIC III following the same process described above; however, the outcome label included only pressor or inotrope administration.
Clinical observations
We selected 33 variables that are routinely acquired in the ICU, including vital signs, laboratory measurements, blood gas measurements, and ventilation settings (Additional file 1: Figure S3). Variables were forward filled for up to 2 h for heart rate and systolic blood pressure and for up to 26 h for laboratory measurements and ventilator settings. Invasive and noninvasive blood pressures were combined into a single variable, with invasive blood pressure prioritized over noninvasive measurements when both were available. We required that at least a heart rate and a systolic blood pressure be available for the calculation of a risk score during training and evaluation. If a variable was missing because it was not measured or its forward-filled value had expired, the value was imputed using the training-data population mean of ICU patients for all features except the three ventilation parameters: fraction of inspired oxygen (FiO2), mean airway pressure (MAWP), and positive inspiratory pressure (PIP). FiO2 was imputed to the room-air oxygen level of 0.21, while MAWP and PIP were left as missing to avoid imputing ventilation settings for patients who were not mechanically ventilated.
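The forward-fill-with-expiry rule can be sketched as below; `ffill_expire` is a hypothetical helper written for illustration, not the deployed pipeline code.

```r
## Illustrative sketch: carry a measurement forward until a time window
## expires, after which the population mean is imputed instead.
ffill_expire <- function(x, times, max_gap, pop_mean) {
  last_val <- NA; last_t <- -Inf
  out <- numeric(length(x))
  for (i in seq_along(x)) {
    if (!is.na(x[i])) { last_val <- x[i]; last_t <- times[i] }
    out[i] <- if (!is.na(last_val) && times[i] - last_t <= max_gap) last_val
              else pop_mean
  }
  out
}
## Hourly systolic blood pressure with a 2-h carry-forward window (toy data)
sbp <- c(120, NA, NA, NA, 95, NA)
hrs <- 0:5
ffill_expire(sbp, hrs, max_gap = 2, pop_mean = 118)
## 120 120 120 118  95  95  -> the stale value at t = 3 falls back to the mean
```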
Supervised learning of hemodynamic interventions
We trained an Abstain-Boost model [11], an ensemble of univariate classifiers composed of decision trees of depth one that predict future hemodynamic status (stable or unstable) from individual patient measurements. Each of the 33 classifiers (one for each physiologic variable) outputs a real value, with more positive values indicating a greater risk of hemodynamic interventions. Variable-wise risks are summed and sigmoid-transformed to obtain the final probability of a hemodynamic intervention. The model was trained with 200 rounds of boosting with the learning rate set to 0.1. The predicted probabilities are calibrated using Platt scaling [14] after model training to match the empirical instability rate observed in the data. We define the Hemodynamic Stability Index such that a higher value indicates a lower risk of hemodynamic interventions (i.e., greater stability). We include the TRIPOD Checklist to report the model development and validation steps.
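To make the scoring scheme concrete, here is a minimal base R sketch of the aggregation and calibration steps (summing univariate risks, applying a sigmoid, then Platt scaling via a logistic fit); all values are toy data, and the two stump risks stand in for the 33 per-variable classifiers.

```r
## Minimal sketch of the HSI scoring scheme on simulated values.
set.seed(1)
n <- 200
risk_hr  <- rnorm(n, 0, 0.8)        # univariate risk from the heart-rate stump
risk_sbp <- rnorm(n, 0, 0.8)        # univariate risk from the systolic-BP stump
raw      <- risk_hr + risk_sbp      # variable-wise risks are summed
p_raw    <- 1 / (1 + exp(-raw))     # sigmoid -> uncalibrated probability
y        <- rbinom(n, 1, p_raw)     # observed intervention labels (toy)
## Platt scaling: logistic regression of labels on the raw score
platt <- glm(y ~ raw, family = binomial)
p_cal <- predict(platt, type = "response")
## HSI convention: higher value = more stable (lower intervention risk)
hsi <- 1 - p_cal
```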
We also calculate confidence intervals to quantify uncertainty in model predictions. Figure 2 shows the HSI score along with confidence intervals for an illustrative patient case. Uncertainty in model predictions can be decomposed into model uncertainty, which is the level of uncertainty derived from model underspecification (e.g., if the algorithm does not capture nonlinear relationships), and feature uncertainty, which is driven by noisy measurements and missing variables. We quantify these sources of uncertainty to calculate confidence intervals (see Additional file 1 for details). The model can abstain from making predictions based on the degree of overlap between the confidence interval and a critical threshold of the HSI risk score where patients transition from stable to unstable. A high degree of overlap between the confidence interval and the critical threshold indicates greater uncertainty about whether the patient needs an intervention, and thus the model can abstain from making a prediction. See Additional file 1 for technical details and experiments on abstention.
Table 1. Criteria used to define hemodynamic instability. A segment was labeled "intervention" under any of the following conditions:
- Administration of any quantity of any of the following inotropic and vasopressor medications: dobutamine, dopamine, epinephrine, norepinephrine, phenylephrine, vasopressin.
- Administration of fluid therapy (colloid or crystalloid) in any of the following dosages: 2400 cc in 8 h; 3000 cc in 12 h; 700 cc in 1 h; 1500 cc total in 4 h; 500 cc twice in 4 h.
- Administration of packed red blood cells (PRBCs) in any of the following dosages: 800 cc PRBC over the course of 24 h; 500 cc in 2 h followed by fluid therapy within 12 h; 500 cc PRBC not followed by fluid therapy within the following 24 h (fluid therapy as defined in the fluid therapy entry above).
Note: The fluid trigger criteria were derived from clinical consensus of a panel of clinical experts in fluid and hemodynamic management. Some are multiples of standard dosing regimens (10 cc/kg, 20 cc/kg) or multiples of the size of bags of solution used for fluid resuscitation (500 cc or 1 L). The starting bolus for an adult is 500 cc or 10 cc/kg; for significant hypovolemia, this might be 1400 cc (20 cc/kg) or 1 L (the size of a 1-L bag of solution). The fluid triggers represent what was considered a significant intervention in response to hypovolemia. Additional details describing the rationale for each fluid trigger can be found in Additional file 1.
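As an illustration of how one of the Table 1 fluid triggers could be evaluated on charted data, the sketch below flags any time point at which cumulative volume within a rolling window crosses a threshold; `fluid_trigger` is a hypothetical helper and the bolus data are invented.

```r
## Illustrative rolling-window check for a fluid trigger (e.g., 700 cc in 1 h).
fluid_trigger <- function(vol_cc, t_hr, window_hr, threshold_cc) {
  sapply(seq_along(t_hr), function(i) {
    in_win <- t_hr > (t_hr[i] - window_hr) & t_hr <= t_hr[i]
    sum(vol_cc[in_win]) >= threshold_cc
  })
}
vol   <- c(250, 300, 200, 500)   # charted boluses (cc)
t_obs <- c(0.0, 0.4, 0.9, 3.0)   # administration times (h)
fluid_trigger(vol, t_obs, window_hr = 1, threshold_cc = 700)
## FALSE FALSE TRUE FALSE: 750 cc accrued within the hour ending at t = 0.9
```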
Clinical risk prediction models are susceptible to learning patterns of clinical actions rather than just the patient physiology [15]. During model training, we attempted to remove the bias from clinicians' actions by (1) merging invasive and noninvasive blood pressure to remove the influence of the invasive measurement; the presence of invasive measurements indicates higher clinical concern, and the model would otherwise learn to assign higher risk simply based on the presence of the invasive variable; (2) imputing missing values with the population mean so that the model does not learn from missingness patterns; and (3) excluding missing-variable indicators. We experimented with adding missing-variable indicators to the model, and doing so improved model performance; however, we decided to exclude them so the model learns purely from the physiology and not from patterns of clinical practice.
Evaluation
We report model performance using the area under the receiver operating characteristic curve (AUC); sensitivity (Se), also known as recall, which is the model's capacity to predict the hemodynamic interventions received by patients; specificity (Sp), which quantifies how often the model avoids falsely predicting a hemodynamic intervention for patients who did not receive one; and the positive predictive value (PPV), also known as precision, which is the fraction of all positive predictions that truly resulted in an intervention. Performance metrics are reported at the breakeven point (BE), where precision equals recall, at 90% specificity, and at 95% specificity. The model, trained using all 33 input variables, was evaluated under four distinct operating modes representative of realistic hospital deployment conditions with varying levels of integration of different data sources: (1) a "Basic" mode, where the model has access to a small set of vital signs including heart rate, blood pressures, shock index, and age; (2) a "Basic + Labs" mode, where available laboratory measurements are used by the model in addition to the basic-mode variables; (3) a "Basic + Ventilation" mode, where ventilator settings, when available, are used in addition to the basic-mode variables; and (4) an "All Features" mode, where all available variables are presented to the predictive model. Operating modes were simulated by treating variables not included in the respective operating mode as missing values. We also report model performance on patient subgroups, including ICU stay type (e.g., stepdown, transfer from general ward, readmission), ICU unit type (e.g., Med-Surg, Cardiac), admission source (e.g., Floor, ICU), and ventilation status at the time of prediction.
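The operating-point metrics defined above can be computed as in the following sketch; `metrics_at` is a hypothetical helper, and the simulated risks and labels merely illustrate how the breakeven threshold can be located.

```r
## Sketch of the reported operating-point metrics on simulated data.
metrics_at <- function(p, y, thresh) {
  pred <- as.integer(p >= thresh)
  tp <- sum(pred == 1 & y == 1); fp <- sum(pred == 1 & y == 0)
  tn <- sum(pred == 0 & y == 0); fn <- sum(pred == 0 & y == 1)
  c(Se  = tp / (tp + fn),          # sensitivity / recall
    Sp  = tn / (tn + fp),          # specificity
    PPV = tp / (tp + fp))          # precision
}
set.seed(1)
p <- runif(500); y <- rbinom(500, 1, p)
## Breakeven point: threshold where precision equals recall
th <- seq(0.05, 0.95, by = 0.01)
m  <- sapply(th, function(t) metrics_at(p, y, t))
be <- th[which.min(abs(m["PPV", ] - m["Se", ]))]
metrics_at(p, y, be)
```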
Results
The cohort selection criteria are summarized in Fig. 1. Using the available measurements from all 33 physiologic variables, the HSI model achieved an AUC of 0.82 (Sp = 0.92, PPV = 0.52 at the breakeven point) on the held-out dataset from the eRI database when predicting all outcomes, including pressor or inotrope, fluid, and PRBC administrations (prevalence = 15%), one hour before the event. The AUC improves to 0.88 (Sp = 0.95, PPV = 0.55) when predicting pressor administrations alone (prevalence = 11%) (Table 2). HSI retains high predictive accuracy even up to 12 h prior to the event and significantly outperforms single parameters like shock index and systolic blood pressure in predicting hemodynamic interventions (Fig. 3).
Model performance with missing variables
HSI was able to predict instability accurately under more restricted data conditions where some measurements were not available, as shown in Table 3 (see the "Evaluation" section in "Methods" for a detailed definition of the operating modes). The AUC decreases to 0.72 (PPV = 0.39) when only age, heart rate, blood pressures, and shock index (Basic mode) are available and laboratory measurements and ventilation settings are treated as missing variables, still outperforming blood pressure and shock index. Laboratory measurements are responsible for an 8-percentage-point increase in AUC when comparing the Basic mode to the Basic + Labs mode (AUC from 0.72 to 0.80; PPV from 0.39 to 0.48).
Model performance in patient subgroups
We assessed whether the HSI model generalizes across different patient groups defined by ICU stay type, ICU unit type, admission source, and ventilation status. HSI generalized well across most groups; however, as reported in Additional file 1: Table S1, it performs significantly worse in stepdown units (PPV decreases from 0.529 to 0.146), where the prevalence of hemodynamic interventions was low, and in neurological ICUs. A detailed analysis is given in Additional file 1.
External validation
HSI was externally validated on the MIMIC III database, which is independent of the eRI database used for training. We identified 15,981 ICU stays matching our extraction criteria, following the same procedure used for the eRI cohort.
Feature importance
Global feature importance can be visualized as risk curves, as in Fig. 4. HSI learns that early physiological signs of shock, including elevated heart rate and low blood pressure, increase the risk of a hemodynamic intervention. Lower-than-normal hematocrit levels, indicating an insufficient supply of healthy red blood cells, lead to a higher risk of hemodynamic interventions such as blood transfusions. Figure 2 shows, for an example patient, the univariate risk scores for individual physiologic variables. Univariate risks (which are summed to calculate the total HSI score) are used to identify the top features contributing to the risk and give caregivers context for the prediction as well as cues for how to react to it.
Discussion
The HSI model provides an early warning of hemodynamic instability by detecting the need for significant hemodynamic interventions. The major finding of the current study is that HSI, a novel multi-parameter machine learning model, substantially outperformed traditional metrics such as shock index and systolic blood pressure at predicting the need for hemodynamic interventions.
Although the discrimination accuracy was best when predicting hemodynamic interventions 1 h before the events, HSI was also highly predictive even 12 h before the initiation of hemodynamic interventions. Importantly, HSI learns clinically meaningful and interpretable relationships between physiological variables and the risk of a hemodynamic intervention. HSI generalizes well across most subgroups and in an independent validation cohort. On the external validation dataset, where the outcome label included only pressors, HSI had the same AUC as on our held-out evaluation data from the eRI database. However, performance on patients in the stepdown unit was worse than in other units. This is because there is a significant mismatch between the feature and label distributions of the stepdown units and those of the general ICU patient population. Specifically, we find that the unstable patients who received a hemodynamic intervention in the stepdown units are physiologically more stable, with higher systolic blood pressure and a lower prevalence of ventilation. These factors lead to a lower predicted risk of hemodynamic interventions in the unstable patients of the stepdown units (a higher rate of false negatives), making the separation of unstable from stable patients using HSI more difficult for stepdown patients.
Similarly, although HSI has strong predictive performance in most ICU units, it had a low AUC and PPV for neurological ICU patients. In the unstable group, neurological ICU patients have a significantly lower risk of hemodynamic instability than patients in other care units, and as a result, the model has a lower true positive rate. Unstable patients in neurological ICUs have significantly higher systolic blood pressure, lower heart rate, and higher hematocrit and hemoglobin. Clinically, neurological patients are intentionally made hypertensive using vasopressors, which overlap with those used to define interventions for the model. The administration of vasopressors to neurological ICU patients does not necessarily indicate the onset of hemodynamic instability but rather reflects routine treatment patterns in patients admitted to the neurological ICU.

Our work shares some similarities with prior work on the early detection of adverse hemodynamic events in which pressor administration was used as an outcome label and surrogate marker of hemodynamic instability [8,[16][17][18]. In contrast to prior work, we defined hemodynamic interventions using a broader category of treatments, including significant fluid administration within a short time and blood transfusions with PRBCs, in addition to pressor or inotrope initiation. Hyland et al. (2020), for example, define circulatory failure using thresholds on lactate, mean arterial pressure, and administration of vasopressors or inotropes. Our definition of an adverse hemodynamic event captures a more general case. For instance, our labels include the case where resuscitation leads to a rapid increase in fluid administration over a short period before pressors are initiated. Other technical differentiators between HSI and prior work are that we achieve good predictive performance and generalization using a few commonly measured vital signs and laboratory measurements. In contrast to prior work, our model also provides confidence intervals, can abstain from making predictions when uncertainty is high, and is inherently interpretable because we use an ensemble of decision stumps. This contrasts with Hyland et al. (2020), where the final model uses an ensemble of deep decision trees with four levels of interactions and relies on post hoc explanation methods (Shapley values) to provide global feature importance. The TREWScore is another alternative to HSI, designed to predict the onset of septic shock [19]. The TREWScore was developed on a cohort of sepsis patients, unlike HSI, which is trained on a larger, more heterogeneous patient population that includes patients with septic shock. We hypothesize that the adjunct analyses (operating modes, subgroups) and algorithm enhancements (confidence intervals, abstention, feature importance) we described will support deployment of HSI and similar decision support algorithms in real clinical settings.
HSI has been trained by learning from clinicians' actions such as the administration of vasopressors, inotropes, fluids, and PRBCs. This approach follows the rationale that clinicians' decisions to intervene draw on broad and diverse information about the patient (part of which is not captured, or not captured in a timely manner, in EMR systems), which experienced clinicians make sense of through years of training and practice. By learning from clinicians' actions on thousands of patients, rather than from arbitrary definitions of hemodynamic instability based on physiological or laboratory measurements crossing a fixed, one-size-fits-all threshold, HSI comes one step closer to personalized care. Additionally, HSI uses the result of a laboratory test, rather than the presence (or absence) of a laboratory test, so that it models patient physiology instead of institution-specific care patterns [15].
The present study has several limitations. First, our model is tested on retrospectively collected datasets only. However, a training dataset that captures the practice variations of ICUs across the U.S. gives the algorithm a good chance of being generalizable, and we show high external validity of its predictive performance on an external dataset and on patient subgroups. Second, an advantage of HSI (its use of a limited set of physiologic variables) can also be considered a limitation: we lack advanced hemodynamic measurements such as cardiac output, stroke volume, and stroke volume variation, which would likely add predictive power and make HSI more applicable to assessing fluid responsiveness [7,20]. We also do not include medication information in our model, even though certain physiologic parameters could be conditionally dependent on medications. Future work will focus on prospective validation of HSI in the ICU setting to show that such a system can impact patient outcomes.
Conclusions
We developed an accurate and automated early prediction algorithm to identify ICU patients at risk of developing hemodynamic instability using commonly measured physiological variables. The HSI model demonstrates generalizability across ICU units, patient subpopulations, institutions, and operating modes. Importantly, we developed the algorithm into a decision support tool that provides interpretable feature importance, measures uncertainty in real time, abstains from making predictions under high uncertainty, and gives actionable prompts to take new measurements based on a feature impact score. The analyses and supporting algorithms presented around HSI will be especially critical in real-world deployment scenarios that require good generalizability, handling of varying data availability, and explanation of algorithm output in the form of feature importance and prediction confidence.
"Medicine",
"Computer Science"
] |
Integrated metasurfaces for re-envisioning a near-future disruptive optical platform
Metasurfaces have continuously garnered attention in both scientific and industrial fields, owing to their unprecedented wavefront manipulation capabilities using arranged subwavelength artificial structures. To date, research has mainly focused on the full control of electromagnetic characteristics, including polarization, phase, amplitude, and even frequency. Consequently, versatile possibilities for electromagnetic wave control have been achieved, yielding practical optical components such as metalenses, beam steerers, metaholograms, and sensors. Current research now focuses on integrating these metasurfaces with other standard optical components (e.g., light-emitting diodes, charge-coupled devices, micro-electro-mechanical systems, liquid crystals, heaters, refractive optical elements, planar waveguides, and optical fibers) for commercialization, in line with the miniaturization trend of optical devices. Herein, this review describes and classifies metasurface-integrated optical components and subsequently discusses their promising applications in metasurface-integrated optical platforms, including augmented/virtual reality, light detection and ranging, and sensors. In conclusion, this review presents several challenges and prospects prevalent in the field in order to accelerate the commercialization of metasurface-integrated optical platforms.
Introduction
Metasurfaces, two-dimensional (2D) arrays of subwavelength artificial structures (also referred to as meta-atoms), have emerged as alternatives to conventional refractive optical elements (ROEs) and diffractive optical elements (DOEs). They enable compact form factors with arbitrary manipulation of outgoing light 1-4. For example, metasurfaces provide aberration correction 5 and diffraction-limited resolution 6 for high-end imaging applications 7. In addition, polarization-selective focal points 8 and edge detection 9,10 have been demonstrated by spatially engineering the polarization profiles of light output from singlet metalenses. Similarly, various unprecedented optical phenomena have been demonstrated with metasurfaces that encode video by exploiting numerous orbital angular momentum (OAM) modes 11, focus with near-unity efficiency at high numerical aperture (NA) 12,13, and create arbitrary polarization states in three-dimensional (3D) space 14. These sophisticated optical responses are enabled by well-designed artificial structures 15 and the development of precise nanofabrication methods 16-18, opening new degrees of freedom for constructing high-end optical devices with compact form factors.
Recently, metasurface research has approached commercialization through the integration of metasurfaces with traditional components such as light-emitting diodes (LEDs) 19, organic LEDs 20, vertical-cavity surface-emitting lasers (VCSELs) 21, charge-coupled devices (CCDs) 22, microelectromechanical systems (MEMS) 23, liquid crystals (LCs) 24-26, waveguides 27, optical fibers 28, and even conventional ROEs 29. By integrating metasurfaces with these optical components, the performance of receivers and emitters has been improved through better receiving/emitting efficiencies. Tunable components offer an effective route to reconfigurable electromagnetic wave manipulation. In addition, wavefronts and dispersion have been precisely manipulated by engineering the optical surfaces of in-couplers, out-couplers, and ROEs. These efforts confirm that metasurfaces can be inserted into current devices through integration with other standard optical components, and they indicate several practical routes to constructing application-ready structures with metasurfaces.
To boost the applications of metasurface-integrated optical systems, a review of integrated metasurfaces is required to guide promising options for high-end optical devices. Although many reviews have carefully organized recent advances in fundamentals 30-33, multifunctionality 34-36, design approaches 37-41, fabrication 42, and applications 42-47, reports on the overall concept of integrated metasurfaces for near-future photonic devices are scarce. Certain reviews have focused on emitter- 48-50, LC- 51,52, MEMS- 51,52, waveguide- 53, and optical fiber-integrated metasurfaces 28; however, they focus only on the functionality and performance of metasurfaces and do not comment on the practical usage of recent metasurface-integrated optical platforms. From this perspective, this review presents a careful selection of integrated metasurfaces that are expected to be employed in near-future optical devices.
Here, we introduce integrated metasurfaces that hybridize with other standard optical components such as emitters, receivers, MEMS, LCs, heaters, ROEs, planar waveguides, and optical fibers (Fig. 1). In addition, this review covers several metasurface components used in practical photonic systems. In the second half of the review, we discuss the recent development of metasurface-integrated photonic platforms including virtual/augmented reality (VR/AR) 54, light detection and ranging (LiDAR), and photonic sensors. In conclusion, we summarize the review and introduce the main challenges that should be overcome to re-envision near-future optical devices.
Integration with emitters and receivers
Light emitters and receivers are key components for constructing photonic devices ranging from smartphone cameras to LiDAR. Over the past years, metasurfaces have improved the performance of emitters and receivers in terms of efficiency and resolution. Enhanced efficiency reduces the energy consumption of the whole optical system, enabling lighter devices with smaller batteries, while increased resolution provides immersive displays with high-quality visualization. Furthermore, non-classical light sources such as single-photon and nonlinear emitters have been integrated with metasurfaces, increasing the ability to manipulate single photons and nonlinear light. Metasurface-integrated non-classical light sources provide on-demand light emitters with potential uses in next-generation optical systems, including quantum and neuromorphic computing 55-57. In addition, metasurface-integrated optoelectronic devices have recently been investigated, enabling flexible photodetectors 58 and environmentally friendly energy harvesting 59. Here, this review introduces metasurface-integrated emitters and receivers, along with their performance and potential applications.
Metasurface-integrated light emitters
Light emitters, also known as light sources, are essential optical components for the construction of display systems. In general, compact form factors are in high demand with the development of wearable devices; however, conventional systems remain bulky because several ROEs and DOEs are required to create the desired wavefront. Furthermore, the use of several optical components significantly decreases the efficiency of the emitters, which conflicts with sustainability goals. Recently, metasurface research has been dedicated to creating desired emission profiles with VCSELs and to improving light extraction efficiencies from LEDs. These efforts open a valuable route toward compact display systems with highly immersive images. Metasurface-integrated non-classical light sources (e.g., second harmonic generation (SHG) and single-quantum emitters) have also been proposed to enhance the capability of light sources. The various metasurface-integrated light emitters are described below.
One of the challenges of conventional LEDs is low light extraction efficiency due to total internal reflection (TIR) at the encapsulation layers, whose critical angle is 30°. To suppress TIR, various nanostructures have been applied to the encapsulation layers, including random wrinkles 60 and photonic crystals 61,62. However, early metasurface designs required complex fabrication processes (e.g., multiple depositions 63,64, post-annealing 65,66, and photolithography 67,68) that incurred high production costs for emitters. More recently, cost-effective index-matching layers with disordered Ag nanoparticles have been proposed for commercial GaN LEDs (Fig. 2a) 19. The Ag nanoparticles can be manufactured with a single-step fabrication method, gas-phase cluster beam deposition. The disorder and density of the Ag nanoparticles have been optimized by adjusting fabrication parameters, improving light extraction efficiency by a factor of 1.65 by extracting photons at incident angles beyond 60°.
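A back-of-the-envelope calculation (standard escape-cone geometry, not taken from the cited paper) shows why a 30° critical angle caps single-pass extraction and what a 1.65x enhancement roughly corresponds to:

```python
import numpy as np

# Single-pass LED light extraction limited by TIR: for an isotropic emitter,
# the fraction of photons inside the escape cone of half-angle theta_c
# (per face) is (1 - cos(theta_c)) / 2. This estimate ignores Fresnel
# losses and photon recycling.
theta_c = np.radians(30.0)                  # critical angle quoted in the text
escape_fraction = (1.0 - np.cos(theta_c)) / 2.0
print(f"single-pass escape fraction: {escape_fraction:.1%}")        # ~6.7%

# The reported 1.65x improvement from the Ag nanoparticle layer would
# roughly raise this toward:
print(f"with 1.65x enhancement: {1.65 * escape_fraction:.1%}")      # ~11.1%
```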
The efficiency of SHG emitters has also been improved by combining Ag metasurfaces with TiN/Al2O3 epitaxial multilayers coupled with multi-quantum wells (MQWs) (Fig. 2b) 69. The epitaxial multilayers have a TiN/Al2O3 thickness of 1.0/2.2 nm, producing SHG at 450 nm. The Ag metasurfaces have two plasmonic resonances, corresponding to the fundamental (920 nm) and SHG (460 nm) wavelengths. At the resonance wavelength, the Ag metasurfaces generate the desired z-polarized light, increasing the energy transfer ratio of the incident light onto the coupled MQWs. These Ag metasurface-integrated SHG emitters record conversion efficiencies (~10^-4) several orders of magnitude higher than classical SHG emitters under an incident pulse intensity of 10 GW cm^-2 70. A similar approach has been continuously investigated with various shapes of artificial materials, such as plasmonic nanocrosses 71, T-shaped resonators 72, split-ring resonators 73, and high-index dielectric gratings 74. Since metasurfaces can be fabricated on flat surfaces, efficiency-improving metasurfaces may be further applied to other types of SHG emitters based on 2D materials 48,75. Meta-mirrors have also been implanted in organic LEDs, offering ultrahigh pixel densities (>10,000 pixels per inch) with twice the luminescence efficiency (Fig. 2c) 20. Commercial organic LEDs comprise two mirrors forming a Fabry-Pérot (F-P) cavity with an emissive layer located between them. Nanostructures implanted on the backplane mirror, named meta-mirrors, alter the reflected phase depending on their dimensions 20,76,77. By optimizing the reflected phases, the cavities are designed to have individual RGB resonances at which the luminance efficiency is maximized while keeping the same cavity thickness. Compared to conventional organic LEDs, which require different optical F-P cavity thicknesses depending on the target color, the meta-mirror-integrated organic LED with a physically constant thickness facilitates ultrahigh pixel densities and allows a scalable, low-cost nanoimprinting method. With these advantages, it has the potential to be used for commercial VR/AR displays that require ultrahigh-resolution pixels at low production cost.
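To illustrate the cavity argument, the sketch below applies the standard Fabry-Pérot round-trip phase condition to show how a per-pixel meta-mirror reflection phase can retune the resonance while the physical thickness stays fixed; the index, thickness, and top-mirror phase are assumed values, not those of the cited device.

```python
import numpy as np

# Why a meta-mirror lets one physical cavity thickness resonate at R, G, B:
# the F-P resonance condition is 4*pi*n*L/lambda + phi_meta + phi_top = 2*pi*m,
# so tuning the meta-mirror reflection phase phi_meta per pixel shifts the
# resonance without changing L. All numbers are illustrative assumptions.
n, L = 1.8, 150e-9          # assumed cavity index and physical thickness
phi_top = np.pi             # assumed reflection phase of the top mirror

for lam, m in ((633e-9, 1), (532e-9, 1), (488e-9, 2)):
    phi_meta = 2 * np.pi * m - 4 * np.pi * n * L / lam - phi_top
    print(f"lambda = {lam*1e9:.0f} nm: "
          f"phi_meta = {np.mod(phi_meta, 2*np.pi):.2f} rad")
```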
Wavefront manipulation with illuminants has been achieved by integrating metasurfaces with emitters. Despite ever-increasing demand for precise beam shaping from sources, wavefront manipulation of emitted light had not been achieved with conventional means (e.g., optical waveguides 78 and surface relief structures 79). This problem has been circumvented by adopting metasurfaces on VCSELs 80, collimating light to a divergence angle of 0.83° (Fig. 2d) 21. Furthermore, metasurface-integrated VCSELs have been used to construct various wavefronts, including Bessel beam generation 21, vectorial holography reconstruction 81, and beam steering 21,82. Because both VCSELs and metasurfaces are compatible with the same CMOS process, they may be easily introduced into commercial wafer-scale manufacturing.
Unidirectional luminescence from incoherent LEDs has been demonstrated by implanting InGaN/GaN quantum wells into patterned structures (Fig. 2e) 83. Unidirectional light emission from LEDs is in high demand because the Lambertian emission of conventional LEDs induces light losses in paraxially approximated optical systems. Unidirectional luminescence is therefore essential for increasing the efficiency of complete optical systems designed with commercial ROEs. Here, unidirectional luminescence has been achieved with InGaN/GaN quantum-well patterns showing 100-fold external quantum efficiency compared to photonic crystal layers. Moreover, the direction of the emitted light can be controlled by varying the radius of the patterned cylinders, achieving a steering angle of 80°.
The propagation direction of nonlinear light has also been controlled using MQW-based metasurfaces (Fig. 2f) 84. Conventionally, heterostructures comprising stacked subwavelength layers can generate nonlinear optical responses in a compact form factor; however, tunable nonlinear optical responses with high efficiency have rarely been reported. To overcome this limitation, MQW-based metasurfaces have been proposed in which patterns are engraved on MQWs composed of In0.53Ga0.47As and Al0.48In0.52As layers 84 and two metallic layers are implanted to apply bias voltages. When the applied voltages are adjusted, the intensity and phase of the emitted SHG light are manipulated at a wavelength of 10 µm, thus providing free-space propagation tuning. This method can be further applied to optical encryption 85,86 and nonlinear switching systems 87 via nonlinear wavefront shaping.
Metasurface-integrated single-photon emitter
Single-photon control has gained attention in the field of quantum communication, owing to its high speed and large information transfer capability compared to classical computing. Many computational methods have been developed using various schemes of optical quantum computation 56,88,89. However, a major remaining challenge is implementing these computational technologies in physical devices. Although photonic machines offering quantum manipulation have recently been proposed 89,90, the characteristics of photons (e.g., spin angular momentum, OAM, optical path, and frequency) have not yet been fully controlled. Recently, single-photon characteristics have been controlled by integrating metasurfaces with single-photon emitters, and exotic optical responses have been experimentally achieved using β-barium borate (BBO) crystals, quantum dots, 2D materials, and nitrogen-vacancy (NV) centered diamonds.
Emitting multiple entangled photons, also known as multiphoton state generation, is required to realize quantum computation systems 91; however, current spontaneous photon emission is limited in the number of photons it can deliver 92. Through the integration of metalens arrays on BBO crystals, spontaneous photon emitters have been demonstrated (Fig. 3a) 93. The metalens arrays focus the incident pump laser (λ = 415 nm) inside the BBO crystals, triggering spontaneous parametric down-conversion, which converts one high-energy photon into two lower-energy photons. With metalens-integrated spontaneous photon emitters, four- and six-photon generation can be achieved, and the photons emitted from different metalenses are indistinguishable. Considering that these spontaneous photon emitters have a more compact form factor than conventional multiphoton emitters, they may be useful for miniaturizing multiphoton-based quantum computing systems.
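As a quick consistency check (standard energy conservation for spontaneous parametric down-conversion, not stated explicitly in the review), the degenerate down-converted wavelength for a 415 nm pump follows from:

```latex
\frac{1}{\lambda_{\text{pump}}} = \frac{1}{\lambda_{\text{signal}}} + \frac{1}{\lambda_{\text{idler}}},
\qquad
\lambda_{\text{signal}} = \lambda_{\text{idler}} = 2\,\lambda_{\text{pump}} = 830\ \text{nm}.
```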
Metalenses have also been integrated with diamond NV centers, which are promising single-photon emitters 94,95. Consequently, three challenges of conventional emitters are circumvented: (1) limited collection efficiency of photoluminescence (PL), (2) TIR (θc ≈ 25°) from the host diamond, and (3) lack of polarization control. Improved PL collection efficiency and suppression of TIR have been achieved by patterning high-NA metalenses onto diamond surfaces, which collimate the photons emitted from an individual NV center located ~20 µm beneath the surface 94. Furthermore, polarization splitting with a diamond NV center has been demonstrated with patterned hydrogen silsesquioxane (HSQ) metasurfaces on Ag mirror substrates (Fig. 3b) 95. The HSQ metasurfaces comprise circular nanoridges with azimuthally varied widths, which enable well-defined chirality and high directionality 95. Using these circular nanoridges, right- and left-circularly polarized single photons have been experimentally controlled.
Various single-quantum emitters based on 2D materials have been integrated with metasurfaces for Purcell enhancement. Deformed 2D materials, which are well-known single-photon emitters, have been fabricated using metasurfaces as substrates. For example, precise and accurate positioning of quantum emitters has been demonstrated by depositing 2D materials onto nanopatterned substrates. When 2D materials (e.g., WSe2 and WS2) are placed on periodically arranged nanopillars, they emit light where they are distorted, but not pierced, by the nanopatterns (Fig. 3c) 96. This proved that metasurfaces can be used for scalable quantum emitters by deforming 2D materials, and the method has been further applied to other materials such as single-layered hBN 97,101 and InSe 102. Another 2D material, hexagonal boron nitride (hBN), has been integrated with a plasmonic nanocavity array, leading to an enhanced emission rate and reduced fluorescence lifetime (Fig. 3d) 103. Plasmonic nanocavities support lattice plasmon resonances, which generate a strong localized field around the plasmonic structures. The plasmonic nanocavities exhibit a resonance at a wavelength of 641 nm, matching the phonon sideband of hBN. By coupling the plasmonic resonance with this band, plasmonic nanocavity-integrated emitters achieve lifetime reduction with PL enhancement.
Metasurface-integrated receiver
Receivers, also known as detectors, are optical components that convert light energy into electrical signals, thereby enabling the capture and detection of light information 104. Recently, metasurfaces have been integrated with receivers to increase detection efficiency, sort input light, and widen the field of view (FoV).
The efficiency of CCDs has been improved by integrating metalenses that focus incident light onto the photosensitive areas 22,105. In single-wavelength CCDs, metalenses have been used to focus incident unpolarized UV light onto photosensitive areas (Fig. 4a) 22. When the focal point is located at the center of the photosensitive area, the detection performance of the device improves by 9.9%. Similarly, multiwavelength CCDs have been integrated with dispersion-engineered metasurfaces using interleaved GaN 106, binary-type Si3N4 107, and tall Si3N4 structures 108. As a specific example, binary-type Si3N4 achieves efficiencies of 58%, 59%, and 49% for red, green, and blue light, respectively, twice the values of commercial Bayer color filters, which discard incident light whose wavelength does not match the photosensitive area (Fig. 4b) 107. These metasurface-integrated CCDs open new possibilities for information acquisition systems 109.
Metasurfaces have been integrated with optoelectronic devices to improve photoelectric conversion, enabling efficient energy harvesting and information science 110,111. For example, optoelectronic hybrid organic-inorganic perovskite (HOIP) films have been integrated with metasurfaces, enhancing their optoelectric conversion over a broadband operating region from the ultraviolet to the visible (Fig. 4c) 110. Metasurfaces are directly patterned on the HOIP films, and the high refractive index of the structured HOIPs provides Mie scattering with strong light confinement. Compared to planar HOIP films, the metasurface-integrated HOIP films exhibit 10 times higher photocurrent at a bias of 1 V. Similarly, color-sensitive photodetectors have been proposed with silicon-aluminum hybridized metasurfaces 112. These metasurfaces generate electron-hole pairs with high color selectivity, enabling submicron photodetectors.
Metasurfaces have added wavefront-sorting functionality to CCDs. OAM light has been sorted by the number of topological charges using metasurface-integrated CCDs 113,114. Doublet TiO2 metasurfaces have been designed for metasurface-integrated CCD OAM sorters, which sort incident OAM light by topological charge from −3 to 3 (Fig. 4d) 113. The first metasurface transforms the donut-shaped OAM light into straight lines in the Fourier domain, and the second metasurface, located on the Fourier plane, fans out and focuses the light onto the CCD detectors. Although the OAM sorting concept had already been demonstrated with spatial light modulators 115, the doublet TiO2 metasurfaces are notable in that they suppress OAM sorting crosstalk using submicron meta-atom sizes and provide compact OAM sorting devices by reducing the distance between the Fourier plane and the optical components.
Similarly, complex wavefronts (e.g., handwritten digits and letters) have been distinguished by integrating commercial CMOS sensors with serially composed metasurfaces (Fig. 4e) 116. Polarization-multiplexed metasurfaces have been employed to recognize complex wavefronts, with the optical responses of the metasurfaces designed using conventional electronic neural networks. In experiments, triple-layered TiO2 metasurfaces recognize input images, and the output light from the metasurfaces is clustered at desired spots. The metasurface-integrated CMOS sensors distinguished eight different images with high accuracy. This method can potentially be applied to computer vision processing and image recognition in automotive cameras.
The CCD detection angle has also been steadily improved through the integration of metasurfaces. Compared with single-ROE-integrated CCDs, singlet metasurfaces can construct a compact wide-angle detection system with a single focal distance regardless of the incident angle 117,118. However, early metasurface research suffered from a trade-off between resolution and detection angle. This problem has been circumvented by using metasurfaces composed of multiple apertures whose opening angles are oriented toward specific directions (Fig. 4f) 119. Depending on the incident angle, the multiple apertures shift the spot position on the CCD, and the multiple-aperture-integrated CCD recognizes the position of the target by analyzing the spot position. The improved detection angles facilitate various applications in LiDAR and time-of-flight (ToF) cameras 109.
Integration with electrically tunable elements
Although metasurfaces have received considerable attention owing to their potential to replace conventional bulky optics, achieving tunable optical responses has been challenging due to the static geometries of nanostructures, impeding versatile applications 35,52,120-122. To circumvent these limitations, tunable metasurfaces have been extensively studied using a wide range of materials and systems with electrical 123-127, optical 128-130, and thermal 131 tuning mechanisms. Among these, the electrical tuning mechanism has been most actively investigated because it enables fast response times and high compatibility with conventional controllers. Many approaches have been developed to integrate metasurfaces with electrically tunable components, including MEMS, LCs, and heaters.
MEMS-integrated metasurfaces
MEMS provide straightforward geometrical reconfigurability at the microscale and have been applied to changing the geometries of metasurfaces. Owing to their high compatibility with mature CMOS technology, MEMS-integrated metasurfaces have garnered significant attention in industry, where demand for multifunctional devices keeps increasing. In this section, we discuss MEMS-integrated metasurfaces, including MEMS-actuated metalenses 132-135, on-chip beam steering devices 23,136-138, and tunable structural-color pixels 139,140.
Early MEMS-integrated metalenses were developed by exploiting mechanical deformation of the substrates, varying the distance between adjacent metalenses 133 or nanostructures 141. For example, MEMS-integrated doublet metasurfaces have been proposed for electrically reconfigurable focal distances (Fig. 5a) 133. The doublet metalenses are composed of converging and diverging metalenses, and they achieve continuously tunable focal lengths from 635 to 717 µm at visible wavelengths by changing the distance between the two metalenses. Furthermore, astigmatism and shift correction have been accomplished with a MEMS-integrated metalens (Fig. 5b) 134. This device comprises a centimeter-scale dielectric elastomer metalens with five reconfigurable voltage-control electrodes for precise misalignment correction. Similarly, many MEMS-integrated metalenses have been demonstrated with various functions, such as a focal length change of 68 μm in the infrared range (1550 nm) 135. By manipulating the resonance wavelength and radiation phase profiles, arbitrary wavefront shaping can be formed, enabling a dynamic beam deflector 137. In addition, optical metasurfaces have been monolithically integrated with a piezoelectric MEMS mirror for dynamic beam steering (Fig. 5c) 23. This platform enables polarization-independent reflection angle control (0°, 7.7°, and 15.5° in air, corresponding to the first, second, and third diffraction orders, respectively) with a high operating speed (0.4 ms) and over 50% efficiency. Additionally, MEMS-integrated metasurfaces with Si-air-Si gap-controlled structures have been deployed for dynamic beam steering, providing a high tuning speed (>10^5 Hz) with full phase coverage (0-2π) 138; they cover steering angles in the range of 2°-12° at very low voltage (~3.2 V). MEMS-integrated color metasurfaces have implemented various tunable structural colorations, achieving low energy consumption with ultrahigh-density resolution and vivid color. One design changes the insulator thickness of MIM structures (Fig. 5d) 138: the metasurface has two Si layers whose separation can be manipulated at low voltage (~2.75 V), and by changing this distance the reflectance spectra change, exhibiting dynamic reflective color. Most recently, transmissive MEMS-integrated color metasurfaces have been designed with an electrically controllable cantilever that functions as a controller of the light passing through plasmonic nanohole arrays (Fig. 5e) 140. An ultrahigh modulation speed (~800 Hz) with full-color coverage is achieved using this design.
LC-integrated metasurfaces
LCs have long been used in commercial displays as electrically tunable waveplates owing to their large refractive index variation in the visible region (Δn = 0.2-0.4) 142. In the field of metasurfaces, LCs have recently been applied as electrically tunable waveplates integrated with polarization-sensitive metasurfaces, and as background-index changers for structured materials whose scattering depends on the optical index of the host medium. In this chapter, we focus on LC-integrated metasurfaces, including tunable structural-color pixels 25,143,144, spatial light modulators (SLMs) 142,145,146, and multiplexed metalenses and metaholograms 8,147-150.
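A worked example (standard waveplate retardance, not taken from the cited works) shows why the large birefringence quoted above permits micron-scale tunable LC cells:

```python
# How large birefringence translates into thin tunable waveplates:
# retardance delta = 2*pi*dn*t/lambda, so a half-wave plate needs
# t = lambda / (2*dn).
wavelength_nm = 532.0
for dn in (0.2, 0.4):            # visible-range birefringence quoted in text
    t_half_wave = wavelength_nm / (2.0 * dn)
    print(f"dn = {dn}: half-wave LC cell thickness ~ {t_half_wave:.0f} nm")
# dn = 0.2 -> ~1330 nm; dn = 0.4 -> ~665 nm: micron-scale cells suffice.
```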
Tunable color pixels have been demonstrated with two types of metasurfaces: (1) LC-integrated plasmonic metasurfaces and (2) LC-integrated dielectric metasurfaces. The former type has been designed for both reflectance 143 and transmittance variation 144. Reflective-type LC-integrated plasmonic metasurfaces have covered RGB coloration under voltage regulation, with reflectance varying according to the arrangement of the electric-field-sensitive LCs (Fig. 6a) 143. In the transmissive case, polarization-sensitive color metasurfaces have been integrated with LCs and have demonstrated dynamic transmittance at low voltage (<5 V) 144; these polarization-sensitive color metasurfaces have been applied to tunable color tags. All-dielectric metasurfaces, on the other hand, can provide directional Mie scattering without Ohmic losses in the visible spectrum, enabling vivid structural coloration with subwavelength pixels. With this advantage, dielectric metasurfaces have been integrated with electrically tunable LCs, providing gradient colors for photorealistic applications (Fig. 6b) 25. These devices fully control the on-off states of each pixel and thus cover the full RGB color gamut, including white and dark black, without a second polarizer by mixing the reflectance spectra, which had not been realized in previous tunable structural color pixels. This approach achieves ultrahigh-resolution tunable color printing and multicolor cryptography.
LC-integrated metasurfaces have accomplished abrupt phase modulation by applying spatially varying biases to the LC, overcoming the pixel-size limitations of conventional spatial light modulators (Fig. 6c) 142. One-dimensional transmissive metasurface-based SLMs have been demonstrated, achieving a transmittance efficiency of 36% and an FoV of 22° with a small pixel size (~1 μm); however, they operate at a single monochromatic wavelength. To overcome this limitation, LC-coupled SLMs based on Fabry-Perot nanocavities have been realized that operate at RGB wavelengths (Fig. 6d) 146. They are composed of submicrometer LC cells, leading to a drastic improvement in response time and reduced interaction between adjacent pixels. These designs can be extended to metasurface-based 2D SLMs, making ultrahigh-resolution, large-viewing-angle devices possible.
LC-integrated metalenses 8,147,148 and metaholograms 149-152 have been extensively studied to realize tunable focal spots and multiplexed images, respectively. For metalenses, tunable bifocal metalenses have been designed using LCs that act as electrically tunable waveplates (Fig. 6e) 8. These tunable bifocal metalenses experimentally demonstrate two switchable focal lengths of 7.5 and 3.7 mm with 44% focusing efficiency. Similar results have been reported with graphene electrodes, where a broadband achromatic performance in the range of 0.9-1.4 THz was measured 147. For metaholograms, LC-integrated bifunctional metasurfaces that project color prints in white ambient light and exhibit holography under coherent laser irradiation have been demonstrated (Fig. 6f) 150. This configuration serves as a photonic security platform through encryption into a QR code as a color print and encryption of numbers as polarization-sensitive vectorial holography 150.
Heater-integrated metasurfaces
Heater-integrated metasurfaces exploit the thermo-optic effect, the change in refractive index with temperature, to change their optical responses. Phase change materials (PCMs), such as vanadium dioxide (VO2) and chalcogenide alloys, are typically used owing to their large variable refractive index range. When meta-atoms are designed with these PCMs, the optical responses of the metasurfaces can be manipulated by changing the temperature with electrical heaters. This chapter introduces achievements in the electro-thermal modulation of PCMs: heater-integrated VO2 153-156 and chalcogenide metasurfaces 157-159. Heater-integrated VO2 metasurfaces exploit the metal-insulator phase transition between 300 and 340 K 160, providing various tunable optical responses, including broadband tunable resonators 154, switchable waveplates 153,156, and phase modulators 155. These optical responses are defined by the geometries of the VO2 structures, whose temperature can be controlled using electrical heaters. For example, when designed as "L"-shaped structures, heater-integrated VO2 metasurfaces exhibit tunable functionality between half- and quarter-wave plates (Fig. 7a) 156. The modulation speeds of the heater-integrated VO2 metasurfaces approach 65 and 245 ms for heating and cooling, respectively, and can vary depending on the geometries of the VO2 metasurfaces.
Chalcogenides are attracting significant attention as next-generation materials owing to their non-volatile characteristics. Ge2Sb2Se4Te1 (GSST) has been applied to heater-integrated tunable metasurfaces (Fig. 7b) 158. The device comprises nanostructured GSST on microheaters and achieves reversible tunability; consequently, beam steering and tunable reflectance have been experimentally demonstrated in the infrared region. Additionally, metasurfaces based on various other PCMs (e.g., Sb2S3 157 and Sb2Se3 157) have been actively exploited to realize programmable or tunable optical components with various types of heaters, such as indium tin oxide (ITO) in the visible 157. Owing to the development of diverse PCMs for photonic platforms, heater-integrated metasurfaces are a promising option for practical optical platforms.
Integration with conventional optical elements
As the demand for high-end LiDAR and VR/AR technologies has increased, optical systems have been constructed from series of optical components, including lenses, waveguides, and optical fibers. Metasurfaces offer improved functionality while reducing the size and weight of these optical devices for LiDAR and VR/AR applications. Furthermore, metasurfaces have expanded into optical communication and computing, which have gained attention for their prospects of high integration density and reduced heat generation. This review introduces metasurface-integrated refractive optical elements and waveguides, along with their prospective applications.
Metasurface-integrated refractive optical elements
From telescopes to microscopes, ROEs are essential elements of various instruments and have enabled significant scientific discoveries and applications. Conventional optical elements (e.g., prisms and lenses) manipulate light paths via refraction to realize the desired optical functionality in practical photonic devices. However, conventional optical components cannot fully control optical dispersion, resulting in spherical aberration, surface distortion of the image, and chromatic aberration. These are attributed to limitations of the ROE fabrication process, which cannot produce ideally curved interfaces. Such limitations can be overcome by attaching metasurfaces to the interfaces of ROEs, realizing unprecedented wavefront manipulation with spatially engineered dispersion.
Metasurfaces have been integrated with conventional cylindrical and spherical refractive lenses to decouple the optical function from the ROE geometry (Fig. 8a) 161. Traditionally, rays of light are defined by Snell's law at the interfaces of ROEs, so the function of an ROE is constrained by its physical geometry. Recently, conformal flexible dielectric metasurfaces have been used to convert cylindrical lenses into aspherical lenses, decoupling function from physical geometry. The metasurfaces consist of Al2O3-capped amorphous Si nanoposts embedded in a PDMS film. Thanks to the flexibility of the PDMS film, the metasurfaces can be attached to arbitrarily shaped ROEs, such as concave and convex lenses. The focal spot is adjusted as well: for example, from 8.1 to 3.5 mm for a converging cylindrical lens (radius: 4.13 mm) and from −12.7 to 8 mm for a concave glass cylinder (radius: 6.48 mm). Consequently, metasurfaces decouple the function of ROEs from their geometry by manipulating the phase profiles of their surfaces, allowing distortion correction of existing optical components with little added volume and weight.
Aberrations, which are typically handled by cascading several lenses, are among the most important issues in refractive optics. Metasurfaces can also be utilized to control optical dispersion without stacking multiple optical components. For example, the aberrations of prisms (Fig. 8b) 162 and refractive lenses (Fig. 8c) 29 have been corrected using metasurfaces. When metasurfaces are attached to the surfaces of ROEs, the desired phase profile is derived analytically 163, considering chromatic and spherical aberration. Consequently, 80% of the chromatic and 70% of the spherical aberration are compensated 29. Thus, artificially engineered interfaces with metasurfaces may provide compact imaging devices that replace multi-element ROE systems.
Miniaturization has also been attempted by replacing conventional ROEs with metasurfaces (Fig. 8d) 164. Using metasurfaces, spaceplates with an effective thickness d_eff larger than the physical thickness d have been designed. Spaceplates with d_eff > d are realized based on the equation 165 φ_SP(k_x, k_y; d_eff) = d_eff (|k|² − k_x² − k_y²)^(1/2), where k is the momentum vector, k_x and k_y are its transverse components, and φ_SP is the phase imparted by the spaceplate. Two types of spaceplates were designed: (1) alternating layers of subwavelength silicon and silica that induce collective optical responses, termed nonlocal metamaterials, and (2) a uniaxial birefringent medium with an ordinary refractive index larger than the extraordinary one. The metamaterial spaceplate exhibits a compression factor R of ~5 and a polarization-independent response. In contrast, the uniaxial spaceplate shows a smaller compression factor of R = 1.12; however, it is broadband in the visible, achromatic, and compatible with high NA at high transmission efficiency. Because reducing the air gaps between optical lenses is gaining traction in the current consumer device market as a way to miniaturize entire optical imaging systems 166, this result shows that metasurfaces can be promising components of future compact imaging systems.
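The quoted spaceplate phase is the angular-spectrum transfer phase of free space; a short reconstruction (standard Fourier optics, with k_z the longitudinal wavevector component) makes the compression factor explicit:

```latex
\varphi_{\mathrm{SP}}(k_x,k_y;d_{\mathrm{eff}})
  = d_{\mathrm{eff}}\sqrt{|\mathbf{k}|^{2}-k_x^{2}-k_y^{2}}
  = d_{\mathrm{eff}}\,k_z ,
\qquad
R \equiv \frac{d_{\mathrm{eff}}}{d},
```

so a plate of physical thickness d that imparts this phase on every plane-wave component emulates propagation through a length d_eff of free space.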
Metasurface-integrated planar waveguides
Compared to the electrical wiring used in electronic computers, electromagnetic wave guiding offers low heat generation and relieves the integration density problem, enabling high-speed optical computing 167. Several applications have accordingly been reported, enabling large amounts and high rates of data transmission with low power consumption 168. Optical components for electromagnetic wave guiding have further extended their capabilities through integration with metasurfaces, enabled by advances in nanofabrication. This has resulted in various sophisticated photonic chips, such as photonic integrated circuits (PICs), waveguides, and metaphotonic devices 169,170.
One example of combining a meta-structure with a PIC is coupling guided waves into free space. For free-space wavefront manipulation from a PIC, the conventional methods involve edge couplers 171 and surface gratings 172; however, these cannot cover the full 2π phase range needed for flexible wavefront control. Grating arrays are more versatile, although they require considerable space and their high-order diffraction generates loss. By instead placing subwavelength-sized Au/SiO2/Au meta-atoms with a metal-dielectric-metal sandwich configuration on top of the waveguide, each meta-atom can extract and mold guided waves into the desired free-space optical modes (Fig. 9a) 27. The phase of the extracted wave is φ0 + βx + Δφ(x), where φ0 is the initial phase of the incidence, Δφ(x) is the abrupt phase change introduced by the meta-atoms, and βx is the phase accumulated during propagation of the guided wave. This type of structure enables tightly squeezed optical components by reducing the size of light sources.
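As a hedged sketch of the bookkeeping implied by this expression (with assumed, not published, parameter values), the required meta-atom phase for a target free-space profile follows by subtracting the accumulated guided phase:

```python
import numpy as np

# Hedged sketch of the extraction-phase bookkeeping described in the text:
# the wave extracted at position x carries phi0 + beta*x + dphi(x), so a
# target free-space profile fixes the meta-atom phase dphi(x) (mod 2*pi).
# All parameter values below are illustrative, not from the cited device.
wavelength = 1.55e-6                   # assumed telecom wavelength (m)
n_eff = 2.4                            # assumed effective index of guided mode
beta = 2 * np.pi * n_eff / wavelength  # propagation constant of the guide
phi0 = 0.0                             # initial phase of the guided wave

x = np.linspace(0, 20e-6, 41)          # meta-atom positions along the guide
focal = 50e-6                          # toy target: focus 50 um above the chip
k0 = 2 * np.pi / wavelength
phi_target = -k0 * (np.sqrt(x**2 + focal**2) - focal)  # converging lens phase

dphi = np.mod(phi_target - phi0 - beta * x, 2 * np.pi)  # required meta-atom phase
print(dphi[:5])
```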
Using the wavefront manipulation ability of metasurface-integrated out-couplers, various multiplexed holograms have been demonstrated, including on-chip 3D sliced holography (Fig. 9b) 173. Three different letter images are reconstructed at different vertical distances from the metasurface chip. The desired phase profile of the outgoing wavefront is encoded by changing the position of α-Si meta-atoms placed on top of a Si3N4 waveguide. The phase of the extracted wave is φ0 + β(nΛ + d_n), where φ0 is the initial phase, β is the propagation constant, Λ is the array period (360 nm), and d_n is the displacement of the nth meta-atom. The same platform has also been used to construct quad-fold multiplexed images depending on the input direction of light arriving from different waveguides.
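Under the same hedging, the displacement encoding can be inverted directly: given a target phase per meta-atom, solve the quoted relation for d_n. The effective index and wavelength below are assumptions for illustration; only the 360 nm period comes from the text.

```python
import numpy as np

# Hedged sketch of the displacement-encoding rule quoted in the text:
# the phase at atom n is phi0 + beta*(n*Lam + d_n), so a target phase is set
# by the in-period displacement d_n. Full 2*pi coverage needs beta*Lam >= 2*pi.
wavelength = 633e-9                    # assumed free-space wavelength (m)
n_eff = 1.9                            # assumed effective index of the guide
beta = 2 * np.pi * n_eff / wavelength  # propagation constant
Lam = 360e-9                           # array period quoted in the text (m)
phi0 = 0.0

n = np.arange(10)
phi_target = np.pi * (n % 2)           # toy target: alternating 0 / pi

# Solve phi_target = phi0 + beta*(n*Lam + d_n)  (mod 2*pi) for d_n.
# Here beta*Lam ~ 2.16*pi > 2*pi, so every d_n fits within one period.
d_n = np.mod(phi_target - phi0 - beta * n * Lam, 2 * np.pi) / beta
print(d_n * 1e9)                       # displacements in nm (< 360 nm)
```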
The creation of OAM light has been demonstrated using a meta-grating out-coupler (Fig. 9c) 174. The out-coupler has two waveguide arms: the m1-order OAM mode is produced by light from the left arm, and the m2-order mode by light from the right arm, over a broadband wavelength range (1450-1650 nm). The meta-grating is designed using a global optimization process combining annealing and a genetic algorithm to calculate the refractive index distribution. The proposed OAM emitters have also been applied to a commercial frequency-division multiplexing system for new high-capacity communication applications. An integrated nanoantenna has likewise been used as a mode demultiplexer for high-bit-rate signal transmission (Fig. 9d) 175. Gold nanoantennas placed on a silicon waveguide couple x-polarized incident light from free space into the TM mode and y-polarized incident light into the TE mode. By coupling the different polarizations of vertically incident light into different waveguide modes, optical signals with distinct polarizations are segregated by separating their directions of travel. This method can be further applied to integrated quantum optics, where polarization control is a crucial degree of freedom for producing entanglement.
Metasurface-integrated planar waveguides have achieved a coupling efficiency (67%) roughly ten times higher than previous work 175 by rigorously applying phase-matching conditions (Fig. 9e) 176. The phase-matching condition is derived from a Jones matrix model and the generalized Snell's law. The nanostructures comprise Si nanoantennas on a Si3N4 optical waveguide operating near the telecommunication wavelength of 1.55 μm. A chip-integrated twisted-light generator is also described to show the mode-control flexibility, coupling free-space linear polarization into 1ħ OAM.
Although most previous studies on metasurface-integrated waveguides focused on controlling the characteristics of light entering or exiting waveguides, nanoantennas can also control the guided waves themselves 177. For example, on-chip asymmetric propagation via a phase-gradient metasurface has been accomplished over a broadband THz spectrum (Fig. 9f) 178. Depending on the propagation direction and polarization, the structure either attenuates or guides light through the waveguide. The underlying principle is the asymmetrically imparted momentum at interfaces with phase discontinuities, expressed as k_x^out = k_x^in − NΔΦ/Λ_x, where ΔΦ and Λ_x are the phase difference and period, respectively. This method can facilitate the development of THz-integrated functional devices. In addition, various waveguide-integrated metasurfaces, such as spatiotemporally modulated metasurfaces 179, superheterodyne metasurfaces 180, and metasurface-assisted second harmonic generation 181, have been steadily proposed.
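A small numeric check (illustrative parameters, not those of the cited THz device) shows how the quoted momentum relation produces one-way behavior: forward and backward guided waves receive the same momentum kick but start from opposite k_in.

```python
import numpy as np

# Hedged numeric check of the asymmetric-momentum rule in the text:
# k_out = k_in - N*dPhi/Lam_x. Forward and backward incidence carry opposite
# signs of k_in, so only one direction remains bound (above the light line).
wavelength = 300e-6                 # assumed 1 THz wavelength in m
k0 = 2 * np.pi / wavelength
n_eff = 1.5                         # assumed guided effective index
k_in = n_eff * k0                   # tangential momentum of the guided wave

N, dPhi, Lam_x = 1, np.pi, 100e-6   # assumed interface phase step and period
kick = N * dPhi / Lam_x

for direction, k in (("forward", +k_in), ("backward", -k_in)):
    k_out = k - kick
    bound = abs(k_out) > k0         # |k_out| > k0: still guided; else radiated
    print(f"{direction}: |k_out|/k0 = {abs(k_out)/k0:.2f}, "
          f"{'guided' if bound else 'radiated/blocked'}")
```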
Metasurface-integrated optical fibers
Since optical fibers can guide electromagnetic waves in highly flexible forms, they have been widely used in various photonic devices. However, commercial optical fiber components are comparatively large, preventing compact in-fiber optical systems. To solve this problem, metasurfaces have been actively investigated for long-haul operation with effective light direction at low optical loss.
In-fiber polarization-dependent optical filters have been demonstrated by patterning asymmetric plasmonic metasurface nanostructures on polarization-maintaining photonic-crystal fibers (PM-PCFs) and conventional single-mode fibers (Fig. 10a) 182. Polarization-dependent transmission with an efficiency of up to 70% at telecommunication wavelengths has been experimentally demonstrated. This result shows that metasurface filters can be implemented on standard optical fiber, enabling a wavelength-selective filter. Such filters can serve systems that require precise polarization control over long distances or in the presence of external perturbations such as fiber bending or mechanical vibration.
Additionally, beam-steering metafiber modules have been proposed through the integration of metalenses with fiber arrays (Fig. 10b) 183. To construct a beam-steering metafiber module, metalenses are first fabricated on a SiO2 substrate and then attached to the end of a single-mode fiber array. The outgoing light from different fibers is steered in different directions according to the phase profiles of the quadratic metalenses, achieving a large FoV of up to 60° at λ = 1.55 µm. Furthermore, a LiDAR application for parking-space monitoring has been demonstrated using the proposed 2D beam-steering metafiber module operating in scanning mode. Moreover, by enlarging the fiber array and expanding the metalens, a larger scale with improved scanning precision can be achieved.
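A hedged note on the design choice: a quadratic phase profile is shift-invariant up to a linear tilt term, which is one reason displacing the feeding fiber steers the beam. The sketch below makes this explicit with assumed focal length and offsets.

```python
import numpy as np

# Hedged sketch of why a quadratic-phase metalens suits fiber-array steering:
# phi(r) = -k*r^2/(2f) satisfies phi(r - dx) = phi(r) + k*dx*r/f + const,
# i.e., a lateral source offset dx only adds a tilt with sin(theta) ~ dx/f.
# Focal length and offsets are illustrative, not from the cited module.
f = 100e-6                            # assumed metalens focal length (m)
for dx in (10e-6, 30e-6, 50e-6):      # assumed fiber offsets (m)
    theta = np.degrees(np.arcsin(min(dx / f, 1.0)))
    print(f"fiber offset {dx*1e6:.0f} um -> steering angle ~ {theta:.1f} deg")
```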
3D achromatic metalenses attached to the end facet of a commonly used single-mode fiber (SMF-28) have also been demonstrated (Fig. 10c) 184. Conventional fibers suffer from dispersion as light is guided, which matters greatly because fiber is mostly used for long-distance communication. The designed metafiber is polarization-insensitive, accommodating perturbations in the fiber, and achromatic over the 1.25-1.65 µm range of near-infrared telecommunication wavelengths, covering the entire single-mode domain of the commercially used fiber. The design degrees of freedom are increased by one dimension by varying the height of the nanopillars, which are fabricated by 3D laser nanoprinting via two-photon polymerization with a femtosecond laser. The upper bound κ of the time-bandwidth product, κ ≥ ΔTΔω, of an achromatic metalens is significantly increased (up to 21.34) by the height degree of freedom unlocked in a 3D nanopillar meta-atom, resulting in a broad group delay modulation range from −8 to 14 fs. As a proof of concept, this thin and flexible achromatic metalens-attached fiber has provided clear, in-focus fiber-optic confocal images under broadband light illumination.
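The quoted numbers are mutually consistent, which can be checked directly: the group delay range gives ΔT = 22 fs and the single-mode band sets Δω, so ΔTΔω comes out near 8, safely below the stated bound κ = 21.34.

```python
import numpy as np

# Consistency check of the quoted time-bandwidth numbers: group delay range
# of -8 to 14 fs gives dT = 22 fs, and the 1.25-1.65 um band sets dOmega.
c = 2.998e8                                        # speed of light (m/s)
dT = (14.0 - (-8.0)) * 1e-15                       # s
dOmega = 2 * np.pi * c * (1/1.25e-6 - 1/1.65e-6)   # rad/s
print(f"dT*dOmega = {dT * dOmega:.1f} "
      f"(<= quoted bound kappa = 21.34)")           # ~8.0
```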
Metasurfaces integrated with fibers can also be applied to endoscopic optical bioimaging (Fig. 10d) 185; specifically, they have been used in optical coherence tomography (OCT) catheters. Standard commercial catheters, such as the graded-index (GRIN) lens-prism configuration 186 or angle-polished ball lens 187, have asymmetrical curvatures in the transverse plane of the cylindrical outer protective sheath, which causes aberrations such as astigmatism. Tangential and sagittal resolutions are influenced by depth; the smallest measured FWHMs are 6.37 μm (tangential) and 6.53 μm (sagittal). Applying a metasurface achieves near-diffraction-limited imaging by nullifying non-chromatic aberration and relaxing the trade-off between depth of focus and transverse resolution. Considering that bioimaging with metasurfaces, such as high-resolution tomography 188, is continuously being developed, fiber-integrated metasurfaces will be a promising option for practical bioimaging platforms.
In addition to bioimaging, lab-on-fiber biosensors can be realized with integrated metasurfaces. Biosensing with a phase-gradient plasmonic metasurface has been demonstrated, capturing a wavelength shift by analyzing local variations of the refractive index when the sensing material is attached. The integrated metasurface yields a unique biosensing platform with extremely high sensitivity for detecting biomolecular interactions of streptavidin (Fig. 10e) 189. The phase gradient increases the coupling of the incident field to the plasmonic resonance, allowing a larger field enhancement that improves sensitivity at low fabrication cost. Additionally, owing to its intrinsic compatibility with medical catheters and needles, liquid biopsy applications involving real-time diagnosis in various body regions become possible.
Optical platforms with composed metasurfaces
Metasurfaces comprising two or more stacked layers can realize multi-functionality with additional degrees of freedom, extending the functionality of singlet metasurfaces. Singlet metasurfaces suffer from many trade-offs when multi-functionality is encoded; for example, when multiple metaholograms or multiple focal lengths are encoded in a singlet metasurface, the efficiency of each mode decreases significantly. Further, in dispersion control, performance aspects of metalenses such as diameter, efficiency, and NA are sacrificed to acquire the exact group delay 190-192. In this context, composed metasurfaces have been investigated to solve these problems, with each constituent surface customized to change specific light properties. With a composed structure, various wavelength- and polarization-multiplexing processes have been demonstrated for practical spectroscopy 193 and polarimetry 194 applications. In this chapter, this review introduces the additional functionalities of composed metasurfaces in terms of multiplexing, dispersion control, and tunability.
Composed metasurfaces
Stacking metasurfaces enables various multiplexing functionalities (e.g., varied focal lengths and multiple metaholograms) while preventing the undesired diffraction and low resolution caused by interleaved designs with large periods. Composed metasurfaces have been demonstrated for multi-wavelength holography, polarization-independent holography, and multifunctional nonlocal metasurfaces. Composed metasurfaces for wavelength-multiplexed holography show two independent holographic images under ultraviolet (UV) and visible laser light (Fig. 11a) 195.
The first metasurface layer has a manipulation efficiency of 18% in the visible range but cannot manipulate UV (manipulation efficiency of 0%), whereas the second layer has high efficiency (72%) at λ = 325 nm and low efficiency (3.4%) at λ = 532 nm. By stacking the two metasurfaces, UV and visible holography can be decoupled and utilized as an optical encryption platform. Another example is polarization-independent holography, demonstrated by composing two metasurfaces 196. Because composed metasurfaces can control two infrared wavelengths (1180 and 1680 nm) without the efficiency degradation that singlet interleaved metasurfaces suffer from space-filling limitations and crosstalk 197,198, they offer a wide phase map with high transmittance at both wavelengths. Consequently, two independent hologram images at the two wavelengths are produced with high efficiencies of 48.1% and 50.3% at 1180 and 1680 nm, respectively (Fig. 11b) 196. In contrast to the metasurfaces introduced above, nonlocal metasurfaces control light both spatially and spectrally 199 by exploiting bound states in the continuum (BICs), in which light interacts very strongly with the material and is confined with an infinite Q-factor. Nonlocal metasurfaces have been designed with symmetry-broken meta-atoms to create quasi-BICs (q-BICs) that allow a leaky state, in which confined light emerges with a phase delay at a specific wavelength. Using this, composed nonlocal metasurfaces manipulate the wavefront only at multiple resonant wavelengths and transmit light without modulation at non-resonant wavelengths, with multiple layers comprising independent meta-atom cells in the q-BIC mode (Fig. 11c). Nonlocal metasurfaces also ease the stacking process through their selective response at specific frequencies, preventing degraded functionality due to misalignment.
As composed metasurfaces can fully control the polarization and phase of light without high optical losses, they have been applied to highly efficient full-Stokes polarimetric 194 and phase gradient 200 measurements. Full-Stokes polarimetric measurements have been proposed with an ommatidium-like double-layer metasurface (ODLM) design, where each metasurface acts as a quarter-wave plate (QWP) and a linear polarizer (LP) (Fig. 11d). Full-Stokes polarimetric detection is realized on one chip via the integration of two ODLMs with nanowire gratings in four different orientations. Arbitrary incident light is distributed over six polarization filters, and the Stokes parameters are extracted from the photodetector signals (in Fig. 11d, the two additional filters are for another wavelength) 194 .
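A minimal sketch of how six polarization-filtered intensity readings map onto the Stokes vector, using the textbook definitions; the filter ordering and any calibration specific to the ODLM chip of ref. 194 are not reproduced here.

```python
import numpy as np

def stokes_from_filters(i_0, i_90, i_45, i_135, i_rcp, i_lcp):
    """Recover the Stokes vector from six filtered intensities:
    i_0 ... i_135 are measured behind linear polarizers at 0, 90, 45,
    and 135 degrees; i_rcp and i_lcp behind right- and left-circular
    analyzers (QWP followed by a linear polarizer)."""
    s0 = i_0 + i_90      # total intensity
    s1 = i_0 - i_90      # horizontal vs. vertical linear component
    s2 = i_45 - i_135    # +45 vs. -45 linear component
    s3 = i_rcp - i_lcp   # right- vs. left-circular component
    return np.array([s0, s1, s2, s3])

# Ideal right-circularly polarized light of unit intensity:
print(stokes_from_filters(0.5, 0.5, 0.5, 0.5, 1.0, 0.0))
# -> [1. 0. 0. 1.]
```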
A miniaturized quantitative phase gradient microscope (QPGM) has been demonstrated with multilayer birefringent metasurfaces, where a phase gradient image is produced from quantitative phase data (Fig. 11e) 200 . Mimicking a birefringent material that separates the polarization states (TE and TM), the first birefringent metasurface layer produces two sheared images in the TM and TE modes and distributes them equally toward the second layer, which is composed of three birefringent metasurfaces. These receive the split images from the first birefringent metasurface and form three different differential interference contrast (DIC) images by applying different phase offsets between the received TM and TE images. Consequently, the QPGM obtains three DIC images simultaneously and produces a phase-gradient image by combining them.
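The combination step can be illustrated with standard three-step phase-shifting interferometry, assuming DIC intensities of the form I_k = A + B·cos(φ + δ_k) with offsets δ_k = 0, 2π/3, 4π/3; the actual offsets implemented in ref. 200 may differ.

```python
import numpy as np

def phase_from_dic(i1, i2, i3):
    """Three-step phase-shifting estimate of the TE/TM phase difference
    from three DIC images, assuming intensities
    I_k = A + B * cos(phi + delta_k) with delta_k = 0, 2pi/3, 4pi/3."""
    num = np.sqrt(3.0) * (i3 - i2)
    den = 2.0 * i1 - i2 - i3
    return np.arctan2(num, den)  # phase map (rad); differentiating it
                                 # along the shear direction yields the
                                 # phase-gradient image
```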
Multiple layers facilitate dispersion control with low design complexity compared to singlet achromatic metalenses, which require various meta-atom designs 201 . Recently, bilayer achromatic metalenses with a simple meta-atom design have achieved a high NA (0.8) and a large diameter (1 mm) (Fig. 11f) 202 . The metalenses comprise cylindrical and cuboid nanostructures, which lower fabrication complexity, thereby yielding RGB achromatic images (633, 532, and 488 nm) 108 .
Dispersion engineering has been exploited with a metasurface-integrated hyperspectral imager (HSI), which records spectral data for whole images (Fig. 11g) 193 . Conventional HSI encounters challenges related to compactness 203 , low throughput 204 , and heavy post-processing 205 . A metasurface-integrated HSI system comprises three reflective and one transmissive metasurface fabricated by a single lithography step on a glass substrate. The input aperture is placed in one gold mirror, and the metasurfaces are placed in another. After the light from an object enters through the aperture, it is vertically dispersed by a reflective lens whose functionality is similar to that of a first-order blazed grating. The other two reflective lenses and one transmissive lens focus the light onto a 2D array detector. Finally, after passing the transmissive metasurface of the HSI, the light is dispersed horizontally along the incident angle and vertically along the wavelength. In addition, these metasurfaces have been optimized by ray tracing and particle swarm optimization for the desired wavelength (750-850 nm) and spatial (±15°) ranges.

Fig. 11 Optical platform of composed metasurfaces for (a-c) wavelength decoupling, (d and e) polarization decoupling, (f and g) dispersion control, and (h and i) tunability. a Photonic encryption platform composed of ultraviolet and visible metasurfaces. b Bilayer metasurfaces independently controlling two infrared frequencies. c Composed non-local metasurfaces. c.i Schematics of composed non-local metalenses operating at different wavelengths. SEM images of c.ii diverging and c.iii converging non-local radial metalenses, respectively. d Full-Stokes polarimetric measurement setup using composed metasurfaces. d.i Ommatidium-like double-layer metasurface (ODLM) for circular polarization filters with the motorized stage. d.ii Comparison of polarization detected using metasurfaces with conventional analysis. e Quantitative phase gradient microscope (QPGM) with metasurfaces. e.i Schematics of the optical setup for the QPGM. e.ii Image of the target object. e.iii Three differential interference contrast (DIC) images obtained with multiple birefringent metasurfaces. e.iv Phase gradient images formed from the DIC images. f Polarization-independent doublet metalens for correcting chromatic aberration. g Hyperspectral imager (HSI) composed of four metasurfaces. h Tunable holography encryption system via cascaded metasurfaces; metasurfaces are classified as master and deputy shareholders, and a master shareholder can be combined with different deputy shareholders to show a different secret image for each combination. i Composed Moiré metalenses for tunable focal length. i.i Negative rotation angles produce negative tunable focal lengths. i.ii Positive rotation angles produce positive tunable focal lengths. a is reproduced with permission from ref. 195 Copyright © 2022 American Chemical Society, b from ref. 196
Multilayered metasurfaces have also been used to enable tunable information encryption, both optically and physically. For example, shareholder metasurfaces have been designed that show various images when physically cascaded (Fig. 11h) 206 . Only when all shareholders are combined is the secret image observed, which is independent of the information carried by each holder 207 . The secret image of the cascaded holographic metasurface is divided into two phase profiles via computational division (encryption process), representing two independent images pixel-wise. Each holder shows an independent holographic image; however, the combination of holders 100 nm apart reveals the secret image (decryption process). Furthermore, tunable hologram images are demonstrated by encoding multiple images across the relative translational positions of the two cascaded metasurfaces.
Cascaded metasurfaces have also been applied to Moiré metalenses, which provide a widely tunable focal length as a function of the mutual rotation angle between the two metasurfaces (Fig. 11i) 208 . Moiré metalenses have been demonstrated at a NIR wavelength (900 nm) with polarization-insensitive meta-atoms made of amorphous silicon for a high index contrast, achieving a maximum NA of 0.5. They show a focal length tunable between ±1.73 and ±5 mm over a mutual rotation angle range of ±90°. The fundamentals of Moiré metalenses lie in the asymmetric phase distribution of each layer, whose combination after rotation yields a symmetric lens phase distribution. Positive rotation angles make the total phase combination of the two layers resemble a convex Fresnel lens, whereas negative angles produce a concave-like distribution. Moreover, increasing the rotation angle produces a higher phase gradient, resulting in a shorter focal length and higher optical power.
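The rotation-to-focal-length relation can be sketched with the standard Moiré-lens phase profile φ(r, θ) = a·θ·r² on each layer: a mutual rotation Δθ leaves the combined quadratic phase a·Δθ·r², equivalent to a Fresnel lens with f = π/(λ·a·Δθ). This is a generic sketch with illustrative parameters, not the specific design values of ref. 208.

```python
import numpy as np

def moire_focal_length(delta_theta_deg, a, wavelength):
    """Focal length of a Moire metalens pair whose layers carry the
    phase profile phi(r, theta) = a * theta * r**2. Rotating the
    layers by delta_theta leaves a * delta_theta * r**2, i.e. a
    Fresnel lens with f = pi / (wavelength * a * delta_theta); the
    sign of the rotation selects converging or diverging operation."""
    delta_theta = np.deg2rad(delta_theta_deg)
    return np.pi / (wavelength * a * delta_theta)

wl = 900e-9   # NIR design wavelength (m)
a = 2.0e8     # illustrative phase-profile constant (rad per m^2 per rad)
for angle in (10, 45, 90, -45):
    print(f"{angle:+4d} deg -> f = {1e3 * moire_focal_length(angle, a, wl):+7.1f} mm")
```

Consistent with the text, doubling the rotation angle halves the focal length, and reversing the rotation sign flips the lens from converging to diverging.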
Perspective on integrated metasurfaces for near-future applications
Recently, wearable displays (VR/AR), LiDAR, and bio/chemical sensing have received considerable attention as practical applications of metasurfaces. Although various optical materials and devices have been developed in lab-scale experiments with free-form optics 161,209 , many complex steps and considerable effort are required to reach actual commercialization. However, metasurfaces continuously offer many advantages, such as compact size, wide FoV, and molecular-level sensitivity, and they have steadily gained great interest in the industrial fields of VR/AR, LiDAR, and bio/chemical sensing. Herein, we consider the function and achievements of metasurfaces within large application systems.

Fig. 12 Metasurface-integrated wearable display system for (a) virtual reality (VR) and (b) augmented reality (AR). a VR with a cm-scale RGB achromatic metalens. a.i Set-up image and a.ii schematic of the VR system. Grayscale VR images in a.iii red, a.iv green, and a.v blue. b AR with eye-tracking-supporting metasurfaces. b.i Actual configuration and b.ii schematic of the system. Scattered near-infrared light from the eye is reflected by GMR metasurfaces and then captured by the camera. b.iii Poor decoupling at an antireflection-coated glass surface with a strong rainbow. b.iv Partial decoupling in GMR metasurfaces with a 7-nm-thick p-Si grating. b.v Optimized decoupling in a GMR metasurface with a 3-nm-thick p-Si grating. a is reproduced with permission from ref. 213 Copyright © 2022, Zhaoyi Li et al., b from ref. 217 Copyright © 2021 Springer Nature
Wearable display system (VR/AR)
VR and AR are crucial wearable display technologies for the metaverse. VR technology replaces the real world with virtual images, thereby providing users with an immersive experience 210 . AR integrates computer-generated three-dimensional (3D) images into the real world. Despite the rapid development of related technology, VR/AR has suffered from several problems such as chromatic aberration, narrow FoV, and spherical aberration, resulting in a large form factor. Consequently, current devices on the market are bulky and have performance flaws.
Recently, metasurfaces have been widely used to construct compact VR/AR systems. Metasurfaces have been attached to contact lenses for near-eye displays, and metasurfaces with spatially encoded phase maps project virtual information in a pixel-wise manner 211 . See-through anisotropic metalenses correct chromatic aberration over a wide FoV 212 . These metalenses use both handednesses of the circular polarization state to achieve a see-through mode. However, they work only at green wavelengths with a low-quality hologram 211 and require additional optical components such as dichroic mirrors and circular polarizers 212 .
One promising miniaturization approach is to reduce the number of optical components by using polarization-insensitive achromatic metalenses. For example, large-scale achromatic metalenses have been used as near-eye optical components in VR imaging systems (Fig. 12a) 213 . The achromatic metalens-integrated VR device consists of a laser-illuminated micro-LC display (µLCD), an eyeball model, and a meta-eyepiece. The meta-eyepiece corresponds to an achromatic metalens with polarization-insensitive properties and a diameter of 1 cm. Consequently, compact optical systems have been realized with achromatic metalenses showing grayscale VR images at three wavelengths with arbitrary polarization states from the µLCD.

Fig. 13 Metasurface-integrated light detection and ranging (LiDAR) with (a) electrically tunable metasurfaces, (b) beam-steerers, and (c) point clouds. a Electrically tunable metasurface-integrated LiDAR. a.i An electrically tunable metasurface reflects light in varied directions depending on the applied voltages; cross-sectional view of the meta-atoms of electrically tunable metasurfaces, which include two insulators and voltage gates. a.ii Schematic of the electrically tunable metasurface-integrated LiDAR system, which detects the depth of objects in the middle image, with depth data calculated based on the ToF technique. a.iii Target objects and a.iv measured depth profile. b Metasurface-integrated acousto-optic deflector (AOD). b.i Schematic of the fast active scattering system with a metasurface-integrated AOD. b.ii Schematic of two strategies of depth reconstruction. b.iii Scanned depth information of human motion. c Point-cloud metasurface-based depth sensor. c.i Schematic of the point-cloud metasurface-integrated SL system. c.ii Depth calculation method of the stereo matching algorithm. c.iii Experimental demonstration of point-cloud metasurfaces, which diffract high-density dot arrays over a 180° field of view. c.iv Metasurface fabricated on the curved surfaces of glasses by nanoimprint lithography. a is reproduced with permission from ref. 224 Copyright © 2021 Springer Nature, b from ref. 225
Moreover, metasurfaces have demonstrated the feasibility of wearable AR glasses. Multiplexed hologram images have been demonstrated by metasurfaces hybridized with waveguides in AR projection systems 214,215 . Further, Huygens' metasurfaces demonstrate a continuous view of a 3D hologram in a near-eye display, solving vergence-accommodation conflicts with large pixel counts and subwavelength pixels 216 . Coupled with these technologies, eye-tracking with non-local metasurfaces can enable the real application of AR glasses.
Eye-tracking systems that use light reflected from the faces of glasses have suffered from performance degradation owing to limited decoupling between the visible wavelengths used for viewing the real world and the NIR used for tracking. To solve this problem, non-local metasurfaces have been applied to eye-tracking technology, providing a low-rainbow background and high transparency (Fig. 12b) 217 . Metasurfaces for AR systems have been designed with guided-mode resonators (GMRs) 218-220 and a high Q-factor that selectively reflects the NIR wavelength used for tracking. Polycrystalline Si (p-Si) strips, whose absorption depth is spectrally dependent, are placed on the dielectric waveguide. In the visible spectral range, the low-Q-factor light suffers dominant optical losses in the p-Si structures, which suppresses undesired diffraction. Consequently, visible light almost entirely passes through the metasurface without a rainbow effect, while light scatters at λ = 870 nm, where the metasurface exhibits resonance. Further, the NIR light builds up a large internal electric field while being trapped and guided in the waveguide, enabling diffraction at the desired angle and resulting in reflection with over 10% efficiency into the first diffraction order. When the NIR LED illuminates the eye, the metasurface reflects the NIR light from the eye to the side camera, which captures a front view of the eye image to analyze the eye's motion.
High-performance LiDAR
LiDAR is a depth-scanning technology that determines distance by analyzing the light reflected from target objects; it is used in autonomous vehicles, unmanned aerial vehicles, and intelligent robots. Two representative depth-scanning methods have been used for LiDAR: (1) indirect 221,222 or direct ToF, and (2) structured light (SL). The direct ToF technique estimates the distance by analyzing the round-trip flight time t_laser of the laser pulse; the distance is then calculated as c·t_laser/2, where c is the speed of light. The ToF technique is generally classified into scanning and non-scanning systems. The classical scanning ToF suffers from a trade-off between framerate and FoV owing to the inertia of its mechanical moving components. Non-scanning ToF illuminates the entire scene and measures the flight time from multiple points of the scene in a single shot. Since the power of the incident laser is divided by the number of illumination points, it requires a highly sensitive photodetector, such as a 2D array of single-photon avalanche diodes, to achieve a sufficient working distance. In the SL imaging technique, light is spread into predefined patterns (e.g., arrays of dots or lines) over a large FoV, and the surface profile of the object is calculated by analyzing the distorted light patterns using a single camera or by capturing the structured light from different viewpoints using stereo cameras 223 . However, SL suffers from weak diffraction, especially at large angles, because of the large pixel size of conventionally used SL projectors (e.g., DOEs and SLMs). Although there is an increasing demand for high-quality LiDAR systems, they face many challenges with conventional ROEs or DOEs, such as bulkiness, vulnerability to external impact, low scanning speed, and narrow FoV. Most recently, metasurfaces have mitigated these problems through incorporation within LiDAR systems 221,224,225 . Herein, we briefly introduce metasurface-integrated LiDAR applications in terms of scanning ToF and SL with a stereo camera.

Fig. 14 a.ii Schematic of the bioassay where epoxy-silane immobilizes the mouse IgG, binding rabbit anti-mouse IgG. Bovine serum albumin (BSA) is deposited to control the areal molecular density by combining with epoxy instead of IgG. a.iii The resonant peak profile is compared with the reference profile, where each profile is produced by sweeping the wavelength without or with the analyte. b Metasurface-integrated angle-multiplexed sensors. b.i Schematic of the system. b.ii Angle-multiplexed metasurface exhibiting different resonance wavenumbers along the incidence angle. b.iii Different resonance peaks along the incidence angle. b.iv Normalized reflectance spectra after coating the analyte, which can be recognized by analyzing the reflectance patterns. a is reproduced with permission from Ref. 234 Copyright © 2019 Springer Nature, b from ref. 237 Copyright © 2019 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science, distributed under a CC BY-NC 4.0 license http://creativecommons.org/licenses/by-nc/4.0/, reprinted with permission from AAAS
Electrically tunable metasurfaces have been implemented in scanning-type ToF LiDAR systems by exploiting their fast wavefront manipulation 224 . To improve LiDAR performance, the metasurface serves as a new electrically controlled SLM, dispensing with conventional devices such as LCs and MEMS, which limit reliability and speed. Electrically tunable metasurfaces for LiDAR have been designed with Au nanoantennas, an Al mirror acting as independent voltage gates, and an ITO layer as ground; between the ITO and metallic layers, an oxide insulator is placed (Fig. 13a). The combination of each metallic gate voltage changes the reflection coefficient by varying the charge accumulation/depletion layer at the interface between the ITO and insulator layers. Consequently, the varied reflection coefficients cover the full 2π phase range with independent amplitude in the metasurface, whereas the ITO layer is too thin for a single gate to cover the full phase range 124,226,227 . Electrically tunable metasurfaces successfully control the steering angle. By adopting a receiver together with the electrically tunable metasurfaces, the depth profile has been estimated by measuring the light flight time. This ToF system achieves a switching speed of 5.4 MHz, which is sufficient for scanning in commercial LiDAR systems; however, it has a limited FoV of 6° × 4° with a diffraction efficiency of 1% for full phase control in the metasurface alone.
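For reference, the direct-ToF range relation introduced above is simple enough to state in a few lines. The snippet below is a generic illustration of d = c·t/2 and of how timing resolution maps onto depth resolution; the numbers are illustrative, not measurements from ref. 224.

```python
C = 299_792_458.0  # speed of light (m/s)

def tof_distance(round_trip_time_s):
    """Direct time-of-flight range: the pulse travels to the target
    and back, so the one-way distance is c * t / 2."""
    return C * round_trip_time_s / 2.0

print(tof_distance(66.7e-9))        # ~10 m target
print(tof_distance(1e-9) * 100)     # 1 ns timing error -> ~15 cm of depth
```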
In another LiDAR approach, a metasurface-integrated acousto-optic deflector (AOD) has been developed to enlarge the limited FoV 225 . It simultaneously achieves a large FoV and high speed without compromising the framerate (Fig. 13b) 225 . The AOD can rapidly repoint an incident laser at an arbitrary angle; however, its small FoV (2° × 2°) is not sufficient for application in LiDAR. This small FoV is enlarged to 150° × 150° by integrating metasurfaces with the AOD, and the metasurface-integrated AOD also achieves high-speed alteration of the deflection angle at up to 250 MHz. In addition, an additional AOD enables two-axis scanning for 3D imaging. When the metasurface-integrated AOD is used to recognize and reconstruct human motion, it can simultaneously detect multizone images with peripheral (low-resolution) and foveal (high-resolution) regions using the ineradicable zeroth-order beam from the metasurfaces. Consequently, this system can mimic the human vision system and be applied to advanced driver assistance systems (ADASs).
A wide FoV has also been demonstrated with metasurface-integrated SL imaging systems [228][229][230] , where electrical scanning control is no longer needed for close-range scans. Recently, metasurfaces have been designed as point-cloud spreaders that create a uniform dot array with a high density of ~10K dots over a wide FoV 230 . They are then used in SL imaging systems with two cameras for point capturing (Fig. 13c). After the point cloud is spread by the metasurface, a stereo system reconstructs the 3D depth of the object by capturing the scattered dot arrays on the object surface using two cameras at different view angles. The points in the two camera images are matched through comparison 231 , and the depth is calculated using stereo camera trigonometry with the location coordinates of the matched pairs. If the metasurface is integrated with commercial glasses using nanoimprinting, it is expected to overcome the limitation of conventional bulky systems that require mechanical rotators, which decrease the framerate and the robustness to external impact 232,233 .
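The stereo-trigonometry step reduces, for a rectified camera pair, to the classic pinhole relation Z = f·B/d. The sketch below uses this textbook formula with illustrative numbers, not the calibration of the system in ref. 230.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a matched dot for a rectified stereo pair:
    Z = f * B / d, with f the focal length in pixels, B the camera
    baseline in meters, and d the disparity (pixel offset) of the
    dot between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("a valid match must have positive disparity")
    return focal_px * baseline_m / disparity_px

# Illustrative: f = 1400 px, 8 cm baseline, 35 px disparity
print(stereo_depth(1400.0, 0.08, 35.0))  # -> 3.2 m
```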
Ultrasensitive bio/chemical sensors
Optical sensors infer the characteristics of a target by analyzing scattered or absorbed light. Label-free biochemical detection is an important technique that does not modify molecules with fluorescent or radioactive dyes. Thus, this technique, when applied to bio-analytics or diagnostics, is inherently noninvasive and highly sensitive. When a non-local metasurface with the q-BIC mode is applied to a biochemical sensor, it provides label-free detection via highly surface-sensitive resonance, enabling high sensitivity and chemical specificity. The metasurface can also provide compactness to the sensing system by realizing a spectrometer-less system. In this review, we investigate biochemical sensors that offer the advantages of metasurfaces.
An HSI system has been demonstrated with non-local metasurfaces for tracing biomolecules (Fig. 14a) 234 . Non-local metasurfaces confine electric fields at the surface of the meta-atoms, enabling an extreme response to the local refractive-index change produced by the spatial overlap of individual biomolecules 235,236 . When biomolecules are placed on the sensor, the change in the local refractive index shifts the resonance peak in proportion to the quantity of the biosample. The system's sensing performance has been verified using a biorecognition assay with immunoglobulin G (IgG), and it achieves a higher molecule-per-area sensitivity than the conventional ensemble-average method.
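To first order, such refractometric sensors respond linearly: the resonance shifts by Δλ = S·Δn, with S the bulk sensitivity. The snippet below illustrates this generic relation with an assumed S value; it is not the calibration reported in ref. 234.

```python
def resonance_shift_nm(sensitivity_nm_per_riu, delta_n):
    """First-order refractometric response: the resonance wavelength
    shifts by S * delta_n, where S is the sensitivity in nm per
    refractive-index unit (RIU) and delta_n is the local index change
    produced by bound biomolecules."""
    return sensitivity_nm_per_riu * delta_n

# Assumed S = 300 nm/RIU; a binding event raising the local index
# by 2e-3 RIU would shift the resonance peak by:
print(resonance_shift_nm(300.0, 2e-3))  # -> 0.6 nm
```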
The angle-multiplexed resonance of q-BIC metasurfaces has been exploited to detect the molecular absorption signatures associated with the vibrational modes and absorption bands of individual molecular chemical bonds, thereby enabling molecular categorization (Fig. 14b) 237 . Angle-multiplexed q-BIC metasurfaces, asymmetric along the x-axis, are designed so that the resonance wavenumber changes continuously with the incidence angle, covering the range of 1080-1820 cm −1 . While light is shone at variable incidence angles using a moving mirror and a spectrometer, molecules adsorbed on the metasurfaces attenuate the resonance line shape via near-field coupling. Consequently, absorbance spectra of diverse molecules are characterized with high sensitivity and spectral selectivity. The absorption spectrum of a thin polymethyl methacrylate (PMMA) film has been measured, yielding a result similar to that of standard IR reflection absorption spectroscopy (IRRAS). A bioassay detecting human odontogenic ameloblast-associated protein (ODAM) with a spectrometer-less sensor, including a broadband source, has shown a detection limit of 3000 molecules per µm 2 and a surface mass sensitivity of 0.27 pg mm −2 .
Conclusions
In summary, metasurface-based optical systems have achieved great success by providing high-resolution receivers, polarization-controlled single-photon emitters, and tunable wavefront controllers. Also, by combining metasurfaces with classical optical components, the performance of planar waveguides, optical fibers, and ROEs has been extended. Composed metasurfaces have provided spatial wavefront control 199 , high-end optical security 195 , and polarization analysis 200 . In the perspective section, this review has outlined the future direction of metasurface-integrated photonic applications with examples of recent works on VR/AR, LiDAR, and sensors.
However, from our perspective, three challenges remain for the commercialization of metasurface-based optical systems. In general, metasurfaces are mostly manufactured with CMOS processes, and many researchers argue that the CMOS process is advantageous for commercialization when metasurfaces are incorporated into commercial devices. However, it is generally not compatible with optic-module manufacturing processes such as injection molding or milling. Moreover, the CMOS process is a higher-cost production method than those of ROEs and DOEs, increasing the total production cost of a metasurface-integrated optical platform. Another challenge relates to the low efficiencies of metasurfaces. Although metasurface efficiencies reach over 90% at a single wavelength, achromatic metalenses at visible wavelengths 238 still have lower efficiencies (40%) than conventional ROEs (>95%, Thorlabs, mounted achromatic doublet). Since the efficiency of a total optical system cannot exceed the minimum efficiency of its optical components, metasurfaces are not yet suitable for applications requiring efficient light manipulation. The final challenge lies in the quantification methods for metasurfaces. Compared to conventional ROEs and DOEs, the metasurface quantification method has not yet been unified 7 . For example, in the case of metalenses, various groups use different definitions of efficiency and different measurement systems 7 . These differing figures of merit prevent metasurfaces from being compared not only with one another but also with the conventional optical systems into which they are integrated.
Regardless of these challenges, we believe that metasurfaces will be essential components in designing future optical platforms such as detectors for automobiles 221 , displays for wearable devices 54 , and healthcare monitors for precise diagnostics 239 . Nanofabrication methods have been developed to be more compatible with advanced optical materials, and their efficiencies have continuously increased with the bandgap engineering of optical materials. For example, particle-embedded resin [240][241][242][243] and large-area, low-loss dielectric deposition methods [244][245][246][247][248] have recently been demonstrated specifically for metasurface manufacturing, achieving low-cost production of large-area metasurfaces with near-unity efficiencies. These methods enable cost-effective metasurface manufacturing, with production costs that will be comparable to those of conventional ROEs and DOEs. Moreover, considering that the most recent reports have proposed unified quantification methods for metasurfaces, we believe that metasurface-integrated optical devices will be a promising option for constructing near-future photonic platforms, enabling a broad range of applications for metasurface-integrated photonics in everyday life.
It has already been proven that metasurfaces offer an effective and feasible way to engineer electromagnetic waves, and there is increasing demand for compressing device sizes and pushing light manipulation to its extremes. To bring metasurfaces into real-life devices beyond the current prototypes, many efforts should be directed at integrating metasurfaces with traditional components rather than competing with conventional optical systems.
"Physics"
] |
Deciphering the Fine Details of C1 Assembly and Activation Mechanisms: “Mission Impossible”?
The classical complement pathway is initiated by the large (~800 kDa) and flexible multimeric C1 complex. Its catalytic function is triggered by the protease hetero-tetramer C1r2s2, which is associated with the C1q sensing unit, a complex assembly of 18 chains built as a hexamer of heterotrimers. Initial pioneering studies gained insights into the main architectural principles of the C1 complex. A dissection strategy then provided the high-resolution structures of its main functional and/or structural building blocks, as well as structural details on some key protein–protein interactions. These past and current discoveries will be briefly summed up in order to address the question of what remains ill-defined. From a functional point of view, the main molecular determinants of C1 activation and its tight control will be delineated. The current perspective remains to decipher how C1 really works and is controlled in vivo, both in normal and pathological settings.
INTRODUCTION
In 1897, in the very early period of nascent immunology, the Nobel prize winner Jules Bordet discovered a heat-sensitive serum effector triggered by immune complexes and absolutely required for the lysis of Ab-coated erythrocytes or bacteria. At that time, it was named "alexine." As discovered later on, this effector mechanism is very complex, involving many proteins, namely the complement system (C) triggered via the classical pathway (CP) (1, 2) (see Figures 1A,B). Deciphering the fine structural mechanisms governing this CP-activating function of the first C component, C1, remains experimentally difficult and has progressed through iterative steps, which will be briefly summarized here.
Why is it important to decipher the C1 structure and the C activation mechanism? One obvious aim is to improve the C1-mediated effector mechanism in antibody therapeutics (8). C1 indeed plays a crucial role in the efficient elimination of Ab-coated targets, as confirmed by the disease susceptibility of patients affected by deficiencies in components C1q, C1r, C1s, and C4, all involved in CP activation (9). Another hallmark of these deficiencies is the very large propensity to develop autoimmune diseases such as lupus erythematosus, which underlines that other essential functions are provided by CP activation (9)(10)(11)(12)(13). On the other hand, non-physiological activation of the CP, interference by foreign substances such as carbon nanomaterials (14,15), or a defective control of CP activation can also be strongly detrimental. Such undesirable activations can happen, for example, in cases of transplantation, neurological disorders, and rheumatoid arthritis (16), and thus new strategies to specifically inhibit the CP are awaited. From a more general standpoint, the functional impact of the complement system now appears far broader and more essential than initially assumed (17,18).
INITIAL STUDIES AND FIRST LOW RESOLUTION FUNCTIONAL C1 MODELS
Very active pioneering investigations were performed during the 1963-1987 period (1)(2)(3)(19). The sequences of the C1q, C1r, and C1s subcomponents, their fixed (C1q:C1r2s2) stoichiometry, as well as the calcium-dependency of the interaction between C1r and C1s have been deciphered. Biochemical experiments revealed that C1r and C1s are sequentially activated (Figure 1A) and their unique Arg-Ile activation cleavage site has been precisely identified (3). In both cases, a disulfide bridge maintains a covalent link between the catalytic serine protease (SP) domain and the preceding modules. Careful protein biochemical analyses detailed the numerous C1q post-translational modifications such as proline and lysine hydroxylations and hydroxylysine glycosylations, which were mainly confirmed recently (20). The main functional domains were isolated by limited proteolysis of the serum-derived proteins and their shape studied by several biophysical methods such as small angle X-ray or neutron scattering and electron microscopy (21-23) (see Figure 1C). C1q is a very flexible 450 kDa molecule, partly stabilized by the associated protease tetramer (24). Catalytic and interaction domains were identified for each C1r and C1s protease (Figure 1C). In an apparent paradox, a very elongated shape was observed by neutron scattering for the protease tetramer in solution (larger maximum radius of gyration Rg of 17 nm) in contrast to the measures for C1q (Rg of 12.8 nm) and for the C1 complex (Rg of 12.6 nm), which suggested a substantial conformational change of the tetramer and/or C1q upon association (Figure 1C) (3,25). The other intriguing feature was about the symmetry level inside the complex since the C1q hexamer associates with a proteases tetramer (19,24). Several "low resolution" models were proposed for C1 at that time, the main differences being the speculations about its activation mechanism and on how the proteases are tightly packed inside C1, and whether they are fully kept inside the C1q cone or not (3,19,24,26).

Figure 1. (A) The multimeric C1q molecule is associated to the C1s-C1r-C1r-C1s tetramer. When C1q binds to an activating target surface, a conformational change triggers the auto-activation of the associated C1r protease (converting the pro-enzyme into an activated form, black circular arrow), which then activates C1s (black arrow). C1s activates C4 and C4b-bound C2 (red arrows), leading to the assembly of the classical C3 convertase C4b2a. Green arrows are used for the activation cleavage of C2, C3, and C4, with the release of a small fragment. Details of the consecutive AP amplification loop are not given for sake of clarity. It involves C3-C9 components and mediates rapid opsonization, signaling events, as well as eventually formation of the lytic pore. The initial steps are numbered from 1 to 5. The first two steps occur inside C1 and depend on C1q conformational change and the consequent C1r activation. Steps 3 and 4 depend on C1s proper positioning and catalytic activity. (B) Current hypothetical schemes on similar interaction modes between C1q and IgM or IgG hexamers, the best CP activators. The new scheme proposed for IgG is in contrast with the traditional old scheme (right) depicting one C1q molecule interacting with two distant IgG molecules, each antigen-bound through its two Fab arms. (C) The "C1 paradox" and initial low resolution C1 models. C1 is a 30 nm high multimer resulting from the association of the flexible recognition protein C1q with the flexible C1s-C1r-C1r-C1s tetramer, which appears more elongated (S extended shape) in solution than in the complex (thus the initial "paradox"). C1q (yellow) has a hexameric shape, built from 18 chains. Interaction (I) and catalytic (C) domains of C1r and C1s are labeled and colored on the right side. The asterisks show the position of flexible hinges in C1q. The low resolution model on the left and the proposed tetramer conformational equilibrium on the right are derived from (3). (D) Modular structure of each C1q chain type. A, B and C chains associate as a hexamer of ABC heterotrimers. Kink indicates the position of disruptions in the triplets occurring only within collagen-like sequences of the A and C chains and probably inducing flexible hinges. The disulfide bridging between chains A and B is illustrated. The C chain has no covalent link with A and B chains, but covalently associates pairs of ABC trimers through a C-C disulfide bridge. The two lysines crucial for C1 assembly are shown in pink. (E) Modular structure and associated functions of C1r and C1s. The catalytic domain includes the C-terminal serine protease (SP) domain as well as the preceding Complement Control Protein (CCP or sushi) modules. The interaction domains of C1s and C1r involve their N-terminal CUB-EGF-CUB modules. The corresponding functional implications are mentioned. The same color coding is used in (F,G) and in the right panel of (C). "CUB" means initially found in Complement C1r and C1s, Uegf and BMP-1. (F) C1 is a large complex made of small building blocks of (mainly) known structures. The displayed C1s is a composite structure obtained after superposing the PDB structures 1ELV (4) and 4LMFA (5) onto 4LOT (5) (see details in Table S1 in Supplementary Material). The color code used is the same as in (C,E). The chains ABC from the C1q globular domain [2WNV (6)] are shown on the same scale. (G) Example side view of a partial composite C1 model, refined using the results of differential accessibility in C1q and C1 using chemical lysine labeling followed by mass spectrometry (7). The C1r and C1s proteases interact with C1q through their interaction domains aligned on the same plane (which corresponds to the position of LysB61 and LysC58 in C1q). This part of the model is mainly confirmed by recent complementary experimental studies (8). The position of the catalytic domain of C1s is more uncertain and probably variable.
THE MAIN MOLECULAR PLAYERS INVOLVED IN C1 ACTIVATION AND ITS TIGHT CONTROL
The C1r and C1s proteases are produced as inactive precursors (called zymogens) and thus need to be activated "on the spot" by a specific Arg-Ile proteolytic cleavage in response to a triggering signal. This activating cleavage induces a conformational rearrangement, as classically described for proteases of the trypsin-like family. C1-inhibitor, a protease inhibitor of the serpin family, exerts the main physiological control on the activity of the C1r and C1s proteases, both by inhibiting their activation and by dissociating them from activated C1. C1 auto-activation can be observed in vitro in the absence of C1-inhibitor or upon heating, which induces large conformational changes and also probably abolishes the C1-inhibitory effect (19). The adverse effects related to uncontrolled C1 activation are thus mainly linked to unbalanced C1-inhibitor control. C1-inhibitor is a multipotent serpin, also controlling some proteases of the fibrinolytic system and of the contact/kinin system of coagulation in addition to the C1r, C1s, and MASP complement proteases, and thus its deficiency leads to severe diseases such as hereditary angioedema (27).
IgM or IgG immune complexes are the best physiological C1 activators identified to date, especially in the presence of C1-inhibitor. Although it has long been known that C1q binds to the IgG Fc domain and that activation requires multivalent binding, the details of how this can happen had remained poorly understood (8). IgG mutations are known to strongly influence C1q binding and C activation (28)(29)(30)(31). Of note, these mutation studies did not fully confirm the originally predicted E-x-K-x-K IgG C1q-binding consensus motif (28), which nevertheless remains in use by some teams as a C1q-binding predictive tool.
A recent study has shown how IgG surface clustering through Fc-dependent hexamers can lead to very efficient C1 activation (8) (Figure 1B). Interestingly, this mode of hexameric clustering is far more similar to the pentameric/hexameric IgM assembly than to what was traditionally proposed (Figure 1B). It has long been described in textbooks that C1 activation involves binding to at least two IgG molecules, each one bound to the surface through its two Fab segments (Figure 1B). In contrast, in the recently proposed hexameric IgG assembly, each IgG seems to have only one Fab arm on the target surface, the other Fab arm lying in the same central plane as the clustered Fc platform (8). This recent breakthrough brings new clues about how to enhance the complement-dependent cytotoxicity of IgG, since the E345R mutation was described as a general C1 activation enhancer for all IgG isotype variants (8). The recent structure of the deglycosylated IgG4 Fc further supports this hypothesis of a possible generic hexameric Fc assembly, which is stabilized by the E345R mutation (32). The IgG1 and IgG4 Fc form quite similar hexameric rings of 175 Å diameter, which is in the same range as the 180 Å diameter estimated for the comparable IgM Cµ3-Cµ4 hexameric platform (32). Local differences are observed between the different IgG isotypes in their hexameric interface composition and surface loop conformations (32). Of note, the IgG4 homologous C1q-binding loop is flexible, with at least two different conformations observed. The major conformation observed in native IgG4 prevents C1q binding, which correlates with the strongly reduced level of CP activation by native IgG4 hexamers (32).
CURRENT STRUCTURAL KNOWLEDGE ON C1 BUILDING BLOCKS AND KEY PROTEIN-PROTEIN INTERACTIONS
Although the first C1r crystals were obtained in 1981 [cited in Ref. (26)], X-ray crystallography analyses were initially limited, probably because of molecular flexibility. The C1 complex and most of its components are indeed very flexible (Figure 1C). A dissection strategy has thus been set up to determine the high-resolution structures of the main functional blocks (33) and of several structural joints, as detailed in Table S1 in Supplementary Material (Figures 1D-F). For the C1q molecule, only the X-ray structure of the C-terminal globular domain could be obtained (34), alone or in complex with minimal recognition motifs, such as deoxyribose for DNA, which gave insights into its recognition properties [reviewed previously in Ref. (35)].
More X-ray structures of C1r and C1s protease domains have been determined (Table S1 in Supplementary Material). The structures of all C1s modules are now known ( Figure 1F). Detailed insights about conformational rearrangements were obtained by comparing different X-ray structures, for example between proenzyme and active states of the SP domains (36,37), as well as some variations in inter-modular orientations (5,38). The structure of the SP domains also revealed the main structural determinants of their restricted substrate specificity (4,37,38). However, C1s SP domain alone is not able to cleave C4 efficiently (39). C4 cleavage, which is the first step of both the classical and lectin activation pathways, appears thus to be more stringent since it requires additional exosites (40). The fine structural details about exosites in MASP-2 (the equivalent of C1s in the lectin pathway) and their interaction with C4 were unraveled recently (41). The functional implication of the homologous CCP exosite in C1s could be confirmed by mutational analyses (41). The structure of the C1s exosite at the CCP1/CCP2 interface was then solved recently (37). Interestingly, both the zymogen structure and surface plasmon resonance interaction analyses suggest that the C1s exosites are partly hidden in the pro-enzyme state (37).
Structural details of protein-protein interactions relevant in terms of C1 assembly were also unraveled during this structural dissection, such as the head-to-tail interaction of the C1r catalytic domains. Such a dimeric interaction has been observed three times by X-ray crystallography and the butterfly-like side view (Figure 2A) can also be recognized at the center of early electron micrographs of the proteases tetramer (23,36,42). This interaction is maintained through contacts between the CCP1 module of one C1r subunit and the SP domain of its partner (36). One of the functional consequences is the larger than 90 Å distance between the active site of one monomer and the scissile bond of its partner, which prevents spontaneous mutual activation in this dimeric context (36). This auto-inhibited assembly looks like a "resting" state, which requires a conformational change to trigger C1 activation (36,43). This interface between the catalytic domains of C1r is really specific of the CP activation, with no equivalent in the complexes activating the lectin pathway. Another structural feature of the C1r zymogen is the inactive occluded conformation of its primary binding site (44).

Figure 2. (B) The central EGF calcium-binding sites stabilize both the inter- and intra-monomeric CUB-EGF interfaces (highlighted by gray rectangles). Since interface residues are mostly conserved in C1r (compared to C1s), we can assume that this head-to-tail packing observed with C1s homodimers also stands for the C1s-C1r heterodimer. This typical shape can also be recognized on some rare electron micrographs performed on the proteases tetramer (23). Yellow sphere, calcium in EGF; green sphere, calcium or magnesium in the C1s CUB1 module [PDB code 1NZI (45)]. (C) Calcium-dependent interaction between C1s CUB1 module and a lysine-containing collagen-like peptide [PDB code 4LOR (5)]. The main structural determinants are highlighted. The lysine side chain directly interacts with Glu45, Asp98, and Ser100. Asp53 is an essential component of the calcium-binding site. Mutations of Glu45, Tyr52, and Asp98 strongly alter C1q-binding properties [reviewed in Ref. (46)].
Calcium-dependent C1 assembly is controlled by the proteases' CUB and EGF modules (47). The structural details governing these interactions have been mainly deciphered, although somewhat indirectly. The C1r/C1s calcium-dependent interaction is mediated by their CUB1 and EGF modules, which form a head-to-tail dimer under the control of their EGF calcium-binding site (45) (Figure 2B). The calcium ion is tightly bound to the C1s EGF module in the context of the CUB1-EGF C1s dimeric interface, since it could not be replaced by lanthanides during the soaking experiments used to solve the X-ray structure (45). This head-to-tail interaction can also be recognized on some early electron micrographs of the proteases tetramer (23). Unexpected calcium-binding sites are present in the CUB domains and govern the interactions between the proteases and the C1q collagen-like stems (45,48). The calcium ion associated with the C1r CUB2 module appears to be quite labile, although it greatly enhances the structural stability of this module (49). Site-directed mutagenesis offered a very effective tool to confirm and detail the essential contributions of several amino acids in the full-length molecules: (i) it identified residues essential for C1q binding in C1r: E49, Y56, and D102 in CUB1; D226, H228, Y235, and D273 in CUB2. Other mutations severely affecting the C1q interactions were observed for E45 and Y52 in C1s CUB1 (46,48). (ii) Conversely, the lysines B61 and C58 in C1q were identified by site-directed mutagenesis as essential protease-binding residues (50). These lysines are very close to the patient mutation GlyB63Ser resulting in a C1q functional deficiency including defective CP activation (12).
Similar calcium-dependent CUB and EGF interactions have since been observed in the MASP-defense collagen complexes initiating the lectin complement pathway, as well as in other unrelated molecular systems (46,51,52). The structure of the C1s CUB1-EGF-CUB2 fragment in complex with a collagen-like fragment containing the OGKLGP sequence (O standing for hydroxyproline, Figure 2C) confirmed such a generic mode of association but revealed a different orientation of the CUB2 module as compared to MASP CUB1-EGF-CUB2 fragments (5).
WHAT STILL REMAINS ILL-DEFINED?
Only the structures of the C1r CUB modules and of the C1q collagen-like domain have not yet been solved at atomic resolution, but we know at least their overall shape and scaffold through homology and experimental analyses such as electron microscopy. The structure of the C1q recognition domain, where the three subunits (Figure 1C) tightly interact with each other in an ACB clockwise order (as seen from the collagen stem), has also indirectly given some clues about the relative ordering of the three chains in the preceding collagen-like stem (34,47).
The isolated fragment X-ray structures or models can be combined into hypothetical C1 models (47). These C1-like models illustrate hypotheses in 3D space about possible modes of C1 assembly and activation, which can then be further tested by site-directed mutagenesis (48). These models are idealized since, for example, C1 is always displayed as a symmetrical molecular complex although we know that it is highly flexible, which disrupts most of its symmetrical conformation in response to the environment. These models also aim to provide a synthetic overall representation consistent with accumulated experimental evidence (7). For example, the model depicted in Figure 1G accounted for the differential accessibility of lysine residues in C1q and C1 derived from comparative mass spectrometry analyses as well as previous experimental knowledge (7). However, such a dense C1 complex cannot easily be seen on electron microscopy images (unpublished results), and thus the corresponding C1 model remains an "in silico" interpretation (as do most C1 models).
Part of the "C1 paradox" has thus been elucidated since we know most of the building block structures and also key residues involved in C1 assembly, with now six C1q-binding sites in the protease tetramer (48). Nonetheless, details on how a flexible protease tetramer associates with such a flexible recognition molecule, and how C1 activation proceeds and is controlled remain ill-defined. In contrast to the in vitro studies, C1q and C1 can be found in vivo under flow conditions, both in the circulation and in the extravascular fluid, where shear stress could affect C1 assembly and activation (53). Moreover, observing fine structural details within C1 still represents a real experimental challenge because of its great flexibility and modular composition. The following questions are thus partially unanswered: How flexible is each inter-modular junction in vivo? Is the C1r CUB2 module only partially saturated by calcium in vivo, and thus possibly marginally stable within C1 (49)? What is the role of the charged and flexible long insertion in C1r EGF (54)? Which chain is at the leading, medium, and edge position in the native C1q collagen heterotrimeric stem? What are the relative positions between these native C1q stems and the proteases CUB domains? Do the proteases stably stay attached to C1q or is there a fast assembly/disassembly equilibrium? What drives the spectacular conformational change of the proteases from their elongated flexible shape in solution toward the assumed compact C1-associated conformation? How can we observe, describe, or deduce the details of the conformational changes involved during C1 activation? How can we observe the transmission of the triggering signal from C1q recognition to C1r activation? How can we characterize the required C1q conformational change(s)? How is C1r activation propagated to the successive C1s, C4, and C2 activations? How does C1-inhibitor finely control these processes? What about C1 activation by non-immune targets in a physiological or pathological context? How do differences in antigenic structures and surface density precisely modulate the levels of CP activation by the Ab-coated targets? How can we predict the classical C activation outcome when C1q binds to ligands through its globular heads? How do pathogens interfere with C1 activation?
PERSPECTIVES
Over the years, detail after detail, the image describing the immunoglobulins/C1 interaction is gradually emerging. But the flexibility of the C1 molecule and its thin flexible building elements such as the collagen-like stalks make its fine details difficult to observe. Even electron microscopy performed on C1 bound to hexameric IgG surface clusters on liposomes did not fully overcome the limitations due to C1 flexibility, since only four (out of the six expected) globular densities probably corresponding to C1q recognition domains could be consistently observed on top of the hexameric IgG assembly (8). The collagen stems are also too thin, fragile, and flexible to be seen on averaged density maps. Only the position of the larger N-terminal collagen stalk remains visible after averaging. Visible density also remains after averaging for the region probably corresponding to the interaction domains of C1r and C1s, which fill a continuous section inside the C1q cone.
In conclusion, although refining the structural details of C1 assembly and activation remains a difficult challenge, this mission does not sound definitively "impossible." The scientific community will probably find out new solutions to further decrypt the fine structural details, for example by matching X-ray structures and electron density maps obtained from new developments in electron microscopy and associated computing strategies. The use of recombinant C1 fragments (C1q, C1r, C1s) will be useful to further check in detail their structure/function relationships.
ACKNOWLEDGMENTS
The experimental C1 dissections aimed toward structural investigations have been initiated in Grenoble under the leadership of Gérard Arlaud. This work has been generally supported by CNRS, CEA, University Grenoble Alpes, by the "Programme Transversal de Toxicologie du CEA" and by grants from the French National Research Agency (ANR-05-MIIM-023-01, ANR-09-PIRI-0021).
"Chemistry"
] |
Gas Turbine Ontology for Industrial Processes
The activity of supervising and controlling industrial processes is a very complex task and requires great expertise because of the dynamic characteristics of the process. This expertise is acquired over years, which makes the retirement of an expert a great loss of know-how. The problem thus consists in capturing this know-how and allowing experience to be accumulated with the aim of building an enterprise memory. We propose an ontology-based approach to capture this know-how. Among dynamic situations, three classes are distinguished: normal-operation situations, degraded-operation situations, and incident situations. The work presented in this article was developed in the production division at SONATRACH. It relates to the supervision and control of the industrial process of a compressor station, which constitutes a typical case of a dynamic situation. Among the three classes of dynamic situations cited, we concentrate on degraded-operation situations. These situations, different in nature from the usual normal-operation situations, subject the operator to a workload that is both complex and stressful. The work presented in this article falls within the framework of a doctoral project whose principal objective is the development of an intelligent system for expertise and decision-making support in the domain of industrial maintenance for compressor stations. It relates to ontological engineering and more particularly the use of ontologies in knowledge-based systems. In this work we try to build an ontology for the domain of industrial maintenance. This ontology is not yet operational because it does not include reasoning mechanisms; it is independent of any context of use.
INTRODUCTION
A compression station is a unit or series of compressors that draws in fluid at a rather low pressure and discharges it at a distinctly higher pressure. Its role is thus to reduce the volume and raise the pressure of the gas. This compression is performed in two stages by the two groups that make up the station. The first group consists of a two-stage gas compressor, and the second comprises two distinct gas compressors mounted on the same shaft. These compressors are driven by turbines. The activity of supervising and controlling such industrial processes is a very complex task and requires great expertise. This expertise risks being lost with each departure of an employee on the one hand, and being dispersed among several distant experts on the other, which deprives the company of the benefit of this expertise and of exploiting it effectively.
Our work falls within the context of a shared system where domain experts need to share and exchange knowledge remotely, collaborating to assist in diagnosis. This allows, on the one hand, competences to be gathered and, on the other hand, experience feedback to be obtained with the aim of capitalizing know-how.
To communicate, the agents must thus share an ontology. In order to constitute shared knowledge, an ontology must be explicit and expressed in a shared language or formalism.
In AI, an ontology is the specification of the objects, concepts, classes, functions, and relations of a domain, independently of any particular application, using formalisms such as semantic networks and conceptual graphs. Ontologies are used by people, databases, and applications needing to share information relating to a domain [1] . An ontology is thus the support for knowledge acquisition, and it is also a useful tool for interfacing software agents and human agents. The work presented in this article consists of the conceptualization of an ontology in the domain of industrial maintenance. This ontology is developed for two reasons: * To allow maintenance agents to share a common understanding of the structure of information concerning gas turbines and their maintenance.
* To allow the re-use of know-how in this domain.
Models based on ontologies:
The models of knowledge representation used in ontological engineering can be grouped according to the conceptual paradigms that they reify. The following are thus distinguished: * Models based on frames [2] ; * Models based on description logics; * Models of conceptual graphs.
The frame model: The frame model was initially proposed as a language for representing ontologies by T. Gruber. The principle of this model is to decompose knowledge into classes (or frames) which represent the concepts of the domain. To a frame a certain number of attributes (slots) is attached, and each attribute can take its values from a set of facets. Another way of presenting these attributes is to regard them as binary relations between classes, whose first argument is called the domain and the second the range. Instances of the classes, corresponding to the extension of each concept, can be added, as well as functions, which are particular types of relations binding a set of classes to a value computed from the values of the attributes of the classes. The specification of the conceptual properties of the attributes (or relations) resorts to formulas of first-order logic. The semantics of subsumption is purely extensional: a frame F1 is more specific than a frame F2 if any instance of F1 is an instance of F2. Several frame-based languages exist. Let us quote as examples: F-Logic, the best-known example of an operational language based on frames; Knowledge Interchange Format (KIF), a non-operational language implementing the frame model in first-order logic for knowledge representation; and Protégé-2000, another non-operational language implementing the frame model, which uses the OKBC knowledge model as the basis for its own model.
The conceptual graph model: Introduced by J. Sowa at the beginning of the eighties, the Conceptual Graph (CG) model belongs to the family of semantic networks. Semantic networks model knowledge in the form of graphs; nodes are associated with concepts and edges with relations. This model lends itself well to pictorial displays of knowledge, but it presents certain particular representation problems.
The CG model decomposes into two parts: * A terminological part dedicated to the conceptual vocabulary of the knowledge to represent, i.e. the types of concepts, the types of relations and the instances of the types of concepts. This part corresponds to the representation of the conceptual model, but also integrates knowledge on the hierarchy of the types of concepts and relations; * An assertional part dedicated to the representation of the assertions of the domain knowledge studied [8].
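The frame paradigm described above can be made concrete with a minimal sketch (written here in Python purely for illustration; the class, slot and facet names are hypothetical, not drawn from F-Logic, KIF or Protégé-2000):

```python
# Minimal frame-style representation: a frame holds slots, and each slot is
# constrained by facets (allowed value type, optional set of allowed values).
class Frame:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # subsumption link ("sort-of")
        self.slots = {}               # slot name -> facets
        self.instances = []           # extension of the concept

    def add_slot(self, slot, value_type, allowed=None):
        self.slots[slot] = {"type": value_type, "allowed": allowed}

    def instantiate(self, **values):
        # Check each value against the slot facets before accepting it.
        for slot, value in values.items():
            facets = self.slots.get(slot) or (
                self.parent.slots.get(slot) if self.parent else None)
            if facets is None:
                raise KeyError(f"unknown slot {slot!r} for frame {self.name}")
            if not isinstance(value, facets["type"]):
                raise TypeError(f"slot {slot!r} expects {facets['type'].__name__}")
            if facets["allowed"] and value not in facets["allowed"]:
                raise ValueError(f"{value!r} not allowed for slot {slot!r}")
        self.instances.append(values)
        return values

component = Frame("Component")
component.add_slot("designation", str)
instrument = Frame("Instrument", parent=component)   # Instrument sort-of Component
instrument.add_slot("measured_parameter", str,
                    allowed=["temperature", "pressure", "vibration"])
thermocouple = instrument.instantiate(designation="TC-01",
                                      measured_parameter="temperature")
```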
Construction of an ontology: Sharing a common understanding of the structure of information among people or software agents is one of the most common goals in developing ontologies [6]; thus the role of an ontology is to record a set of definitions of terms which corresponds to a conceptualization shared by the actors of a domain. The ontology contains the terminological primitives of the domain (the conceptual vocabulary, structured as a set of concepts and a set of relations existing between these concepts) as well as the semantics of the handling of these primitives, expressed using axioms. A concept is characterized by: a term, an extension (the objects or instances handled through this concept) and an intension (the set of properties specifying the semantics of the concept). A relation is characterized by: a term, an extension and an intension. It is used to describe a relationship among two or more terms. If a relation represents a relationship between only two terms, it is called a slot or a binary relation. If the relation describes a relationship among n terms such that there is a unique nth term corresponding to any set of the first n-1 terms, then the relation is called a function [7].
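To illustrate the distinction just drawn between a binary relation (slot) and a function, here is a small sketch; the relation names are hypothetical examples in the spirit of the turbine ontology described later:

```python
# A binary relation (slot) is a set of pairs; a function over n terms maps
# any first n-1 terms to a unique nth term.
part_of = {("axial_compressor", "gas_turbine"),
           ("combustion_chamber", "gas_turbine")}        # binary relation

def alarm_level(parameter: str, value: float) -> str:
    """Function: (parameter, value) -> unique status term."""
    limits = {"temperature": 550.0, "pressure": 12.0}    # illustrative limits
    return "alarm" if value > limits[parameter] else "normal"

assert ("axial_compressor", "gas_turbine") in part_of
assert alarm_level("temperature", 600.0) == "alarm"
```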
An axiom is a sentence in first-order logic that is assumed to be true without proof. In practice, we use axioms to refer to the sentences that cannot be represented using only slots and values on a frame. They are specific to ontologies and distinguish them from thesauri, which present only terminologies. Axioms can represent common properties of concepts and relations. These properties are called "axiom schemata"; examples are the subsumption property (the "sort-of" relation between concepts or relations) and concept genericity (a generic or abstract concept which does not admit an instance). Certain axioms appear in all the knowledge-representation formalisms used to describe ontologies. Properties specific to the domain knowledge considered can also appear and be included in the ontology.
Fig. 1: Process of construction of an exploitable ontology within a knowledge-based system (conceptualization: informal; ontologization: semi-formal; operationalization: formal).
Raw data, constituting a corpus (expressed a priori in natural language), integrate all the knowledge of the domain which one wishes to formalize.
Step 1: Conceptualization: This consists in identifying precisely, starting from the corpus (a set of documents, generally expressed in natural language, which must cover the whole of the domain knowledge considered), the conceptual objects specific to the considered domain (concepts, relations and axioms). An informal conceptual model is then obtained (informal because it is expressed in natural language), also called an informal ontology. This model is used as support for the cooperative work of ontology construction between the maintenance actors. It constitutes, moreover, a privileged means to diffuse the ontology to people who would wish to reuse it for their own application or research tasks.
Step 2: Ontologization: This consists of a partial formalization of the conceptual model, without loss of information. It is a question of transcribing the knowledge into a language or paradigm of ontology representation (the frame model, the entity-relation model, the conceptual graph or semantic network model…).
Step 3: Operationalization: This consists of the integration of the knowledge into a knowledge-based system. This step completes the formalization of the ontology obtained previously, within the framework of a formal and operational knowledge-representation language.
Whereas the construction of ontologies is the object of much research, the operational use of ontologies has still been little studied [3].
Description of our ontology: Case study
Presentation of the real system: Our process is a compressor station among other similar stations. Several components, such as turbines, compressors and flare drums, compose this station. The shutdown of one of them following a breakdown can produce enormous damage for the company. The separation and treatment unit receives oil and crude gas from several areas (satellites) in order to carry out the separation operation. This operation relies on the compression delivered by the compressor, which turns thanks to the turbine and the effect of heat. This implies that if the treatment unit breaks down, all the crude coming from the satellites will be lost and directed towards the flares for combustion in the case of gas, and to the ground surface in the case of oil, in order to preserve continuous operating conditions for the satellites.
The breakdown of the turbine thus leads to crises in the consumption of gas, oil and the electricity generated by the turbine; moreover, the cost of the equipment is very high, especially since most repairs and overhauls are carried out by external teams. All these consequences show the need for a tool that helps accumulate all the cases of breakdowns, so as to foresee them and then avoid them. This is predictive maintenance.
In our work we are interested in the control of the gas turbine. These turbines are of cardinal importance in the production process, and their maintenance costs are very high.
Description of the gas turbine:
The gas turbine is a rotary-motion, internal-combustion engine, provided with an air compressor and a combustion chamber, able to produce a fluid under pressure at a very high temperature. This fluid, while expanding in the stages of the turbine, releases mechanical energy to the outside. The turbine is made up of three principal parts: 1. An axial compressor: the principal function of the compressor is to compress the atmospheric air to a higher pressure. 2. Combustion chambers: the compressed air coming from the compressor is mixed with fuel and the mixture is ignited. The product of this combustion is a vein of hot gas at high pressure. 3. The turbine wheel: the hot gases at high pressure expand, producing work to drive the compressor of the turbine on the one hand, and the load on the other hand.
Functioning:
The atmospheric air, drawn in by the axial compressor, is compressed and then discharged into the combustion chamber, where the fuel is introduced; the desired mixture (compressed air and gas under pressure) is obtained.
A spark provided by a spark plug causes combustion. The heat produced in the combustion chamber and the energy released by the combustion products are directed towards the first wheel of the turbine, where this thermal energy is transformed into mechanical energy.
Part of the power developed by the turbine is used to drive the axial compressor (after its uncoupling from the starting engine or launching turbine). The other part of the developed power is converted into usable energy, i.e. used to drive the driven machine (a gas compressor, in our case).
Fig. 2: Diagram of a gas turbine group (electric motor, axial compressor, combustion chamber, power turbine, reducer or multiplier; entry of the combustion gas; output of energy).
Our model of ontological representation: The role of our ontology is to describe the equipment to maintain, which is a gas turbine with all its components and all the information concerning its supervision and control. The sources used to extract the domain knowledge, in order to build our ontology, are: domain experts, books, scientific articles, manufacturers' handbooks, CDs… The knowledge-representation model used for the representation of our ontology is based on frames.
The conceptualization
Concepts: the concepts which constitute our ontology number approximately 700, of various types: classes, slots and instances. The relations between concepts: * Is-a: this relation leads to a taxonomy of concepts. * Part-of: this relation makes it possible to determine the subcomponents of a component. * Produce: this relation shows that a concept (parameter) produces other concepts (alarm, complete stop). * Control: this relation shows the concepts controlled by another concept. * Supervise: this relation shows that a component is supervised by another component (instrument). * Occur-because: this relation shows that an «alarm/complete stop» concept occurs because of a «probable cause» concept.
* Have: this relation shows that a «probable cause» concept has one or more remedies. Axioms: among the detected axioms, let us quote: * An instrument is a component. * The gas turbine is an equipment. * If a parameter (temperature, pressure, vibration…), given by an instrument, is lower or higher than a normal value, an alarm is announced or a complete stop of the equipment is carried out. Protégé-2000 allows the capture of structured data by using knowledge-acquisition windows to acquire the information of the instances. When the user defines a class, he attaches slots to it. The Protégé-2000 system automatically generates a window to acquire the instances of this class. This window can be personalized by the user [5].
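As an illustration, the third axiom above (an out-of-range parameter produces an alarm or a complete stop) could be operationalized roughly as follows; the numeric ranges and the extra stop margin are assumptions introduced for the example, not values from the ontology:

```python
# Illustrative axiom check: a parameter outside its normal range raises an
# alarm; far outside it, a complete stop of the equipment is triggered.
NORMAL_RANGE = {"temperature": (0.0, 500.0),
                "pressure": (1.0, 10.0),
                "vibration": (0.0, 3.5)}   # assumed example values

def evaluate_parameter(name, value, stop_margin=0.2):
    low, high = NORMAL_RANGE[name]
    if low <= value <= high:
        return "normal"
    span = high - low
    # Beyond the normal range by more than stop_margin of its span -> stop.
    if value > high + stop_margin * span or value < low - stop_margin * span:
        return "complete stop"
    return "alarm"

print(evaluate_parameter("temperature", 520.0))   # alarm
print(evaluate_parameter("pressure", 14.0))       # complete stop
```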
The class hierarchy of our ontology (figure: the class hierarchy).
Example of definition of the slots (figure: definition of the slots of the instrument class).
Example of instances of the thermocouple class (figures: an instance of the thermocouple class; instances of the thermocouple class).
Example of definition of the slot «composant du système» (figure: definition of the slot «composant du système»).
Evaluation
Qualitative evaluation: Two types of evaluation are considered: * Evaluation by the users (with criteria based on user satisfaction); * Strategic evaluation by the manager (with criteria based on the return on investment). Among the criteria of such an evaluation: ease of information retrieval, adequacy of the retrieved information, possible confidentiality of information… From a technical point of view, the transfer of know-how inside the company seems to have an obvious benefit, but the real transfer depends on a real use of the capitalized knowledge at the level of the company (changes in individual and collective work practices).
Quantitative evaluation: Evaluation by the domain experts in terms of coverage of the considered domain. * The validation of our ontology by the experts showed that it covers the entire domain considered; our ontology is extensible from the points of view of modeling and instantiation. * The validation and evaluation steps are iterative and must be continued during use in concrete situations.
CONCLUSION
We have tried in this work to present an ontology in the domain of industrial maintenance. After the conceptualization stage, we used the Protégé-2000 tool to represent our ontology in a form easy for the machine to handle. The goal of our work was thus the conceptualization and ontologization of an ontology in the industrial domain. This ontology can be used in several applications. It is intended mainly for a community of experts in the maintenance of industrial processes using gas turbines. | 4,083.8 | 2007-02-28T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
NNT: Nearest Neighbour Trapezoid Algorithm for IoT WLAN Smart Indoor Localization Leveraging RSSI Height Estimation
Indoor localization, as a technique for assisting or replacing outdoor satellite and cell-tower localization systems, has taken off in the recent Internet of Things (IoT) era. This IoT drive has prompted increased research into indoor localization, where fingerprinting (radio mapping), as a cost-effective and efficient scheme, is emerging as the approach of choice for enterprises. However, complex indoor environments comprise trackable devices (TDs) at various heights, such as child trackers, dog tags, TDs on a table, TDs in pockets, and situations such as pedestrians talking on the phone, that is, at ear height, amongst others. This paper first investigates and analyses experimentally the impact of received signal strength indicator (RSSI) fingerprinting height on the construction of radio maps for indoor localization. Secondly, it proposes the novel trapezoid path loss model for RSSI estimation and, finally, the nearest neighbour trapezoid (NNT) algorithm for IoT smart indoor localization, leveraging and mitigating the impact of the height used during offline signal fingerprinting. We further propose approximately 1 meter above the floor of the target space as the effective fingerprinting height for indoor localization approaches.
Introduction
Realization of Internet of Things (IoT) technology in recent years for smart cities, smart homes, and integrated government infrastructure services has raised the applicability of location-based services (LBS). Localization fingerprinting in complex Wi-Fi indoor environments, as a technique to achieve precise indoor positioning, has offered affordable and reliable accuracy ever since the introduction of the RADAR [1,2] and Horus [3] indoor localization systems. Indoor localization covers a wide range of applications, such as emergency response [4], location-based targeted advertising [5,6], and indoor robotics navigation [7], providing a capability that outdoor satellite navigation systems cannot deliver indoors due to signal fluctuations within complex indoor environments [8]. Indoor positioning technologies such as Bluetooth [9], ultra-wideband [10], radiofrequency technologies [11], microelectromechanical systems [12], wireless local area networks [13], computer vision [14], magnetic field [15], ultrasonic [16], and infrared signals have been proposed [17].
In this paper, we first experimentally evaluate, test, and analyse the effect of fingerprinting height on indoor localization accuracy using machine learning nearest-neighbour-in-signal-space approaches, namely the nearest neighbour (NN) and k-nearest neighbour (KNN) algorithms, with various radio maps constructed at different heights. The KNN algorithms return the location estimate as the average of the coordinates of the k neighbours corresponding to the smallest RSS distances from the query RSSI values [2]. Then, we propose and test the performance of the trapezoid path loss model to estimate the RSSI, and the trapezoid nearest neighbour algorithm to accurately estimate the mobile location with minimal distance error. Experimental results show that the proposed trapezoid signal distance achieves better localization accuracy than the Euclidean distance, and that the proposed NNT algorithm outperforms the native NN algorithm.
The contributions of this paper can be summarized as follows: (1) The proposed system deploys the RSSI-based approach, which requires no additional hardware and is easily implemented on off-the-shelf mobile devices equipped with 802.11-family chipsets; rather than relying on internal inertial sensors, our proposed technique utilizes the received RSSI to estimate the height of the trackable device. (2) We propose the trapezoid signal distance, instead of the Euclidean distance between the received RSSI signal and the fingerprint, as the evaluation function, which proves to give better positioning accuracy. (3) We propose a novel trapezoid model for signal prediction in free space, based on the proposed trapezoid signal distance rather than the log-distance model. (4) We propose the nearest neighbour trapezoid (NNT) algorithm for complex indoor fingerprinting localization. Furthermore, we note that the proposed model can be used in any other location system, as it provided better and more robust localization. Indoor localization systems are discussed in Section 2. Empirical fingerprint construction is presented in Section 3. In Section 4, we describe our proposed trapezoid construction process, the trapezoid path loss model, and the localization algorithm. Section 5 presents the experimental evaluation. Finally, Section 6 concludes this paper.
Related Works
Indoor localization research has seen a great deal of interest over the past decade, cutting across various architectures. Several solutions have been proposed by multinational industries and researchers, some requiring dedicated infrastructures, such as infrared [18], ultrasound [19], and radiofrequency identification (RFID) [20,21], thus increasing the cost of deployment. However, emerging RFID techniques to resolve collision detection, such as enhanced collision detection (ECD) [22], can improve identification rate, time, and slot efficiencies at low cost, whereas some solutions leverage already existing sensor infrastructures, such as Bluetooth [9,23], frequency modulation (FM) [24], GSM cellular [25], and wireless fidelity (Wi-Fi) signal strengths [2,3,26,27]. They deploy techniques such as angle-of-arrival (AoA), leveraging the angle of incidence of the received signal vectors [28]; time-difference-of-arrival (TDOA) and time-of-flight (TOF) [29], leveraging the arrival time sequence to measure the delay of the signal; and signal strength fingerprinting. Fingerprint techniques map indoor propagated signals to specific reference points, without the need to know the transceiver's location and transmit power, as opposed to techniques that rely on building signal propagation models for localization. A WLAN indoor localization system based on fingerprinting comprises two basic stages. The first is radio map construction during the offline surveying stage: a site survey calibration is carried out to obtain specific reference points at which RSSI sample vectors are collected at a predetermined height and then saved in the localization server. The second is the online localization query stage: fingerprint vectors queried by the sensor device at unknown locations are compared to the fingerprints in the radio map database in the localization server, which then returns the corresponding location estimate that minimizes the mean error according to the localization algorithm's criterion. Complex indoor environments comprise devices trackable at different heights, such as child trackers, dog tags, mobile devices on a table or in pockets, and pedestrians talking on the phone, amongst other height orientations. These diverse height orientations have an impact on RSSI signal fluctuations and thus yield different localization accuracy results during online localization, because in most cases offline fingerprints are sampled and constructed at one specific height. This paper therefore presents an in-depth experimental evaluation to validate this height effect, proposes a novel trapezoid path loss model, and finally proposes a novel trapezoid nearest neighbour localization approach.
Fingerprint Localization
The fingerprint-based localization process consists of an offline data collection and radio map construction phase, and an online localization estimation phase. Offline construction of the radio map is initialized by a site survey, with grid-formation calibration of the target indoor environment. At each calibrated reference point (RP), we use a prerequisite Wi-Fi-enabled trackable device (TD) to scan and sample the received signal strength indicator (RSSI) values from hearable transmitter access points (APs) over a predefined time stamp. When the number of discoverable APs is less than 3, the fingerprint signature at that specific RP is not viable for complex indoor localization environments; thus, the fingerprint surveyor should take note of the AP population within the target environment's signal coverage.
Let N be the number of RPs and L be the total number of APs deployed across the target floor's signal coverage. We denote the RSSI value from AP l at RP i as f_i^l (dBm). We sample multiple random fingerprint signals at each predefined RP and then average the signal values to find the mean RSSI \bar{f}_i^l at each RP i from AP l, denoted as

\bar{f}_i^l = \frac{1}{S_i^l} \sum_{s=1}^{S_i^l} f_i^l(s),

where f_i^l(s) is the s-th RSSI sample (in dBm) at RP i from AP l, and S_i^l is the total number of RSSI samples collected within the predefined time stamp. Then, the fingerprint at RP i is defined as F_i = [\bar{f}_i^1, \bar{f}_i^2, \ldots, \bar{f}_i^L],
forming an interactive radio map matrix \Psi = [F_1; F_2; \ldots; F_N] of size N \times L. Secondly, for each predefined time stamp, we calculate the mean and standard deviation of the RSSI per AP and store them in a radio map at the back end of the server. Let \sigma_n^l (dBm) be the corresponding standard deviation of the S_n^l collected RSSIs. Similarly, given the online query target's measured RSSI R^l from AP l, the RSSI vector at the target, denoted as \vartheta, can be defined as \vartheta = [R^1, R^2, \ldots, R^L]. During data processing, to better differentiate the RSSI values within the indoor environment, mW is used instead of dBm when we consider the random signal level mean, i.e., P_{mW} = 10^{P_{dBm}/10}, which transforms the RSSIs from smartphones into values offering better signal differentiation. Correspondingly, we also transform the RSSI values R^l in \vartheta from dBm into mW.
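The fingerprint construction just described can be sketched as follows, following the averaging and dBm-to-mW formulas above; the array shapes and the synthetic sample data are illustrative assumptions:

```python
import numpy as np

def build_radio_map(samples_dbm):
    """samples_dbm: array of shape (N_rp, L_ap, S) holding S raw RSSI
    samples (dBm) per reference point and access point."""
    mean_dbm = samples_dbm.mean(axis=2)          # \bar{f}_i^l, shape (N, L)
    std_dbm = samples_dbm.std(axis=2)            # sigma_i^l,  shape (N, L)
    mean_mw = 10.0 ** (mean_dbm / 10.0)          # dBm -> mW for differentiation
    return mean_dbm, std_dbm, mean_mw

# Synthetic example: 3 RPs, 5 APs, 120 samples each (as in the test bed).
rng = np.random.default_rng(0)
samples = rng.normal(loc=-60.0, scale=3.0, size=(3, 5, 120))
radio_map_dbm, radio_map_std, radio_map_mw = build_radio_map(samples)
print(radio_map_dbm.shape)   # (3, 5): one mean fingerprint row per RP
```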
NNT Algorithm
As IoT indoor environments comprise persons of interest and transceivers to be localized at diverse heights, we propose a novel approach to resolve the disparity in the pairwise matching of the fingerprint RSSI and the receiver RSSI, via the trapezoidal area between the fingerprint perpendicular heights. Assume a transceiver Tx at height H1 at coordinate location (x0, y0) in the equally-spaced calibrated indoor space, a fingerprint Fp at height H2 at coordinate (x1, y1), and a receiver Rx at height H3 at coordinate (x2, y2), as shown in Figure 1, with the two-dimensional distance relationship in Figure 2.
In order to measure the degree of a neighbour's closeness at varying heights, we propose minimizing the proposed trapezoidal distance, which is in turn achieved through several other minimizations, such as the hypotenuse signal distance, the floor distance between the fingerprint and the localization device, and the differential height between the two, following the geometry of Figure 1.

4.1. RSSI Distance Model. Natively, during the offline calibration stage, the average RSSI at the various fingerprint reference points at different distances C1 from the Tx transceiver antenna can be associated with the log-distance RSSI model

\bar{f}_i^l = A^l - 10 n \log_{10}(C1),

where A^l is the average RSSI at a 1 m distance from Tx. Assuming a close neighbourhood of the Tx antenna nodes in the wireless fidelity network, the indoor-space-dependent transmission parameter n remains the same; thus, n can be determined by

n = (A^l - \bar{f}_i^l) / (10 \log_{10}(C1)).

During the online stage, the RSSIs R^l received at the target from the surrounding transceivers are compared to the fingerprint database RSSI values, similarly obeying the anticipated RSSI distance model. The distance C2 at which the receiver is anticipated is obtained as

C2 = 10^{(A^l - R^l)/(10 n)}.

Further, from the geometry of Figure 1 we derive the estimate of the height at which the RSSI is recorded.

4.2. Trapezoid Path Loss Model. In this section, we propose and define our trapezoid path loss model for indoor radio propagation. Existing models predict the RSSI fingerprints only approximately, the task being extremely challenging due to the multipath effect and environmental site-specific parameters. From Figure 2, we can derive the relationship between the trapezoid sides:
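A minimal sketch of the calibration and inversion steps above, assuming the standard log-distance form with A^l the mean RSSI at 1 m; the numeric values are illustrative only:

```python
import math

def calibrate_n(A_l, f_bar, c1):
    """Path-loss exponent from a fingerprint f_bar (dBm) at distance c1 (m),
    given A_l, the mean RSSI at 1 m from the transceiver."""
    return (A_l - f_bar) / (10.0 * math.log10(c1))

def anticipated_distance(A_l, r_l, n):
    """Invert the log-distance model to get the receiver distance C2 (m)."""
    return 10.0 ** ((A_l - r_l) / (10.0 * n))

A_l = -40.0                                   # mean RSSI at 1 m (illustrative)
n = calibrate_n(A_l, f_bar=-61.0, c1=5.0)     # fingerprint sampled at 5 m
print(round(n, 2))                            # ~3.0, a plausible indoor exponent
print(round(anticipated_distance(A_l, r_l=-58.0, n=n), 2))  # estimated C2
```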
From Equation (16), the trapezoid nearest neighbour area A is defined as the area of the trapezoid formed between the fingerprint and receiver perpendicular heights. The proposed trapezoid path loss model, which leverages the trapezoid distance, is then defined in terms of these quantities, where \bar{Rx}^l is the estimated received RSSI, C2 is the estimated distance between the transceiver and the receiver, f_i^l is the fingerprint RSSI, and h2 is the proposed trapezoid factor applied to the signal distance that affects the signal between the transceiver and the receiver, and where the proposed trapezoid signal distance of Equation (20), between the reference fingerprint and the observed test fingerprint, is computed over all APs.
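A minimal sketch of the NNT idea, under the simplifying assumption that the trapezoid metric is the area of the trapezoid whose parallel sides are the fingerprint and receiver heights and whose width is the difference between their anticipated floor distances; this is an illustrative stand-in for the paper's Equations (16)-(20), not their exact form:

```python
import math

def trapezoid_area(h_fp, h_rx, width):
    # Area of the trapezoid with parallel sides h_fp, h_rx and width `width`.
    return 0.5 * (h_fp + h_rx) * width

def nnt_locate(query_dist, query_height, fingerprints):
    """fingerprints: list of (x, y, height, anticipated_floor_distance).
    Returns the (x, y) whose trapezoid w.r.t. the query is smallest."""
    best, best_area = None, math.inf
    for x, y, h_fp, d_fp in fingerprints:
        area = trapezoid_area(h_fp, query_height, abs(d_fp - query_dist))
        if area < best_area:
            best, best_area = (x, y), area
    return best

fps = [(0.0, 0.0, 0.97, 4.0), (0.8, 0.0, 0.97, 5.2), (1.6, 0.0, 0.97, 6.1)]
print(nnt_locate(query_dist=5.0, query_height=1.51, fingerprints=fps))  # (0.8, 0.0)
```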
Experimental Evaluation
The raw-data fingerprinting experiment was carried out on the office floor of the faculty administration building of Chongqing University of Posts and Telecommunications (CQUPT), a test bed about 66 m wide by 17.1 m long, whose schematic experimental floor plan is shown in Figure 3. The stages include the experimental setup and data acquisition (Section 5.1), the initial experiment to determine the effect of height on radio fingerprinting, the proposed trapezoid path loss model, and the proposed trapezoid localization algorithm, assessed in terms of accuracy results and cost of construction.
Setup and Data Acquisition.
In the test bed setup, initial pre-scans discovered various RSSI readings that could result from tethered devices in the various offices, which in turn could increase the preprocessing computational cost of the collected data due to the increased number of discoverable TDs. To minimize the processing, we filter these out and simplify the process by setting up 5 D-Link APs (DAP 2310), operating under the 2.4 GHz IEEE 802.11b/g/n Wi-Fi standard, as baseline testing transceivers, installed at a height of 1.68 m on adjustable Fidck SPS-502 speaker stands, whereas the sampling TDs are at the respective heights described in Figure 4. During the one-time offline training phase, we calibrate RPs on the floor at scaling intervals of 0.8 m, forming a grid. At each grid RP, we sample 120 RSSI values (in dBm) as fingerprints within a 1-second time interval using our developed Android application facing the northing direction, installed on a prerequisite Samsung Galaxy GT-S7568; the values are averaged and stored as fingerprints in the localization server, at the different heights of 1.51 m, 0.97 m, and 0.34 m, respectively, as illustrated in Figure 4, with Figure 5 showing a real image of the various height setups in the test bed. Meanwhile, we further calibrate test RPs, at which 120 respective RSSI values are sampled, averaged, and stored for online unknown-locality testing.
Considering the amount of time the surveyor must interact with the developed app, our Android sampling app is simpler and more user friendly than that in [30], requiring only the reference point name and the interval at which we sample the RSSI. On scan initialization for a predefined time stamp, it records the interval and the RSSI value followed by the MAC address of the source transceiver AP, saving the sampled data into a text file (.txt) on the secure digital (SD) card, from which we later extract the RSSI values using MATLAB R2015b running on 64-bit Windows 7 Ultimate on a desktop equipped with an i3-4160 processor (3.6 GHz) and 4 GB RAM to form an interactive matrix, thus an empirical RSS database.
Height Effect on Localization Accuracy. We experimentally evaluate the localization accuracy of Wi-Fi fingerprints constructed at different heights, to determine the extent to which the height factor impacts our fingerprints. We use the nearest neighbour algorithms NN and KNN, where NN is the special case of the KNN algorithm in which the number of neighbours in the localization formulation equals one (k = 1).
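A minimal sketch of this NN/KNN baseline, in which the location estimate is the average of the coordinates of the k fingerprints at the smallest signal-space distance from the query:

```python
import numpy as np

def knn_locate(query_rssi, radio_map, coords, k=3):
    """query_rssi: (L,) RSSI vector; radio_map: (N, L) fingerprints;
    coords: (N, 2) RP coordinates. k=1 reduces to the NN algorithm."""
    dists = np.linalg.norm(radio_map - query_rssi, axis=1)   # signal distance
    nearest = np.argsort(dists)[:k]
    return coords[nearest].mean(axis=0)                      # average position

radio_map = np.array([[-60., -70., -55.], [-62., -68., -57.], [-71., -59., -66.]])
coords = np.array([[0.0, 0.0], [0.8, 0.0], [0.0, 0.8]])
print(knn_locate(np.array([-61., -69., -56.]), radio_map, coords, k=2))  # [0.4 0.]
```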
H1 vs. T1 represents radio map fingerprints at height 1 (H1 = 0.34 m) versus testing fingerprints at height 1 (T1 = 0.34 m); this notation is adopted for all the possible height combinations used in the testing (Figure 6), summarized in Table 1. We can see that a radio map constructed with fingerprints at height H2 and test fingerprints at height T3, that is, "H2 vs. T3", performs better than the other fingerprint height combinations. This confirms that when a localization device is at a height of 1.51 m, location fingerprints constructed at H2 result in a much lower localization mean error than H1 and H3 fingerprints, respectively, for the same online query test fingerprints at height T3. We further observe that, under different values of the well-known parameter k used by KNN, the attained localization accuracy differs greatly, with k = 3 giving the best performance for the combination H2 vs. T3. When the number of neighbours used by the localization algorithm equals one (k = 1), the results correspond to the NN algorithm. From this point on, we select and propose H2 as our fingerprinting height for the analysis of the proposed trapezoid path loss model and, finally, the proposed localization technique.
Trapezoid Path Loss Model Analysis. Efficient, applicable RSSI signal prediction is key to indoor localization in the era of IoT; however, due to the diversity of heights of location-based devices, accurate signal modelling at different receiver heights at the same location poses a challenge due to the multipath effect, refraction, diffraction, and reflection of the signal by complex indoor environments. Considering the lobby area (Area 3), we perform a comparison of the choices of minimization factors among the following enabling features: the proposed trapezoid signal distance (blue), the signal distance (red), the floor distance (green), and the trapezoid area (black). From the comparison, we observe overall superior localization accuracy performance of the proposed prediction path loss model leveraging height estimation over its peers, as seen in Figure 7. Furthermore, we analyse the localization accuracy of the NNT algorithm that leverages the trapezoid path loss model at each TP location in Area 3, as seen in Figure 8. We observe robustness, with NNT localization mean errors ranging from 0 meters (floor), such as at TP locations 6 and 16, to 9 meters (ceiling), with a median of 2 meters, compared to the NN range of 2 meters (floor) to 15 meters (ceiling), over all RSSI test RPs.
Localization Accuracy Analysis.
We evaluate the accuracy of the proposed NNT localization algorithm by computing the mean error, against the well-known NN algorithm, in different environments. As seen from Figure 9, for each area subsection and for the total floor overall, we observe the NNT algorithm (blue) outperforming the native NN algorithm (red), both in room-based localization and in total-floor localization (Figure 10).
Conclusions
As the demand for indoor LBSs increases in the IoT era, Wi-Fi-based fingerprinting as a key low-cost approach to precise indoor navigation and positioning keeps growing. This paper first presents an extensive experimental analysis of the effect of the height chosen by the offline site surveyor for sampling RSSI data (facing a single heading direction, as a minimal measure) during fingerprint radio map construction. We further note that pedestrians and traceable valuables indoors can be found at diverse heights; thus, height can likewise affect the localization radio map, directly impacting the localization accuracy. From this evaluation, we observe the effect of the APs' installation heights versus the localization height in complex environments, and further propose that radiofrequency-based indoor fingerprints be sampled at approximately 1 m above the floor of the area of localization interest. Secondly, we propose a novel trapezoid path loss model to better estimate indoor RSSI fingerprint characteristics under changing height. On the same basis, we finally propose a novel trapezoid-based nearest neighbour indoor localization scheme that leverages the online RSSI to dynamically predict and update the RSSI in real time. Compared with the classic nearest neighbour algorithm in terms of localization accuracy, the experimental findings clearly show that the proposed algorithms can better predict the RSSI with dynamic indoor coverage and improve the positioning accuracy.
Data Availability
The data is available on request.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper. | 4,379.2 | 2021-07-31T00:00:00.000 | [
"Computer Science"
] |
India Allele Finder: a web-based annotation tool for identifying common alleles in next-generation sequencing data of Indian origin
Objective We built India Allele Finder, an online searchable database and command line tool, that gives researchers access to variant frequencies of Indian Telugu individuals, using publicly available fastq data from the 1000 Genomes Project. Access to appropriate population-based genomic variant annotation can accelerate the interpretation of genomic sequencing data. In particular, exome analysis of individuals of Indian descent will identify population variants not reflected in European exomes, complicating genomic analysis for such individuals. Results India Allele Finder offers improved ease-of-use to investigators seeking to identify and annotate sequencing data from Indian populations. We describe the use of India Allele Finder to identify common population variants in a disease quartet whole exome dataset, reducing the number of candidate single nucleotide variants from 84 to 7. India Allele Finder is freely available to investigators to annotate genomic sequencing data from Indian populations. Use of India Allele Finder allows efficient identification of population variants in genomic sequencing data, and is an example of a population-specific annotation tool that simplifies analysis and encourages international collaboration in genomics research.
Introduction
Whole exome sequencing (WES) has revolutionized genomic diagnostics and is a key tool in identifying the causal genes underlying rare Mendelian disorders [1][2][3]. A critical strategy in post-sequencing analysis involves screening a proband's exome variants against exomes from reference individuals matching the ethnic makeup of the proband. While these data are widely available for individuals from European and African American descent [4,5], such reference data is less accessible when analyzing exomes from individuals from India. We present India Allele Finder (IAF), an online database table of allele frequencies of individuals from the Indian subcontinent.
The 1000 Genomes web browser (http://www.ncbi.nlm.nih.gov/variation/tools/1000genomes/) effectively presents complete allele frequencies, but rapid queries are more difficult, and annotation of local variant call files (vcfs) is not possible. In contrast, the IAF website and its accompanying command line tool are focused only on the South Indian population, and allow researchers to easily annotate their own exome data sets. Clinicians who want a more ordered method of browsing 1000 Genomes data will find the query-based website intuitive to use, while bioinformaticians who work with vcfs will easily adopt the IAF command line tool into their workflow.
Accessing 1000 Genomes data
Fastq data of individuals specific to Indian populations (flagged "ITU", indicating Indian Telugu ancestry) available via the 1000 Genomes Project [6] were downloaded via ftp and combined into two fastq files per individual, one per paired-end read. We downloaded fastqs for 100 of the 118 available ITU individuals in the 1000 Genomes data set. Automated shell scripts facilitated the downloading of fastq files, while an aggregator written in Python concatenated fastqs of the appropriate paired end, such that each individual had two fastq files of equal size.
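A minimal sketch of the aggregation step; the directory layout, file-naming pattern and sample identifier are assumptions for illustration, not the project's actual scripts:

```python
import glob

def aggregate_fastqs(sample_id, src_dir="fastq", out_dir="."):
    """Concatenate all downloaded fastq chunks for one individual into
    two files, one per paired-end read (assumed *_1 / *_2 suffixes)."""
    for end in ("1", "2"):
        chunks = sorted(glob.glob(f"{src_dir}/{sample_id}*_{end}.fastq"))
        out_path = f"{out_dir}/{sample_id}_{end}.fastq"
        with open(out_path, "wb") as out:
            for chunk in chunks:
                with open(chunk, "rb") as fh:
                    out.write(fh.read())

# e.g. aggregate_fastqs("HG03713")  # hypothetical ITU sample identifier
```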
Data analysis
Fastqs were mapped to hg19 with the Burrows-Wheeler alignment (BWA) tool 0.7.9a. The resulting bam files were then analyzed with SAMtools 0.1.19, Picard 1.114, and the Genome Analysis Toolkit (GATK) 3.1.1. Annotation of the resulting vcfs was performed with Annovar. A command line Python script, indiaAlleleAnnotator.py, takes as its input a tab-delineated vcf and outputs a modified vcf with an additional column representing the allele frequency among the Indian Telugu population.
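A minimal sketch of what an annotation step like indiaAlleleAnnotator.py performs, as described above; the column layout and the structure of the frequency lookup are assumptions, not the published implementation:

```python
def annotate_vcf(vcf_in, vcf_out, itu_freqs):
    """Append an ITU allele-frequency column to a tab-delimited vcf.
    itu_freqs: dict mapping (chrom, pos, ref, alt) -> frequency."""
    with open(vcf_in) as src, open(vcf_out, "w") as dst:
        for line in src:
            if line.startswith("#"):          # keep vcf headers unchanged
                dst.write(line)
                continue
            fields = line.rstrip("\n").split("\t")
            chrom, pos, ref, alt = fields[0], fields[1], fields[3], fields[4]
            freq = itu_freqs.get((chrom, pos, ref, alt), 0.0)
            dst.write("\t".join(fields + [f"{freq:.4f}"]) + "\n")

# freqs = {("5", "112043382", "C", "T"): 0.12}  # hypothetical entry
# annotate_vcf("proband.vcf", "proband.itu.vcf", freqs)
```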
Database schema
The vcf generated from the analysis was converted into structured query language (SQL) format, and imported into a mysql v.14.14 database as one table. The database is accessed on-line via a Perl Catalyst front-end. The files for this implementation, including the raw SQL file, are available at https://github.com/Paciorkowski-Lab/IndiaAlleleFinder.
IAF allows query of variants through its web-based database, as well as providing a command line tool to annotate exome vcfs. Accepted formats for the web-based query include gene symbol, variant genomic location, or rsID number. The command line annotation tool identifies variants that are present in the IAF data set, and therefore likely to be population variants that may be excluded from further analysis in disease gene identification studies. The IAF workflow is represented in Fig. 1. Fig. 1 Workflow of analysis of publicly available ITU fastqs from 1000 Genomes used to construct the IAF dataset. Users wishing to annotate exome results with frequency data from IAF may do so using the web-based or the command-line interface.
IAF use case study
Subjects MP14-001a1 and MP14-001a2, two siblings presenting with achalasia-addisonianism-alacrima syndrome (AAAS), as well as their father and mother, were selected for study. Saliva-derived DNA underwent WES using the Agilent SureSelect 50 Mb whole exome capture kit, and 100-basepair paired-end reads were generated on an Illumina HiSeq 2500 machine at the University of Rochester Genomics Research Center. Sequences were aligned and analyzed as described previously. De novo, autosomal recessive, and X-linked variants were identified, and common variants in the database of single nucleotide polymorphisms (dbSNP) version 137 were excluded. We then used IAF to identify and exclude variants found in the 100 Telugu Indian individuals from 1000 Genomes. After filtering by pedigree hypothesis, candidate variants were reduced from 84 to seven when using IAF. We found that MP14-001a1 and MP14-001a2 were homozygous for the c.43C>A/p.Q15K variant, a known AAAS sequence variation [7]. Their mother and father were both heterozygous for this variant.
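The population-variant filtering used here can be sketched as a simple frequency cutoff over IAF-annotated variants; the 1% threshold and the example variants are assumptions for illustration:

```python
def filter_population_variants(candidates, itu_freqs, max_freq=0.01):
    """Drop candidate variants whose ITU allele frequency exceeds max_freq,
    i.e. variants likely to be common population variants."""
    return [v for v in candidates if itu_freqs.get(v, 0.0) <= max_freq]

candidates = [("12", "53307456", "C", "A"),    # hypothetical AAAS-region variant
              ("7", "100218038", "G", "T")]
freqs = {("7", "100218038", "G", "T"): 0.31}   # common in ITU -> filtered out
print(filter_population_variants(candidates, freqs))
```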
The analysis of exome data from populations other than European and African American can be challenging due to difficulty accessing appropriate normal population data sets. This can result in an excess of candidate variants in disease gene identification studies. We have designed IAF to fit into existing workflows.
There are differences between the results reported by 1000 Genomes and by IAF. Overall, the IAF data set reports fewer variants, likely due to our use of the newer GATK v3.1.1 versus v2.4 [8]. Additionally, we sampled from a smaller group of 100 individuals, whereas 1000 Genomes collected data from 2535 individuals from 26 different populations for its phase 3 study. As a result, 1000 Genomes aggregated over 5.2 million entries for chromosome 5 alone, while our data set for chromosome 5 contains 8520 entries aggregated from 100 individuals. We anticipate more variants will be represented in IAF as more exomes from the Indian continental population are added.
Limitations
IAF is a proof of concept implementation of a filtering mechanism based on population-derived variant frequencies. It is a unique tool to further annotate vcfs for the specific purpose of analyzing WES data from individuals of Indian subcontinent descent. We anticipate a proliferation of reference databases for populations that are not of European origin. Additional features are planned for the IAF website, including the ability to input multiple variants, and access a subset of the vcf output corresponding to the genes and/or variants queried. Further exome data sets from individuals of continental Indian ancestry will be added in the future as they become available. | 1,586.2 | 2017-06-27T00:00:00.000 | [
"Biology"
] |
The impact of invisible-spreaders on COVID-19 transmission and work resumption
The global impact of coronavirus disease 2019 (COVID-19) is unprecedented, and many control and prevention measures have been implemented to test for and trace COVID-19. However, invisible-spreaders, who are associated with nucleic acid detection and asymptomatic infections, have received insufficient attention in the current COVID-19 control efforts. In this paper, we analyze the time series infection data for Italy, Germany, Brazil, India and Sweden since the first-wave outbreak to address the following issues through a series of experiments. We conclude that: 1) as of June 1, 2020, the proportion of invisible-spreaders is close to 0.4% in Sweden, 0.8% in the early stage in Italy and Germany and 0.4% in their middle and late stages, whereas in Brazil and India the proportion still shows a gradual upward trend; 2) during the spread of this pandemic, even a slight increase in the proportion of invisible-spreaders could have large implications for the health of the community; and 3) on resuming work, the pandemic intervention measures will be relaxed, and invisible-spreaders will cause a new round of outbreaks.
I. Introduction
The global impact of coronavirus disease 2019 (COVID-19) has been unprecedented. As of June 1, 2020, the outbreak of COVID-19 had resulted in 6,016,976 confirmed cases and 370,153 deaths [1,2]. In order to control the pandemic, many measures have been taken, including the closure of workplaces, schools and universities; social distancing of entire populations; voluntary home quarantine; and lockdown [3]. However, these measures cannot eliminate the problem of false-negative individuals.
As the peak of the pandemic has passed in some countries, an imminent trend to resume work is expected, and the above control measures in these countries may consequently be relaxed. However, there is a serious problem that people may not realize: there are false-negative individuals in the population, whose impact on COVID-19 remains to be studied. The false-negatives mentioned in this article are not caused solely by incorrect test results in the traditional sense. We define false-negative individuals in a broader sense as invisible-spreaders: when a person is infected with COVID-19 but, for some reason, cannot be effectively detected and isolated, and thus remains infectious toward other people, he/she is defined as an invisible-spreader.
According to the literature, there are two main sources of invisible-spreaders [4,5]: 1. Nucleic acid detection: According to the recommendations of the World Health Organization (WHO), the main diagnostic method for COVID-19 is nucleic acid detection in fluid secretion collected via a pharyngeal swab using RT-PCR [6]. However, nucleic acid detection based on pharyngeal swabs has an accuracy rate of 71% [7]. Several factors might have contributed to these false-negative results, such as the sampling technique, transportation process, or limited gene(s) detection [8].
2. Asymptomatic infection: Evidence has emerged from certain countries, like Italy, indicating that some coronavirus infections do not result in symptoms [9]. These asymptomatic infections cannot be recognized without nucleic acid detection, and before such detection these asymptomatically infected people are already infectious to susceptible people. However, it is unrealistic to carry out nucleic acid detection for the whole population, so some people with asymptomatic infections become super-spreaders.
Undoubtedly, these invisible-spreaders will have a direct implication regarding the control of the pandemic, and may even lead to the resurgence of the pandemic [10]. In this article, we explore the following issues: Proportion: The impacts of different proportions of invisible-spreaders on the pandemic differ. Therefore, to evaluate more accurately the development of the pandemic, we estimated the proportion of invisible-spreaders in various countries through machine learning.
The impact of invisible-spreaders: During COVID-19, those with confirmed infection were quarantined. However, invisible-spreaders cannot be quarantined in time due to the lack of effective means. Therefore, they infect their close contacts while infectious, which accelerates the spread of the virus. In order to explore the effects of invisible-spreaders, we designed a series of controlled experiments. By varying the parameter for the proportion of invisible-spreaders, several pandemic simulation experiments were performed to explore the various effects of different proportions of invisible-spreaders on the pandemic.
New outbreak: The peak of the pandemic has passed in some countries, like Italy and Germany. Italy relaxed its outbreak control measures twice (on April 14 and May 4, 2020) [11], and Germany reopened shops and Bundesliga football as the lockdown was relaxed on May 6, 2020 [12]. Nevertheless, due to the existence of invisible-spreaders, it is uncertain how the pandemic will develop. Therefore, we have simulated the development of COVID-19 in these countries following the relaxation of the control measures to provide a reference for these countries to formulate reasonable plans.
We summarize our contributions below: • We conduct a quantitative analysis of COVID-19 and use data to illustrate the various effects of different proportions of invisible-spreaders on the pandemic.
• We define a modified SIR model to simulate the spread of COVID-19 in five different countries. In more detail, we use model parameters that change over time and include invisible-spreaders as a new variable in our model.
• We propose a simple method to estimate the proportion of invisible-spreaders.
• Based on the existing pandemic data, we used experiments to predict the development of the pandemic after the intervention measures were relaxed.
The rest of this work is structured as follows. In Section II, we first introduce the SIR model [13,14], describing its formulas and concepts, and then focus on explaining our model. In Section III, we present the results of each experiment and analyze them. In Section IV, we discuss further sources of false negatives and future research directions. In Section V, we conclude the article.
A. Data sources
The first-wave global COVID-19 data were collected from Johns Hopkins University's reports [15]. Five countries (Italy, Germany, Brazil, India, and Sweden) were chosen to represent a variety of different pandemic situations. Italy, Germany, and Sweden were three of the more severely affected countries in the first wave of the pandemic in Europe, and the control measure adopted by Sweden was group immunization [16]. Brazil and India were chosen as countries where the epidemic was still on the rise. All data were completely anonymized and deidentified before access and analysis.
To estimate the impact and proportion of invisible-spreaders, we considered time series information on the first-wave pandemic in each country from January 22 to May 31, 2020. However, in the early stage of the pandemic, the medical institutions had not yet developed a complete system for reporting cases, which leads to data deviations. In addition, after a period of outbreak, countries undertook interventions against COVID-19, which makes the daily infection factor show a downward trend. During this period, the impact of invisible-spreaders is more prominent because, following interventions, positive patients are detected and quarantined, but invisible-spreaders can still be an effective source of infection. Therefore, we selected data for a reasonable time period for each country (Table 1). Since there were only 24 new patients in India on April 3, which is close to a twentieth of the increase on other days in the previous week, we believe that there are some deviations in the statistics for that day, so we used Lagrange interpolation to complete the data for that day [17].
B. Models
Many mathematical models have been applied to the study of COVID-19, such as the SIR model [13,14] and the SEIR model [13]. However, these models all adopt fixed parameters, without considering the interventions and time decay during the pandemic's progression. For this reason, we adopt a modified SIR model. 1) Standard SIR Model: In this model, pandemic transmission is treated as a geometric random walk process. This means that, at any given time, the probability of infection is considered constant. The standard SIR epidemiological model divides individuals into
three classes, as follows: susceptible (S), infected (I) and removed (R), governed by

dS/dt = −β S(t) I(t) / N, dI/dt = β S(t) I(t) / N − γ I(t), dR/dt = γ I(t).
Here S(t), I(t) and R(t) are the numbers of susceptible, infected and removed individuals at time t; N is the population size, N = S(t) + I(t) + R(t); R_0 is the basic reproduction number [18] (with R_0 = β D_I = β/γ); D_I is the mean infection period; β is the infection factor; and γ is the removed factor. The infection factor β (the product of the contact rate and the infection rate) and the removed factor γ are constant in the basic SIR model. However, these factors are not static in real life. They change along with the development of the pandemic and depend on the pandemic prevention strategies adopted by the government. For example, the infection factor can be decreased if people are requested to maintain a safe distance from others in order to slow the spread. Moreover, the basic model does not consider invisible-spreaders, which harms the control of the pandemic.
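A minimal sketch of the standard SIR dynamics above, integrated with a simple daily Euler step; the parameter values are illustrative, not fitted:

```python
def simulate_sir(beta, gamma, n_pop, i0, days):
    """Daily Euler integration of dS = -beta*S*I/N, dI = beta*S*I/N - gamma*I,
    dR = gamma*I. Returns the daily infected counts."""
    s, i, r = n_pop - i0, float(i0), 0.0
    infected = [i]
    for _ in range(days):
        new_inf = beta * s * i / n_pop
        removed = gamma * i
        s, i, r = s - new_inf, i + new_inf - removed, r + removed
        infected.append(i)
    return infected

# beta=0.3/day, mean infection period 10 days -> gamma=0.1, R0=3 (illustrative)
curve = simulate_sir(beta=0.3, gamma=0.1, n_pop=1_000_000, i0=10, days=120)
print(max(curve))   # peak number of simultaneously infected individuals
```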
2) Modified SIR Model: The modified model divides the population into the following four classes: susceptible (S), confirmed (C), invisible-spreaders (F) and removed (R). In order to adapt the model to real-life situations, we use an infection factor that changes over time. The infection factor is assumed to be a random variable, described by a function β(t).
a, b and c are the bias terms of the model relative to the time dimension, analogous to the bias terms in a neural network, used to prevent the model from failing to converge. The data we obtain are based on daily reports, and once cases are reported, this means that they have been quarantined. Isolated cases lose their infectivity to susceptible people, just like recovered people. Therefore, we do not need to consider the recovery rate γ, and the process of daily case addition is regarded as a Markov process [19]. We add a new invisible-spreader factor to indicate the probability that an infected patient is an invisible-spreader, and assume that the recovery time of an invisible-spreader is 14 days.
Here S(t), I(t), F(t) and R(t) are the numbers of susceptible, infected, invisible-spreader and removed individuals at time t; P is the size of the existing population in the system, P = S(t) + I(t) + F(t); and α is the invisible-spreader rate. Since the infection process is a Markov process, D_I equals 1 and R_0 equals β(t).
We first use the data for Italy, Germany, Brazil, India and Sweden according to Table 1 to train our model, using stochastic gradient descent (SGD) to calculate the appropriate parameters [20]. Then, four groups of control experiments were carried out using the trained model. In the first group of experiments, we used the number of new infections in the real data as the ground truth and used machine learning to fit this ground truth; the loss function L is then

L = (1/n) Σ_t (C(t) − Ĉ(t))²,

where Ĉ(t) is the real number of new infections, C(t) is the model's prediction, n is the number of training data points, and t runs over the duration of the pandemic.
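A sketch of this training setup under stated assumptions: the functional form β(t) = a·exp(−b·t) + c is our illustrative choice consistent with the bias terms a, b, c mentioned above (the paper's exact form may differ), the 14-day invisible-spreader window follows the assumption stated earlier, and the loss matches the squared loss just given:

```python
import numpy as np

def beta_t(t, a, b, c):
    # Assumed decaying infection factor; a, b, c are the fitted bias terms.
    return a * np.exp(-b * t) + c

def simulate_daily_infections(a, b, c, alpha, pop, i0, days):
    """alpha: invisible-spreader rate. Confirmed cases are quarantined on
    report; invisible-spreaders stay infectious for 14 days (assumption)."""
    s = pop - i0
    invisible = [float(i0) * alpha]           # sliding 14-day window
    active = float(i0)                        # day-0 infectious pool
    daily = []
    for t in range(days):
        new_cases = beta_t(t, a, b, c) * s * active / pop
        s -= new_cases
        invisible.append(new_cases * alpha)
        invisible = invisible[-14:]           # recovery after 14 days
        # Confirmed cases infect only on their report day; invisibles persist.
        active = new_cases * (1 - alpha) + sum(invisible)
        daily.append(new_cases)
    return np.array(daily)

def loss(params, alpha, pop, i0, observed):
    # L = (1/n) * sum_t (C(t) - C_hat(t))^2, to be minimized with SGD over (a, b, c).
    a, b, c = params
    pred = simulate_daily_infections(a, b, c, alpha, pop, i0, len(observed))
    return np.mean((pred - observed) ** 2)

obs = np.array([100., 130., 160., 150., 140.])   # illustrative daily new cases
print(loss((0.4, 0.05, 0.1), alpha=0.004, pop=60e6, i0=100, observed=obs))
```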
III. Results
In order to calculate the proportion of invisible-spreaders and verify their influence, we conducted multiple experiments. First, through SGD, we can approximately calculate the proportion of invisible-spreaders in real-life situations. Secondly, since the proportion of early invisible-spreaders in Germany and Italy is close to 0.8%, and the proportion in Italy, Germany and Sweden after the peak of the pandemic has passed is close to 0.4%, simulation experiments were carried out under the assumption that the proportion (α) equals 0%, 0.4% or 0.8%. Under these three proportions, we simulated the pandemic's progression in the five countries and evaluated the impact of invisible-spreaders through the differences among the three proportions. Thirdly, as the pandemic situation improved in some countries, they gradually relaxed their pandemic interventions. Due to the existence of invisible-spreaders, there is a risk of a new pandemic outbreak if these interventions are relaxed. Therefore, to evaluate the damage of a new outbreak, we used the data for June 1 as the first-day data to initialize the model and conducted a simulation experiment of the pandemic.
A. Infection factor
The model simulated the pandemic's progression in Italy, Germany, Brazil, India and Sweden over time. Then, by establishing a regression model, taking the number of daily infections as the ground truth, and fitting the model's predicted values to the ground truth, an approximate curve for the infection factor was obtained (Fig 1). Along with the development of the pandemic, Italy, Germany, Brazil, India and Sweden gradually adopted several interventions. Therefore, the infection factors should be inversely related to time. As the figure shows, although the infection factors in these five countries follow a downward trend, there are always fluctuations in the curves. Moreover, after the beginning of May, the outbreaks in Italy, Germany and Sweden show a rebound trend. This is because Italy and Germany both relaxed their pandemic intervention measures in May, and the herd immunity measures adopted by Sweden cannot effectively control the impact of invisible-spreaders.
B. Proportion
Our experimental results show that the proportion of invisible-spreaders was less than 1% in these five countries. Among these five countries, although Sweden has the lowest proportion of invisible-spreaders, its curve is the most volatile. The reason is that herd immunity does not
cut off transmission by invisible-spreaders. The situation in Italy and Germany is somewhat similar, with about 0.8% in the early stage, stabilizing at 0.4% in the middle and late stages. India and Brazil are experiencing the early-stage conditions seen in Italy and Germany, and their proportions are still rising. This may be because Brazil's total population is far larger than that of Italy and Germany, and these countries are still in the spreading period of the pandemic.
C. Impact of invisible-spreaders
To estimate the impact of invisible-spreaders, we assumed the presence of 0%, 0.4% and 0.8% invisible-spreaders among infected patients and conducted five simulation experiments. The experimental results show that even a slight increase in the proportion of invisible-spreaders could have large implications for the health of the community (Table 2). If the proportion of invisible-spreaders increases from 0.4% to 0.8%, the total number of infections as of June 1 will double or triple, and the duration of the pandemic will also lengthen. This result is due to the fact that invisible-spreaders remain active in the model system until recovery. In the meantime, susceptible populations continue to be infected by these invisible-spreaders, resulting in a higher number of daily infections compared with populations without invisible-spreaders (Fig 2). Our results suggest that invisible-spreaders enhance the spread of the pandemic, even in the context of prevention and control measures. Figs 2 and 3 show that the proportion of invisible-spreaders in Germany is slightly higher than in Italy, and the population of Germany is also denser than that of Italy. Therefore, we might assume that the pandemic would be more severe in Germany, yet the reality is the
opposite. The number of infections is far greater in Italy, and the inflection point also appears later. One of the reasons for this difference is that the intervention measures adopted by the two countries are different. On March 22, Germany began to implement interventions, including banning parties, closing restaurants, and imposing a stay-at-home order, which significantly
reduced the contact between people and cut off transmission by invisible-spreaders. Italy only implemented the banning of large gatherings and the closure of schools, so local community transmission and family-cluster transmission may have increased when people had nowhere to go [21], which left the pandemic in Italy more affected by invisible-spreaders. The difference between these two countries indicates that a stay-at-home order enhances pandemic control. The pandemics in China and Germany were significantly controlled through the adoption of this measure. According to the current data (Table 1), the number of daily infections in Brazil and India is still increasing, so these two countries might learn from the pandemic interventions (such as the stay-at-home order) implemented by countries like Germany. For Sweden, the proportion of invisible-spreaders is almost always below 0.4% (Fig 3), and its population density is only about one-tenth of Germany's and one-eighth of Italy's. However, its final infection rate was close to 0.37%, almost twice Germany's 0.22% and close to Italy's 0.38%. This is because Sweden adopted a herd-immunity approach. According to the experimental results, herd immunity has not achieved an ideal pandemic control effect.
D. New outbreak
According to the curves for the infection factor (Fig 1), after the beginning of May 2020 the pandemic tended to rebound. To examine the potential for new outbreaks, we used our model to simulate new outbreaks with 0.4% invisible-spreaders. As the pandemic interventions are relaxed, the daily infection factor will return to its mid-pandemic level. Therefore, we used June 1, 2020 as the first day to initialize the model parameters. We ran a simulation from June 1 to June 15, and all five countries eventually reached a large number of new infections (Table 3). This result shows that, if the pandemic interventions are relaxed, then even a proportion of invisible-spreaders of only 0.4% will make the impact of the new outbreak catastrophic (Fig 4).
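A minimal sketch of this third experiment, assuming a strongly simplified compartmental update rather than the paper's exact model: compartments are initialized from June 1 figures and propagated for 15 days with a mid-pandemic infection factor and 0.4% invisible-spreaders. All parameter values are illustrative assumptions.

```python
# Illustrative 15-day forward run from June 1 initial conditions.
import numpy as np

def simulate_new_outbreak(S0, I0, population, beta=0.25, alpha=0.004,
                          recovery=1 / 14, days=15):
    S, I = float(S0), float(I0)
    hidden = alpha * I0                    # invisible-spreaders stay active
    new_cases = []
    for _ in range(days):
        infections = beta * (I + hidden) * S / population
        S -= infections
        I += infections - recovery * I
        hidden = alpha * I
        new_cases.append(infections)
    return np.array(new_cases)
```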
IV. Discussion
In addition to the two main sources mentioned in this article, there are several other sources of invisible-spreaders. To handle the pandemic more effectively, China has developed a health code application, which is also one of the ways invisible-spreaders arise. In China, people can apply for a health code online using Alipay. After providing health-related information and confirming whether they have had any contact with a confirmed or suspected patient within the past fortnight, a color code is generated once the application passes the audit. A person who receives a green code is considered safe and does not need to quarantine, whereas a person who receives a red or yellow code must quarantine according to the regulations until the code changes to green. However, this self-reported information may not accurately reflect people's health conditions, which means that some positive patients will receive a green code, so some people who should have been isolated can travel freely. As far as the current situation in China is concerned, the health code has been of great help in controlling the pandemic, and invisible-spreaders do not seem to have had a negative impact. However, even if China implements a lockdown and only allows people with green codes to travel between cities, imported patients will remain in various provinces and cities. Based on the experimental results discussed in this article, we believe that the reason why the health code does not appear to have a negative effect in China is that China's pandemic situation is developing positively. Once the peak of the pandemic in Italy, Germany, Brazil, India, and Sweden has passed, these countries will also need a method similar to the health code to provide a logical basis for permitting travel as people resume work. If the health code is applied in these countries while the pandemic situation is still serious, it will exert a highly negative impact.
In addition to the issues outlined in this article, some studies suggest that the infection power of asymptomatic cases is lower than that of symptomatic cases [3]. Therefore, it is necessary to classify invisible-spreaders. In future research, we plan to divide invisible-spreaders into three categories: 1) false-negatives on the nucleic acid test, 2) asymptomatic infections, and 3) false-negatives from the health code. Based on this classification, more reasonable interventions can be proposed.
There remains further research to be done. When modeling data from one country, the results show that the impact of different proportions of invisible-spreaders differs greatly and that a difference of only 0.4% can double the damage of the pandemic. The comparison of the data for Italy and Germany reveals a large gap between the impacts of the same proportion of invisible-spreaders. Even though the proportion in Germany is greater than in Italy, Italy suffers more losses than Germany. This is because the early interventions adopted by these two countries differ; the stay-at-home order caused this difference. In Sweden, the proportion of invisible-spreaders is the lowest among the five countries, and its population is far less dense than that of the other four countries, but its final infection rate is close to that of Italy. This means that the effect of herd immunity on COVID-19 is not ideal. Moreover, the United Kingdom, which also proposed herd immunity in the early stage, has since abandoned this measure. These results show that, to control the pandemic, it is vital to select and implement reasonable intervention measures. Therefore, further quantitative research on various interventions is required.
Many countries, such as Italy, Germany, Brazil, India, and Sweden, have adopted various interventions to reduce the damage caused by the pandemic. Moreover, the number of daily infections in Italy and Germany has gradually decreased since March 27 and April 3, respectively, which means that, in these two countries, the pandemic has been controlled to some extent. Under such circumstances, resumption plans will enter the agenda of these two countries. However, invisible-spreaders remain in the population and will impede these plans. Unless measures are taken to deal with false-negative individuals, they will lead to a new outbreak of the pandemic. Therefore, in order to reduce the negative impact of invisible-spreaders, we consider that corresponding measures should be taken in the following three areas: 1. Nucleic acid detection: The development and communication of clear, risk-stratified protocols to manage negative COVID-19 test results are needed [4]. Individuals should be divided into low-risk and higher-risk groups, based on whether they come from a hardest-hit area or have had close contact with an infected person within the previous fortnight. For low-risk individuals, a negative nucleic acid test and a green health code will be sufficient; for higher-risk individuals, these results will not be. After obtaining a negative result, such individuals may still need to quarantine and take further tests. A chest CT scan (a computerized tomography scan uses computers and rotating X-ray machines to create cross-sectional images of the body) and clinical symptoms should also be used to assist in detecting false-negative individuals.
2. Asymptomatic infection: It is difficult to implement nationwide nucleic acid detection. However, extensive nucleic acid detection in high-risk areas, like Wuhan, Codogno, and North Rhine-Westphalia, is still necessary. From May 14, 2020, Wuhan initiated centralized, citywide nucleic acid detection, which identified a total of 300 infected yet asymptomatic people [22]. This detection campaign can dispel people's concerns, providing an excellent foundation for the resumption of work in Wuhan. It also offers a feasible scheme for other countries to further combat the pandemic by applying nucleic acid detection in the hardest-hit areas.
3. Health code: In order to solve the problem of false-negatives caused by the health code, we refer to the Privacy-Preserving Contact Tracing framework launched by Google and Apple [23]. It enables the health code app to communicate periodically with other health code apps around it. In this way, it is possible to record each person's close contacts within a certain period, which may provide an important basis for the classification of the health code.
V. Conclusions
During the past few months, the COVID-19 pandemic has swept the world. We hope to promote research on the pandemic by proposing a quantitative analysis method. As Fig 1 shows, the daily infection factor exhibits an overall downward trend, but the curve always fluctuates, and we believe that invisible-spreaders are one of the reasons for this fluctuation. Our second set of experimental results shows that the proportion of invisible-spreaders in the five countries discussed in this article does not exceed 1%, and that the degree of influence caused by invisible-spreaders varies across countries. This reminds these countries that appropriate intervention measures should be taken to control the pandemic. In addition, the third experiment shows that there are indeed some invisible-spreaders in the current population, and that even a small number of invisible-spreaders will result in a large number of cases in the community. It will be difficult to end the pandemic unless effective measures can be taken to detect invisible-spreaders. When entering the stage of work resumption, pandemic intervention measures will be gradually relaxed. During this period, the impact of invisible-spreaders will be more significant and is likely to cause a new outbreak. Therefore, these countries need to take corresponding measures to ensure the normal progress of work resumption.
"Medicine",
"Environmental Science",
"Economics"
] |
New Approach in Human-AI Interaction by Reinforcement-Imitation Learning
Reinforcement Learning (RL) provides effective results with an agent learning from a stand-alone reward function. However, it faces unique challenges with large environment state and action spaces, as well as in the determination of rewards. Imitation Learning (IL) offers a promising solution to those challenges by using a teacher. In IL, the learning process can take advantage of human-sourced assistance and/or control over the agent and environment. A human teacher and an agent learner are considered in this study. The teacher takes part in the agent's training toward dealing with the environment, tackling a specific objective, and achieving a predefined goal. This paper proposes a novel approach combining IL with different types of RL methods, namely state-action-reward-state-action (SARSA) and Asynchronous Advantage Actor-Critic (A3C) agents, to overcome the problems of both stand-alone systems. How to effectively leverage the teacher's feedback—be it direct binary or indirect detailed—for the agent learner to learn sequential decision-making policies is addressed. The results of this study on various OpenAI-Gym environments show that this algorithmic method can be incorporated in different combinations, and that it significantly decreases both human effort and the tedious exploration process.
Introduction
Reinforcement Learning (RL) in various decision-making tasks provides effective and powerful results, with an agent learning from a stand-alone reward function. However, it struggles with large environment state and action spaces, and with highly implicit reward definitions in real, complex environments. This complexity, which is due to the high dimensionality and continuousness of real environments, means that RL needs a large number of learning trials in order to understand and learn the environment [1]. A promising solution to this limitation is offered by Imitation Learning (IL) and the exploitation of teacher feedback. In IL, the learning process can take advantage of human assistance and control over the agent and the environment. In this study, the human is considered a teacher who teaches a learner to deal with the environment and to tackle a specific objective.
A teacher can express feedback to improve a policy using two main methods, namely, direct dual feedback and indirect detailed feedback. In the first method, the teacher evaluates the agent's actions by sending back rewards (positive or negative); in the second, the teacher demonstrates how to complete a task through actions [2,3]. One of the main limitations of existing IL approaches is that they may require extensive demonstration information in long-horizon problems. Our proposed approach leverages integrated RL-IL structures (see Figure 1) to overcome the RL and IL limitations simultaneously. Moreover, the approach considers both cases, where the agent does or does not need human feedback. Our key design principle is a cooperative structure in which feedback from the teacher is used to improve the learner's behavior, improve sample efficiency, and speed up the learning process through IL-RL integration (see Figure 2). Teacher assistance considers both direct dual feedback, with positive and negative rewards, and indirect detailed feedback, with access to action-domain feedback through an online-policy IL process. Management of the teacher's feedback in the "feedback management" block is one of the features of the structure (see Figure 2). Moreover, this structure reflects online teacher feedback as soon as the learner takes an action, and handles varying quantities of teacher feedback. This paper begins by reviewing related work on RL and IL in Section 2. It continues by formalizing the imitation learning problem and detailing the proposed structure (Section 3). The proposed structure is validated and compared with stand-alone RL in Section 4. Experimental validation and analysis of the results are concluded in Section 5.
Related Works
Having an agent learn behavior from a stand-alone reward, which is the main concept of Reinforcement Learning (RL), is particularly difficult in a complicated environment. The main problems are the high dimensionality of environment spaces in challenging tasks. Moreover, the definition of the reward function in real-world applications is very complicated and implicit. The contribution of humans to agents, in the form of using human knowledge in the training loop through Imitation Learning (IL), is a promising solution to improve data efficiency and to obtain a robust policy [4].
In IL, the agent observes, trusts, and replicates the teacher's behavior. In a typical IL method, presented as Behavioural Cloning (BC) or Learning from Demonstration (LfD), the goal is to train a classifier-based policy to predict the teacher's actions. In BC, the features are lists of an environment's features and the labeled data are actions performed by the teacher. However, the statistical learning assumption ignores the relationship between the current action and the next states during execution of the learned policy, which causes this method to perform poorly [5,6].
The forward training algorithm in IL was introduced to train one policy at each time step to achieve a non-stationary policy. In the training algorithm, the agent learns how to imitate the teacher's behaviour in the states generated by the previous learned policies [7]. The main disadvantage of the forward training algorithm is that it requires an investigation of the environment over all periods, regardless of the horizon size. In fact, considering the non-stationary policy in this model causes its impracticality in real-world applications with a large horizon.
Search-based structured prediction (SEARN) learns a classifier to choose the search optimally. This model outperforms the traditional search-based methods which first learn some sort of global model and then start exploring. SEARN follows the teacher's action at the beginning of a task. Then it aggregates more demonstrative data iteratively to obtain an updated policy. It generates new episodes to create a combination of previous models and teacher behaviour [8]. However, the optimistic prediction, as a result of the difference between its initial value and the optimal policy, is the main drawback of this learning method.
Stochastic Mixing Iterative Learning (SMILe) has been proposed to improve the forward training algorithm using SEARN's benefits with a straightforward implementation and less dependency on interaction with a teacher. After several iterations, the method utilizes a geometric policy by training a stochastic policy [7]. While the training process can be interrupted at any time, it suffers from instability in the model because of the stochastic policy assumption.
Two popular IL algorithms called dataset aggregation (DAGGER) and Generative Adversarial Imitation Learning (GAIL) [4,9] introduce new approaches for incorporating teacher experience. These papers proposed iterative algorithms for an online learning approach to train for a stationary deterministic policy. It was proved that the combination of the algorithms with reduction-based approaches outperforms the policy findings in sequential settings, thanks to reuse of existing supervised learning algorithms.
DAGGER performs well for both complex and simple problems; however, the information may not be intuitive from the states [10]. Moreover, GAIL has presented considerable achievements in imitation learning, especially for complex continuous environments. However, it suffers from the requirement of a huge number of interactions during training. Furthermore, it is very time-consuming in real-world applications, where more interactions between the agent and the environment are needed to achieve an appropriate model [11].
Reference [12] shows that Reductions-based Active Imitation Learning (RAIL) consists of N iterations, where each iteration has a specific stationary policy over time steps that differs significantly from the previous iteration. This method yields a small error in predicting the expert's actions, considering the state distribution of the former iteration. Nevertheless, the results in [12] can be faulty and impractical in some cases due to the unlabeled state distributions in the previous iterations.
As has been presented in the research studies above, all IL methods mostly suffer from instability in the model because of the stochastic policy assumptions. Moreover, the labeled-information needs make the presence of an expert human necessary in order to annotate the dataset. These two main drawbacks prevent the use of IL for high-dimensional high-frequency real-world applications. Fortunately, a promising solution is the integration of IL with RL to overcome these aforementioned limitations.
The idea of exploiting IL to increase the speed of convergence of RL has been considered in [13]. However, that study considers the stochastic policy and uses IL as a "pre-training" solution to speed up the convergence. IL has been considered as a pre-training step for reward reshaping [14], policy reshaping [15] and knowledge transfer [16] with teacher feedback.
Reference [17] uses a reward shaping method, which is one of the significant aspects of RL. Reference [18] describes this method as an accepted way to incorporate human feedback in RL, but it raises issues because human feedback signals may be inconsistent, infrequent, ambiguous, and insufficient [17]. As an example, translating statements into rewards may be difficult and unclear; accordingly, [19] tried to solve this problem by considering a drift parameter to reduce the dependency on human feedback signals. To overcome some of the aforementioned limitations, reference [20] proposed an UNDO function as policy feedback, which provides a reversible feedback signal for agents. The results in [18] demonstrate that human feedback signals can improve RL algorithms when applied in the process of action selection. Some recent studies use human feedback to shape an optimal policy instead of reshaping rewards, for example as a seed for the agent's exploration [21], in inverse RL control [22], and even as a substitute for exploration [16,17].
The core of this paper provides an accessible and effective structure for the agent to become an expert with the teacher's help and advice. It also addresses a set of generic questions, namely: what should be imitated, how the agent may imitate, when is the time to imitate, and who is trustworthy to imitate. Moreover, it addresses when teachers are available and how their feedback can be most effectively leveraged.
Teacher Assistance Structure
The proposed structure exploits teacher feedback as a rectification of the action domain of the learner; as soon as an action is performed by the agent, this teacher feedback improves the online policy. It can also infer the policy of the agent from infrequent teacher feedback. Four main characteristics of human teacher feedback and their related effects are considered and formulated; namely, duality, reaction time delay, contingency and instability.
While several studies consider a range of teacher feedback, such as very bad, bad, good, very good, or −100, −50, 50, 100, having humans give feedback from a range is very complicated and requires very good knowledge of the task and environment. This study takes advantage of duality feedback: whether a human teacher is satisfied with the decision made by the agent or not. This kind of feedback can be sent by an expert or non-expert human due to its simplicity in terms of knowledge transfer.
The next feature considered is the human's reaction time delay in sending feedback. Several studies, like [23], demonstrate sample efficiency for training neural reinforcement learning when it is pre-trained by an expert using supervised learning. In fact, in IL algorithms like DAGGER and GAIL, which are based on offline learning, there is no need to consider the human's reaction time delay in the model: in those algorithms, an expert merely has to prepare time-consuming metadata before starting the training process. Moreover, unprofessional feedback from a non-expert teacher can ruin the training process.
The proposed structure for human-AI interaction presents a methodology that enables AI agents to interact with the human (teacher, expert or not) completely online, so we must deal with the delay in human reactions [24,25]. Using teacher feedback in online training without accounting for that delay can make the training process impractical. However, the reaction time delay is not constant; it varies with the teacher's personality and knowledge, the complexity of the environment, and the ambiguity of actions.
In addition to the reaction time delay, the contingency of human teacher feedback, as a feature of reactive behavior, is addressed. Due to limited patience, the human teacher mostly stops sending positive feedback even while the agent carries out actions correctly. Moreover, the frequency of releasing feedback can vary based on human preference [26]. The proposed methodology therefore includes a module named "feedback predictor" (see Figure 2) to represent the contingency and stochasticity of correct feedback sent at a specific timestamp.
The details of the structure are as follows (see Figure 2): as soon as the agent picks an action, the supervisor can observe the outcome of that action on the environment and send feedback. This feedback (f) is a positive, negative, or neutral value indicating whether the last performed action should be increased or decreased. The neutral value is used when the teacher is not available or prefers not to express an opinion. The environment state (S), the performed action, A(S), and the teacher feedback (f) are sent back to the "feedback evaluator" and "supervised policy" sections to update Φ and Ψ.
Feedback Predictor
FB(s) denotes the policy learned in the "feedback manager" box; it can predict the teacher's next feedback by observing the current state and the agent's action. The dual teacher feedback lies in the range [−1, 1]: −1 indicates that the teacher is not satisfied with the action taken by the agent, requesting that the agent stop or reduce it; +1 is sent whenever the taken action is convincing, encouraging the agent to continue. Moreover, an adjustable learning rate, introduced to improve the online and offline models on the online training dataset, is monitored by the learning algorithm and adjusted in response. The policy is formulated by Equation (1), where FB(O, A), ψ and Θ(O, A) are the teacher feedback policy, the parameters vector, and the probability density function of the delay of the human's feedback signal, respectively. Details about these parameters are explained in the following sections.
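A minimal sketch of such a predictor, assuming a linear model squashed into [−1, 1] with a simple online update; the feature map phi and the update rule are assumptions, not the paper's exact Equation (1).

```python
# Sketch: linear feedback predictor FB(s, a) = psi . phi(s, a), trained online
# to anticipate the teacher's dual (+1 / -1) feedback.
import numpy as np

class FeedbackPredictor:
    def __init__(self, n_features, lr=0.05):
        self.psi = np.zeros(n_features)   # parameters vector (psi)
        self.lr = lr                      # adjustable learning rate

    def predict(self, phi):
        """Predicted teacher feedback, squashed into [-1, 1]."""
        return np.tanh(self.psi @ phi)

    def update(self, phi, feedback):
        """One online step toward the observed feedback in {-1, 0, +1}."""
        error = feedback - self.predict(phi)
        self.psi += self.lr * error * phi
```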
Supervised Policy
The policy can be updated and modified directly using a supervised policy based on supervised learning methods. In fact, the agent can change its actions based on state-action training pairs provided in real time by the supervisor, without considering the value of the training data. This element can improve the model parameters using state-action-reward-state-action (SARSA) from value-based RL algorithms or the Asynchronous Advantage Actor-Critic (A3C) agent from policy-based algorithms.
RL algorithms are required for the optimization process, while the teacher helps the agent gain a level of skill when the RL algorithms still provide poor estimates of the value functions. The supervised policy module provides both sorts of error information to the agent as long as actions are being taken in the environment.
The agent receives evaluations of its behaviour that can help in carrying out the given task. When the agent becomes professional at the task, the teacher gradually withdraws the additional feedback to shape the policy toward optimality for the true task. The prediction error is not clear because of the uncertainty of the quantified human feedback. This is considered in Equation (2), where e(t), r(t) and k denote the prediction error, the error sign extracted from human feedback, and a constant error value predefined by the user, respectively.
Feedback Evaluator
The responsibility of the feedback evaluator is to update the parameters vector (ψ) of the teacher feedback policy (FB(s)). FB(s) can be calculated by multiplying the teacher feedback and the parameters vector, which can be rewritten as Equation (3), where γ is the adjustable learning rate, computed from b, the predefined value of the learning rate serving as the bias of the model. The variation between the actual teacher feedback and the predicted teacher feedback (calculated from the teacher feedback policy) is taken as the prediction error.
Feedback Flag
The action spaces of control systems are generally dichotomized into continuous and discrete modes. Continuous control systems are mostly designed to deal with continuous action spaces, especially in high-frequency environments like video games. The speed of the system, the time delay of the call response, and the non-constancy of the human response rate make communication between the system and the human very difficult in these environments. To deal with this problem, the "feedback flag" module is introduced to buffer and integrate the several past state-action tuples. Each past state-action tuple is weighted with the corresponding probability that characterizes the delay in teacher response, denoted RD(t).
It is used by the "feedback evaluator" and the "supervised policy". The teacher feedback function is a linear model (Equation (1) and Figure 2). The uncertainty of the feedback's receiving time t, defined as t_1 < t < t_n, directly affects the agent, which tries to attach the reward signal to a specific action in time. The feedback could in fact belong to any prior action at time t − 1, t − 2, ..., t − n. This is why we use Θ to define the delay of the human's response signal, where Θ is the density of the continuous human response.
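The buffering-and-weighting idea might be sketched as follows, assuming a Gaussian-shaped delay density over the ages of buffered tuples; the density form and its parameters are illustrative assumptions, not the paper's Θ.

```python
# Sketch: buffer recent (state, action) tuples and, when a feedback signal
# arrives, credit it across the buffer weighted by an assumed delay density.
from collections import deque
import numpy as np

class FeedbackFlag:
    def __init__(self, horizon=10, mean_delay=3.0, spread=1.5):
        self.buffer = deque(maxlen=horizon)   # most recent tuple first
        self.mean_delay, self.spread = mean_delay, spread

    def push(self, state, action):
        self.buffer.appendleft((state, action))

    def credit(self, feedback):
        """Distribute one feedback signal over buffered tuples by delay density."""
        if not self.buffer:
            return []
        ages = np.arange(len(self.buffer))    # 0 = most recent action
        dens = np.exp(-0.5 * ((ages - self.mean_delay) / self.spread) ** 2)
        dens /= dens.sum()
        return [(s, a, feedback * w) for (s, a), w in zip(self.buffer, dens)]
```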
Reinforcement Learning (RL) Policy
In RL, we consider a predefined environment. In such an environment, an agent performs actions and reactions sequentially to complete a task, using its observations of the environment and the rewards it receives from it. The agent can choose an action from an action space, A_s = {A_1, A_2, ..., A_n}. That action is passed to the stochastic environment, which returns a new observation from the observation space, O_s = {O_1, O_2, ..., O_n}, and a reward from R_s = {R_1, R_2, ..., R_n}. At each step, the agent observes only the current game state and cannot understand the whole task from that observation alone. Moreover, the Markov Decision Process (MDP) is a foundation of RL. The MDP can be used in a cooperative structure for decision-making tasks to partly or completely control and balance agents.
The first RL algorithm in this study is SARSA; its details are presented in Algorithm 1. SARSA is very similar to Q-learning. The key difference is that SARSA is on-policy, whereas Q-learning is a class of off-policy Temporal Difference (TD) learning. This implies that the SARSA learning process deals with the actions taken by the current policy instead of the greedy policy, so the SARSA update (see Equation (6)) does not use the maximum value and greedy policy (Equation (2)).
On the other hand, Q-learning can use a multi-layer neural network (NN), whose inputs are the states of the environment and whose outputs are the action values, Q(s, θ), where θ denotes the parameters of the NN. In fact, Q-learning updates the parameters after taking action A_n, observing the state O_n, and receiving an immediate reward R_n and Q_{n+1}; the update equation is given by Equation (7). This equation shows that the policy used to select an action is the greedy policy calculated by Equation (8). The second RL algorithm in this study is Asynchronous Advantage Actor-Critic (A3C), a policy gradient algorithm with a special focus on parallel training. In A3C, the critic learns the value function while the actors are trained in parallel and synchronized with the global parameters sequentially. In A3C, a loss function on the state value minimizes the Mean Square Error (MSE) (Equation (9)), which serves as the baseline in the policy gradient update. Finally, gradient descent can be applied to find the optimal value. For more details about A3C, see Algorithm 2.
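The contrast between the two update rules is standard and can be written out in tabular form; the variable names below are generic, not the paper's notation.

```python
# Textbook tabular updates: SARSA bootstraps on the action actually taken by
# the current policy, while Q-learning bootstraps on the greedy (max) action.
def sarsa_update(Q, s, a, r, s2, a2, alpha=0.1, gamma=0.99):
    Q[s][a] += alpha * (r + gamma * Q[s2][a2] - Q[s][a])

def q_learning_update(Q, s, a, r, s2, alpha=0.1, gamma=0.99):
    Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
```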
[Closing steps of Algorithm 2: update R, dθ and dω; end loops; return Q.]
Experiments and Results
The performance of the proposed Algorithms 3 and 4 is evaluated on two separate use-cases: • Continuous classic cart-pole OpenAI-Gym environment • Continuous classic mountain-car OpenAI-Gym environment.
Continuous cart-pole and mountain-car are used in this study as continuous classical OpenAI-Gym environments. The objective of the cart-pole system is to adjustably control the cart by taking continuous and unlimited actions. The cart has two degrees of freedom (DoF) to balance with respect to the horizontal axes. The system state is parameterized by the orientation, position and velocity of both the pole and the cart. The stability of this system is defined by an orientation from −12° to 12° and a position deviation between −2.4 and 2.4 (see Figure 3a). Whenever the system becomes unbalanced, a negative signal is sent back to the system as a punishment acknowledgement and the system is reset.
The next OpenAI-Gym environment considered in this study is continuous mountain-car, illustrated in Figure 3b. This environment presents a car on a sinuous track located between two mountains. The goal is to drive up the right mountain; however, the car's engine is not strong enough to scale the mountain in a single pass. Therefore, the only way to succeed is to drive back and forth to build up momentum. Here, the reward is greater if less energy is spent reaching the goal.
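For reference, a minimal interaction loop with these classic environments; the API shown matches older gym releases (reset() returning the observation, step() returning a 4-tuple), while newer gymnasium versions return additional values.

```python
# Minimal random-policy rollout in the continuous mountain-car environment.
import gym

env = gym.make("MountainCarContinuous-v0")
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()          # stand-in for a learned policy
    obs, reward, done, info = env.step(action)  # 4-tuple in classic gym
    total_reward += reward
env.close()
```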
The results of applying the different algorithms (see Algorithms 1-4) to the continuous classic cart-pole OpenAI-Gym environment are presented in Figure 4a. At each step, the agent is rewarded for balancing about the horizontal axes. The results for both "Hybrid A3C/IL" and "Hybrid SARSA/IL" show that the proposed algorithms, based on integrating imitation learning and reinforcement learning, can outperform stand-alone reinforcement learning, A3C and SARSA. Figure 4a shows Hybrid A3C/IL converging faster (at episode #70) than Hybrid SARSA/IL. The acceleration of convergence by Hybrid A3C/IL is generally based on the accuracy of policy-based reinforcement learning in continuous environments. This is confirmed by comparing stand-alone SARSA and A3C, presented by blue and pink dots in the figure: value-based reinforcement learning (SARSA here) in a continuous environment like cart-pole is not satisfactory regarding data efficiency. Finally, Hybrid A3C/IL increases the data efficiency of the cart-pole environment by 85.7%, 53.8% and 14.2% compared to SARSA, A3C and Hybrid SARSA/IL, respectively.
[Algorithm 3/4 header: Require: exact requirements of Algorithm 1; Initialization: same as Algorithm 1.] Moreover, the results of utilizing Algorithms 1-4 on the continuous classic mountain-car OpenAI-Gym environment are shown in Figure 4b. In this environment, the agent receives a punishment (negative reward) for each step of an episode. The maximum performance in this environment is −100 cumulative reward, which shows that the minimum number of steps in an epoch needed to capture the flag on top of the hill in the continuous environment is 100. As in the cart-pole environment, the results for both "Hybrid A3C/IL" and "Hybrid SARSA/IL" show that the proposed algorithms, based on integrating imitation learning and reinforcement learning, outperform A3C and SARSA, as examples of policy-based and value-based RL. Figure 4b shows Hybrid A3C/IL and Hybrid SARSA/IL converging faster than the two RL algorithms. However, Hybrid SARSA/IL shows many oscillations before stabilising at episode #20. The reason for these fluctuations is that value-iteration-based RL cannot act well in complicated continuous environments regarding exploration and evaluation of state-action tuples. Hybrid A3C/IL and Hybrid SARSA/IL both increase data efficiency by about 60% and 33.4% compared to SARSA and A3C, respectively.
Conclusions
In this paper, a novel approach is proposed that combines IL with different types of RL methods, namely state-action-reward-state-action (SARSA) and Asynchronous Advantage Actor-Critic (A3C) agents, to take advantage of both the IL and RL methods. Moreover, we address how to effectively leverage the teacher's feedback for the agent learner to learn sequential decision-making policies. The results of this study on a simple OpenAI-Gym environment show that Hybrid A3C/IL increases the data efficiency of the cart-pole environment by 85.7%, 53.8% and 14.2% compared to SARSA, A3C and Hybrid SARSA/IL, respectively. Moreover, the results on a complicated OpenAI-Gym environment show that Hybrid A3C/IL and Hybrid SARSA/IL both increase data efficiency by about 60% and 33.4% compared to SARSA and A3C, respectively.
"Computer Science"
] |
Fuzzy multi-objective medical service organization selection model considering limited resources and stochastic demand in emergency management
In recent years, most countries around the world have faced greater pressures in the realm of emergency management than ever before. Medical service organization selection is one of the most vital facets of emergency management. Meanwhile, during the selection process, many criteria may conflict with one another and information is uncertain, rendering decision-making processes complex. Hence, multi-objective optimization, fuzzy set theory and stochastic theory serve as suitable means of addressing such problems. In this paper, a fuzzy multi-objective linear model is developed to overcome medical service organization selection issues and uncertain information. Meanwhile, fuzzy objectives and weights are applied to enable the decision-maker to select suitable schemes while considering stochastic medical service demand. Moreover, since real data cannot be obtained, we assume the relevant information according to actual conditions. For illustrative purposes, a numerical example is presented to verify the effectiveness of the proposed model on experimental data.
Introduction
In recent years, in the field of emergency management, many countries have been confronted with a lack of efficient emergency management and an increase in death tolls. Therefore, these countries sustain heavy losses when disasters occur. For example, in 2008 the Chinese Wenchuan earthquake killed over 69,000 people, injured over 370,000 people, and left over 17,900 people missing, causing economic losses of over 845.2 billion RMB; in 2010, the Haitian earthquake killed over 200,000 people, affecting over 1,000,000 people. From this perspective, emergency medical service organization selection has become extremely important to countries. In such environments, scholars and managers are more concerned about issues of emergency medical service organization selection than ever before. In existing studies, most scholars of emergency management have paid much more attention to issues of vehicle optimization, supply networks and so on. For example, Sheu proposed a means of designing a seamless centralized emergency supply network by integrating three subnetworks (the shelter, medical, and distribution networks) to support emergency logistics operations in response to large-scale natural disasters [1]. Wilson et al. described a novel combinatorial optimization model of this problem that acknowledges its temporal nature by employing a scheduling approach [2]. Cheng and Liang examined emergency rescue location problems for urban ambulance and railway emergency systems [3]. Wohlgemuth et al. modeled a multistage mixed integer problem that is able to operate under variable demand and transport conditions [4].
Meanwhile, in practical situations, medical resource services are central to emergency management. Based on real conditions, Torres presented a novel multi-objective heuristic approach for the efficient distribution of 24-h emergency units [5]. Topaloglu constructed a multi-objective programming model for scheduling emergency medicine residents [6]. Walls established a multicenter registry and initiated the surveillance of a longitudinal, prospective convenience sample of intubations at 31 EDs [7]. This study is one of the first multicenter genetic research protocols designed solely for an Emergency Department (ED) [8]. Zakaria created a decision support system for the provision of emergency sanitation [9]. Cong studied family emergency preparedness plans for severe tornado events [10]. Canós improved emergency plan management systems using SAGA [11]. Zhu studied the standardized management of China's strategic railway emergency plan [12]. Calixto applied regional emergency plan requirements to the Brazilian case [13]. Su conducted a case study of emergency medical services deployment in Shanghai [14].
Moreover, as information is usually uncertain, researchers must consider stochastic data and the vagueness of input information. Araz established a fuzzy multi-objective covering-based vehicle location model for emergency services [15]. El-Ela established optimal preventive control actions using a multi-objective fuzzy linear programming technique [16]. Adan improved the operational effectiveness of tactical master plans for emergency and elective patients using stochastic demand and capacitated resources [17].
However, emergency medical service organization selection is a multicriteria decision-making problem affected by several conflicting factors including costs, degrees of social satisfaction, response times, service qualities, etc. The multiple criteria are usually unequally important. Moreover, information is usually uncertain. Consequently, scholars must analyze the trade-off among several criteria and the uncertainty of input information.
In real situations, objectives, constraints and weight information are usually uncertain. The decision-maker cannot precisely apply relative weights and information during emergency medical service organization selection. Meanwhile, stochastic emergency medical service demand also must be considered during medical service organization selection. However, most of the above models do not simultaneously consider such conditions. Thus, to generate a more practical and meaningful model for addressing the selection problem, we present a new fuzzy multi-objective medical service organization selection model based on stochastic demand and limited resources. In this model, objectives and weights are assumed to be fuzzy numbers with an interval fuzzy number where demand is stochastic. This paper differs from past works in that it applies the following four conditions: 1. Due to vague information, we assume that objectives and some constraints are fuzzy.
2. Based on practical conditions, emergency medical demand is defined as a stochastic variable.
3. Delayed medical service costs are considered.
4. Medical organizations can provide multiple services.
As the main motivation of this study, as information is usually uncertain and as multiple medical services must be considered in real situations, we determine how to optimize emergency medical service organization selection in uncertain environments for the country. Furthermore, in this paper, a numerical example is used to illustrate the validation of the proposed method, as the explored problem is complex and difficult to address in real life. From this example, we demonstrate that the proposed method is valid.
The rest of this paper is organized as follows. In Section 2, a fuzzy multi-objective model and its formulation for the decision-making process are proposed in which the objectives are not equally important and have fuzzy weights. Subsequently, a general linear multi-objective programming model for this problem is formulated and some definitions and appropriate approaches to solving this decision-making problem are discussed. Section 3 presents the numerical example and describes the results. Finally, concluding remarks are given in Section 4.
Multi-objective medical service organization selection model with stochastic demand and limited emergency management resources
In emergency management, the manager receives medical resource demand information from the relevant department and allocates corresponding medical services for dealing with the risk. The challenge here is to allocate medical resource demand and select a suitable medical service organization approach. Notably, emergency medical service organization selection is a multiple-criteria decision-making problem, and a multi-objective decision model must be built to allocate medical service demand for sudden risks and to select an organization approach among other potential approaches. Meanwhile, in developing similar models, researchers have rarely simultaneously considered stochastic demand and fuzzy objective and weight factors. Our model recognizes that these phenomena must be considered to address emergency plan selection problems. The following section discusses our model in detail and presents a flowchart describing the proposed model (Fig 1). We first make the following assumptions: 1. Medical service demand is stochastic.
2. Delayed medical service costs are considerable.
3. The objective and weight are fuzzy. 4. Medical resources are limited. 5. Life is the most precious resource of all.
6. Medical organizations can support multiple services.
Moreover, we use the following notation throughout this paper. The general model is subject to the constraints below, where f(1), ..., f(k) are negative objectives or criteria such as cost, response time, etc.; f(k+1), ..., f(q) are positive objectives or criteria such as the degree of social satisfaction, service quality and so on; x_tj, t = 1, ..., n, j = 1, ..., m_t are nonnegative decision variables; b_i, i = 1, ..., m are independent continuous random variables with given distributions; a_itj represents the coefficient of the t-th decision variable in the i-th constraint; and β_i is the i-th pre-assigned probability level, 0 < β_i ≤ 1, i = 1, ..., m. Meanwhile, to solve an emergency medical organization selection problem, we assume that the objectives include the cost f_1, the degree of social satisfaction f_2, the response time f_3 and the service quality level f_4, together with the major constraint that medical services can mostly satisfy stochastic demand. Each emergency medical organization has its own unit cost, social satisfaction history, response time record and service quality data.
However, we assume that life is the most precious resource of all, while costs must still be considered. Moreover, medical services are limited. At the same time, in real situations, many factors that shape medical service demand cannot be fully addressed. Under such conditions, we must define a penalty function, since delayed medical services result in losses and diverse conditions result in different costs. With a scenario analysis, we construct two penalty functions as follows: 1. When a delayed medical service can be provided by another organization, the penalty function is equal to zero. 2. When a delayed medical service must immediately be replenished, the penalty function is linear. The objective function for costs, which should be minimized, follows accordingly. The objective function for the degree of social satisfaction should maximize the number of reliable units. The aggregate performance measure for the response time objective should minimize the response time.
The objective function for service quality is defined analogously and should maximize service quality.
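The two scenario penalties described above can be written out explicitly; in this minimal sketch, the unit delay cost c is an assumed illustrative parameter.

```python
# Scenario penalties for delayed medical service, as described in the text.
def penalty(shortage, scenario, c=1.0):
    if scenario == 1:       # another organization covers the delayed service
        return 0.0
    return c * shortage     # scenario 2: linear replenishment cost
```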
In turn, the final form of the multi-objective model for medical service organization selection is subject to x_tj ≤ C_tj (15) and x_tj ≥ 0 and integer (16). Generally, managers do not have exact and complete information related to decision-making criteria and constraints, which are fuzzy and stochastic in nature. A new fuzzy multi-objective medical service organization selection model is thus developed to address this problem.
In the new model, the tilde (~) denotes the fuzzy environment. The symbol ≳ used in the objectives and constraints denotes a fuzzy version of ≥, i.e., approximately greater than or equal to, and ≲ linguistically denotes "essentially smaller than or equal to."
A new fuzzy multi-objective medical organization selection model considering multiple services and stochastic demand for emergency management
Thus, by applying c_it and σ_i, where c_it > 0 and 0 < σ_i < β_i are predetermined values set by the decision-maker, the satisfaction constraints of the decision-maker can be stated as follows: the decision-maker is fully satisfied, almost satisfied, or not satisfied according to conditions (17)-(19). The equivalent deterministic constraints for (17)-(19), respectively, are (20)-(22), where F_i^{-1}(·) is the inverse of the cumulative distribution function of the random variable b_i, i = 1, ..., m.
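For a normally distributed demand, the deterministic equivalent reduces to a quantile of the demand distribution. The sketch below computes the minimum allocation satisfying each chance constraint at level β, using the demand parameters from the paper's numerical example (means 100/120/140, variances 25/49/36); the level β = 0.95 is an assumed illustration.

```python
# If demand b_i ~ Normal(mu_i, sigma_i^2) must be met with probability beta,
# the allocation x must satisfy x >= F_i^{-1}(beta) = mu_i + z_beta * sigma_i.
from scipy.stats import norm

means, variances, beta = [100, 120, 140], [25, 49, 36], 0.95
min_alloc = [norm.ppf(beta, loc=m, scale=v ** 0.5)
             for m, v in zip(means, variances)]
print(min_alloc)   # minimum service per item to satisfy each chance constraint
```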
Using the Bellman-Zadeh approach [18,19], the fuzzy sets of the objective functions f_p and the constraints g_i are defined via their membership functions, as given in (24) and (25), which are obtained from (26) and (27).
where k_i(x) and h_i(x) are defined accordingly, and (30) is a membership function of fuzzy values of linguistic variables that reflect constraints of a qualitative nature.
Decision-making processes
First, the max-min operator used by Zimmermann [20,21] for fuzzy multi-objective problems is discussed. Then, the convex (weighted additive) operator that enables DMs to assign different weights to various criteria is described.
In fuzzy programming modeling using Zimmermann's approach [22], a fuzzy solution is given by the intersection of all fuzzy sets representing either fuzzy objectives or fuzzy constraints. The fuzzy solution for all fuzzy objectives and fuzzy constraints may be written accordingly, and the optimal solution (x*) is then obtained, where μ_D(x), μ_{f_p}(x) and μ_{g_i}(x) represent the membership functions of the solution, the objective functions and the constraints, respectively. Under real conditions, different objectives and constraints are of unequal importance to the DM and other patterns, and thus weights should be considered. The fuzzy weighted additive model can address this problem, as described below.
The weighted additive model is widely used in vector-objective optimization problems; the basic premise is to use a single utility function to express the overall preference of DM to draw out the relative importance of criteria [23]. In this case, multiplying each membership function of fuzzy goals by its corresponding weight and then adding the results together generates a linear weighted utility function.
The fuzzy model proposed by Bellman and Zadeh and by Sakawa, and the weighted additive model developed by Tiwari, are written as in [24-26], where w_{f_p} and w_{g_i} are the weighting coefficients denoting the relative importance of the fuzzy goals and fuzzy constraints.
η ≤ μ_{f_p}(x), p = 1, ..., q (41); κ ≤ μ_{g_i}(x), i = 1, ..., m (42). If w̃_j denotes the fuzzy weight of the j-th objective or constraint, let w̃_j = {w_j, w_j1, w_j2, w̄_j}, j = 1, ..., p+m, be a trapezoidal fuzzy number, or let w̃_j = {w_j, w_j0, w̄_j}, j = 1, ..., p+m, be a triangular fuzzy number. Then, by utilizing the α-cut approach for w̃_j as a trapezoidal fuzzy number, the proposed model can be written in the form of a weighted max-min deterministic-crisp nonlinear programming model subject to w_j1 ≤ w_j ≤ w_j2, j = 1, ..., m+p (47), and x_t ≥ 0, t = 1, ..., n (49). It should be noted that the w_j, j = 1, 2, ..., m+p, become decision variables in addition to η_p, κ_i and x_t, t = 1, ..., n. Constraint (48) ensures that the relative weights add up to one. Additionally, fuzzy weights reflect the uncertain relative importance of objectives, where the sum of all fuzzy weights should be one; if the j-th objective or constraint is considered a triangular fuzzy number, then w_j1 and w_j2 should be replaced with w_j0.
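A small worked instance of the max-min idea, assuming two cost-type goals with linear membership functions μ_j(x) = (U_j − c_j·x)/(U_j − L_j) and a crisp demand constraint; all numbers are illustrative, not the paper's data. Maximizing the smallest membership λ becomes a linear program.

```python
# Zimmermann-style max-min: maximize lambda s.t. lambda <= mu_j(x).
import numpy as np
from scipy.optimize import linprog

c_goal = np.array([[4.0, 6.0], [3.0, 2.0]])        # two cost-type objectives
L, U = np.array([20.0, 10.0]), np.array([60.0, 40.0])
A_ub, b_ub = [], []
for j in range(2):   # lambda <= (U_j - c_j.x)/(U_j - L_j), linearized
    A_ub.append(np.append(c_goal[j] / (U[j] - L[j]), 1.0))
    b_ub.append(U[j] / (U[j] - L[j]))
A_ub.append([-1.0, -1.0, 0.0])                      # demand: x1 + x2 >= 10
b_ub.append(-10.0)
# Variables z = (x1, x2, lambda); maximize lambda -> minimize -lambda.
res = linprog(c=[0, 0, -1], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (0, 1)])
print(res.x)   # optimal allocation and the achieved smallest membership
```

With these numbers the program balances the two memberships at x = (8, 2), λ = 0.4, illustrating how the max-min operator trades off the conflicting goals.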
Results and discussion
Since we cannot obtain real data on medical service organization selection, all data presented in this paper are experimental. In this section, to explain the validity of our model, we assume that all relevant experimental data reflect real conditions. Based on these experimental data, a numerical example is presented to illustrate the model and algorithm described above. All experimental data used are shown in "Assumptions" and Tables 1-7.
Assumptions
• Three medical services are purchased from three medical organizations. Factors such as price, satisfaction, response time, service quality and capacity can be obtained from historical data.
• Demands for three items are separately normally distributed fuzzy random variables with mean values of 100, 120 and 140, respectively, and variances of 25, 49 and 36, respectively; the three organizations' services are limited.
• Objectives include the cost (f_1), degree of social satisfaction (f_2), response time (f_3) and service quality (f_4). All experimental data are included in Tables 1 and 2. Table 1 provides information on objectives and constraints; as the only stochastic constraint, medical services must almost satisfy demand. Meanwhile, data on the decision-maker's relative weights for fuzzy goals and constraints are shown in Table 2.
From Eqs (44)-(52) and the relevant experimental data, we find that our numerical example has interesting implications for medical service organization schemes, as illustrated in Table 3, which shows that the value of the penalty function does not always influence medical service organization selection. Regarding the scenario descriptions: under one scenario, it is easy to find alternatives; under the other, although a medical service can be purchased, there is a loss, and the loss is linearly related to the purchase volume. These scenarios serve as simplified descriptions of real situations. Thus, an interesting result occurs when a selected organization incurs a higher penalty than an unselected one. In Table 3, for x_22 and x_23, we purchase more medical services at higher losses rather than incurring no loss. This phenomenon is usually inconsistent with what people anticipate. However, when costs are not the sole factor, we know that this is common. In our model, decisions are influenced by costs, degrees of social satisfaction and so on. This means that we must simultaneously consider multiple criteria requirements; a single factor cannot always determine the final result. By this rule, the penalty function may occasionally fail to impact medical service organization selection decisions due to other factors. Meanwhile, we use a sensitivity analysis to investigate the changes in optimal decision values regarding medical service Item 1 when only one parameter in the dataset changes while the others remain unchanged. The relevant data are shown in Tables 1 and 2. The computational results are illustrated in Tables 4-7.
From Tables 4-7, the sensitivity analysis demonstrates that, when only one parameter is changed, the optimal decision is sometimes not influenced under the current experimental data for the multi-objective problem. Moreover, this result simultaneously validates the finding shown in Table 3 that a single parameter occasionally fails to affect decisions.
The model proposed here is reasonable to apply under defined conditions and can solve the decision-making problem of selecting medical organization approaches with uncertain information. As the variables used in the model are different from real conditions, the model's validity is limited to a certain extent. Therefore, it is necessary to adjust the variables used according to real conditions. In addition, this work mainly presents a theoretical analysis that needs to be verified with real data.
Conclusion
Medical service organization selection is one of the most important activities of emergency management. At the same time, it is a multiple-objective decision-making problem in which objectives and constraints are not equally important. Moreover, the information available to managers is uncertain. To handle these complex conditions and solve this problem, we use fuzzy set theory, stochastic theory and multi-objective optimization to construct a medical service organization selection model. Hence, we develop a new fuzzy multi-objective model for this selection while considering stochastic demand. From the assumed data and a number of examples, we find that the proposed model can effectively address the uncertainties of input data and help managers identify suitable medical organization plans.
The examined problem can be transformed into a weighted max-min deterministic-crisp linear programming model. This transformation reduces the computational complexity and renders the application of fuzzy models more understandable. Finally, further applications of the proposed model are also worthy of future research.
"Computer Science"
] |
MINIMUM PROFILE HELLINGER DISTANCE ESTIMATION FOR A TWO-SAMPLE LOCATION-SHIFTED MODEL
Minimum Hellinger distance estimation (MHDE) for a parametric model is obtained by minimizing the Hellinger distance between an assumed parametric model and a nonparametric estimation of the model. MHDE has received increasing attention for its efficiency and robustness. Recently, it has been extended from parametric models to semiparametric models. This manuscript considers a two-sample semiparametric location-shifted model where two independent samples are generated from two identical symmetric distributions with different location parameters. We propose to use a profiling technique in order to utilize the information from both samples to estimate the unknown symmetric function. With the profiled estimation of the function, we propose a minimum profile Hellinger distance estimation (MPHDE) for the two unknown location parameters. This MPHDE is similar to but different from the one introduced in Wu and Karunamuni (2015), and thus the results presented in this work are not a trivial application of their method. The difference is due to the two-sample nature of the model, and thus we use different approaches to study its asymptotic properties such as consistency and asymptotic normality. The efficiency and robustness properties of the proposed MPHDE are evaluated empirically through simulation studies. Real data from a breast cancer study are analyzed to illustrate the use of the proposed method.
Introduction
Minimum distance estimation of unknown parameters in a parametric model is obtained by minimizing the distance between a nonparametric distribution estimation (such as empirical, kernel, etc.) and an assumed parametric model. Some well-known examples of minimum distance estimation include least-squares estimation and minimum Chi-square estimation. Among different minimum distance estimations, minimum Hellinger distance estimation (MHDE) receives increasing attention for its superior properties in efficiency and robustness. The idea of estimation using the Hellinger distance was first introduced by Beran (1977) for parametric models. Simpson (1987) examined the MHDE for discrete data. Yang (1991) and Ying (1992) studied censored data in survival analysis by using the MHDE. Woo and Sriram (2006) and Woo and Sriram (2007) employed the MHDE method to investigate mixture complexity in finite mixture models. The MHDEs for mixture models were also studied in the literature, for instance by Lu et al. (2003) and Xiang et al. (2008). Other applications of the MHDE method can be found in Takada (2009), N'drin and Hili (2013) and Prause et al. (2016).
For any given $\theta$, since $X_1-\theta_0,\ldots,X_{n_0}-\theta_0$, $Y_1-\theta_1,\ldots,Y_{n_1}-\theta_1$ are i.i.d. random variables from $f$, we can estimate the unknown $f$ using the following kernel density estimator based on the pooled sample:
$$\hat f_\theta(x) = \frac{1}{n\,b_n}\Big[\sum_{i=1}^{n_0} K\Big(\frac{x-(X_i-\theta_0)}{b_n}\Big) + \sum_{j=1}^{n_1} K\Big(\frac{x-(Y_j-\theta_1)}{b_n}\Big)\Big],$$
where $\rho_0 = n_0/n$, $\rho_1 = 1-\rho_0 = n_1/n$, the kernel function $K$ is a symmetric density function, the bandwidth $b_n$ is a sequence of positive constants such that $b_n \to 0$ as $n \to \infty$, and $\hat f_0$ and $\hat f_1$ are kernel density estimators of $f_0$ and $f_1$, respectively. Even though $\rho_0$ and $\rho_1$ depend on $n$, we suppress this dependence for notational simplicity. We generally require that $n_i/n \to \rho_i$ as $n \to \infty$ with $\rho_i \in (0,1)$, $i = 0,1$. Based on (2), $f_0$ and $f_1$ can also be estimated by kernel density estimators of the same form. To obtain the MPHDE of $\theta$, we first profile the unknown nuisance parameter $f$ out by minimizing the sum of the squared Hellinger distances for the two samples; in the last equality of (5) we represent the profiled estimate $\hat f$ as a functional $T$ that depends only on $\hat f_0$ and $\hat f_1$.
As there is no explicit expression for the solution to the optimization in (5), $\hat\theta$ has to be calculated numerically. In this manuscript, the computation was implemented with the R function nlm, using the medians of the $X_i$ and the $Y_j$ as the initial values of $\theta_0$ and $\theta_1$, respectively. The numerical optimization leads to satisfactory results in our simulation and data application studies, and all runs successfully achieved convergence. The mixture model is identifiable if $\rho_0 \in (0, 0.5) \cup (0.5, 1)$; if $f$ is unimodal, then the mixture model is identifiable even when $\rho_0 = 0.5$. Therefore, identifiability is not a problem for the MPHDE, and we will assume from now on, for the sake of simplicity, that the mixture model is identifiable.
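The paper implements this optimization with R's nlm; the sketch below is a loose Python analogue, not the authors' code. It uses the pooled kernel estimate as a stand-in for the profiled $\hat f$, the sum of squared Hellinger distances as the objective, and the sample medians as starting values, as described above; the grid, kernel bandwidth choice and minimizer are our own.

```python
# A rough Python analogue of the MPHDE computation described above
# (the paper uses R's nlm); grid, kernel and minimizer are our choices.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gaussian_kde

def mphde(x, y, grid=np.linspace(-10.0, 10.0, 2001)):
    """Minimum profile Hellinger distance estimate of (theta0, theta1)."""
    dx = grid[1] - grid[0]
    f0_hat, f1_hat = gaussian_kde(x), gaussian_kde(y)

    def objective(theta):
        t0, t1 = theta
        # Pooled kernel density estimate, used here as a stand-in for
        # the profiled f-hat (a functional of f0-hat and f1-hat).
        f = gaussian_kde(np.concatenate([x - t0, y - t1]))(grid)
        g0 = f0_hat(grid + t0)   # estimate of f via the first sample
        g1 = f1_hat(grid + t1)   # estimate of f via the second sample
        # Sum of squared Hellinger distances, approximated on the grid.
        h0 = np.sum((np.sqrt(f) - np.sqrt(g0)) ** 2) * dx
        h1 = np.sum((np.sqrt(f) - np.sqrt(g1)) ** 2) * dx
        return h0 + h1

    start = np.array([np.median(x), np.median(y)])  # as in the paper
    return minimize(objective, start, method="Nelder-Mead").x

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 20)   # first sample, theta0 = 0
y = rng.normal(1.0, 1.0, 20)   # second sample, theta1 = 1
print(mphde(x, y))
```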
Remark 3. For the one-sample location model $f(\cdot-\theta)$, the Hellinger distance is between the location model, involving both $f$ and $\theta$ together, and its nonparametric estimation. For this two-sample model, in order to use the information about the nuisance parameter $f$ contained in both the first and second samples, the Hellinger distance is between $f$ and an estimate of it that involves the nonparametric density estimates and the location parameters of interest.
Asymptotic Properties
In this section, we discuss the asymptotic distribution of the MPHDE $\hat\theta$ given in (5) for the two-sample semiparametric location-shifted model (1). Note that $\hat\theta$ given in (5) is slightly different from the MPHDE defined in Wu and Karunamuni (2015) for general semiparametric models, in the sense that the former incorporates the model assumption in the nonparametric estimation of $f$, while the latter uses a completely nonparametric estimation of $f$ that does not depend on the model at all. In this sense, we cannot apply the asymptotics obtained in Wu and Karunamuni (2015) to our model (1).
Instead, we will directly derive below the existence, consistency and asymptotic normality of $\hat\theta$. Let $\mathcal{F}$ be the set of all densities with respect to (w.r.t.) Lebesgue measure on the real line. We first give, in the next theorem, the existence and uniqueness of the MPHDE $\hat\theta$.
The following theorem, a consequence of Theorem 1, gives the consistency of the MPHDE $\hat\theta$ defined in (5).
Theorem 2. Suppose that the kernel $K$ in (3) and (4) is absolutely continuous with compact support and bounded first derivative, and that the bandwidth $b_n$ satisfies $b_n \to 0$ at a suitable rate. Then the MPHDE satisfies $\hat\theta \to \theta$.
The next theorem, Theorem 3, gives an expression for the difference $\hat\theta - \theta$, which will be used to establish the asymptotic normality of $\hat\theta$ in Theorem 4.
Theorem 3. Assume that the conditions in Theorem 2 are satisfied, and suppose further that $f$ has a uniformly continuous first derivative. Then the expansion (6) holds. With (6) and some regularity conditions, we can immediately derive the asymptotic distribution of $\hat\theta - \theta$, given in the next theorem.
Simulation Studies
We assess the empirical performance of the MPHDE proposed in Section 2 for the two-sample location-shifted model. Five hundred simulations are run for each parameter configuration. We consider the parameter setting $(\theta_0, \theta_1)^\top = (0, 1)^\top$ and simulate four different distributions for $f(x)$: normal, Student's t, triangular and Laplace. We set the standard deviation to 1 for the normal distribution and the degrees of freedom to 4 for the t distribution. The simulation results shown in Figure 1 are for $\theta_0$, $n_0 = n_1 = 20$, and the case in which the first sample is contaminated. The results for $\theta_1$, for $n_0 = n_1 = 50$, and for the case in which the second sample is contaminated are very similar to those in Figure 1 and are thus omitted to save space.
Figure 1 presents the average α-IFs over 500 simulation runs for the MPHDE, MLE and LSE of $\theta_0$ under the normal, t, triangular and Laplace distributions. Regardless of the population distribution, the α-IFs of the MPHDE are bounded and converge to the same small constant as the outlying observation grows in magnitude on either side, while the α-IFs of the MLE and LSE are in general unbounded. Compared to the MLE and LSE methods, the MPHDE therefore has slightly lower efficiency, but this limitation is compensated by its excellent robustness. In summary, the MPHDE method always produces reasonable estimates whether or not the data are contaminated, whereas the MLE and LSE methods lead to significantly biased estimates under contamination.
Data Applications
In this section, we demonstrate the use of the proposed MPHDE method by analyzing breast cancer data collected in Calgary, Canada (Feng et al., 2016). Breast cancer is the most common cancer and the second leading cause of cancer death among females in North America.
Existing studies suggest that it would be more informative to use some protein expression levels as indicators of biological behavior (Feng et al., 2015). These biomarkers could reflect genetic properties in cancer formation and cancer aggressiveness. Our dataset has 316 patients diagnosed with breast cancer between 1985 and 2000. Two biomarkers of interest measured on these patients are Ataxia telangiectasia mutated (ATM) and Ki67. ATM is a protein that supports the maintenance of genomic stability. Compared with normal breast tissue, ATM can be significantly reduced in tissue with breast cancer. Ki67 is a protein expressed exclusively in proliferating cells and is often used as a prognostic marker in breast cancer.
Let $\theta^{(1)}$ and $\theta^{(2)}$ denote the location parameters in the distributions of the protein expression levels of the ATM and Ki67 biomarkers, respectively. Our analysis focuses on the comparison of the protein expression levels across both cancer stages (Stage) and lymph node (LN) status. To compare the two biomarkers ATM and Ki67, we calculate the MPHDEs $\hat\theta_0^{(k)}$ and $\hat\theta_1^{(k)}$ for both $k = 1$ and $k = 2$. The parameter estimates (Est.), estimated standard errors (SE), 95% confidence intervals (CI) and p-values are reported in Table 3.
Concluding Remarks
In this paper, we propose the MPHDE for inference in the two-sample semiparametric location-shifted model. Compared with the commonly used least-squares and maximum likelihood approaches, the proposed method leads to robust inferences. Simulation results demonstrate satisfactory performance, and the analysis of the breast cancer data exemplifies its utility in practice.
Appendix
The proofs of Theorems 1, 2, 3 and 4 are presented in this section. The techniques used in the proofs are similar to those in Karunamuni and Wu (2009).
Remark 3. Distributions satisfying $\int f''(x)\,\mathrm{d}x = 0$ include those with support on the whole real line, such as the normal and t distributions. The distributions satisfying $\int f''(x)\,\mathrm{d}x \neq 0$ include those with finite support whose first derivative, evaluated at the boundary of the support, is non-zero.
Remark 4. If the two samples in (1) are actually a single sample from the mixture $\rho_0 f(\cdot - \theta_0) + \rho_1 f(\cdot - \theta_1)$ with known classification for each data point, then by comparing the lower bound of the asymptotic variance described in Wu and Karunamuni (2015) with the results in our Theorem 4, we can conclude that the proposed MPHDE $\hat\theta$ defined in (5) is efficient, in the semiparametric sense, for any $f$. In addition, if $\int f''(x)\,\mathrm{d}x = 0$, then this semiparametric model is an adaptive model and the proposed MPHDE $\hat\theta$ is an adaptive estimator.
For the triangular distribution we set $c = 1$, and the Laplace distribution has density function $f(x) = \frac{1}{2b}\exp(-|x|/b)$ with $b = 1$. The bandwidth is chosen as $b_n = n^{-1/5}$, in accordance with the bandwidth requirement in Theorem 4. The biweight kernel $K(t) = \frac{15}{16}(1-t^2)^2$ for $|t| \le 1$ is employed in the simulation studies. We consider both the smaller sample sizes $n_0 = n_1 = 20$ and the larger sample sizes $n_0 = n_1 = 50$. As a comparison, we also report the least-squares estimation (LSE) and maximum likelihood estimation (MLE). For the two-sample location-shifted model (1) under our consideration, a simple calculation shows that the LSEs of $\theta_0$ and $\theta_1$ are essentially the sample means $\bar X$ and $\bar Y$, respectively. With $f$ assumed known, a straightforward calculation shows that the MLEs of $\theta_0$ and $\theta_1$ are the sample means in the normal case and the sample medians in the Laplace case, while there is no explicit expression for the MLEs for the Student's t and triangular populations. Tables 1 and 2 display the simulation results of the MPHDE, LSE and MLE methods for sample sizes $n_0 = n_1 = 20$ and $n_0 = n_1 = 50$, respectively. In the tables, the term Bias represents the average of the biases over the 500 repetitions; the terms RMSE and SE are the averages of the root mean squared errors and the empirical standard errors, respectively; and the term CR represents the empirical coverage rate of the 95% confidence intervals. From Tables 1 and 2 we can see that all three estimation approaches have fairly small bias. In terms of standard errors, the MPHDE performs somewhat worse than the LSE and the MLE regardless of sample size. To investigate the robustness of the proposed MPHDE and make comparisons, we examine the performance of the three methods under data contamination. In this simulation, the data from model (1) are intentionally contaminated by a single outlying observation. This is implemented, say for $n_0 = n_1 = 20$, by replacing the last observation $X_{20}$ with an integer $z$ varying from $-20$ to $20$. To quantify the robustness, the α-influence function (α-IF) discussed by Lu et al. (2003) is used. The α-IF for parameter $\theta_i$, $i = 0, 1$, is defined as $\alpha\text{-IF}(z) = n_i(\hat\theta_i^{(z)} - \hat\theta_i)$, where $\hat\theta_i^{(z)}$ denotes the estimate based on the contaminated data with outlying observation $X_{20} = z$ and $\hat\theta_i$ denotes the estimate based on the uncontaminated data.
Here $\theta_0^{(k)}$ and $\theta_1^{(k)}$ ($k = 1, 2$) denote the location parameters in the distributions of the protein expression level for Stage I and Stage II/III patients, respectively. Regarding LN status, $\theta_0^{(k)}$ and $\theta_1^{(k)}$ ($k = 1, 2$) denote the location parameters in the distributions of the protein expression level for negative LN (LN-) and positive LN (LN+) patients, respectively. Figure 2 displays the boxplots of the ATM and Ki67 expression levels across both cancer stages and LN statuses. From this figure we do see differences in the locations of both the ATM and Ki67 variables across both cancer stages and LN statuses, especially for Ki67, given the smaller variation in its expression level.
Figure 1: The average α-IFs under (a) the normal distribution, (b) Student's t distribution, (c) the triangular distribution and (d) the Laplace distribution. The thin solid line represents the zero horizontal baseline, and the thick solid, dot-dashed and dashed lines represent the MPHDE, LSE and MLE approaches, respectively.
data.The α-IF is calculated by using the change in the estimate before and after contamination divided by the contamination rate, i.e. 1/ni.We can similarly calculate the α-IF when outlying observations contaminate the second sample.The simulation results in Figure
Table 3: Breast cancer data analysis results based on MPHDE.
Based on the results in this table, ATM has a higher expression level in the negative LN group than in the positive LN group (p = 0.019), while Ki67 has a lower expression level in the negative LN group than in the positive LN group (p < 0.001).
"Mathematics"
] |
Elemental Composition of Skeletal Muscle Fibres Studied with Synchrotron Radiation X-ray Fluorescence (SR-XRF)
Diseases of the muscle tissue, particularly those disorders which result from the pathology of individual muscle cells, are often called myopathies. The diversity of the content of individual cells is of interest with regard to their role in both biochemical mechanisms and the structure of muscle tissue itself. These studies focus on a preliminary analysis of the differences that may occur between diseased tissues and tissues recognised as a reference group. To this end, 13 samples of biopsied human muscle tissue were studied: 3 diagnosed as dystrophies, 6 as (non-dystrophic) myopathies and 4 regarded as references. From these sets of muscle biopsies, 135 completely measured muscle fibres were separated altogether and subjected to investigation using synchrotron radiation X-ray fluorescence (SR-XRF). The muscle fibres were analysed in terms of the composition of elements such as Br, Ca, Cl, Cr, Cu, Fe, K, Mn, P, S and Zn. The performed statistical tests indicate that all three groups (dystrophies, D; myopathies, M; references, R) show statistically significant differences in their elemental compositions, and the greatest impact, according to the multivariate discriminant analysis (MDA), comes from elements such as Ca, Cu, K, Cl and S.
Introduction
Pathological conditions that affect skeletal muscles, commonly termed myopathies, represent a broad spectrum of clinical and pathological changes. Some of them, such as muscular dystrophies, have a genetic background. Generally, they can be divided into two categories: primary and secondary. The first category encompasses a panoply of genetic disorders, including an important group collectively called dystrophies and other inherited myopathies; in the second category, one can especially distinguish disorders of the nervous system (e.g., leading to so-called denervation or neurogenic atrophy of muscle), toxic injury (including medication) and different metabolic, endocrine and immunological disorders. However, in many cases, the division into primary and secondary myopathies is difficult, unclear and even artificial; for instance, inflammatory myopathies could be regarded as "primary", with so-called inclusion body myositis (IBM) being an example, but in many myositides there is evident overlap with essentially "non-muscular" conditions such as connective tissue diseases. Meanwhile, in many muscular dystrophies, not only the skeletal muscles are affected, and the clinical picture of the disease includes disturbances of other organs, such as cataracts (in myotonic dystrophy); so-called mitochondrial myopathies, in turn, may involve multiple organs. From the clinical point of view, it is obviously important to know whether the disease is essentially "muscular" or rather a "multi-organ" condition. Diseased muscle tissue can show numerous morphological (microscopic) changes, though the most typical (and encountered in almost all "pathological" biopsies) is muscle fibre atrophy, which is represented by the diminishing of the muscle fibre cross-section in a histological slide. Being so unspecific a common denominator of divergent muscle pathologies, its molecular mechanisms are most probably not the same in different types of myopathies.
This opens the possibility for investigation of the elemental and different biomolecular patterns of muscle atrophy and, in broader aspects, other pathological changes in diseased muscle.
The diagnosis of a muscle disease based on biopsy is, of course, not limited to investigation of the morphological changes observed under the microscope, but also involves several different methods of staining, including histochemical stainings and multiple immunohistochemical stainings with a broad spectrum of antibodies against different molecular constituents of muscle cells and of cells that may "invade" muscle, such as lymphocytes. It is worth noting that the interpretation of such multi-aspect pathology is very demanding and may be biased to some degree by subjectivity in assessment and conclusions, and the whole process of diagnosis is difficult; what is more, one also has to take into account the clinical picture of the disease [1][2][3][4].
Since, as mentioned above, myopathies are diseases of muscle fibres that lead to their decline and structural changes, the biomolecular and elemental composition of the fibres may change as well. Those changes may reflect the biochemical processes taking place in the fibres, so investigating them may improve the understanding of disease development at its early stages.
To the best of our knowledge, there is a lack of research related to measuring differences in the elemental composition of diseased human muscle tissue. Therefore, for the first time, we herein present the use of the synchrotron radiation X-ray fluorescence (SR-XRF) method to examine whether there are differences in elemental composition between muscle fibres affected by disease and apparently healthy fibres, i.e., those that at least do not show microscopic features of pathology. The SR-XRF technique is a well-known, gold-standard multi-elemental analytical method that enables the simultaneous tissue micro-imaging of chemical elements at trace concentrations (~mg/kg). A great advantage of the technique in muscle studies is its ability to simultaneously determine chemical elements that play a key role in important processes, such as ion transport (Cl, K and Ca), energy metabolism (P and Fe) and enzymatic reactions (Fe, Cu, Zn and Mn), elements that are simply components of biological macromolecules (P and S), and those whose importance for skeletal muscle function is less well understood (Cr and Br). Therefore, all the elements that could be determined using the SR-XRF technique were taken into account in the analysis [5][6][7].
Results and Discussion
The elemental intensity maps (Figure 1) show that the spatial distribution of elements in the muscle fibres is very heterogeneous, which can have a large impact on the performed analyses. For this reason, the fibres that were partially outside the scanning area were excluded from the statistical analysis. The overall dataset included 135 isolated and irregular fibres, and 11 elements were analysed in each fibre. For the muscle fibres thus selected, the values of the normalised peak areas of each element were used in further statistical analysis. First, the data were evaluated using the Shapiro-Wilk test to check their agreement with the normal distribution. Based on the obtained results, it was found that the data did not follow a normal distribution; therefore, the further statistical analysis was based on non-parametric tests. For all analysed elements (without separation into types), the cross-correlation was checked using the Spearman correlation analysis (Figure 2). All the obtained correlation coefficients were considered statistically significant if the parameter p had a value < 0.05. A surprisingly large proportion of the elements showed a very strong linear relationship, with correlation coefficients above 0.9. All the relationships turned out to be positive, indicating overlapping elevated concentrations. The strongest dependence (values above 0.95) was shown by paired elements such as P and S, P and Fe, P and Zn, S and Fe, S and Zn, Mn and Cr, Cr and Cu, and Cu and Mn.
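A minimal sketch of this correlation screening (our own illustration, with a placeholder data matrix standing in for the measured intensities) could look as follows:

```python
# Sketch of the Spearman cross-correlation screening described above;
# `intensities` is a placeholder (n_fibres x n_elements) matrix.
import numpy as np
from scipy.stats import spearmanr

elements = ["Br", "Ca", "Cl", "Cr", "Cu", "Fe", "K", "Mn", "P", "S", "Zn"]
rng = np.random.default_rng(0)
intensities = rng.lognormal(size=(135, len(elements)))  # stand-in data

rho, p = spearmanr(intensities)          # pairwise rank correlations
significant = p < 0.05                   # significance mask at p < 0.05
strong = significant & (np.abs(rho) > 0.9)
for i, j in zip(*np.where(np.triu(strong, k=1))):
    print(f"{elements[i]}-{elements[j]}: rho = {rho[i, j]:.2f}")
```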
The Kruskal-Wallis test was performed in order to check whether there are statistically significant differences between the groups for the analysed elements. In order to eliminate outliers, the two-sided Tukey test [8] was used, with outlying values replaced by the mean. The test results were considered statistically significant when the p parameter was 0.05 or less; above this value, the result was considered statistically insignificant. The test results showed that for every element, at least one group differed significantly. A post hoc test was performed in order to verify which groups show differences. For all groups, elements such as P, K, Ca and Fe were statistically significant in differentiating the fibres. However, for the myopathy-reference pair, the elements S, Cl, Cr, Mn, Cu, Zn and Br were statistically insignificant. The results of the intergroup comparison are presented for the individual elements in Figure 3. For all the elements presented, the values of the normalised intensities of characteristic X-rays are much lower for the fibres from dystrophic cases. Here, it is worth remembering that dystrophies are inherited muscle diseases. The mutated genes associated with dystrophies are numerous, and the mode of inheritance may be autosomal dominant, autosomal recessive or X-linked. Furthermore, the encoded proteins may be structural constituents of the cellular membrane of the myofibre (sarcolemma) or of the membrane that envelops the nuclei of muscle fibres (nucleolemma), or they may play other particular functions. Mutation leads either to a total or partial lack of expression of the encoded protein, or to its malfunction. The ultimate result of the mutation is therefore either structural or functional disintegration of the muscle fibre, though the dynamics of the changes differ greatly. The loss of the investigated elements in muscle fibres in dystrophies may suggest a common pathway of pathological processes in dystrophies or (at least) underscore the level of cellular metabolic derangement which takes place during the disease. Each element plays a specific role in biological mechanisms. Sulphur deficiency makes it difficult to absorb certain minerals. Chlorine deficiency causes muscle cramps and a weakening of muscle strength. Elements such as potassium and calcium are responsible for nerve conduction and muscle contraction; their deficiency may therefore cause problems with mobility. Iron is an essential component of myoglobin, which supplies oxygen to the muscles to produce energy. Copper deficiency causes muscle weakness and anaemia. Zinc, an antioxidant that affects the development and regeneration of muscle tissue, also stabilises the protein structures from which fibres are made [9][10][11][12][13][14]. In order to find out which elements have the greatest influence on the differentiation, a multivariate discriminant analysis (MDA) was performed for the analysed fibres. Log transformation [15] was used to normalise the data. From each group, 70% of the fibres, selected at random, were used to create the model. Based on the partial Wilks' lambda (Table 1), it was concluded that elements such as Ca, K, Cr and P made the greatest contribution to the separation of the measured fibres.
S, Mn, Fe, Zn and Br were excluded from the discrimination model due to the overly high value of the p parameter (p > 0.05). The lower the value of the partial Wilks' lambda, the greater the contribution of the element to the separation of the fibres. The discriminant functions, which are linear combinations of the characteristic X-ray intensities of the elements used in the model, were also obtained.
For the first function, the greatest influence came from P and Ca, while for the second function, the greatest influence came from K and Cr. A graphic representation of the data distribution in the space of the discriminant functions is presented in Figure 4.
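As a loose illustration of this MDA step (the paper used the STATISTICA program, not the workflow below), scikit-learn's linear discriminant analysis can reproduce the general procedure: log-transform, a 70/30 split per group, fitting, and inspecting the separation. The data and group sizes here are placeholders matching only the counts reported above.

```python
# Sketch of the discriminant-analysis workflow described above
# (the paper used STATISTICA); data and labels are placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.lognormal(size=(135, 11))             # stand-in element intensities
y = np.repeat(["D", "M", "R"], [17, 64, 54])  # group sizes as in the paper

X_log = np.log(X)                             # log transform to normalise
X_tr, X_te, y_tr, y_te = train_test_split(
    X_log, y, train_size=0.7, stratify=y, random_state=0)

lda = LinearDiscriminantAnalysis(n_components=2).fit(X_tr, y_tr)
print("training accuracy:", lda.score(X_tr, y_tr))   # cf. 93% in Table 2
print("test accuracy:", lda.score(X_te, y_te))       # cf. 94% in Table 3
scores = lda.transform(X_te)   # coordinates in the two discriminant functions
```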
In order to verify the created model, an inverse analysis was performed for the data used to develop the model, and the results were compared with the original diagnosis. Overall, 93% of the fibres were matched to the category in which they were diagnosed (Table 2). Of the fibres diagnosed as myopathy, six were assigned to the reference group; from the reference group, one was assigned to myopathy. Meanwhile, in the dystrophy group, all fibres showed the features of their assigned category. The fibre belonging to the reference group but assigned to the myopathy group came from a sample whose remaining fibres were unequivocally classified as myopathy. Of the six fibres diagnosed as myopathy but assigned to the reference group, four came from one sample whose remaining fibres were ambiguously assigned to myopathy. The other two belonged to a different sample in which they were the only fibres measured; they were unequivocally assigned to the reference group, which may indicate initial disease processes requiring further histopathological investigation. In order to verify the correctness of the created model, an inverse analysis was also performed for the 30% of randomly selected fibres that were not involved in creating the model, and the results were again compared with the original diagnosis (Table 3). For both the dystrophy and reference groups, the model assigned fibres to the appropriate group with 100% correctness. In the case of myopathy, two fibres showed reference group characteristics, which gives 87% correctness. Thus, the overall correctness of assigning fibres to the appropriate group was 94%.
Fibres assigned as references but diagnosed as myopathy were from the same sample as the four fibres involved in model building that were also classified as reference. Here, it must first be considered that, when assigning fibres to one of the three given categories (D, M and R), all the measured fibres in a sample were assigned to the category given by the diagnosis. Not all the fibres that constitute the tissue in a given sample are in the same phase of pathological change: some can present advanced pathological changes, and others only initial ones. Additionally, the samples presented as the reference group came from patients suspected of having pathological changes, though the histopathological diagnosis did not support the clinical suspicions. However, the changes that have occurred in the cells could be very slight and at a very early stage, and thus not discernible in microscopic examination. The discovered changes may be more significant after a kind of preselection of the analysed fibres, which in turn may introduce a bias of some subjectivity. Nevertheless, such a non-random approach deserves to be tested in further analyses. At this stage of the study, factors such as medications, the possible presence of comorbidities, genetic factors, diet, etc., were not taken into account. Furthermore, it is necessary to keep in mind that samples of human origin can be affected by various factors which unfortunately cannot be eliminated. This particularly relates to one of the more difficult aspects, namely the patients' use of medications and, especially, diet and eating habits. In this case, the biopsies were taken under the regime of a short admission (hospitalisation) of the patient for a period of 1-2 days. Corticosteroid and other immunosuppressive therapy should, if possible, be discontinued at least one month prior to biopsy. However, medications critical to the patient's health (e.g., cardiovascular) cannot be stopped; this situation can occur in particular for elderly patients with comorbidities. Therefore, it is difficult to speak of a true control group for samples of human origin. For this reason, the fibres compared with those assigned as affected by disease changes were the reference group, in which no fibres were assigned to the myopathy or dystrophy group.
Materials and Methods
The tissue material used in this study was obtained from surgical biopsies from patients with suspected muscle changes and then processed according to the standard neuropathological protocol used for muscle biopsy at the Department of Neuropathology at Jagiellonian University Medical College in Krakow (DN JUMC). The biopsy specimen used for elemental micro-imaging was oriented on a holder so as to cut it in the plane strictly perpendicular to the long axis of the muscle fibres and then shock-frozen in isopentane previously cooled in liquid nitrogen. The muscle tissues prepared in this way were cut into slices of 8 µm thickness using a cryomicrotome at −20 °C. Samples intended for the elemental analysis were placed on silicon nitride windows (thickness: 200 nm; size: 2 × 2 mm²) (Norcada, Canada), while adjacent consecutive tissue slices were placed on microscope slides and stained with haematoxylin and eosin (H&E) for further histopathological examination. The samples prepared on the silicon nitride windows were freeze-dried at −80 °C. A schematic overview of the sample preparation protocol is presented in Figure 5. All the cases studied were diagnosed histopathologically in DN JUMC. The research was approved by the Jagiellonian University Medical College Ethics Committee (approval number: 1072.6120.249.2020).
For each sample, the areas of interest in which atrophic or hypertrophic fibres were present were selected. In the case of samples from the reference group, attention was paid to places where the cell sizes were similar, and also to whether or not the fibres were within one so-called fascicle.
The experiment was carried out at the I18 beamline at the DIAMOND Light Source. The beam was shaped to a size of 5 × 5 µm² by Kirkpatrick-Baez mirrors. The excitation energy was 13.5 keV. The recorded maps had different sizes depending on the number of measured fibres and their distribution. The scan step was 5 µm, and an acquisition time of 4 s per pixel was used. To detect the characteristic radiation, two four-segment Vortex ME-4 SDD detectors were used: one in front of the sample at 45/45 deg geometry and one behind the sample at 45/−60 deg geometry (Figure 6). All measurements were performed in a helium atmosphere.
For the SR-XRF investigation, and according to the diagnosis, the samples were divided into three groups: dystrophy (D), myopathy (M) and a reference group (R). "Myopathy" here denotes muscle pathologies other than muscular dystrophy. The reference group consisted of samples taken from patients with suspected muscle disease for whom the pathomorphological investigation did not reveal evidence of pathology. All samples were collected from patients of both genders (male: m; female: f) and of different ages (41 ± 4 y), distributed as follows: dystrophies, N = 3: f (age: 56 y), m (age: 46 y) and m (age: 23 y); myopathies, N = 6: m (age: 30 y), m (age: 54 y), f (age: 32 y), f (age: 32 y), m (age: 32 y) and m (age: 30 y); reference group, N = 4: m (age: 50 y), m (age: 42 y), f (age: 35 y) and f (age: 75 y). Overall, 24 maps from 13 samples were used for the measurements, from which the areas of 135 fibres were separated. Of them, 17 were assigned as fibres affected by dystrophy, 64 as fibres affected by "myopathy" and 54 to the reference group. Due to the relatively small number of cases constituting the study group, no additional breakdown by age, gender, etc., was performed in the analysis.
Measurement data were saved in NeXus files. Then, the PyMca program [16] was used to obtain the ROI image (an image based on the integral over the entire spectrum), on the basis of which a binary image was created for each map. In order to binarise the image, the maps were pre-processed with a Gaussian blur and then the Otsu method was applied using the Python environment [17]. The obtained images were processed into a mask to automate the allocation of pixels to a specific, irregularly shaped fibre. Based on the binary image, each pixel was assigned a specific number depending on its location. The intercellular space, i.e., the endomysium, was marked with the number 0; the fibres that were not completely measured were left with the number 1; and the pixels that made up the completely measured fibres were assigned the subsequent natural numbers. The created mask was used to isolate the pixels that make up each irregularly shaped fibre. For this purpose, an original script in the Python environment was created. Using the NeXus files and the mask, the script extracted the spectra assigned to a specific fibre, summed them up and saved the final spectrum in a separate ASCII file. The obtained spectra were fitted using the PyMca program in order to obtain the net area under the peaks of the individual elements Br, Ca, Cl, Cr, Cu, Fe, K, Mn, P, S and Zn. Those values were normalised in two steps: to the Compton scatter peak area [18], to minimise the influence of sample thickness and density variations, and to the number of pixels corresponding to each fibre. An exemplary spectrum is shown in Figure 7. The data prepared in this way were then subjected to statistical analysis in the STATISTICA program. Figure 8 shows graphically the individual steps of creating the mask.
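The mask-building steps described above (Gaussian blur, Otsu thresholding, labelling of connected fibre regions, per-fibre spectrum summation) can be sketched roughly as follows. The actual analysis used PyMca plus custom scripts; the file name, NeXus keys and layout below are placeholders.

```python
# Rough sketch of the fibre-mask pipeline described above; the actual
# analysis used PyMca and custom scripts, and paths/keys are placeholders.
import numpy as np
import h5py
from scipy.ndimage import gaussian_filter, label
from skimage.filters import threshold_otsu

with h5py.File("map.nxs", "r") as f:                 # hypothetical NeXus file
    spectra = f["entry/data/spectra"][...]           # (ny, nx, n_channels)

roi = spectra.sum(axis=2)                            # integral over spectrum
smoothed = gaussian_filter(roi, sigma=1.0)           # pre-process: Gaussian blur
binary = smoothed > threshold_otsu(smoothed)         # Otsu binarisation

labels, n_fibres = label(binary)                     # 0 = endomysium background
for k in range(1, n_fibres + 1):
    mask = labels == k
    fibre_spectrum = spectra[mask].sum(axis=0)       # summed per-fibre spectrum
    n_pixels = mask.sum()                            # pixel-count normalisation
    # (the paper additionally normalises to the Compton scatter peak area)
    np.savetxt(f"fibre_{k:03d}.txt", fibre_spectrum / n_pixels)
```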
Conclusions
In summary, the above experiment, together with all the analyses performed, showed statistically significant differences in the elemental composition of the fibres classified as the reference group (R) and those representing diseased tissue: dystrophy (D) and myopathy (M). Taking into account statistical analyses such as the Spearman correlation and the multivariate discriminant analysis, the greatest impact on the differentiation of fibres can be attributed to elements such as Ca, K, Cr and P, which notably play an important role in the structure of fibres and in nerve conduction. However, it must be taken into account that all fibres measured for a given sample were assigned to a category based on the diagnosis of the whole sample; some of them could therefore have been incorrectly assigned to a different category, as the MDA results may suggest. For this reason, some fibres may have been analysed at an advanced stage of the disease, and some only at its beginning. Further analyses after pre-selecting the fibres, as well as taking into account factors such as diet, age, gender and comorbidities, may reveal larger differences. Regarding our results, it seems that SR-XRF could be an important adjunct in the diagnosis of muscle disease, though it requires further study.
Institutional Review Board Statement: The research was approved by the Jagiellonian University Medical College Ethics Committee (approval number: 1072.6120.249.2020).
Informed Consent Statement:
The samples used in the experimental measurements are archival samples that remained after the whole process of neuropathological diagnosis had been completed and for which the patient's consent was not required. All the procedures were in agreement with relevant guidelines and regulations.
Data Availability Statement: Data supporting this study cannot be made available due to ethical reasons. The data include the personal data of the patients.
Conflicts of Interest:
The authors declare no conflict of interest.
"Materials Science",
"Physics",
"Medicine"
] |
Optimal relationship between power and design-driving loads for wind turbine rotors using 1-D models
We investigate the optimal relationship between the aerodynamic power, thrust loading and size of a wind turbine rotor when its design is constrained by a static aerodynamic load. Based on 1-D axial momentum theory, the captured power $\tilde P$ for a uniformly loaded rotor can be expressed in terms of the rotor radius $R$ and the rotor thrust coefficient $C_T$. Common types of static design-driving load constraints (DDLCs), e.g., limits on the permissible root-bending moment or tip deflection, may be generalized into a form that also depends on $C_T$ and $R$. The developed model is based on simple relations and makes exploration of the overall parameters possible in the early stage of the rotor design process. Using these relationships to maximize $\tilde P$ subject to a DDLC shows that operating the rotor at the Betz limit (maximum $C_P$) does not lead to the highest power capture. Rather, it is possible to improve performance with a larger rotor radius and lower $C_T$ without violating the DDLC. As an example, a rotor design driven by a tip-deflection constraint may achieve 1.9 % extra power capture $\tilde P$ compared to the baseline (Betz limit) rotor. This method is extended to the optimization of rotors with respect to annual energy production (AEP), in which the thrust characteristics $C_T(V)$ need to be determined together with $R$.
Introduction
From the inception of the wind energy industry, it has been a clear trend that rotor sizes have been increasing. However, as discussed in Sieros et al. (2012), increasing the rotor size is not a clear way to decrease the cost of energy (CoE), since the rotor weight (closely related to rotor cost) will always scale with a larger exponent than the increase in power does. It is, therefore, argued that the lower CoE that has been achieved is mostly due to improvements in technology. The turbine is structurally designed to carry loads coming from aerodynamics (steady or extreme) and the self-weight. Therefore, lowering the loads should lead to a lighter blade. The steady aerodynamic load is applied to extract power, and increasing the load leads to greater power production until the maximum power coefficient (max $C_P$) is reached. Increasing the load should thus lead to a heavier blade, but it also leads to greater power production. This goes to show that understanding the relationship between loading, power production and structural response is very important for designing the most cost-effective turbine. It follows a trend in recent years towards the belief that wind turbine optimization should take a more holistic approach, with concepts like multidisciplinary design analysis and optimization (MDAO) and systems engineering (Bottasso et al., 2012; Zahle et al., 2015; Fleming et al., 2016; and Perez-Moreno et al., 2016), where all of the parts of the turbine design that affect the cost should be taken into account along with the overall objective
of minimizing the CoE. Some of these related works focus more on how the rotor loading affects the power and structural response. One of the concepts that comes out of this is the so-called low-induction rotor (LIR), in which the velocity induction at the rotor plane is lower than the value that maximizes the power coefficient. The concept was introduced by Chaviaropoulos and Sieros (2014), and it comes out of the optimization of annual energy production (AEP) by allowing the rotor to grow while constraining the flap root bending moment to be the same as for some baseline. They state that the method can increase AEP by 3.5 % with a 10 % increase in the rotor radius, thereby showing that the LIR can increase AEP while keeping the same flap root bending moment. This agrees with Kelley (2017), who allowed for a change in the radial loading, resulting in a 5 % increase in AEP with a radius increase of 11 %. It was also investigated by Bottasso et al. (2015), who tested the potential of using the LIR both for AEP improvements with load constraints and as a cost-optimized rotor. They found the same results as the previous two investigations: the LIR can improve AEP, but when they consider the CoE they find that the LIR is not cost effective, meaning that the additional cost of extending the blade is not compensated by the increase in power. This conclusion is opposed to the conclusion made by Buck and Garvey (2015b), who set out to minimize the ratio between capital expenditures (CapEx) and AEP. They arrive at the LIR as the optimal solution for minimizing CapEx/AEP, which is taken as a measure of CoE. Overall, it seems that the LIR can increase AEP while keeping the same load as a non-LIR baseline, but it is not clear whether the LIR is a cost-effective solution.
Another concept that is relevant in the context of this paper is thrust clipping (also known as peak shaving or force capping).For turbines, it is often the case that the maximum thrust is reached just before reaching the rated power, resulting in a so-called thrust peak.When using thrust clipping, this peak is lowered at the cost of power.It is used with many contemporary turbines for load alleviation but is often added as a feature after the design process.Buck and Garvey (2015a) made a design study in which they found that lowering the maximum thrust by 11 % leads to a 9 % reduction in material used, at the cost of 0.1 % less lifetime energy, resulting in an overall reduction of 0.2 % in the cost of energy.This shows that including thrust clipping in the design process can lead to a lower CoE.
In this paper, we investigate the relationship between the load, power and structural response of wind turbine rotors. Simple analytical models, based on 1-D aerodynamic momentum theory and Euler-Bernoulli beam theory, are introduced to establish the first-order relationships between these responses. This provides a useful framework for initial rotor design, especially when high-level design parameters such as the rotor radius need to be fixed or there is a need to understand how the load and structural responses will change with rotor size. The effect on the power curve and the related load/structural response of the variation in wind speed is also investigated, which is useful for the initial design of the highly coupled aeroservoelastic rotor design problem.
The relatively simple models used in this paper do not capture the full complexity needed for detailed wind turbine rotor design and should be considered a tool for early-stage rotor design and overall exploration only. For example, the underlying theories (of 1-D aerodynamic momentum and Euler-Bernoulli beams) assume steady-state conditions, while designs are often constrained by load cases that are linked with extreme, unsteady or non-normal operational events, e.g., extreme turbulence, gusts, emergency shutdowns, subsystem faults or parked conditions. This is a limitation of the model developed here, but if there is a relation between the steady-state loads and the extreme loads, which is very likely, then the results are still valid.
As mentioned before, the overall target for current turbine design is to lower the CoE, but a cost model is not used here, which is also a limitation of this study. However, cost models rely on several assumptions made in the design process, such as the prices of components or the composite lay-up of the blades, so a predicted cost will always carry some uncertainty. Instead, load constraints are considered, much like in the above-mentioned LIR example. As was found by Bottasso et al. (2015), a constrained load might not lead to a lower CoE. To accommodate this, a constraint with a fixed mass is used, which is thought to be a better approximation of a fixed cost.
This study is carried out in order to obtain an overview of how the rotor design is fundamentally influenced by different types of aerodynamic loading. Thus, an issue like the self-weight, although important for modern turbines, is not directly included in this study; the static-mass moment especially has an impact on contemporary turbines. It could be included, but it was excluded to keep the study as simple as possible. Further discussion of the limitations and possible improvements of the study is given later in Sect. 4.5.
Theory
This section introduces the variables and the basic relationships used in this paper. It is split into two subsections: Sect. 2.1 introduces the aerodynamic variables, equations and the baseline rotor, while Sect. 2.2 presents the scaling laws used to formulate design-driving load constraints relative to the baseline rotor.
Aerodynamics
The theory underlying this Aerodynamics section is found in Sørensen (2016).
For wind turbine aerodynamics, non-dimensional coefficients are often introduced, among them the rotor thrust coefficient ($C_T$) and the power coefficient ($C_P$):
$$C_T = \frac{T}{\frac{1}{2}\rho V^2 \pi R^2}, \qquad C_P = \frac{P}{\frac{1}{2}\rho V^3 \pi R^2}, \tag{1}$$
where $T$ and $P$ are the rotor thrust and power, respectively; $\rho$ is the air density, $V$ is the undisturbed flow speed and $R$ is the rotor radius. These definitions can be applied to any wind turbine rotor, but in this paper we use a simplified relationship between $C_T$ and $C_P$ derived from classical 1-D momentum theory, which implies an assumption of uniform aerodynamic loading across the rotor plane. The classical equations are often given in terms of the axial induction $a$, defined as $a = 1 - V_\mathrm{rotor}/V$, where $V_\mathrm{rotor}$ is the axial flow speed in the rotor plane. By combining the two classical momentum theory expressions $C_P(a) = 4a(1-a)^2$ and $C_T(a) = 4a(1-a)$ (Sørensen, 2016, p. 11, Eq. 3.8), the relationship between these coefficients is arrived at as
$$C_P(C_T) = C_T\,\big(1 - a(C_T)\big), \qquad a(C_T) = \tfrac{1}{2}\big(1 - \sqrt{1 - C_T}\,\big),$$
where $a(C_T)$ is found by inverting $C_T(a)$ and using the negative solution. A plot of $C_T$ vs. $C_P$ can be seen in Fig. 1. This $C_P(C_T)$ curve is monotonically decreasing in slope and reaches a maximum of $C_P = 16/27$, corresponding to the well-known Betz limit, at $C_T = 8/9$. These monotonicity properties lead to the key observation that a reduction in thrust ($\Delta C_T = 8/9 - C_T$) will not lead to a proportional change in power ($\Delta C_P$). This motivates the investigation in this paper of the trade-off between power and loads.
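A short numerical check of this relation (our own illustration) reproduces the Betz limit:

```python
# Quick numerical check of the 1-D momentum theory relation C_P(C_T).
import numpy as np

def axial_induction(ct):
    """Invert C_T = 4a(1-a), taking the negative root (a <= 1/2)."""
    return 0.5 * (1.0 - np.sqrt(1.0 - ct))

def cp_of_ct(ct):
    """C_P = 4a(1-a)^2 = C_T * (1 - a)."""
    return ct * (1.0 - axial_induction(ct))

ct = np.linspace(0.0, 8.0 / 9.0, 500)
cp = cp_of_ct(ct)
print(cp.max(), 16.0 / 27.0)        # both ~0.5926 (the Betz limit)
print(cp_of_ct(8.0 / 9.0))          # maximum attained at C_T = 8/9
```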
Power capture and annual energy production (AEP)
One way to understand the power yield of a rotor is to consider Eq. (2) as consisting of three separate terms, $P = \big(\tfrac{1}{2}\rho V^3\big)\,\big(\pi R^2\big)\,C_P$: "wind" is the part of the equation that depends on the wind conditions, "size" is the part that depends on the rotor-swept area, and "coefficient" is the part related to the power coefficient, representing the capability of the rotor to extract power from the wind. The combination of Eqs. (2) and (3) provides an expression that captures the last two terms, which are the only ones affected by the design of the turbine:
$$\tilde P = \tilde R^2\, \frac{C_P(C_T)}{C_{P,0}},$$
where $\tilde R = R/R_0$, with $R_0$ being the radius of the baseline rotor. This equation will be referred to as the power capture equation. It shows that power can be changed by changing either the loading ($C_T$) or the rotor radius ($R$), and it will serve as the basic equation when the power capture is optimized for a single design point. When considering turbine design over the range of operational conditions, annual energy production (AEP) is introduced as an integral metric representing the energy produced per year given some wind speed frequency distribution. It can be computed as the power production $P$ weighted by the probability density of wind speeds ($\mathrm{PDF}_\mathrm{wind}$) and multiplied by the period of one year ($T_\mathrm{year}$):
$$\mathrm{AEP} = T_\mathrm{year} \int_{V_\mathrm{CI}}^{V_\mathrm{CO}} P(V)\, \mathrm{PDF}_\mathrm{wind}(V)\, \mathrm{d}V.$$
The wind speed probability distribution $\mathrm{PDF}_\mathrm{wind}$ is described by a Weibull distribution. $V_\mathrm{CI}$ and $V_\mathrm{CO}$ are the cut-in and cut-out wind speeds of turbine operation.
Here they are taken to be $V_\mathrm{CI} = 3\,\mathrm{m\,s^{-1}}$ and $V_\mathrm{CO} = 25\,\mathrm{m\,s^{-1}}$, which are common numbers for modern wind turbines.
In this paper, we use a dimensionless measure of AEP, equivalent to the so-called capacity factor, defined as
$$\widetilde{\mathrm{AEP}} = \frac{\mathrm{AEP}}{T_\mathrm{year}\, P_\mathrm{rated}},$$
where $\tilde V$ is a normalized wind speed given by $V = \tilde V V_0$, and $V_0$ is the wind speed at which the turbine reaches the rated power ($P_\mathrm{rated}$). Throughout this paper it is taken to be $V_0 = 10\,\mathrm{m\,s^{-1}}$. It should further be noted that $\mathrm{PDF}_\mathrm{wind}\,\mathrm{d}V$ is dimensionless, and by nondimensionalizing AEP it also follows that $\mathrm{PDF}_\mathrm{wind}\,\mathrm{d}\tilde V$ is dimensionless. Throughout this paper, $\widetilde{\mathrm{AEP}}$ is calculated using a discretization of the integral computed with the trapezoidal rule over the grid points $\tilde V_i$, where the discretization error was found to become insignificant for $N = 200$.
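The sketch below (our own illustration, with an assumed Weibull scale and shape and a simple baseline-style power curve) evaluates this normalized AEP with the trapezoidal rule:

```python
# Sketch of the normalized AEP (capacity factor) computation described
# above; the Weibull parameters are assumed for illustration.
import numpy as np

def weibull_pdf(v, scale=8.0, shape=2.0):
    return (shape / scale) * (v / scale) ** (shape - 1) * np.exp(-(v / scale) ** shape)

def power_curve(v, v0=10.0):
    """Baseline-style curve: P/P_rated ~ (V/V0)^3 below rated, 1 above."""
    return np.minimum((v / v0) ** 3, 1.0)

v = np.linspace(3.0, 25.0, 200)            # cut-in 3 m/s to cut-out 25 m/s
f = power_curve(v) * weibull_pdf(v)
aep_tilde = np.sum((f[1:] + f[:-1]) * np.diff(v)) / 2.0   # trapezoidal rule
print("capacity factor:", aep_tilde)
```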
Baseline rotor
The work here aims to demonstrate an improved rotor performance compared to a baseline design. This baseline design is chosen to be a turbine operating at the Betz limit below the rated wind speed and keeping a constant power above the rated power.
This choice of baseline mimics the typical practice of designing wind turbines to target operation at the maximum $C_P$ below the rated wind speed. In reality, turbines will not achieve their maximum $C_P$ at $C_T = 8/9$, since losses alter the relationship between $C_T$ and $C_P$, but this does not change the fact that turbines are operated at the point of maximum $C_P$. Figure 2 shows the power and thrust curves for the baseline rotor.
In this paper, all results are presented as the change in performance relative to that of the baseline rotor. For this reason, all of the relevant variables (denoted with a zero in the subscript) will be normalized by the corresponding baseline rotor values; the relative changes used throughout are defined as
$$\Delta X = \frac{X}{X_0} - 1, \qquad X \in \{C_T,\, C_P,\, R,\, P,\, \mathrm{AEP},\, L\},$$

where L (with its exponent L_exp) is a generalized load that is introduced in Sect. 4.1 (Effect on loads); it is written here for later reference.
Scale laws and constraints for design-driving loads
In this section, examples of static aerodynamic design-driving loads (DDLs) will be presented. These examples are not meant to be exhaustive but include several of the key considerations that constrain the practical design of wind turbine rotors. From the scaled loads, design-driving load constraints (DDLCs) are introduced, which limit loads so that they do not exceed the levels of the baseline rotor. Based on the DDL examples, it is shown that DDLCs can be elegantly put in a generalized form.
Thrust (T )
Thrust typically does not limit the design of the rotor itself but is more likely a constraint imposed by the design of the tower and/or foundation. The thrust scaling and the associated DDLC are given by

$$\frac{T}{T_0} = \frac{C_T}{C_{T,0}}\, \tilde{R}^2 \le 1 .$$

Root flap moment (M_flap)

The root flap moment is the bending moment at the rotational center in the axial flow direction. To compute M_flap, the 1-D momentum theory relations for the infinitesimal thrust (dT) and moment (dM) are integrated; they are first expressed as

$$\mathrm{d}T = \tfrac{1}{2}\rho V^2 C_T\, 2\pi r\, \mathrm{d}r, \qquad \mathrm{d}M = r\, \mathrm{d}T,$$

where r is the radial location of the infinitesimal load (r ∈ [0, R]). The moment scaling and DDLC can be found as

$$\frac{M_{\mathrm{flap}}}{M_{\mathrm{flap},0}} = \frac{C_T}{C_{T,0}}\, \tilde{R}^3 \le 1 .$$

As shown, M_flap scales with R³, so it grows faster than the power, which scales as R². M_flap is important for the blade design since the flap-wise aerodynamic loads need to be transferred via the blade structure to the root of the blade.
Tip deflection (δ tip )
Tip deflection is a common DDLC for contemporary utility-scale turbines, where the tip clearance between tower and blade may become critical because of the relatively long and slender blades. To get an idea of how tip deflection scales with changes in loading and rotor radius, Euler-Bernoulli beam theory (Bauchau and Craig, 2009, p. 189, Eq. 5.40) is used.
For the problem here, it takes the form

$$\frac{\mathrm{d}^2}{\mathrm{d}r^2}\!\left(EI\, \frac{\mathrm{d}^2 \delta}{\mathrm{d}r^2}\right) = \frac{\mathrm{d}T}{\mathrm{d}r}, \qquad (17)$$

where δ is the deflection in the flap-wise direction of the blade at location r and EI is the stiffness of the blade at location r. For modern turbines the stiffness decreases towards the tip of the blade. To get an estimate for the stiffness, it is assumed that the stiffness follows the size of the chord (EI ∝ c).
The chord is given by the equation in Sørensen (2016, p. 68, Eq. 5.26); with an approximation for the outer part of the blade it can be found that c ∝ R/r, which means that EI ∝ R/r. An approximate model for EI with EI ∝ R/r can be made, where EI_r is the stiffness at the root and EI_t is the stiffness at the tip of the blade. As mentioned above, for wind turbines EI_r > EI_t.
With the equation for EI, Eq. (17) can be solved by indefinite integration, with the integration constants determined from the boundary conditions of a cantilevered blade: a clamped root (δ = 0 and δ′ = 0 at r = 0) and a moment- and shear-free tip. The resulting displacement solution becomes δ ∝ (11π/120)·δ_shape(r̄), where the normalized radius (r̄ ∈ [0, 1]) has been introduced so that r = R·r̄, and the polynomial shape of the deflection has been collected in δ_shape. The maximum deflection occurs at the blade tip (r̄ = 1), which leads to a scaling relation and DDLC for tip deflection:

$$\frac{\delta_{\mathrm{tip}}}{\delta_{\mathrm{tip},0}} = \frac{C_T}{C_{T,0}}\, \tilde{R}^5 \le 1, \qquad (21)$$

where it has been implicitly assumed that any change in stiffness follows the scaling of the radius, the simplest way to satisfy this being EI_r = EI_r,0, which gives EI_r/EI_t = EI_r,0/EI_t,0.
Tip deflection with constant mass
The final example of a DDL is also based on tip deflection but includes a condition to maintain a constant mass of the load-carrying structure of the blade. To this end, the stylized spar-cap layout depicted in Fig. 3 is assumed. This layout consists of two planks. The stiffness of a spar-cap structure with a homogeneous Young's modulus (E) can be found from the stiffness of the rectangle and the parallel axis theorem (see Fig. 3 for the variable definitions). For modern wind turbines h/H ≪ 1, meaning that a common approximation is

$$EI \approx \tfrac{1}{2} E\, B\, h\, H^2 .$$

To compute the mass of such a structure, it will be assumed that the plank width (B) is constant and that the change in EI comes from a change in the plank height (h). Then, if h is decreased when R is increased, the following relationship needs to be satisfied for the mass of the planks to be constant (assuming a constant mass density):

$$h\, R = h_0\, R_0 .$$

From there it follows that changes in the radius of the rotor will change the stiffness as EI/EI₀ = R̃⁻¹. Combining this with the tip deflection equation (Eq. 21), the scaling and DDLC can be found as

$$\frac{\delta_{\mathrm{tip+mass}}}{\delta_{\mathrm{tip+mass},0}} = \frac{C_T}{C_{T,0}}\, \tilde{R}^6 \le 1 .$$

The scaling uses the fact that changing h by the same relative amount over the whole blade leaves EI_r/EI_t = EI_r,0/EI_t,0 and thereby does not affect δ_shape. It should be noted that choosing B to change instead will lead to the same scaling; the difference is that changing the plank thickness might lead to higher-order effects, although they are expected to be insignificant.
Generalizing the constraint form
Considering the four DDLC examples presented above, there appears to be a pattern in the scaling relations that may be written as follows:

$$\frac{L}{L_0} = \frac{C_T}{C_{T,0}}\, \tilde{R}^{R_{\exp}} \le 1,$$

where R_exp is the exponent of R̃ in the DDLC.
If the constraint limit is met, the following relationship can be written:

$$C_T\, \tilde{R}^{R_{\exp}} = C_{T,0} \quad \Longleftrightarrow \quad \tilde{R}(C_T) = \left(\frac{C_{T,0}}{C_T}\right)^{1/R_{\exp}} .$$
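In code this inversion of the constraint is a one-liner; the helper below is a sketch (the function name and the example C_T value are illustrative, not from the paper):

```python
# Sketch: admissible radius from the generalized DDLC met with equality,
# C_T * R~^R_exp = C_T,0 with C_T,0 = 8/9.
CT0 = 8.0 / 9.0

def radius_at_constraint(ct, r_exp):
    """Largest admissible normalized radius R~ for a given loading C_T."""
    return (CT0 / ct) ** (1.0 / r_exp)

# e.g. a rotor unloaded to C_T = 0.64 under a flap-moment constraint (R_exp = 3)
print(radius_at_constraint(0.64, 3))   # ~1.116 -> roughly 11.6 % larger rotor
```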
Formulation of rotor design problems
Based on the performance and constraint relationships outlined in the previous section, this section will present the formulation of rotor design as optimization problems. Two different classes of problems are introduced, namely power-capture optimization and AEP optimization, where the latter is a generalization of the former with the constraint depending on the wind speed.
Power-capture optimization
The optimization problem can be stated as

$$\underset{C_T,\ \tilde{R}}{\text{maximize}} \;\; C_P(C_T)\, \tilde{R}^2 \qquad \text{subject to} \;\; \frac{C_T}{C_{T,0}}\, \tilde{R}^{R_{\exp}} \le 1,$$

where the definition R̃ = R/R₀ has been used for consistency. The solution for this optimization problem is presented in Sect. 4.1.
It should be noted that this optimization problem is similar to the problem given by Chaviaropoulos and Sieros (2014), in which they optimize power while keeping M_flap fixed. The optimization problem in this paper is thus a generalization of their optimization problem.
AEP optimization
In contrast to the above-mentioned optimization of power capture, optimization with respect to AEP requires the determination of C_T(Ṽ), so it involves a function as opposed to a scalar value. It is also necessary to set the rated power to a constant value, while the wind speed at which the rated power is reached is allowed to change. The problem can be formulated as

$$\underset{C_T(\tilde V),\ \tilde{R}}{\text{maximize}} \;\; \widetilde{\mathrm{AEP}} \qquad \text{subject to} \;\; \frac{C_T}{C_{T,0}}\, \tilde{V}^2\, \tilde{R}^{R_{\exp}} \le 1,$$

where the wind speed scaling has been added to the DDLC.
Results and discussion
This section discusses the solutions to the rotor design optimization problems introduced in the previous section.
Optimizing for power capture
The constrained optimization problem maximizing power capture, as stated in Sect. 3, may be simplified based on the observation that optimum solutions will occur at the DDL constraint limit. To understand this, consider that the power capture of a rotor with an inactive constraint may always be improved by scaling the rotor up until the constraint is met. This is true irrespective of the DDLC that determines the rotor design. Hence, the explicit relation R̃(C_T) can be used to reformulate the problem from a constrained optimization problem in two variables to an unconstrained optimization problem in one variable,
with the optimization problem now as follows:

$$\underset{C_T}{\text{maximize}} \;\; C_P(C_T) \left(\frac{C_{T,0}}{C_T}\right)^{2/R_{\exp}} . \qquad (35)$$

By differentiating the objective function (Eq. 35) with respect to C_T and finding its root, the optimal C_T as a function of R_exp is arrived at. This unique solution is a maximum, which is apparent from the always-positive value of ΔP in Fig. 4. This figure shows the optimal solution for C_T and C_P, as well as the relative change in radius (ΔR) and power (ΔP) compared to the baseline rotor. In the plots in Fig. 4a and c, C_P is observed to approach the dashed baseline performance (Betz rotor) much faster than C_T as R_exp increases. This is a consequence of the relationship between C_T and C_P (Fig. 1). Especially around the Betz limit, the gradient is very small, which means that changes in C_T do not lead to proportional changes in C_P. Turning to the two plots in Fig. 4b and d, it is seen that the lower C_P is more than compensated for by the increased R̃, since the relative change in power (ΔP) is always positive.
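The one-variable problem can also be solved numerically; the sketch below (not the authors' code) uses SciPy's bounded scalar minimizer and reproduces the optimal values discussed next. The closed form for the optimal induction noted in the comment, a* = (R_exp − 2)/(3R_exp − 4), is derived here by setting the derivative of the log objective to zero under the same momentum-theory assumptions:

```python
# Sketch: maximize C_P(C_T) * (C_T,0 / C_T)^(2/R_exp) over C_T.
import numpy as np
from scipy.optimize import minimize_scalar

CT0, CP0 = 8.0 / 9.0, 16.0 / 27.0

def cp_from_ct(ct):
    a = 0.5 * (1.0 - np.sqrt(1.0 - ct))
    return 4.0 * a * (1.0 - a) ** 2

def optimize_power_capture(r_exp):
    obj = lambda ct: -cp_from_ct(ct) * (CT0 / ct) ** (2.0 / r_exp)
    res = minimize_scalar(obj, bounds=(1e-3, CT0), method="bounded")
    ct = res.x                               # numerically equals C_T(a*) with
    r = (CT0 / ct) ** (1.0 / r_exp)          # a* = (r_exp - 2) / (3*r_exp - 4)
    dP = cp_from_ct(ct) * r**2 / CP0 - 1.0
    return ct, r - 1.0, dP

for r_exp in (3, 5):
    ct, dR, dP = optimize_power_capture(r_exp)
    print(f"R_exp={r_exp}: C_T*={ct:.3f}, dR={dR:+.1%}, dP={dP:+.1%}")
```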
When maximizing power capture for a given thrust (R_exp = 2; dashed vertical blue line in Fig. 4), it is found that C_T → 0 and R̃ → ∞ while ΔP → 50 %, which was found by investigating the behavior of the limit value as R_exp → 2. Since R̃ → ∞ is not of much practical interest, further explanation is not given here. Alternatively, the maximum power for a given flap root moment (R_exp = 3; orange line in Fig. 4) may be achieved by increasing the rotor radius by 11.6 % compared to the baseline design (maximum C_P). The corresponding relative increase in power ΔP is 7.6 %. Finally, designs constrained by tip deflection (R_exp = 5; green line in Fig. 4) allow the relative power ΔP to increase by 1.90 % with a relative change in radius ΔR of 2.30 %. A table with the results for the increase in power capture (ΔP) and radius (ΔR) for four designs (R_exp = 2, 3, 5, 6) can be seen in Fig. 6. In conclusion, rotors with a static aerodynamic DDLC should not be designed for the maximum C_P, as more power can be generated by rotors with a lower C_T and a larger radius R, without violating the relevant DDLC.
Effect on loads
Even though meeting the constraint limit means that the chosen DDL will be the same as for the baseline, it is interesting to know what happens to loads that scale differently from the DDL. As an example, if the DDLC is M_flap (R_exp = 3), it is given that M_flap will not change relative to the baseline, but it could be interesting to know what happens to T and δ_tip.
To investigate this, we will introduce a generalized load (L) as a measure of how a load scales,

$$L = K_0\, C_T\, R^{L_{\exp}},$$

where K₀ is a scaling constant and L_exp is the generalized load exponent. The generalized load equation can be made non-dimensional as

$$\tilde{L} = \frac{L}{L_0} = \frac{C_T}{C_{T,0}}\, \tilde{R}^{L_{\exp}} .$$

The difference between L_exp and R_exp is that R_exp results in a design, whereas L_exp is a load for a design. Take a design made for tip deflection (R_exp = 5) as an example; then L_exp = 3 describes the M_flap load for that design.
An equation for the relative change ΔL can be found in terms of the baseline rotor as follows:

$$\Delta L = \frac{C_T}{C_{T,0}}\, \tilde{R}^{L_{\exp}} - 1 = \tilde{R}^{\,L_{\exp} - R_{\exp}} - 1,$$

where the second equality uses that the DDLC is met with equality (C_T R̃^R_exp = C_T,0). Since it is known that C_T ≤ C_T,0 (and hence R̃ ≥ 1), the following conclusions hold:

- L_exp < R_exp: the load is lower than the baseline level.
- L_exp = R_exp: the load is identical to the baseline level.
- L_exp > R_exp: the load is larger than the baseline level.
This agrees with Fig. 5, which illustrates the effect of design constraints (DDLCs) on different loads. For example, consider tip deflection (R_exp = 5; DDLC(δ_tip); the dashed green line in Fig. 5). Looking at the solid green line (L_exp = 5), it is seen that the relative change ΔL is zero, as expected. Now looking at the loads with L_exp < R_exp, namely thrust (L_exp = 2) and flap moment (L_exp = 3), it is seen that ΔL is lower than the baseline, with ΔT = −6.6 % and ΔM_flap = −4.4 %.
But for loads where L_exp > R_exp, the loads are increased. If there were a load that scaled like L_exp = 6, it would be increased by ΔL(L_exp=6) = +2.3 %. Furthermore, Fig. 5 shows that the relative decrease in load is always most pronounced for the thrust (L_exp = 2), with the biggest impact occurring around R_exp ≈ 2.5. All of the relative change curves have distinct minima but at the same time are characterized by large plateaus of relatively small change. Another observation is how quickly the curves grow for L_exp > R_exp. Take DDLC(M_flap) as an example; in this case Δδ_tip = +24.5 % and ΔL(L_exp=6) = +38.9 %. The relative change in loads becomes smaller as R_exp increases. A sketch with a zoomed-in view of the tip and a table with the values can be seen in Fig. 6.
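Under the stated assumptions, the relation ΔL = R̃^(L_exp − R_exp) − 1 is easy to check numerically; the following minimal sketch (not from the paper; R̃ = 1.023 is the tip-deflection-constrained optimum quoted above) reproduces the load changes discussed for Fig. 5:

```python
# Sketch: relative load change when the DDLC is met with equality.
def load_change(r_tilde, l_exp, r_exp):
    return r_tilde ** (l_exp - r_exp) - 1.0

r_tilde = 1.023   # tip-deflection-constrained optimum (R_exp = 5)
for l_exp, name in [(2, "thrust"), (3, "flap moment"),
                    (5, "tip deflection"), (6, "defl. + mass")]:
    print(f"{name:15s} dL = {load_change(r_tilde, l_exp, 5):+.1%}")
```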
Low-induction rotor
The concept in this section was mentioned in the Introduction, as it has received some attention over recent years. Low-induction rotors (LIRs) are rotors designed with a lower axial induction (a) than the level that maximizes C_P. The concept is, to a certain degree, analogous to the optimization of rotors for power capture.
To investigate such an LIR design, it was chosen to fix the C_T value below the rated power so that it is the same as for the power-capture optimization for a given R_exp. If the radius were set to the same value as for power capture, the constraint limit would not be met, since the turbine reaches the rated power earlier. Since C_T is fixed and the constraint limit needs to be met, the wind speed at which the turbine reaches the rated power (Ṽ_rated) can be found. It is found through the normalized power (the integrand of Eq. 7 without PDF_wind) and the constraint limit with wind speed scaling (Eq. 30 multiplied by Ṽ²), which together give

$$C_P(C_T)\, \tilde{R}^2\, \tilde{V}_{\mathrm{rated}}^3 = C_{P,0}, \qquad \frac{C_T}{C_{T,0}}\, \tilde{V}_{\mathrm{rated}}^2\, \tilde{R}^{R_{\exp}} = 1 .$$

For a given rated wind speed, the rotor radius then follows from either of these relations. With C_T, Ṽ_rated and R̃, ÃEP can be computed using Eq. (7). The LIR is illustrated by the examples in Figs. 7 and 8, where the present analysis framework has been applied with constraints pertaining to flap moments (R_exp = 3) and tip deflections (R_exp = 5).
In both cases, the resulting power curves are slightly above the equivalent baseline ones, and the thrust peaks are reduced compared to the baseline. The relative change in AEP is smaller than the change in power at the design point. For the case with DDLC(M_flap), ΔAEP = 6.0 %, while the power capture increased by ΔP = 7.6 %. The corresponding improvements for a tip-deflection-constrained rotor, DDLC(δ_tip), are ΔAEP = 1.2 % and ΔP = 1.9 %. The lower relative improvement for the LIR is related to the amount of power that is produced below the rated power. The results for the LIR are summarized in Fig. 9 with a table and a sketch showing the relative changes in AEP, radius, thrust, root-flap moment and tip deflection for four different designs (R_exp = 2, 3, 5, 6). From Fig. 9, the thrust-constrained design (DDLC(T); R_exp = 2) is seen to have diverging values for ΔR, ΔM_flap and Δδ_tip. As was the case for power-capture optimization, these results are found by investigating the limit in which R_exp → 2. Even though the result ΔR → ∞ is interesting, the corresponding consequence of ΔM_flap → ∞ makes this infeasible for practical use, so it will not be studied further here.
AEP-optimized rotor
As mentioned in Sect. 3, the variables considered for the optimization of AEP are C_T(Ṽ) and R̃. In this formulation, C_T can be adjusted independently for each wind speed, which ideally can be achieved through blade pitch control. The relative radius R̃ couples the rotor operation across all wind speeds, as it is necessarily constant. Based on initial studies, the optimizer targets solutions with three distinct operational ranges, which, ordered by wind speed, are as follows: operation with maximum power coefficient (max C_P); operation at the constraint limit (constant thrust T); and operation at the rated power.
This can be used to make C_T a function of R̃, thereby reducing the optimization problem to an unconstrained optimization in one variable (R̃). Over the three ranges listed above, the C_T function is given as

$$C_T(\tilde V) = \min\!\left(\frac{8}{9},\;\; \frac{C_{T,0}}{\tilde V^2\, \tilde{R}^{R_{\exp}}},\;\; C_T^{\mathrm{rated}}(\tilde V)\right),$$

where the last branch follows from solving C_P(C_T)·R̃²·Ṽ³ = C_P,0 for C_T; the resulting equation is a third-order polynomial, which is more easily solved numerically.
The only free parameter that needs to be determined to find the optimal AEP is R̃. The optimization problem can be reformulated as

$$\underset{\tilde{R}}{\text{maximize}} \;\; \widetilde{\mathrm{AEP}}(\tilde{R}) .$$

The problem can be solved with most optimization solvers, since the AEP can be computed explicitly when R̃ is given. The optimization problem was solved with the L-BFGS-B algorithm described in Zhu et al. (1997) through the use of SciPy (Millman and Aivazis, 2011). Examples of the resulting power and thrust curves can be seen in Figs. 10 and 11 for DDLC(M_flap) and DDLC(δ_tip), respectively. Looking at Fig. 10 (R_exp = 3), it is clear that the power and thrust curves have changed quite substantially compared to the baseline Betz rotor (dashed curves). The thrust curve no longer has a sharp peak but rather a flat plateau. As mentioned in the Introduction, this is often referred to as thrust clipping. It comes from the DDLC equation (Eq. 44), which shows that C_T ∝ Ṽ⁻², and since the thrust is proportional to C_T Ṽ², the thrust is constant. As mentioned, the region where the rotor is thrust clipped is also where the DDLC is active, so as opposed to the baseline and LIR rotors, the DDLC is active over a larger range of Ṽ. The larger range of Ṽ is also partly why ΔR = 44.6 %, which is a huge increase. As a result, it also leads to a large increase in energy, with ΔAEP = 19.9 %. This is a very large change in R̃, and the feasibility of such a design is doubtful. As shown later, the change in maximum loads (see Fig. 13) is significant for loads with L_exp > R_exp.
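The following sketch outlines this optimization; it is an approximation of the procedure described above, not the authors' code. The three-range C_T(Ṽ) schedule is implemented by clipping C_T at the constraint limit and capping the power at rated, and the normalizations (baseline maximum load at Ṽ = 1, rated power 16/27 in normalized units) are assumptions consistent with the text:

```python
# Sketch: AEP optimization over R~ with L-BFGS-B via SciPy.
import numpy as np
from math import gamma
from scipy.optimize import minimize

CT0, CP0 = 8.0 / 9.0, 16.0 / 27.0
k, V_avg, V0 = 2.0, 7.5, 10.0
lam = (V_avg / gamma(1.0 + 1.0 / k)) / V0          # Weibull scale, normalized
v = np.linspace(3.0 / V0, 25.0 / V0, 200)          # cut-in..cut-out, N = 200
pdf = (k / lam) * (v / lam) ** (k - 1) * np.exp(-((v / lam) ** k))

def cp_from_ct(ct):
    a = 0.5 * (1.0 - np.sqrt(1.0 - ct))
    return 4.0 * a * (1.0 - a) ** 2

def aep(r, r_exp):
    ct = np.minimum(CT0, CT0 / (v**2 * r**r_exp))       # max C_P, then clipped
    p = np.minimum(cp_from_ct(ct) * r**2 * v**3, CP0)   # capped at rated power
    return np.trapz((p / CP0) * pdf, v)

def optimize_aep(r_exp):
    res = minimize(lambda x: -aep(x[0], r_exp), x0=[1.1],
                   method="L-BFGS-B", bounds=[(1.0, 2.0)])
    r_opt = res.x[0]
    return r_opt, aep(r_opt, r_exp) / aep(1.0, r_exp) - 1.0   # vs. baseline

r_opt, d_aep = optimize_aep(5)
print(f"R_exp=5: dR = {r_opt - 1:+.1%}, dAEP = {d_aep:+.1%}")
```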
A more realistic design for modern turbines is found in Fig. 11 (R_exp = 5). Here the changes are smaller but still significant, with ΔR = 10.7 % and ΔAEP = 5.8 %. The curves show the same shape as the thrust-clipped curves of Fig. 10, but now over a smaller range of Ṽ. As mentioned in the Introduction, thrust clipping was also found by Buck and Garvey (2015a) to be a beneficial way to lower the CoE.
Figure 13. DDLC R exponent (R_exp) vs. relative maximum load (ΔL_max). The plot looks similar to Fig. 5, but ΔL_max is the change in maximum loading. As an example, when the change in thrust (ΔT) is −30.8 % for R_exp = 3, it means that the maximum thrust (at any wind speed) is 30.8 % lower than the maximum thrust for the baseline (which occurs just before the rated wind speed). Notice that the range of the y scale is much larger in this plot than for the power-capture-optimized rotor. The potential reduction is larger, but it comes with the consequence that loads with L_exp > R_exp grow faster, even for high values of R_exp.
The results for the AEP-optimized rotor are summarized in Fig. 14 with a table and a sketch showing the relative changes. As was the case for power-capture optimization and LIR optimization, some values diverge when R_exp → 2, and the results are found by investigating this limit. But since it has no practical value, further explanation is omitted here.
Effect on loads
In Fig. 13, the relative change in maximum loads is plotted as a function of the DDLC R exponent. The relative max load (ΔL_max) does not compare the loads at each Ṽ but rather compares the max load of the baseline, which occurs at Ṽ = 1 (rated wind speed), to the max load of the optimized rotor at any Ṽ. The plot in Fig. 13 is similar to the plot in Fig. 5, with the difference that it shows the relative change in maximum loads, independent of the wind speed at which they occur. Comparing the two plots, one should note the ranges of the y scale, with Fig. 13 having the larger range. This means that the loads of the AEP-optimized rotor experience a larger relative change, but it also has the consequence that loads with L_exp > R_exp grow faster, especially for larger values of R_exp (> 5). A summary of the AEP-optimized rotor can be seen in Fig. 14, where a table of four different designs (R_exp = 2, 3, 5, 6) shows the relative change in AEP, radius, thrust, root-flap moment and tip deflection.
Summary of findings
In Table 1, the tables shown in Figs. 6, 9 and 14 are summarized; it compares the different optimizations to each other.
As seen from the tables, the largest increase in ΔP/ΔAEP is found using AEP optimization, which also leads to the largest increase in rotor radius (ΔR). The tables also show that thrust clipping seems to be a better operational strategy than low induction, as the design-driving constraint can be met over a larger range of wind speeds, and low induction is only needed around maximum thrust and not at low wind speeds.
In all three optimization cases, the optimization of the design with the thrust constraint (DDLC(T); R_exp = 2) leads to divergent values for ΔR and the loads. In all cases the result is found by investigating the behavior of the limit as R_exp → 2. Since this is not thought to be of much practical value, the details are not provided here.
Limitations of the study and possible improvements
The study shows that for a rotor constrained by a static aerodynamic DDL, there is a benefit, in terms of power/AEP, to lowering the loading and increasing the rotor size. But, as found by Bottasso et al. (2015), having a rotor with the same load constraint and an increased radius does not mean that the cost is the same or that the design is cost optimal. They found that the increase in AEP did not compensate for the added cost of increasing the rotor radius. This problem of cost vs. benefit is not directly addressed in this paper, except through the DDLC δ_tip+mass, a constraint in which the mass is kept constant. This is thought to be a better approximation for a rotor with a fixed price, but this assumption needs to be tested.
Another issue not taken into account in this study is the influence of the turbine's self-weight. As found by Sieros et al. (2012), the self-weight becomes more important for larger rotors. To accommodate the added mass, a penalty could be added, which should scale as R̃ or R̃³ for the top head mass and static blade mass moment, respectively. As discussed above, a constraint could also be implemented to keep the mass or the mass moment fixed in the optimization. Again, this is a limitation of the study.
The fidelity of the models is also a limitation. Even though 1-D aerodynamic momentum theory is a common approximation for first-order studies in rotor design, it is well known that the constantly loaded rotor cannot be realized, and when losses are included the constantly loaded rotor is no longer the optimal solution. At the same time, if it were possible to decrease the load at the tip more than at the root, it would lead to less tip deflection than a constantly loaded rotor with a similar C_T. Extending the model to handle radial load distributions is one way of adding detail that could lead to even larger improvements. For modern turbine design, it is often the case that the structural design is determined by aeroelastic extreme loads, such as extreme turbulence or gusts. With the simplicity of the models in this study, this is not taken into consideration. But if the extreme load happens in normal operation, there will likely be a direct relationship between the steady and extreme loads, meaning that a decrease in steady loads will also lead to a decrease in the extreme load. This is an assumption that should be tested in future work. If the design-driving load occurs in non-operational conditions, e.g., extreme wind in parked conditions, grid loss or subcomponent failure, then the analysis tool cannot be directly applied.
Conclusions
A first-order model framework for the analysis of wind turbine rotors was developed based on aerodynamic 1-D momentum theory and Euler-Bernoulli beam theory. This framework introduces the concept of the design-driving load (DDL), for which a generalized form has been developed in which loads differ only by a scaling exponent R_exp; e.g., thrust scales with R_exp = 2, root-flap moment with R_exp = 3 and tip deflection with R_exp = 5. Despite the simplicity of the model, this study has shown important trends in how to design rotors for maximum power capture. It has been shown that the potential increase in power capture is very dependent on the relevant constraint, e.g., thrust as the constraining load compared to the more restrictive tip deflection. Furthermore, it was concluded that the best way to design a rotor for increased power capture using aeroelastic considerations is not to maximize C_P but rather to relax C_P and operate at lower loading (lower C_T). How much one should relax C_P depends on the chosen design-driving constraint (R_exp). The results for optimizing power capture are summarized in Table 1 (Opt. PC).
The optimization of power capture determines the best possible design for a given wind speed. By considering the annual energy production (AEP), an optimal design across the range of operational wind speeds can be found for a given wind speed frequency distribution. Optimal AEP was considered with two different approaches, namely the low-induction rotor (LIR) and full AEP optimization. For the LIR, the C_T value below the rated power was set to the value found from power-capture optimization for the chosen R_exp. The radius was then increased compared to the power-capture-optimized rotor, since the LIR reaches the rated power earlier with the same rotor size. A summary of the results can be seen in Table 1 (Opt. LIR).
For the full AEP optimization, C_T was allowed to take on any positive value below the Betz limit (0 ≤ C_T ≤ 8/9) for all wind speeds. The optimal AEP is obtained for a rotor that operates in three distinct operational regimes: operation with maximum power coefficient (max C_P); operation at the constraint limit (constant thrust T); and operation at the rated power.
The results from the optimization are summarized in Table 1 (Opt. AEP). It shows significantly larger relative improvements in power/energy compared to the power-capture- and LIR-optimized rotors. This comes at the cost of a larger increase in rotor radius. In the range where the optimum turbine operates at the constraint limit, the thrust curve is clipped (in a manner also known as peak shaving or force capping). This is a control feature used in many contemporary turbines, so it is interesting that this study, independently of this knowledge, shows that thrust clipping is a very efficient way to increase energy capture while observing certain load constraints. It is also the main reason behind the relatively large possible improvements in AEP, as the constraint limit is met over a larger range of wind speeds.
In spite of the relatively crude model assumptions, this paper provides profound insight into the trends of rotor design for maximum power/energy, e.g., the use of thrust clipping. As wind turbine rotors continue to develop towards larger diameters with slender (more flexible) blades, the type of design-driving load constraint also evolves. With the present model framework, the conceptual implications of this development become clearer; an increase in AEP of up to 5.7 % is possible compared to a traditional C_P-optimized rotor, without changing technology or using bend-twist coupling or other advanced features. Finally, this work has demonstrated an approach to formulating an optimization objective that couples power and load/structural response through the power-capture optimization. This approach may be extended to less crude model frameworks, e.g., by introducing radial variations in rotor loading.
Figure 1 .
Figure 1. Relationship between the normalized rotor load C_T and the power coefficient C_P from 1-D momentum theory. Note that around the Betz limit a small change in C_T does not lead to a proportional change in C_P; this is illustrated by ΔC_T and ΔC_P.
Figure 2 .
Figure 2. (a) The dimensionless power and thrust for the baseline rotor as functions of wind speed. Overlaid (in blue) is the Weibull wind speed frequency distribution used throughout (IEC class III: V_avg = 7.5; k = 2). (b) C_T and C_P as functions of wind speed. These curves reflect how most turbines are operated today, targeting the maximum power coefficient below the rated power, which leads to a thrust peak just before the rated power.
Figure 3 .
Figure 3. Assumed spar-cap structure with dimensions: H is the total build height, h is the space between planks and B is the plank width.
Figure 4 .
Figure 4. (a) Optimal C_T as a function of the constraint R exponent (R_exp). (c) R_exp vs. C_P; notice that the optimal C_P curve has a steeper slope and hugs the baseline closer than C_T. (b) R_exp vs. relative change in radius (ΔR). (d) R_exp vs. relative change in power capture (ΔP). Despite the similar shape of the curves, a difference between the two is that ΔP(R_exp → 2) = 50 %, while ΔR(R_exp → 2) → ∞. The vertical lines represent each of the example constraints (*DDLC: design-driving load constraint).
Figure 5 .
Figure 5. Relative change in different rotor load parameters (ΔL) depending on the DDLC. The scaling of the loads has the form L = C_T R^L_exp; e.g., L_exp = 2 scales as the rotor thrust T and L_exp = 5 scales as the tip deflection δ_tip. Each curve depicts how a load parameter would change depending on the design-driving constraint. As an example, consider a design limited by tip deflection, DDLC(δ_tip), i.e., R_exp = 5, which matches the dashed green line. Tip deflection meets the requirement, while the thrust (T) is lowered by 6.6 % and the flap moment M_flap by 4.4 %.
Figure 6 .
Figure 6. Sketch of a turbine with the load/structural response outlined. The zoomed-in figure shows the radius increase (ΔR) and the change in tip deflection (Δδ_tip) for two different DDLCs (the bold black line is the baseline). The table shows the relative change in power, radius and load/structural response for different DDLCs. R_exp = 2 is a thrust-constrained design, R_exp = 3 is a flap-moment-constrained design, R_exp = 5 is a tip-deflection-constrained design and R_exp = 6 is the tip deflection + constant mass constrained design.
Figure 7 .
Figure 7. Power and thrust curves for a low-induction rotor (solid lines), designed using the present method with the DDLC exponent R_exp = 3, which corresponds to an M_flap constraint. The dashed line is the baseline rotor optimized for max C_P.
Figure 8 .
Figure 8. Power and thrust curves for a rotor with the DDLC exponent R_exp = 5 (solid lines), corresponding to a δ_tip constraint. The dashed line is the baseline rotor optimized for max C_P.
Figure 9 .
Figure 9. Sketch of a turbine with the load/structural response outlined. The zoomed-in figure shows the radius increase (ΔR) and the change in tip deflection (Δδ_tip) for two different DDLCs (the bold black line is the baseline). The table shows the relative change in power, radius and load/structural response for different DDLCs. R_exp = 2 is a thrust-constrained design, R_exp = 3 is a flap-moment-constrained design, R_exp = 5 is a tip-deflection-constrained design and R_exp = 6 is the tip deflection + constant mass constrained design.
Figure 10 .
Figure 10. Power and thrust curves for an AEP-optimized rotor (solid lines), where the DDLC exponent is R_exp = 3, which is equivalent to a constraint on M_flap. The dashed line is the baseline rotor optimized for the max C_P below the rated power.
Figure 11 .
Figure 11. Power and thrust curves for an AEP-optimized rotor (solid lines), where the DDLC exponent is R_exp = 5, which is equivalent to a constraint on δ_tip. The dashed line is the baseline rotor optimized for the max C_P below the rated power.
Figure 12. DDLC exponent (R_exp) vs. (a) relative change in radius (ΔR) and (b) relative change in AEP (ΔÃEP). The plot contains the changes for both the low-induction rotor (LIR opt.; dash-dotted black line) and the AEP-optimized rotor (AEP opt.; solid black line). The changes in both AEP and radius are much larger for the AEP-optimized rotor.
Figure 14 .
Figure 14. Sketch of a turbine with the load/structural response outlined. The zoomed-in figure shows the radius increase (ΔR) and the change in tip deflection (Δδ_tip) for two different DDLCs (the bold black line is the baseline). The table shows the relative change in power, radius and load/structural response for different DDLCs. R_exp = 2 is a thrust-constrained design, R_exp = 3 is a flap-moment-constrained design, R_exp = 5 is a tip-deflection-constrained design and R_exp = 6 is the tip deflection + constant mass constrained design.
Table 1 .
Overview of the optimization results from optimizing power capture (Opt. PC), the low-induction rotor (Opt. LIR) and annual energy production (Opt. AEP). | 11,247.6 | 2019-07-08T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Numerical Analysis of the Convective Heat Transfer Coefficient Enhancement of a Pyro-Breaker Utilized in Superconducting Fusion Facilities
The conductive components of the pyro-breaker in the quench protection system (QPS) have a high current density, a large number of electrical contacts and a high thermal flux. The water system needs to meet the requirements of cooling and arc extinguishing at the same time. In a previous study, the bottleneck of the steady-state capacity appeared in the barrel conductor of the commutation section, which has a cylindrical cavity. The thermal stability of the commutation section at the 100 kA level was simulated in ANSYS/Workbench. The results indicate that a certain level of enhancement of the convective heat transfer coefficient of the cavity is required to reach the current capacity. However, the fluid flow inside the cavity is very complex, and the convective heat transfer coefficient is difficult to calculate. In this paper, computational fluid dynamics (CFD) is applied to the optimization of the cooling water system of the pyro-breaker. By studying enhancement methods for convective heat transfer, optimizations of the structure and processing method of the water channel are proposed. The convective heat transfer coefficients of the cylindrical cavity in these optimizations were calculated in CFX. A set of optimizations of the cavity, which can meet the requirements of the China Fusion Engineering Test Reactor (CFETR), was obtained and verified by experiments.
Introduction
China Fusion Engineering Test Reactor (CFETR) has recently proposed several conceptual designs. The current of magnet power supply of CFETR will reach the level of 100 kA [1,2]. In the quench protection system (QPS), a pyro-breaker is used as a backup protection switch to ensure the safety of the superconducting fusion device [3][4][5][6]. As the pyro-breaker is connected in series in the main circuit, the current in the pyro-breaker will reach 100 kA in a steady state.
Due to the driving mechanism and design requirements of the pyro-breaker, it contains multiple electrical contacts and small cross-sections in the conductors, which will bring tremendous thermal load to the breaker [3,4]. Without reliable and sufficient cooling, the temperature rise will increase the resistance of the breaker and may even lead to disoperation of the explosives. Proper cooling can limit the size of the breaker and guarantee its safety in a steady state. Two methods are normally used to improve the heat exchange efficiency of a cooling water system. One is to increase the water flow rate, which normally leads to an increase of inlet water pressure [7,8]. This method may be limited by peripheral equipment and the sealing condition of the breaker. The other method is to enhance the convective heat transfer [9,10], which is more valuable when the general structure and flowrate of the cooling water system are determined.
Computational fluid dynamics (CFD) has been utilized in many studies investigating cooling systems. In [11], CFD was applied to study the influence of the film-hole position on internal and external heat transfer. A series of proposed configurations with different positions of film holes were parameterized to conduct the CFD calculations. In [12,13], the numerical investigations of a heat exchanger channel in the form of a cylindrical tube with a thin insert were presented. The insert intensifies the heat transfer between the pipe wall and the gas by using the phenomenon of thermal radiation absorption. In [14], numerical simulation results were compared to acquire optimized heat transfer enhancement schemes such as placing transversal ribs and V-shaped ribs in the flow channel.
In this paper, CFD has been used to simulate the fluid stream and the convective heat transfer coefficient of the complex cooling water channel in the pyro-breaker. A convective heat transfer coefficient of the cylindrical cavity that can meet the requirements of CFETR has been obtained by the thermoelectric coupling simulation of the pyro-breaker in ANSYS/Workbench. By studying the enhancement methods of convective heat transfer, optimizations of the structure and processing method of the cavity are proposed. The convective heat transfer coefficients with different structures and processing conditions have been calculated in CFX. A set of optimizations for the cavity that can meet the demanded convective heat transfer enhancement is obtained and verified both by simulations and experiments.
The rest of this paper is organized as follows. Section 2 describes the design challenge of the pyro-breaker and presents a new pyro-breaker model based on previous work. Section 3 analyses the feasible methods to enhance the convective heat transfer of the cavity in the pyro-breaker and presents the numerical and experimental verification of these methods. Section 4 provides the conclusion of this work and possible future research directions for the pyro-breaker.
Previous Work
The pyro-breaker discussed in this paper is a backup protection switch in the QPS of superconducting fusion devices. To fill the gap between the existing pyro-breaker in China and the one in the ITER project ('the way' in Latin, an international superconducting fusion project) and to provide a design basis for the QPS of CFETR in the future, the pyro-breaker described in this paper is designed to withstand a steady-state current of 100 kA.
By analysing the concept and structure of the ITER pyro-breaker [4], an integrated type of pyro-breaker, as shown in Figure 1, has previously been designed and tested [15,16]. The pyro-breaker has a steady-state current capacity of 40 kA. The conductive components of the pyro-breaker have multiple electrical contacts and small cross-sections, which lead to a high heat flow, as shown in Table 1 [15]. The steady-state temperature rise of the pyro-breaker is critical to the entire design process to ensure that no severe vaporization appears near the metal-water contacting surfaces and the temperature of the device is limited to within the safe range for explosives. In previous simulations and experiments, the maximum temperature rise of the pyro-breaker in a steady state appeared in the insulation section (IS) [16]. However, there is no direct contact between the cooling water and the conductors in the IS. Moreover, the IS is designed with several tubes to cool down the conductors. The convective heat transfer within the tubes is easy to calculate, and the enhancement can be achieved by enlarging the diameter of the tubes. Meanwhile, the cooling water in the commutation section (CS) directly contacts the barrel conductor, where the explosive is installed. To guarantee the successful cut of the conductor, annular grooves are distributed on the conductive cylinder. These grooves result in a large current density at the cross-sections. The steady-state temperature rise of the CS must be limited to within a reasonable range to avoid failure of the seal or explosive disoperation. Unlike the tubes, enlargement of the diameter of the cavity inside the barrel is more complex. It affects the sealing condition and the entire structure of the CS, requiring an increase in the explosive dosage. Hence, the bottleneck of the steady-state capacity of the pyro-breaker appears in the barrel conductor in the CS.
New Pyro-Breaker Model
Due to the compact structure, complex cooling water system and high sealing requirements, it is very difficult to further improve the current capacity of the integrated type pyro-breaker. Based on the existing design and calculation, a separated type pyro-breaker is proposed in this paper, as shown in Figure 2. The CS and the IS of the pyro-breaker are separated and connected by a bus bar. The cooling water channels in the CS and the IS are independent from each other, which makes it easier to modify the structure of the cooling water channels.
The cooling water channels of the pyro-breaker need to be designed according to the distribution of the small sections and the contact surfaces. The cooling water system in the CS is composed of tubes and a cylindrical cavity, as shown in Figure 3. The convective heat transfer in the tubes involves two groups of straight tubes (Tubes I) and two groups of combined tubes (Tubes II). The cylindrical cavity is inside the barrel conductor. Deionized water flows through the cavity, which consumes the huge thermal load generated by the resistance of the barrel conductor.

The thermal performance of the CS is simulated under different sets of convective heat transfer coefficients in ANSYS/Workbench with an environment temperature of 25 °C, as shown in Table 2. One set of convective heat transfer coefficient values, which can meet the cooling requirements under the current of 100 kA, is obtained (Set 3). The simulation result is shown in Figure 4. The maximum temperature appears on the barrel conductor and is 72.11 °C.
In engineering heat transfer, the forced convection heat transfer in a tube can be calculated with standard correlations. However, the forced convection heat transfer in the cylindrical cavity is complex and difficult to calculate directly. Therefore, this paper focuses on the enhancement of the convective heat transfer coefficient of the cylindrical cavity in the CS.
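For the tubes, a textbook correlation illustrates what "calculated with standard correlations" means in practice. The paper does not name a specific correlation, so the Dittus-Boelter form below, together with the 10 mm diameter and the nominal water properties, is purely illustrative:

```python
# Sketch: tube-side convective coefficient from the Dittus-Boelter correlation,
# Nu = 0.023 Re^0.8 Pr^0.4 (heating), valid for fully developed turbulent flow.
def h_dittus_boelter(velocity, diameter,
                     rho=997.0, mu=0.89e-3, k=0.607, cp=4180.0):
    """Water properties default to rough nominal values at ~25 C."""
    re = rho * velocity * diameter / mu          # Reynolds number
    pr = cp * mu / k                             # Prandtl number
    nu = 0.023 * re**0.8 * pr**0.4               # Nusselt number (heating)
    return nu * k / diameter                     # h in W/(m^2 K)

print(f"h ~ {h_dittus_boelter(1.0, 0.01):.0f} W/m2K")  # 1 m/s in a 10 mm tube
```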
Feasible Methods
Convective heat transfer coefficient enhancement is normally realized by improving the uniformity of fields, such as the velocity field and temperature field, or by reducing the angle between the velocity vector and the heat flux vector. In engineering practice, there are two classes of methods to enhance the convective heat transfer coefficient: enhancement without external power (passive methods) and enhancement with external power (active methods); typical techniques are listed in Table 3 [17]. The feasibility of these methods needs to be considered from two aspects: whether they affect the conductor performance of the pyro-breaker or the arc-extinguishing medium. Suitable methods to enhance the convective heat transfer coefficient for the cavity in the presented pyro-breaker include surface expansion, surface roughening and jet strengthening. The surface expansion method, however, involves an increase in the explosive dosage and other problems caused by the enlargement of the barrel conductor. Therefore, the surface expansion method is not considered in this paper.
Numerical Calculation
Jet strengthening is an important method to improve the convective heat transfer coefficient. For the cylindrical cavity discussed in this paper, jet strengthening can be realized by increasing the flow rate of the cooling water and changing the structure of the inlet water channels. Compared with the integrated type pyro-breaker, the separated type pyro-breaker has a more flexible structure, which is easier to optimize. In a previous study, the influence of the inlet angle of the water on the convective heat transfer coefficient was analysed [16]. In this paper, the structure with a 50-degree inlet angle has been selected for further analysis.
In the integrated type pyro-breaker, the structures of the upstream and downstream conductors are different. This leads to an asymmetry of the inlet and outlet water channels. The CS in the separated type pyro-breaker is designed with symmetrical inlet and outlet water channels, which improve the uniformity of the velocity field. Numerical models are built in CFX [18] at the full scale of the cavity to calculate, compare and analyse the optimizations based on the suitable methods for convective heat transfer enhancement of the cavity in the presented pyro-breaker.
The boundary conditions of the fluid domain are shown in Table 4. Unlike the other steady-state conditions, the inlet speed and the wall roughness are varied in order to compare the results. The k-ε and k-ω two-equation models are used as turbulence models in the simulation. They offer a good compromise between numerical effort and computational accuracy. One of the advantages of the k-ω formulation is the near-wall treatment for low-Reynolds-number computations. The model does not involve the complex nonlinear damping functions and is therefore more accurate and more robust [19].
Numerical calculations of the heat transfer are performed using the ANSYS CFX code. The continuity equation (1), the momentum equation (2) and the total energy equation (4) are applied as governing equations [19]:

$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho U) = 0, \qquad (1)$$

$$\frac{\partial (\rho U)}{\partial t} + \nabla \cdot (\rho U \otimes U) = -\nabla p + \nabla \cdot \tau + S_M, \qquad (2)$$

where the stress tensor τ is related to the strain rate by Equation (3):

$$\tau = \mu \left( \nabla U + (\nabla U)^{\mathrm T} - \tfrac{2}{3}\, \delta\, \nabla \cdot U \right) . \qquad (3)$$

The total energy equation reads

$$\frac{\partial (\rho h_{\mathrm{tot}})}{\partial t} - \frac{\partial p}{\partial t} + \nabla \cdot (\rho U h_{\mathrm{tot}}) = \nabla \cdot (\lambda \nabla T) + \nabla \cdot (U \cdot \tau) + U \cdot S_M, \qquad (4)$$

where h_tot is the total enthalpy, related to the static enthalpy h(T, p) by

$$h_{\mathrm{tot}} = h + \tfrac{1}{2} U^2 . \qquad (5)$$

∇·(U·τ) represents the work due to viscous stresses and is called the viscous work term. It models the internal heating by viscosity in the fluid and is negligible in most flows. U·S_M represents the work due to external momentum sources and is currently redundant. More details about these equations can be found in [19].
Different mesh sizes are selected to study grid independence. Table 5 shows the grids for the symmetrical structure under an inlet speed of 1 m/s and a smooth wall. The average calculated convective heat transfer coefficients are compared for the different body sizes. It was observed that the deviation between the body sizes of 1 mm and 2 mm is only 0.75 %. Thus, for further calculations, a structured hexahedral mesh (Figure 5) with a body size of 2 mm is chosen.
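A grid-independence check of this kind reduces to comparing the averaged coefficient across mesh refinements; the sketch below uses placeholder values (chosen only to mirror the reported 0.75 % deviation), since Table 5's data are not reproduced here:

```python
# Sketch: relative deviation of the averaged h between successive mesh sizes.
sizes_mm = [4.0, 2.0, 1.0]
h_avg = [5100.0, 5210.0, 5249.0]   # hypothetical averaged h [W/(m^2 K)]

for (s1, h1), (s2, h2) in zip(zip(sizes_mm, h_avg),
                              zip(sizes_mm[1:], h_avg[1:])):
    dev = abs(h2 - h1) / h1 * 100.0
    print(f"{s1:g} mm -> {s2:g} mm: deviation {dev:.2f} %")
```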
As shown in Figure 6, the velocity stream and the convective heat transfer coefficients of the asymmetric and symmetric structures under different inlet water velocities are simulated. Figure 6a,b illustrate the velocity stream and the convective heat transfer coefficients with an inlet velocity of 1 m/s. Figure 6c,d illustrate the velocity stream and the convective heat transfer coefficients with an inlet velocity of 4 m/s. The results show that the maximum velocity in the asymmetric structure is higher than in the symmetric structure, while the symmetric structure has a more uniform velocity stream. The convective heat transfer coefficient of the symmetric structure is significantly higher than that of the asymmetric structure. The difference is more pronounced with the increase of the inlet water velocity.

The thermal load generated by the barrel conductor is mainly consumed through the inner surface. The contact surfaces of the barrel conductor with the upstream and downstream conductors in the integrated type are located on both the outer surface and the inner surface. The surface roughness of the entire barrel conductor is R30. The barrel conductor in the separated type pyro-breaker is designed to contact other conductors only on the outer surface. Therefore, changing the roughness of its inner surface will not affect the contact resistance. The influence of surface roughening on the convective heat transfer coefficient can also be simulated in CFX by changing the parameters of the surface conditions.
Convective heat transfer coefficients with surface roughnesses of R30 and R100 are simulated. As shown in Figure 7, although an increase in the surface roughness reduces the fluid velocity, the convective heat transfer coefficient is clearly improved. Table 6 shows the average calculated convective heat transfer coefficient of the cavity under the different structural optimizations. The convective heat transfer coefficient of the symmetrical structure with a surface roughness of R100 and an inlet water velocity of 4 m/s can meet the demanding requirements of a 100 kA pyro-breaker, as calculated by ANSYS/Workbench.
Experiments
To verify the feasibility of the convective heat transfer coefficient methods, the thermal-electric simulation of the CS with different structures and surface roughness are simulated in ANSYS/Workbench under a current of 40 kA and an inlet water velocity of 1 m/s. The environment temperature is set to 25 °C. The convective heat transfer coefficients are calculated by CFX. The results are shown in Table 4.
Three prototypes are manufactured to conduct the experiments and compare with the simulation results. To simplify the experiments, only the barrel conductor and the related conductors are included in the prototypes. All prototypes are tested under a current of 40 kA for 1 h with an inlet water velocity of 1 m/s. The actual environment temperature is 23.4 °C. Thermal resistors are mounted on the barrel conductors.
Results and Discussion
The results of the experiments are shown in Figure 8. The rising speed and the steady value of the average measured temperature on the barrel conductor of the symmetrical structure are significantly lower than those of the asymmetrical structure. The temperature difference between a surface roughness of R30 (76.64 °C) and R100 is relatively small but shows an obvious effect of the heat transfer enhancement. This indicates that the selected methods are feasible for enhancing the convective heat transfer coefficient.
As the average convective heat transfer coefficient of the cavity cannot be measured directly, the calculated convective heat transfer coefficient of the cavity is verified by the comparison between the simulation and the experiment for the barrel conductor, as illustrated in Table 7. The temperature rise for the different prototypes has been simulated with the calculated convective heat transfer coefficient of the cavity in ANSYS/Workbench. The measured temperature rises have good consistency with the simulated temperature rises in all three prototypes. For the asymmetrical structure, the difference in the temperature rise between the simulation and the experiment is larger than that for the symmetrical structure. This may be because the non-uniformity of the velocity field becomes even more severe under non-ideal conditions, which, on the other hand, proves the advantage of a symmetrical structure in the water channels. (h: the calculated convective heat transfer coefficient of the cavity; T₁: simulated temperature on the barrel conductor; T₂: measured temperature on the barrel conductor; ΔT₁: the simulated temperature rise; ΔT₂: the measured temperature rise.)
Conclusions
According to the recently proposed conceptual design of the China Fusion Engineering Test Reactor (CFETR), the current of the magnet power supply of the CFETR will reach a level of 100 kA. This demands that the pyro-breaker in the quench protection system (QPS) have the same steady-state current capacity. In the existing research, the bottleneck of the steady-state capacity of the pyro-breaker appears at the barrel conductor in the commutation section.
Based on the existing design and calculation, a separated type pyro-breaker is proposed in this paper. By studying convective heat transfer enhancement methods, jet strengthening and surface roughening are considered to be the proper methods to enhance the heat transfer for the presented breaker. A symmetrical water channel is designed for the new pyro-breaker to realize the jet strengthening. Computational fluid dynamics (CFD) is applied to calculate and compare the convective heat transfer coefficient of the inner cavity of the barrel conductor under different water channel structures and surface roughness values. Simulations and experiments are conducted on the three prototypes, and the feasibility of the enhancement methods is verified. It can be concluded that the new pyro-breaker model with a symmetrical water channel structure and wall roughness of R100 can fulfil the demanding requirement of 100 kA current capacity. The proposed new model of the separated type pyro-breaker will be the design basis for further development of the pyro-breaker in CFETR. The symmetrisation of the water channel and the roughening of the heat exchange surface can also be used in other cooling water systems to achieve convective heat transfer enhancement.
Future work is required to improve the accuracy of the CFD model of the pyro-breaker by studying the turbulence model sensitivity. Additionally, only the fluid domain is discussed in this paper; the conductor should be included in a future model, and conjugate heat transfer should be considered to improve the efficiency and accuracy of the numerical calculation. | 5,840.8 | 2021-11-12T00:00:00.000 | [
"Physics",
"Engineering"
] |
Improving Mechanical, Electrical and Thermal Properties of Fluororubber by Constructing Interconnected Carbon Nanotube Networks with Chemical Bonds and F–H Polar Interactions
To improve the properties of fluororubber (FKM), aminated carbon nanotubes (CNTs-NH2) and acidified carbon nanotubes (CNTs-COOH) were introduced to modulate the interfacial interactions in FKM composites. The effects of chemical binding and F–H polar interactions between CNTs-NH2, CNTs-COOH, and FKM on the mechanical, electrical, thermal, and wear properties of the FKM composites were systematically investigated. Compared to the pristine FKM, the tensile strength, modulus at 100% strain, hardness, thermal conductivity, carbon residue rate, and electrical conductivity of CNTs-NH2/CNTs-COOH/FKM were increased by 112.2%, 587.5%, 44.2%, 37.0%, 293.5%, and nine orders of magnitude, respectively. In addition, the wear volume of CNTs-NH2/CNTs-COOH/FKM was reduced by 29.9%. This method provides a new and effective way to develop and design high-performance fluororubber composites.
Introduction
In recent years, rubber materials that can withstand strong corrosion and high temperatures in harsh environments have been urgently needed in the petrochemical, automotive, and aerospace fields [1][2][3]. The copolymer of hexafluoropropylene, vinylidene fluoride, and tetrafluoroethylene with a cure site monomer, a special polymer material with a large number of fluorine atoms in its structure, has a remarkably high resistance to chemical media and can be used for long periods at temperatures up to 250 °C [4]. However, it still suffers from poor mechanical and electrical properties and inadequate wear resistance [5,6]. In order to overcome these problems, researchers have proposed enhancing the performance of fluororubber (FKM) by the filler modification method.
Carbon nanotubes (CNTs) are widely used in rubber modification due to their excellent reinforcing, thermal, and electrical properties [7][8][9][10]. With the wide application of CNTs in FKM, the enhancement effect of modified CNTs on FKM has gradually attracted the attention of researchers [11][12][13]. Heidarian et al. [14] found in their research work that the hydrogen bonding and compatibility between acidified carbon nanotubes (CNTs-COOH) and FKM are stronger compared to pristine CNTs. Meanwhile, a double cross-linked network formed between aminated carbon nanotubes (CNTs-NH2) and FKM was reported by Gao et al. [15]. It was demonstrated by experimental data that the covalent bond (C=N bond) existing between CNTs-NH2 and FKM effectively enhanced the thermal conductivity and tensile properties of FKM by 17.1% and 65.8% compared to CNTs-COOH. In addition, similar results were found in the research work of Yang et al. [16]. As a result, an increase in the cross-link density or in the types of interaction forces can significantly improve the properties of fluororubber. According to previous work [17], we found that this method is also applicable to a two-filler system. Therefore, we needed to find a filler that could interact with both CNTs-NH2 and FKM to improve the performance of the nanocomposites more effectively. Among the types of polar interactions, non-covalent bonds, including hydrogen bonds, π-π bonds, and so on, have great advantages in modulating interfacial interactions without strict reaction conditions [18,19].
Thus, in this contribution, we propose a method for enhancing the mechanical, thermal, and electrical properties of FKM by forming F-H polar interactions and C=O bonds with FKM and CNTs-NH2 through reactive groups (-COOH) on the surface of CNTs-COOH. This method not only has a facile preparation process but can also effectively improve the interfacial interaction between the filler and the FKM matrix. In addition, the wear resistance of the FKM composites was systematically investigated. This new approach promises greater competitive advantages in high-temperature, wear-resistant-sealing, and battery separator applications.
Preparation of CNTs-NH2/CNTs-COOH/FKM Composite
Mixing the pristine FKM with pre-blended ZnO and TAIC was performed using a mixer (KHB8, Guangdong Lina Industrial Co., Ltd., Dongguan, China) with a roll temperature of 50 °C. After the FKM was rolled and cut three times from each side of the mill, CNTs, CNTs-COOH, and CNTs-NH2 were fed into the mill. Trigonox® 101-50D was added in sequence and then rolled and cut 6 times on the open mill or until it became homogenized. After 24 h, re-milling was performed with a roll temperature of 30 °C. The rubber sheet was pressed with a flat vulcanizing machine (XH-406B, Xihua Testing Instrument Co., Ltd., Dongguan, China) at 177 °C under a pressure of 10 MPa for 7 min. Finally, the vulcanizate was post-cured at 230 °C for 2 h, and the as-prepared sheet was denoted as the CNTs-NH2/CNTs-COOH/FKM composite. The reaction mechanism of the CNTs-NH2/CNTs-COOH/FKM nanocomposites is shown in Figure 1. In order to compare the effects of different interaction forces on their properties, FKM, CNTs-COOH/FKM, and CNTs-NH2/FKM were also prepared in this study according to a similar procedure, and their corresponding formulations are listed in Table 1.
Characterization
Microstructures of the nanofillers and the fracture surface morphology of different FKM nanocomposites after tensile test were examined by scanning electron microscopy (SEM, Nova 200 NanoLab, Hillsboro, OR, USA) at an accelerating voltage of 5 kV.
The functional groups of the carbon nanotubes were examined using Fourier transform infrared spectroscopy (FTIR, Spotlight 200i, Perkin Elmer, Waltham, MA, USA) over a wavenumber range of 400-4000 cm−1 with a resolution of 2 cm−1.
A total of 30-50 mg of post-cured sample was subjected to TGA runs from 40 °C to 600 °C. This was carried out on a PerkinElmer thermal analysis system (Pyris 1 TGA, Perkin Elmer, Waltham, MA, USA) at a scan rate of 10 °C/min in a nitrogen atmosphere.
The tensile properties of different FKM nanocomposites were measured using a universal material-testing machine (Instron 5982, Instron, Boston, MA, USA) according to GB/T 528-2009 at a crosshead speed of 300 mm/min. The shore A hardness was measured by using an LX-A sclerometer (JLX-A, Qingbo, China) according to ASTM D 2240.
The electrical conductivity of the different FKM nanocomposites was measured by using a digital ultra-high resistance microcurrent-measuring instrument (EST121, EST, Beijing, China) at a voltage of 220 V. The electrical conductivity value was calculated according to Formula (1), where σ is the conductivity (S/cm), d is the thickness of the sample (cm), and Rv is the volume resistance of the sample (Ω). The dimensions of the specimen were 18 mm × 18 mm × 2 mm, and each test was performed on five specimens. The thermal conductivities of the different FKM nanocomposites were tested via a thermal conductivity tester (TC 3000, Xiatech Instrument Factory, Xi'an, China) using a transient hot-wire method at 20 °C. The circular specimens were prepared with a diameter of 30 mm and a thickness of about 2 mm.
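Since Formula (1) itself is not reproduced above, the sketch below assumes the standard volume-conductivity relation σ = d/(Rv · A); the electrode area A is our assumption, as the text defines only σ, d, and Rv:

```python
# Minimal sketch of the volume-conductivity calculation described above.
# Formula (1) is not reproduced in the text; the standard relation
# sigma = d / (Rv * A) is assumed here. The effective electrode area A is
# our assumption -- only sigma, d and Rv are defined in the source.

def conductivity_S_per_cm(d_cm: float, Rv_ohm: float, A_cm2: float) -> float:
    """Volume conductivity in S/cm from thickness, volume resistance and area."""
    return d_cm / (Rv_ohm * A_cm2)

# Example with the specimen dimensions given in the text (18 mm x 18 mm x 2 mm).
d = 0.2           # thickness in cm
A = 1.8 * 1.8     # assumed electrode area in cm^2
Rv = 1.7e13       # hypothetical volume resistance (ohm), chosen so the result
                  # lands near the reported pristine-FKM value
print(f"sigma = {conductivity_S_per_cm(d, Rv, A):.1e} S/cm")   # ~3.6e-15 S/cm
```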
A DIN Abrasion Resistance Tester (GT-7012-D, Gotech Testing Machines, Dongguan, China) was used to measure the mass abrasion of the nanocomposites. The volumetric wear of the nanocomposites was calculated by Formula (2), where A is the wear volume (mm³), Δm is the mass abrasion (mg), Q is the abrasion of the standard rubber (mg), and S is the specific gravity of the sample (mg/mm³).
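Formula (2) is likewise not reproduced; the sketch below assumes the usual DIN 53516 form A = Δm · m0/(S · Q), where the nominal reference abrasion m0 = 200 mg is our assumption (the text defines only A, Δm, Q, and S):

```python
# Sketch of the DIN wear-volume calculation (Formula (2) is not reproduced in
# the text). The usual DIN 53516 form A = dm * m0 / (S * Q) is assumed, with
# m0 = 200 mg the nominal abrasion of the reference rubber -- m0 is our
# assumption; the source defines only A, dm, Q and S.

M0_MG = 200.0  # nominal abrasion of the standard rubber (assumed, per DIN 53516)

def wear_volume_mm3(dm_mg: float, Q_mg: float, S_mg_per_mm3: float) -> float:
    """Relative volumetric wear in mm^3 from mass loss, reference abrasion
    and specific gravity."""
    return dm_mg * M0_MG / (S_mg_per_mm3 * Q_mg)

# Hypothetical inputs sized to land near the reported 124.1 mm^3 for raw FKM:
print(f"A = {wear_volume_mm3(dm_mg=230.0, Q_mg=200.0, S_mg_per_mm3=1.85):.1f} mm^3")
```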
Morphology of CNTs, CNTs-COOH, and CNTs-NH2
The microscopic morphologies of the several carbon materials are shown in Figure 2. All three types of carbon nanotubes have a long, thin, linear shape. In addition, strong agglomeration and mutual entanglement can be clearly observed in the carbon nanotubes that have not been pre-dispersed.
Chemical Composition Analysis of CNTs, CNTs-COOH, and CNTs-NH2
As shown in Figure 2d, the microstructure of the carbon nanotubes has been analyzed by FTIR spectra. The intense peaks at 1709 cm−1 for CNTs-COOH and CNTs-NH2 can be assigned to the stretching vibration of the C=O of the -COOH groups on CNTs-COOH and the C=O of the -CONH- groups on CNTs-NH2 [15,20]. The bands at 2852 cm−1 and 3031 cm−1 correspond to the C-H skeleton vibration [21]. Figure 3 shows the SEM images of the fracture surfaces of the different composites. We can observe the microscopic morphologies of FKM, CNTs/FKM, CNTs-COOH/FKM, CNTs-NH2/FKM, and CNTs-NH2/CNTs-COOH/FKM. In the red box in Figure 3a, we can see larger ZnO particles appearing on the surface of the fluororubber. As shown in the red boxes of Figure 3b,c, we can observe partial agglomerations of the carbon nanotubes, similar to those depicted in Figure 2. After the composites were pulled apart by external force, some carbon nanotubes were pulled out and exposed at the fracture surfaces (Figure 3d,e).
Chemical Composition Analysis of FKM Nanocomposites
To qualitatively analyze the chemical structures of the substances, we used FTIR spectra for the characterization of the composites. In Figure 3, we can see that the several composites have absorption peaks of different intensities at 893 cm−1, 1037 cm−1, 1127 cm−1, 1393 cm−1, 1692 cm−1, and 2963 cm−1, which correspond to the -CF3 group, C-N bond, -CF2- group, -CF- group, C=C bond, and C-H absorption peaks of fluororubbers [22,23]. Among them, the non-conjugated C=C bond at 1692 cm−1 tentatively proves the occurrence of the defluorination hydrogenation reaction as well as the oxidation reaction during the vulcanization process [23]. The peak area ratio of C-N bonds to -CF3 (AC-N/A-CF3) can be calculated by using the -CF3 absorption peak, which is not involved in the reaction in fluororubber, as the reference peak; it was thus derived that the AC-N/A-CF3 values of CNTs-NH2/FKM and CNTs-NH2/CNTs-COOH/FKM are 0.38 and 0.65, respectively. This indicates that the content of the C-N bond in CNTs-NH2/CNTs-COOH/FKM has increased, proving that the amino groups of CNTs-NH2 and the carboxyl groups of CNTs-COOH reacted chemically during the vulcanization process.
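For illustration, the peak-area-ratio analysis can be sketched numerically. The spectrum, band limits, and helper functions below are invented for demonstration; only the reported ratios (0.38 and 0.65) come from the paper:

```python
# Illustrative sketch (synthetic data) of the peak-area-ratio analysis: the
# C-N band near 1037 cm^-1 is integrated and normalised by the -CF3 reference
# band near 893 cm^-1. Band shapes and integration limits are invented.
import numpy as np

wn = np.linspace(800, 1200, 2000)                 # wavenumber axis, cm^-1

def gauss(x, centre, width, height):
    return height * np.exp(-((x - centre) / width) ** 2)

# Synthetic baseline-corrected absorbance: a -CF3 band and a C-N band.
spectrum = gauss(wn, 893, 12, 1.0) + gauss(wn, 1037, 15, 0.52)

def band_area(x, y, lo, hi):
    """Trapezoidal integral of the band between lo and hi cm^-1."""
    m = (x >= lo) & (x <= hi)
    return np.trapz(y[m], x[m])

ratio = band_area(wn, spectrum, 1000, 1075) / band_area(wn, spectrum, 860, 930)
print(f"A_C-N / A_-CF3 = {ratio:.2f}")            # ~0.65 for this toy spectrum
```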
Mechanical Properties
With the addition of CNTs, CNTs-COOH, and CNTs-NH2, the tensile strength of the different FKM composites improved; the reason can be ascribed to the honeycomb network structure formed by the linear CNTs in the FKM matrix, which causes the FKM composites to need a greater external force to produce deformation [5]. In particular, CNTs-NH2 can form a chemical cross-linking bond, i.e., a C=N bond, with the molecular chain of the fluororubber [15], while CNTs-COOH can form hydrogen bonds and C-N bonds with fluororubber and CNTs-NH2, respectively. Therefore, when the two modified carbon nanotubes were added to FKM at the same time, the tensile strength of CNTs-NH2/CNTs-COOH/FKM reached 17.4 MPa, an increase of 112.2% compared with the pristine FKM. This can be interpreted as the network structure of the linear material and the interaction forces formed between the fillers and the fluororubber making the network structure inside the FKM less susceptible to damage, thus improving the tensile strength of CNTs-NH2/CNTs-COOH/FKM.
The elongation at break and modulus at 100% strain of the modified FKM composites are reported in Figure 4b,c. Owing to their rigid character, the addition of carbon nanotubes makes the fluororubber much more rigid, thus lowering the elongation at break [16,24]. Therefore, the elongation at break of fluororubber was reduced to different degrees when the three different modified carbon nanotubes were added to FKM alone. When both CNTs-COOH and CNTs-NH2 were added to fluororubber, the decrease in the elongation at break was comparable to that of CNTs/FKM. This can be explained by the fact that the interaction between the fillers and the rubber causes the molecular chains to grow, but at the same time, the increase in the cross-link density prevents the molecular chains from slipping easily, thus causing a decrease in the elongation at break. The results in Figure 4c support the above findings. In addition, the modulus at 100% strain of fluororubber with the addition of the two fillers increased by 587.5%. Since the presence of reactive groups on the surface of CNTs leads to the growth of molecular chains in the FKM matrix, the fluororubber containing modified CNTs experiences lower tensile stress when subjected to an equivalent amount of external force. Meanwhile, the interactions in CNTs-NH2/CNTs-COOH/FKM not only led to an increase in molecular weight and cross-link density, but also relatively reduced the adverse effect of the free chain ends, thus improving its performance. It has been further demonstrated that increasing the interaction force between the filler and fluororubber is necessary to improve their tensile properties. (Figure 4 also compares the results with literature FKM systems: [25], FKM/Diatomite/silica [26], FKM/EG [27], FKM/M-MWCNT [16], FKM/NG [28], FKM/GNP2 [29], FKM/oSep [30], and FKM/m-SiCNWs [2].)
The hardness of the different FKM composites was tested, and the results are shown in Figure 4d. It can be seen that the hardness of the pristine FKM was only 57. With the addition of CNTs, CNTs-COOH, and CNTs-NH2, the hardness of all the FKM composites improved. However, the hardness of the CNTs/FKM, CNTs-COOH/FKM, and CNTs-NH2/FKM composites did not show a significant difference, whereas after the simultaneous addition of CNTs-COOH and CNTs-NH2, the hardness of CNTs-NH2/CNTs-COOH/FKM increased by 44.2% relative to the raw rubber. This is related to the strength of the carbon nanotubes and the formation of multiple interactions.
Wear Resistance
The wear resistance of fluororubber seals has a direct influence on their service life in industrial applications. The cross-link density, self-lubricating properties, and the strength of the rubber itself can all improve the wear resistance of fluororubber. As one-dimensional nanomaterials with high strength and stiffness, carbon nanotubes can directly improve the wear resistance of fluororubber. The wear volumes of the different fluororubber composites are shown in Figure 5. From the figure, it is clear that the wear volume of FKM shows a gradual decrease with the enhancement of interfacial interactions. The wear volume of the raw rubber was 124.1 mm³. When CNTs-NH2 and CNTs-COOH were added to fluororubber simultaneously, the wear volume of CNTs-NH2/CNTs-COOH/FKM was reduced by 29.9%. This is because the chemical bonds and H-F hydrogen bonds formed by the two carbon materials with the fluororubber molecular chains increased the cross-linked species and cross-link density of the composites, which resulted in lower volumetric wear under wear conditions.
Electrical Properties
To research the effect of the several carbon nanotubes on the electrical conductivity of the FKM composites, the electrical conductivity of the different FKM nanocomposites was tested by a digital ultra-high resistance microcurrent-measuring instrument, and the results are shown in Figure 6. The conductivity of the pristine FKM was 3.7 × 10−15 S/cm. The addition of the linear carbon material formed a conductive network in the fluororubber matrix to improve its conductivity. However, the acid treatment increased the defects of the carbon nanotubes, thus reducing their electrical conductivity. Likewise, the amine modification decreased the conductivity of the carbon nanotubes, but they were still able to improve the composite's conductivity more effectively than CNTs-COOH. Meanwhile, CNTs-NH2 was able to form a chemical bond with FKM using itself as the cross-linking center, which constituted an effective conductive network [15]. On this basis, the addition of CNTs-COOH completed the conductive network of FKM and improved the conductivity of the CNTs-NH2/CNTs-COOH/FKM composite by nine orders of magnitude compared with the pristine FKM. We find that this method of establishing a double-network model of cross-linking and conductive networks can stabilize the conductive network, which is consistent with the results reported in the literature [31].
Thermal Properties
The thermal conductivity, TG, DTG, and carbon residue rate of the different composites are shown in Figure 7. From Figure 7a, it can be seen that the thermal conductivity networks formed by these three carbon nanotubes in the fluororubber matrix effectively enhanced the thermal conductivity of fluororubber. When CNTs-NH2 and CNTs-COOH were added simultaneously, the thermal conductivity of the CNTs-NH2/CNTs-COOH/FKM composite (0.2660 W·m−1·K−1) was improved by 37.0% compared with the pristine fluororubber (0.1942 W·m−1·K−1). This is attributed to the incorporation of highly thermally conductive carbon materials and their formation of a three-dimensional thermally conductive network within the fluororubber. This greatly improves the thermal conduction pathways within the fluororubber, thus enhancing the fluororubber's thermal conductivity. Figure 7b,c show the TG and DTG of the several composites. From these two figures, we can see that the initial decomposition temperatures of the several composites are 437.6 °C, 444.3 °C, 444.3 °C, 442.8 °C, and 435.6 °C. Furthermore, the addition of a moderate amount of carbon filler can improve the thermal stability of fluororubber. This is because the carbon nanotubes have good heat conductivity and form a thermally conductive network with the molecular chains of fluororubber, which can rapidly transfer external heat to the interior of the polymer and disintegrate the structure of the polymer in advance. When the thermal network formed by the carbon fillers is improved, the thermal performance of the composite will be affected and degraded, whereas when the thermal stability of the carbon fillers is superior, the thermal properties of the composite will be improved. Therefore, after the addition of 5 phr CNTs-NH2 and 5 phr CNTs-COOH, the thermal network is superior and the thermal stability of the fluororubber decreases.
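The quoted 37.0% gain follows directly from the two conductivities; a one-line check:

```python
# Quick arithmetic check of the quoted thermal-conductivity gain.
k_composite, k_pristine = 0.2660, 0.1942   # W m^-1 K^-1, from the text
print(f"improvement = {(k_composite / k_pristine - 1) * 100:.1f}%")   # ~37.0%
```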
The carbon residue rates of the five composites at 600 °C are shown in Figure 7d. The carbon residue rate of the pristine FKM was only 4.6%. After the addition of the carbon materials, the carbon residue rate was significantly increased. Combining the above thermal conductivity and thermal stability results, the higher thermal conductivity of CNTs-NH2/CNTs-COOH/FKM can accelerate the process of carbon formation and form a dense carbon layer on the surface of the specimen faster than the several other composites, improving the carbon residue rate of the fluororubber.
Conclusions
In this study, we reported the enhancement of a carbon nanotube network based on chemical binding and F-H polar interactions in nanocomposites. The experimental results show that the mechanical, electrical, thermal, and wear resistance properties of the composites are greatly improved by the modulation of the interfacial interactions in the composites. The tensile strength, modulus at 100% strain, and hardness of CNTs-NH2/CNTs-COOH/FKM were increased to 17.4 MPa, 11.0 MPa, and 82.2, which were 112.2%, 587.5%, and 44.2% higher compared to the pristine FKM. The reason is that the cellular network structure formed by CNTs-NH2 and CNTs-COOH in the FKM matrix restricts the deformation of molecular chains. In addition, the chemical binding and F-H polar interactions make the composites better able to absorb stress without being easily damaged when subjected to external forces, thus enhancing their tensile strength, modulus at 100% strain, and hardness. However, the reduced deformation of the molecular chains apparently decreases the composites' elongation at break. The dominance of the carbon nanotube network based on strong interaction forces is also reflected in the wear resistance properties. The wear volume of CNTs-NH2/CNTs-COOH/FKM is only 87 mm³, which is 29.9% lower than that of the pristine FKM. Additionally, these two strong interfacial interactions resulted in a more robust conductive and thermally conductive network in the FKM matrix, which enhanced the electrical conductivity and thermal conductivity of CNTs-NH2/CNTs-COOH/FKM by nine orders of magnitude and 37.0%, respectively. The thermal-network effect of the 10 phr carbon materials is slightly stronger than their contribution to thermal stability, so the initial decomposition temperature of CNTs-NH2/CNTs-COOH/FKM slightly decreased by 2 °C compared with the pristine FKM, while the carbon residue rate improved by 293.5%. This effective and mild method provides a reference for the preparation of high-performance fluororubbers and opens up broad application prospects. | 6,123.6 | 2022-11-01T00:00:00.000 | [
"Materials Science"
] |
Confinement dependence of electro-catalysts for hydrogen evolution from water splitting
Summary Density functional theory is utilized to articulate a particular generic deconstruction of the electrode/electro-catalyst assembly for the cathode process during water splitting. A computational model was designed to determine how alloying elements control the fraction of H2 released during zirconium oxidation by water relative to the amount of hydrogen picked up by the corroding alloy. This model is utilized to determine the efficiencies of transition metals decorated with hydroxide interfaces in facilitating the electro-catalytic hydrogen evolution reaction. A computational strategy is developed to select an electro-catalyst for hydrogen evolution (HE), where the choice of a transition metal catalyst is guided by the confining environment. The latter may be recast into a nominal pressure experienced by the evolving H2 molecule. We arrived at a novel perspective on the uniqueness of oxide supported atomic Pt as a HE catalyst under ambient conditions.
Introduction
Molecular hydrogen produced by water splitting constitutes the archetypical energy carrier in chemistry and is a main target process for the future harvesting of solar energy. Today, water splitting represents large economic values, i.e., it comprises a significant fraction of the total industrial electric energy consumption in the USA [1]. Decisive factors jointly determining the efficiency of the electrochemical process are the reactions at the oxidizing anode as well as at the hydrogen evolving cathode. In two inspiring experimental studies [2,3], Subbaraman et al. reported enhanced hydrogen evolution activity in water splitting by tailoring TM(OH)2-Pt electro-catalyst/electrode assemblies, where TM represents Mn2+, Fe2+, Co2+ and Ni2+. The role of these hydroxides was to catalyze water dissociation. In this context, the objective of the present study is to contribute a novel descriptor for the electro-catalytic hydrogen evolution reaction (HER). It offers a complementary perspective on a recent study addressing the oxidation of zirconium alloys by water [4,5]. The overall reaction (Equation 1) is taken to occur by water utilizing hydrolysis to penetrate the oxide scale along hydroxylated grain boundaries (Equation 2); see Figure 1a. These hydroxide ions subsequently react with transition metal decorated sites (see Figure 1a) and zirconium metal to produce ZrO2 in conjunction with transient transition metal associated hydride-proton (hydroxide) pairs (see Figure 1b), restoring the ZrO2 grain boundary according to Equation 3. This can be subdivided into an anode process (Equation 4), where the [ZrIV-O-ZrIV] oxide grain boundary is recovered, and a cathode process (Equation 5), which is employed to decide the oxidation state X. The subsequent chemical drive for H2 release into the confining grain boundary determines M and recovers the [ZrIV-O-MX] site (Equation 6; cf. Figure 1c). Indeed, Equation 6 was found to be decisive for the fraction of hydrogen atoms not forming H2 but being absorbed in the Zr alloy (Equation 7). For completeness, a 1.1 eV/H2 drive to release H2 from the confining grain boundary was computed (Equation 8). Utilizing the hydride-proton recombination channel (see Figure 1b), the correlation between the computed reaction energies for the HER (Equation 6) and the experimental hydrogen pick-up fractions (HPUF), i.e., the fraction of the hydrogen that does not undergo hydrogen evolution but is instead picked up by the alloy during zirconium oxidation by water, leads to the model displayed in Figure 1e. It is noteworthy that the energetics of the chemical reaction step in Equation 6 offers a measure of the confinement-dependent cathodic overpotential for the HER along the reaction channel (Equations 2-6). The relevance of the reversible hydride-proton recombination reaction (Equation 6) has recently been proposed in the case of a nickel electro-catalyst supported by seven-membered cyclic diphosphine ligands containing one pendant amine, with the Ni supporting the hydride and the amine providing the proton in the hydride-proton recombination reaction [6].
Results and Discussion
In the following we introduce and employ the notion of "confinement effect" as a steric Pauli repulsion type interaction between H 2 and a hydroxylated interface (see Figure 1c) upon hydride-proton recombination. First, we employ this notion in the context of hydride-proton recombination reactions to demonstrate how it decides which oxidation state X of metal ion M minimizes the overpotential for the HER, as quantified by the reaction Equation 6 (cf. Figure 1d,e). Second, it is shown how the emerging understanding is naturally extended to include electro-catalysts for HER under ambient conditions.
Impact of confinement on HER during zirconium oxidation by water
To investigate the confinement effect on the HER, we consider the zirconium oxidation by water (see Figure 1d and Figure 2). In Figure 1d, the difference between the two horizontal lines at 2.9 eV and −0.2 eV corresponds to the 3.1 eV/H2 [9] thermodynamic drive for HER in the case of Zr oxidation by water under ambient conditions. Figure 1e compares the theoretical data (green; a weighted average between TM2+ and TM3+) with the experimental HPUF data (black; * from [7] and o from [8]). The black dashed line is the HPUF in pure ZrO2 from [7], and the blue dashed lines correspond to HE from Zr4+ hydride at the grain boundary with Sc3+ (top line), Na+ (middle line) and Ca2+ (bottom line) as spectators. The enlarged region of Figure 2 exposes the overpotentials for the elements in the Pt and Au groups; note that the Pt+ associated hydride displays a negative overpotential, implying that it is more stable than the H2(g) asymptote (lower dashed line). In Figure 2, the line at 0 eV represents the energy of a free H2 molecule at 0 K, and the line at 1.1 eV represents the energy cost at 0 K of bringing a free H2 molecule into the confinement represented by Figure 1c. The line at −0.2 eV is owing to the increase in entropy when a water molecule H2O(l) is consumed (−70.0 J mol−1 K−1 [10]) and an H2(g) molecule is formed (130.7 J mol−1 K−1 [10]) at 298 K and 100 kPa (compare Equation 1), while neglecting changes in entropy in Zr upon oxidation.
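A quick cross-check of the −0.2 eV level quoted above, using only the tabulated entropies (the Faraday constant is used to convert J mol−1 to eV per molecule):

```python
# Back-of-the-envelope check of the -0.2 eV entropy line: consuming H2O(l)
# (S = 70.0 J mol^-1 K^-1) and forming H2(g) (S = 130.7 J mol^-1 K^-1)
# at 298 K gives T*dS of roughly 0.19 eV per H2.
F = 96485.0                   # J mol^-1 per eV (Faraday constant as unit bridge)
dS = 130.7 - 70.0             # net entropy gain, J mol^-1 K^-1
T = 298.0                     # K
print(f"T*dS = {T * dS / F:.2f} eV/H2")   # ~0.19 eV, i.e. the -0.2 eV level
```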
Thus, a perfect electro-catalyst would exhibit an enthalpy change of 0 eV for the HER under ambient conditions. Moreover, it is inferred that a perfect electro-catalyst, which passes the HER into this hydroxylated interface via Equation 6 prior to the subsequent H2 release under ambient conditions, must display a 1.3 eV/H2 overpotential, i.e., (1.1 − (−0.2)) eV/H2. Equivalently, in the case of the HER into the interface, any residual drive towards H2 formation relative to the line at 1.1 eV/H2 corresponds to a local overpotential for the HER into the confining interface. A correlation emerges between a greater overpotential and a lower hydrogen pick-up fraction (HPUF; see Figures 1d and 1e). Thus, the well-known effect of Ni2+ in causing detrimental hydrogen pick-up was explained by its reluctance to release H2 into the hydroxylated internal inter-grain interface [4,5]. From the overall agreement between the reaction energies for Equation 6 and the experimentally reported HPUFs, it was concluded that "anti-catalysts" are preferred in order to mitigate the HPUF. In the case of zirconium oxidation by water, these "anti-catalysts" are ions which conserve significant fractions of the drive for hydrogen evolution by forming highly reactive metastable hydrides. These species are contrasted with Co2+ and Ni2+, which catalyze the HER when H2 is released into the highly constraining interface (see Figures 1d and 1e).
A stability check on the semi-quantitative validity of the model was offered by a comparison of the experimental 44% hydrogen pick-up fraction of pure zirconium (corresponding to the black horizontal dashed line at 1.5 eV/H 2 ) and model calculations for the hydride-proton recombination energetics employing the inert Na + , Ca 2+ , and Sc 3+ as "dummy" ions in the positions of the transition metal ions (see the three blue horizontal dashed lines in Figure 1e).
On HER at ambient conditions: a consistency check
According to the above understanding, which ions constitute viable electro-catalysts in the absence of confinement, i.e., at atmospheric pressure? Inasmuch as the drive for HER comprises the relaxation of the resulting oxy-hydroxy ions coordinating the transition metal ion [4], it is suggested that, besides being able to form the hydride intermediate, metals with low oxidation states and large ionic radii should be considered in order to minimize their affinities to the oxide surrounding. This characterization clearly points to the noble metal ions as candidates for electro-catalysts. Additional requirements for any successful electro-catalyst include sufficient electron conductivity of the oxide matrix supporting the catalyst as well as electric contact to the electrode itself. Finally, the "water dissociation catalysis" put forward by Subbaraman et al. [2,3] is used to infer that hydroxylated interfaces provide natural channels for proton transport to the oxy-hydroxide supported electro-catalytic site. A schematic representation of this understanding of the electrode/electro-catalyst assembly is provided in Figure 3.
Employing the above described hydroxylated inter-grain interface model as a generic supporting matrix for the electro-catalytic process, we evaluate the energetics of the hydride-proton recombination reaction and arrive at a possible descriptor for the HER, which is also applicable under ambient conditions. This facilitates a procedure for screening among candidate electro-catalysts.
Indeed, in Figure 2, a descending staircase-like curve for electro-catalysts is arrived at for the reaction energy corresponding to Equation 6. Starting at the hydroxylated zirconia inter-grain confinement, where Co2+ and Ni2+ are the obvious candidate catalysts, we approach ambient conditions step by step by considering the embedded Ag+, Ni+, Cu+, Pd+, Au+ and eventually Pt+. The Sabatier principle applies in two ways. Firstly, X in MX can be made to satisfy the requirement that the reactant in Equation 6 forms spontaneously [4,5]. Secondly, the environment confining H2 in Equation 6 can be tuned to equalize the stabilities of reactants and products, so that any drive to release H2 into a confinement is balanced owing to the replacement of the three-centered hydride by an oxy-bridge (Equation 6). It is gratifying to find that the oxy-hydroxide supported Au+ and Pt+ sites become preferred only when approaching ambient conditions, as oxides of noble metals are generally unstable, a property which is often associated with their softness. Consequently, the drive to replace the hydride ion by an oxygen ion is weak, and hence the H2 release is expected to require a loose confinement for these systems. This is in contrast to harder ions, which form more stable oxides. Interestingly, in the case of Pt+, the hydride comes out more stable than the H2(g) limit. This implies that the embedded Pt+ site could constitute an efficient absorber of H2 in the gas phase under ambient conditions, a purely chemical property. The semi-quantitative nature of the methodology does not allow for precise predictions of absolute numbers (see the horizontal "error bar" in Figure 1e). However, it may be that the overpotentials reported for the Pt-based catalysts are related to the coverage dependence of the electrochemical decomposition of the Pt+ associated hydride compound. Detailed properties of the embedding materials (e.g., electron conductivity) could cause the additional variations of the overpotential observed by Subbaraman et al. [2].
Interestingly, Pt2+ is not considered a relevant oxidation state for Pt under ambient conditions for the electro-catalytic reaction path involving the hydride-proton recombination reaction (see Figure 2). This result is due to the strong binding of Pt2+ to the oxy-hydroxide ligands upon H2 release, violating Sabatier's principle.
In conclusion, the present approach offers a complementary computational strategy to rank catalysts for HER from water splitting. The complex modelling of heterogeneous HER electro-catalysis at the interface between the composite catalyst/support and a water based electrolyte is subdivided into a chemical oxide hydrolysis step (Equation 2), an electro-chemical redox step (Equations 3, 4 and 5), followed by the chemical hydride-proton recombination step (Equation 6). This conceptual deconstruction aims to support the prediction of novel approaches to improve on existing electro-catalyst/electrode assemblies. Thus, the design of the aqueous electrolyte/substrate system impacts only the hydrolysis step (Equation 2). The oxidation state X of MX is decided by Equation 5, while the choice of the supported HER catalyst MX is determined according to Equation 6 by the confinement effect in conjunction with Sabatier's principle.
For the HER step, a recently proposed alternative to the Volmer-Heyrovsky mechanism was employed [4,5]. Rather than electron-proton discharge over an M-H moiety resulting in the conversion of 2H into H2, the HER investigated here results from a hydride-proton recombination reaction. While the protons in Equation 6 originate from hydroxides, which is unremarkable given their ubiquity in aqueous media, an observation of three-center hydride intermediates would be the sought-after "smoking-gun" evidence for the proposed mechanism.
Computational details
The Perdew-Burke-Ernzerhof generalized gradient approximation (PBE GGA) [11] as implemented in the DMol3 engine [12,13] in the Materials Studio program package [14] was employed in conjunction with a double-ζ numerical basis set with an extra polarization function on each heavy atom and a p-function on each hydrogen atom. Systematic spin polarized calculations were performed. A 4 × 4 × 1 k-point set for sampling the Brillouin zone was compared to a 2 × 2 × 1 k-point set, and the latter was found to suffice. In order to reduce the computational effort, inert electron shells were described effectively by means of semi-core pseudopotentials. Zero-point corrected free energies were compared to non-corrected reaction energies, and the differences were deemed negligible.
The grain boundary model (cf. Figure 1a) was constructed by inserting one unit cell of monoclinic ZrO(OH) 2 (5.4 Å × 10 Å × 5.4 Å) in between two supercells of monoclinic ZrO 2 (5.4 Å × 10.8 Å × 5.4 Å), where the unit cell doubling is in the b-direction. The stability of the model has been extensively investigated, including full geometry optimization, when arriving at the foundation of [4]. The choice of the grain boundary model is far from unique. Here, it is the success in reproducing the experimental volcano shape curve (cf. Figure 1d) which renders the present investigation meaningful. The grain boundary model employed to evaluate the reaction energy of Equation 6 was subjected to periodic boundary conditions, where the studied super-cell contained approximately 50 atoms. The number of hydrogen atoms, i.e., hydroxides and hydride, varied. This was because the oxidation states of the transition metal ions were controlled by adding (removing) hydrogen atoms to (from) the super-cell. This way, neutral super-cells were employed in all cases. When evaluating Equation 6, all bond distances and bond angles associated with atoms in the super-cell were optimized, while the super-cell dimensions were kept constant. | 3,469.2 | 2014-02-24T00:00:00.000 | [
"Chemistry"
] |
Probabilistic analysis and resistance factor calibration for deep foundation design using Monte Carlo simulation
The method of incorporating the sources of parameter uncertainty is crucial when conducting probabilistic analysis for service limit state (SLS) design of a deep foundation. This paper describes the use of Monte Carlo simulation for probabilistic analyses and for calibration of resistance factors of drilled shafts at the SLS. The paper discusses the finding of an impossible case, in which certain combinations of load, variability of soil strength, and target probability of failure make it impossible to calibrate the SLS resistance factors. Resistance factors for drilled shafts in shale are introduced and were found to be sensitive to load level: the higher the load level, the lower the resistance factor. These findings help smooth the transition from allowable stress design to load and resistance factor design for geotechnical engineers.
Introduction
Geotechnical engineers have been working to transition from allowable stress design (or working stress design), which has been used for many years, to load and resistance factor design (LRFD). In allowable stress design, every input parameter is treated as deterministic, and the uncertainty in each design step is combined into one global factor called the "factor of safety." In LRFD, a design starts with identifying all possible failure modes or limit states. The design reaches a limit state when a component of the structure does not fulfill its prescribed function. The LRFD limit states often are separated into ultimate limit state (ULS) and service limit state (SLS) categories. The ultimate limit state relates to geotechnical strength failures, for example, when the applied load is equal to the resistance. The SLS is reached when a component of the structure deforms beyond a prescribed amount, for example, when the vertical displacement of a drilled shaft is larger than the prescribed limiting settlement. In a general form, the performance function, denoted as g, is the difference between the nominal resistance R and the nominal load Q, as in Eq. (1): g = R − Q (1). When the performance function g is equal to or less than zero, it defines an unsatisfactory performance region; if g is larger than zero, this indicates a satisfactory performance region. For probabilistic analyses, the resistance R and load Q are probabilistic parameters, each having its own distribution, as shown in Fig. 1 (adapted from Allen et al., 2005). The overlap area under the two curves in Fig. 1 is associated with the failure region, which refers to the probability of failure for the design. Methods to evaluate the SLS for deep foundations have been proposed. Zhang and Chu (2009) proposed partial factors to satisfy serviceability limits for several different settlement prediction methods, with target reliability indices ranging from 1.0 to 2.5. The partial factors are roughly equivalent to resistance factors that range from 0.2 to 0.5; however, the partial factors are strictly only appropriate for use with nominal working loads equal to 50 percent of the ultimate foundation capacity. Resistance factors proposed by Misra and Roberts (2009) for establishing an allowable shaft capacity at the SLS range from approximately 0.25 to 0.55 for a target reliability index of 2.6. However, the resistance factors were found to depend on foundation length and diameter in addition to the variability of the soil-shaft interface resistance. Phoon et al. (1995) similarly proposed deformation factors (i.e., SLS resistance factors) for drilled shafts in medium, stiff, and very stiff clay with different coefficients of variation (COV) for undrained shear strength. The proposed factors ranged from 0.48 to 0.65, but are strictly appropriate for a target reliability index equal to 2.6.
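Eq. (1) and the overlap region in Fig. 1 invite a direct numerical illustration. Below is a minimal sketch that samples R and Q from assumed lognormal distributions and counts the fraction of samples with g = R − Q ≤ 0; the means and COVs are hypothetical placeholders, not values from this paper:

```python
# Minimal Monte Carlo sketch of the probability of failure implied by Fig. 1:
# sample resistance R and load Q from assumed lognormal distributions and
# count how often the performance function g = R - Q falls at or below zero.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def lognormal_from_mean_cov(mean, cov, size):
    """Sample a lognormal variable given its arithmetic mean and COV."""
    sigma2 = np.log(1.0 + cov**2)
    mu = np.log(mean) - 0.5 * sigma2
    return rng.lognormal(mu, np.sqrt(sigma2), size)

R = lognormal_from_mean_cov(mean=2000.0, cov=0.40, size=n)   # resistance, kN
Q = lognormal_from_mean_cov(mean=1000.0, cov=0.15, size=n)   # load, kN
pf = np.mean(R - Q <= 0.0)                                   # P(g <= 0)
print(f"estimated p_f = {pf:.4f}")
```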
Because of these constraints and challenges, current AASHTO LRFD provisions (AASHTO, 2014;Brown et al., 2010) adopt load and resistance factors of unity for SLS design. This position realistically reflects temporary adoption of historical design practices because of the current lack of practical methods for implementing probabilistically calibrated load or resistant factors for the SLS. This paper describes a proposed procedure that allows SLS design to be performed to achieve some desired target reliability without requiring case-specific calibration or more rigorous reliability-based design.
Background
Several probabilistic approaches are used in reliability-based design and in LRFD resistance factor calibration. The most frequently used methods are the first-order second-moment (FOSM) method, the first-order reliability method (FORM), and the Monte Carlo simulation method. Details about these methods are described in the literature (Ang and Tang, 2004; Baecher and Christian, 2003; Griffiths and Fenton, 2007; Harr, 1987). FOSM is based on a Taylor series expansion of a performance function (Baecher and Christian, 2003). FORM is a linear approximation of a limit state (Phoon et al., 2003; Phoon and Kulhawy, 2008), which utilizes the performance function g, defined as zero at the limit state. The approach is based on the assumptions that all input parameters are normally distributed and that the limit state is also a normally distributed variable. FOSM and FORM cannot be used with different types of variable distributions, and the two approaches usually provide only 'first order' approximations. The Monte Carlo simulation method utilizes random number simulation to extrapolate probability density function values (Baecher and Christian, 2003; Harr, 1987). The inputs for a simulation process for a variable are its mean value, either its standard deviation or coefficient of variation (COV), and its type of distribution. Any input can be set as a probabilistic variable if its mean value, standard deviation or COV, and distribution function type are provided. According to Baecher and Christian (2003), the Monte Carlo technique has the advantage that it is relatively easy to implement on a computer and can deal with a wide range of functions.
The major disadvantage is that the results may converge very slowly. As stated by Allen et al. (2005), when "a closed-form solution is either not available or is considered too approximate, Monte Carlo simulation can be performed." The Monte Carlo simulation method is more flexible and rigorous, and if enough simulations are generated, the results approach exact solutions; thus, the Monte Carlo simulation method was used in this research for probabilistic analyses.
Shaft head displacement calculation using the t-z method: The load transfer method, or t-z method, is often used to calculate shaft head displacement (O'Neill and Reese, 1999; Misra and Roberts, 2009; Brown et al., 2010). The method requires predictive models for ultimate unit side and tip resistance as well as load transfer models to predict the mobilization of resistance along the shaft. Models for the ultimate unit side and tip resistance (Eqs. (2) and (3)) were developed from a large collection of load test measurements for full-scale drilled shafts founded in shale throughout the state of Missouri (Loehr et al., 2011).
where qs is the ultimate unit side resistance and qp is the ultimate unit tip resistance. The variability and uncertainty associated with these models were quantified by coefficients of variation of 0.66 and 0.25, respectively. Load transfer models were developed from measurements for a large collection of full-scale load tests on shales in the states of Missouri, Kansas, and Colorado (Vu, 2013). Models for the unit side and tip response (Eqs. (4) and (5)) were drawn from this work, where t and q are the normalized unit side and tip resistance, respectively; z and w are the normalized displacements along the shaft side and at the tip, respectively; and a and b are fitting parameters derived from the load test measurements. The standard deviation of the t-z model is 0.17, and that of the q-w model is 0.14 (Vu, 2013).
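The bodies of Eqs. (4) and (5) are not reproduced above, so as an illustration only, the sketch below assumes a common hyperbolic load transfer form, t = z/(a + b·z); the fitting constants are hypothetical, not the values fitted by Vu (2013):

```python
# Illustrative t-z curve. The actual forms of Eqs. (4) and (5) are not
# reproduced in the text; the hyperbolic shape t = z / (a + b*z), a common
# choice for load transfer models, is assumed here with made-up a, b.
import numpy as np

def t_of_z(z, a=0.02, b=0.95):
    """Normalized mobilized side resistance t at normalized displacement z,
    assuming the hyperbolic form t = z / (a + b*z)."""
    z = np.asarray(z, dtype=float)
    return z / (a + b * z)

for z in (0.01, 0.05, 0.2, 1.0):
    print(f"z = {z:4.2f} -> t = {t_of_z(z):.2f}")   # approaches ~1/b at large z
```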
Methods
The factored strength approach (Becker, 1996; Salgado, 2008), in which the geomaterial strength is factored, was used in this research because it offers greater flexibility and a potential for greater precision, since the resistance factors are easily related to the variability and uncertainty present in relevant design parameters (Becker, 1996; Vu and Loehr, 2015, 2017). Most design methods for drilled shafts in shale/rock are based on the uniaxial compressive strength (UCS). The SLS resistance factor, φ, is therefore applied to UCS to account for the uncertainty present in a design. The factored uniaxial compressive strength, UCS*, is calculated as UCS* = φ · UCS (6). Then UCS* is used as an input for the t-z method to determine the factored shaft head displacement, y*, in the same manner as the traditional approach of using UCS to determine the shaft head displacement y. The SLS design check is then based on the requirement that the factored displacement, y*, be less than some established allowable or limiting settlement, y_a, so the SLS is enforced by the criterion (Vu and Loehr, 2017): y* ≤ y_a (7). SLS resistance factors were calibrated using a computer program written in MATLAB® to implement load transfer analyses using the finite element method and the Monte Carlo simulation technique (Vu, 2013; Vu and Loehr, 2017). The program computes the vertical displacement at the top of the foundation under a given probabilistic load based on the following proposed procedure:
1. Generate probabilistic values for dead load (DL), live load (LL), shaft stiffness (EA), material strength (UCS), and ultimate unit side and tip resistance (qs and qp, respectively) according to the specified distributions of the parameters. Randomly generated values of EA, UCS, and qs differ from element to element of the shaft;
2. Generate probabilistic load transfer (i.e., t-z and q-w) functions for each element according to the variability and uncertainty associated with the load transfer functions;
3. Determine the foundation displacement for each set of probabilistic parameter values;
4. Establish the number of "SLS failure cases", n_f, associated with the predetermined target probability of failure p_f;
5. Determine the factored displacement, y*, corresponding to the number of SLS failure cases, n_f, by sorting the computed displacements in descending order and taking the (n_f + 1)-th displacement value as y*;
6. Determine the factored strength UCS* such that, with all other parameters set to their mean values, the computed displacement is equal to the value of y*;
7. Compute the SLS resistance factor as in Eq. (8): φ = UCS*/UCS (8).
The procedure was developed so that, if a design uses a factored UCS* and satisfies Eq. (7), the design will achieve the predetermined target probability of failure.
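The steps above can be condensed into a sketch. Since the actual MATLAB load-transfer program is not available here, a hypothetical monotone settlement model settle(UCS) stands in for the t-z finite element analysis, and Eq. (8) is assumed to take the form φ = UCS*/UCS; every number below is a placeholder:

```python
# Sketch of steps 4-7 of the calibration procedure. The t-z finite element
# analysis is replaced by a stand-in settlement model so that only the
# calibration logic is illustrated; all numerical values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

def settle(ucs_kpa):
    """Stand-in for the load-transfer analysis: settlement falls as UCS grows."""
    return 2500.0 / ucs_kpa                  # mm; placeholder monotone model

ucs_mean, ucs_cov, n, pf_target = 400.0, 0.30, 30_000, 0.01

sigma2 = np.log(1.0 + ucs_cov**2)            # lognormal UCS samples (step 1)
ucs_samples = rng.lognormal(np.log(ucs_mean) - 0.5 * sigma2, np.sqrt(sigma2), n)

y = np.sort(settle(ucs_samples))[::-1]       # steps 3-4: displacements, descending
n_f = int(pf_target * n)                     # number of "SLS failure cases"
y_star = y[n_f]                              # step 5: the (n_f + 1)-th value

lo, hi = 1.0, ucs_mean                       # step 6: bisect for UCS* such that
for _ in range(60):                          # settle(UCS*) = y_star at mean inputs
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if settle(mid) > y_star else (lo, mid)
ucs_star = 0.5 * (lo + hi)

print(f"phi = UCS*/UCS_mean = {ucs_star / ucs_mean:.2f}")   # step 7, Eq. (8)
```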
Input Parameters for Probabilistic Analysis of SLS: For an SLS design based on the t-z method, there are a total of 11 deterministic and probabilistic variables, resulting in a total of 24 inputs, not including type of probabilistic distribution. The shaft length and the probability of failure are the only two variables that are considered deterministic. All 24 inputs are listed below: a) Geomaterial strength and its variability/uncertainty, represented by its coefficient of variation (two inputs); b) Dead load and its variability/uncertainty (two inputs); c) Live load and its variability/uncertainty (two inputs); d) Shaft length, considered deterministic (one input); e) Shaft diameter and its variability/uncertainty (two inputs); f) Concrete Young's modulus and its variability/uncertainty (two inputs); g) Probability of failure, considered deterministic (one input); h) t-z and q-w fitting parameters (four inputs for two pairs of fitting parameters) and their standard deviations (two inputs), and; i) Ultimate unit side resistance (two inputs for two parameters), ultimate unit tip resistance (two inputs) and their coefficients of variation (two inputs).
Monte Carlo Simulation: Inputs for a Monte Carlo simulation of a variable include the variable's mean value, coefficient of variation (COV) or standard deviation, and its distribution type. Monte Carlo simulation uses random number simulations to establish the probability density function of parameter values for every probabilistic variable (Fig. 2). All 24 listed inputs were used to generate the probability density function of the output. For one simulation, one value is taken from the probability density function of each probabilistic parameter, and then, along with the other deterministic variable values, all values are put into a load-transfer model to calculate one value of shaft head vertical displacement. The process is repeated for n simulations to obtain the shaft head displacement probability density function.
Since Monte Carlo simulation method is an approximate method, its accuracy is largely dictated by the number of simulations n that are performed. Allen et al. (2005) stated that 5,000 to 10,000 simulations or more are needed to adequately define the distribution of the limit state function for a probability index of b T also ¼ 2.3 to 3.0, which is greater than is usually required for SLS. Harr (1987) used the binominal distribution function and reliability theory to show that if it is desired that "the Monte Carlo simulation not to differ by more than 1% from the estimated value with 99% confidence", 16,641 trials would be required. Two examples were set up to determine an SLS resistance factor, which is an indirect measure related to the reliability or probability of failure (Vu, 2013), with the resulting resistance factors are plotted in Fig. 3. The resistance factors are almost identical when the number of simulations exceeds 5000. In this research, the number of simulations was chosen to be 30,000.
Random number generation of variables: In geotechnical engineering, the most frequently used distribution types for probabilistic variables are the normal and log-normal distributions (Phoon et al., 1995;Duncan, 2000;Baecher and Christian, 2003;Allen et al., 2005). The appeal of the normal distribution is that it is mathematically convenient; it accurately reflects many measurements, and it is commonly used in practice. The normal distribution is bell-shaped (Fig. 4). However, the normal distribution often includes some negative values, which are impractical and unacceptable for many SLS design problems. The log-normal distribution type reflects data where the natural logarithms of the data are normally distributed. The shape of the distribution is an eccentric bell with a much longer tail ( Fig. 4). This type of distribution is strictly non-negative and is used more often. In this research, the types of distributions for the input variables are chosen based on field test data, or taken from well-established literature.
Method to Generate Normally Distributed Parameter Values:
If the mean m , standard deviation s , and distribution type of a parameter are known, the Monte Carlo approach can simulate n numbers of random parameter values that have the same mean, standard deviation and distribution type. In MATLAB Ò , for a variable that is normally distributed with mean m and standard deviation s, a random parameter value set from n simulations can be produced using the Eq. (9): where randn is a command to generate an array of n random numbers that have standard normal distribution with a mean of zero and standard deviation of unity.
If the data are highly variable and the standard deviation s is large, it is possible for the process to produce negative values that are non-real. Generated negative values are replaced with positive, near-zero values (10 À6 for all cases). This approach is more beneficial than to assume a lognormal distribution even though the data show normal distribution characteristics, which is the common practice found in literature.
Method to Generate Generation of Lognormally Distributed Parameter Values:
In order to generate a set of a variables L, which are lognormally distributed with mean m and standard deviation s, a transformation step must be performed (Eqs. (10), (11), (12), and (13)). The logarithm of the variable L is N ¼ lnðLÞ: N is a normally a distributed variable with mean l and standard deviation x. The relationships between the mean l and standard deviation x of variables L and mean m and standard deviation s of the normally distributed variable N are: The variable N can be generated using following function: The final step is to obtain the data set L is by taking the exponential of the values in N: Possible and Impossible Case: Cases have been reported wherein randomly simulated loads were higher than the randomly simulated shaft resistances (or shaft capacities). When this happens, the shaft head displacement for these cases cannot be calculated. If the number of these cases is larger than the target probability of failure, as illustrated in Fig. 5, no resistance factor can be obtained to achieve the SLS target probability of failure. This situation is called the "impossible" case. The low shaft resistance comes from a combination of resistance components, such as small t value (from t-z model) or small UCS.To illustrate this concept, an example of 300 Monte Carlo simulations was run to obtain a histogram of resulting shaft displacements in a percentage of the shaft diameter (%D). For a simulation when the randomly generated load was higher than the randomly generated capacity, the solution for that simulation did not converge, and the displacement was assigned an arbitrary large displacement, i.e., 14% of the diameter. Out of 300 simulations, there were 16 simulations where loads were higher than shaft capacity as in Fig. 5 (right).
If the target P f was of 1/100 (<16/300), then the resistance factor cannot be determined; however, if the target P f is 1/15 (>16/300) then the resistance factor can be obtained. This research uses normalized load q, which is the ratio of load (sum of dead load and live load) and the nominal shaft capacity (Vu and Loehr, 2017). When the normalized load is high, and uncertainty and variability of the resistance of soil/ shale properties also are high, the load distribution and the resistance distribution "move" closer together (Fig. 1), the resistance distribution becomes wider, and the overlap area becomes larger. This means that the failure cases are more likely to occur.
In an SLS design, if the designed shaft has conditions of possible case, three different ways exist whereby conditions can be moved into the impossible case. The designer could increase the shaft length or diameter, so the normalized load q is reduced where technically the resistance distribution is shifted farther from the load distribution. Theoretically, the designer can change the COV of UCS by conducting more site exploration tests, or the designer can increase the target probability of failure P f , although this is not practical.
The impossible case for a certain normalized shaft length is formed by a combination of normalized load q, COV of UCS and the target probability of failure P f . The boundary of the case was found by making the number of the impossible cases equal to the SLS probability of failure. The case boundaries for a normalized shaft length L/D of 10.0 are presented in Fig. 6 and Table 1. As shown in Fig. 6, four curves are associated with four target probabilities of failure. The left-and-under area of each curve is a possible case area, while the right-and-above area is the impossible case area. Here, the target probability of failure cannot be achieved no matter how small the resistance factor is, and the case is unfavorable for a design.
Results & discussion
Resistance factors were calibrated for drilled shafts at SLS at different P f , L/D, normalized load, and COV of UCS. The inputs and sources are presented in Table 2, only shaft diameter and length are considered deterministic. Fig. 7 presents resistance factors for P f ¼ 1/25, L/D ¼ 10. More resistance factors can be found at Vu and Loehr (2017). The resistance factors appear to be low, ranging from 0.10 to 0.36. High variability in the load transfer models and predictive models, together with accounting for more sources of uncertainty in the calibration process, can explain the lower values of these SLS resistance factors for individual drilled shafts in shale. However, an SLS resistance factor that is calibrated while accounting for fewer probabilistic parameters will produce an unconservative design.
As observed in this study, the resistance factor is dependent on load: the higher the normalized load is, the lower the resistance factor is. At a COV of UCS equal to zero, the resistance factor significantly decreases from 0.36 to 0.21 when the normalized load q varies from 0.40 to 0.15. The curve for a higher normalized load of 0.4 truncates when COV of UCS is 0.1, meaning that the target probability of failure, (2015) Ultimate unit tip resistance, q p q p ¼ 43$UCS 0:71 0.25 - Vu and Loehr (2015) 0 higher COV of UCS, and the impossible case is less likely to occur. Fig. 8 can be used to qualitatively explain how the resistance factor is dependent of load, as it displays a highly nonlinear relationship of normalized load versus displacement. The effect of changing the load is inversely proportional to the effect of changing the UCS. As for the strength factor approach, the resistance factor is used to factor or reduce UCS to increase the nominal value of the settlement to the factored settlement y* that is associated with the target probability of failure. The effect of reducing the UCS is similar to the effect of increasing the load. As in Fig. 8, to obtain the same increasing amount of displacement Dd ¼ Dd 1 ¼ Dd 2 , the required change in the normalized load Dq 1 in the flatter zone is much larger than the Dq 2 in the steepter zone. This means that less change in normalized load is required in the flatter zone.
The reduced change in normalized load is analogous to less change in UCS (recall they are inversely proportional), and the less change in UCS means a higher resistance factor is needed to obtain the factored displacement y*, or the resistance factor is higher for the higher normalized load. Since the resistance factor was determined to be a function of normalized load, the design of SLS for drilled shafts becomes a cumbersome process, meaning the engineers need to obtain different resistance factors for different loading or nominal shaft capacity which in turn depends in shaft dimensions. This is possibly the reason why Zhang and Chu (2009) proposed SLS resistance factors strictly only for use with nominal working loads equal to 50 percent of the ultimate foundation capacity, and the resistance factors by Misra and Roberts (2009) were proposed only for specific foundation dimensions. The design procedure proposed presented below overcomes this cumbersomeness and provides a flexibility in design of drilled shaft foundation at SLS.
The results from resistance factor calibration can be used in the following procedure for the design of drilled shafts in shale at the SLS. The procedure is flexible and easy to use, and contains the following five steps: 1. Obtain initial shaft dimensions using strength limit state criteria. From the dimensions, calculate the nominal shaft capacity, R n .
2. Determine normalized load, q based on the factored load for the SLS.
4. Compute factored shaft head vertical displacement, y * using t-z method (can use any software tools e.g., in-house computer codes or commercial software that can model the load transfer response represented by Eqs. (2), (3), (4), and (5) as inputs).
5. Compare y * to the established allowable settlement, y a . If the design requirement (Eq. (7)) is not met then repeat Steps 1 to 5 for increasing drilled shaft dimensions (mostly shaft length) until the design requirement is met.
Example
A drilled shaft is founded in shale with a mean UCS of 500 kPa, coefficient of variation is COV is 0.1 (Fig. 9). Dead load (DL) is 3780 kN and live load (LL) is 1890 kN. The allowable displacement y a , is 15 mm, and the target probability of failure is 1/25. The problem is solved following the 5-step procedure: 1) Use strength limit state requirements to determine initial shaft dimensions: shaft diameter of D ¼ 1.52 m and shaft length of L ¼ 15.2 m. The nominal shaft resistance, R n , is then calculated as 22,900 kN.
Conclusions
Monte Carlo simulation method for probabilistic analyses and for calibration of resistance factors for drilled shafts at SLS is introduced. Recommendations and observations were made advocating random number generation using Monte-Carlos simulations. A discussions on the finding of an impossible case in which resistance factors cannot be calculated in some circumstances is presented. Resistance factors for drilled shafts in shale are introduced, and were found to be responsive to normalized load, and the higher the normalized load, the lower the resistance factor.
Declarations
Author contribution statement Thuy Vu: Conceived and designed the experiments; Analyzed and interpreted the data; Wrote the paper. | 5,635 | 2018-08-01T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Geology"
] |
Information Dark Energy Can Resolve the Hubble Tension and Is Falsifiable by Experiment
We consider the role information energy can play as a source of dark energy. Firstly, we note that if stars and structure had not formed in the universe, elemental bits of information describing the attributes of particles would have exhibited properties similar to the cosmological constant. The Landauer equivalent energy of such elemental bits would be defined in form and value identical to the characteristic energy of the cosmological constant. However, with the formation of stars and structure, stellar heated gas and dust now provide the dominant contribution to information energy with the characteristics of a dynamic, transitional, dark energy. At low redshift, z < ~1.35, this dark energy emulates the cosmological constant with a near-constant energy density, w = −1.03 ± 0.05, and an energy total similar to the mc2 of the universe’s ∼1053 kg of baryons. At earlier times, z > ~1.35, information energy was phantom, differing from the cosmological constant, Λ, with a CPL parameter difference of ∆wo = −0.03 ± 0.05 and ∆wa = −0.79 ± 0.08, values sufficient to account for the H0 tension. Information dark energy agrees with most phenomena as well as Λ, while exhibiting characteristics that resolve many tensions and problems of ΛCDM: the cosmological constant problem; the cosmological coincidence problem; the H0 tension, and the σ8 tension. As this proposed dark energy source is not usually considered, we identify the expected signature in H(a) that will enable the role of information dark energy to be falsified by experimental observation.
Introduction
Many features of the universe are consistent with the standard ΛCDM model. However, as measurements improve in accuracy, a significant difference or tension has been found between the early and late universe Hubble constant, the H 0 values. Planck measurements of the Cosmic Microwave Background, CMB, originating from redshift, z∼1100, provide a ΛCDM model-dependent value for today's Hubble Constant, H 0 , of 67.4 ± 0.5 km s −1 Mpc −1 [1]. Independent CMB measurements by the Atacama Cosmology Telescope [2] support Planck with a H 0 value of 67.9 ± 1.5 km s −1 Mpc −1 . These values are also consistent with those derived from Baryon Acoustic Oscillations [3].
In contrast to these early universe measurements, H 0 measured in the late universe by a wide variety of techniques yields values closer to 73 km s −1 Mpc −1 . 'Standard candles', provided by type 1a supernovae and Cepheid variable stars, offer distance ladders that yield a value of 74.03 ± 1.42 km s −1 Mpc −1 [4,5]. A recent comprehensive analysis has further confined the value to 73.04 ± 1.04 km s −1 Mpc −1 [6]. Other methods have been devised to be independent of standard candle/distance ladder techniques. For example, time delay measurements of multiple-imaged quasars due to strong gravitational lensing [7] provide a value of 73.3 + 1.7/−1.8 km s −1 Mpc −1 , differing by 5.3σ from the early universe values. Moreover, the size of edge-on galaxy discs have been determined by the geometry of water maser action occurring in those discs [8], yielding 73.9 ± 3 km s −1 Mpc −1 , a value greater than the early universe value with a 95-99% level of confidence. Another technique using measurements of infrared surface brightness fluctuation distances in galaxies [9] provides a value of 73.3 ± 2.4 km s −1 Mpc −1 , again consistent with other late universe values.
Initially, it was thought that this H 0 tension between the early and late universe might be due to systematic errors [10], but over the last couple of years, the many late universe measurements have become more precise and consistent. The persisting H 0 tension implies a problem or tension with the ΛCDM model, even suggesting new physics beyond ΛCDM [4,11,12]. Possible causes of tension include: a late dynamic dark energy; a universe with non-zero curvature; dark matter interaction; an early dark energy; and additional relativistic particles. Natural theoretical values for the cosmological constant, Λ, differ by several orders of magnitude from the value required to explain accelerating expansion. A likely solution to the tension might then be achieved by replacing Λ with a time-dependent dark energy that has a present energy density compatible with the acceleration.
In this paper, we revisit the role of information energy as the source of such dynamic dark energy [13]. Section 2 reviews the expected present information energy density. Section 3 updates previous work with the latest stellar mass density measurements to determine the equation of state parameters. Section 4 shows that the earlier phantom period can quantitatively account for the observed H 0 tension. Section 5 identifies future measurements necessary to confirm or refute this proposed source of dark energy. The discussion in Section 6 shows that information energy could also resolve the cosmological constant problem and the cosmological coincidence problem.
Information Energy as Dark Energy
Landauer's Principle provides an equivalent energy for each bit of information or bit of entropy. Landauer showed that information is physical since the erasure of a bit of information in a system at temperature, T, results in the release of a minimum k B T ln(2) of energy to the system's surroundings [14,15]. Landauer's principle has now been experimentally verified for both classical bits and quantum qubits [16][17][18][19].
A foundational principle has been proposed by Zeilinger [20], whereby the attributes of all particles can be considered at their most basic level as elemental systems with an information content of one bit or qubit. There is a strong similarity between the information energy of such an elemental bit in the universe and the characteristic energy of the cosmological constant [21]. Today, our universe is dominated by dark energy and all matter (baryon + dark), approximately at the ratio of 2:1. If there was no star formation, a representative temperature for matter could be considered to be provided by the temperature, T u , of a radiation component with the same energy density as all matter: where the radiation density constant α = 4σ SB /c, σ SB is the Stefan-Boltzmann constant, and ρ tot is our universe's total matter density (baryon+dark). σ SB is further defined in terms of fundamental constants: Then, we obtain the Landauer equivalent energy of an elemental bit of information in a universe at temperature T u : This Landauer bit energy is defined identically to the characteristic energy of the cosmological constant. The right-hand side of Equation (3) is identical to Equation 17.14 of [22] for the characteristic energy of the cosmological constant-with the sole addition of ln (2) to convert between entropy units-between natural information units, nats, and bits. Information bit energy might then explain the low milli-eV characteristic energy of Λ, which Peebles [22] considered to be 'too low to be associated with any relevant particle physics'.
Clearly, the universe is not a simple, single system, and here, we follow a phenomenological approach, taking into account star formation and other information energy contributions. Table 1 [23,24], typical temperatures, equivalent information energy total, and that total information energy relative to the universe's total baryon, mc 2 . It is evident from Table 1 that at present, stellar heated gas and dust make up the dominant information energy contribution, as other sources of information energy are miniscule in comparison. The N∼10 86 bits at typical gas and dust temperatures, T∼10 7 , have an equivalent total N k B T ln(2) energy of ∼10 70 J, directly comparable to the ∼10 70 J equivalent mc 2 energy of the universe's~10 53 kg baryons. Stellar heated gas and dust information equivalent energy should then be included alongside the mc 2 equivalence of matter in universe energy accounting. Information energy from stellar heated gas and dust could account for today's dark energy density using accepted physics, relying solely on the experimentally proven Landauer's Principle, combined with realistic entropy estimates, and without invoking any new physics at this stage.
The information energy of stellar heated gas and dust has previously been shown [13,[25][26][27] to provide a near-constant dark energy density in the late universe, effectively emulating a cosmological constant; at earlier times, this dark energy contribution was phantom. The overall time variation-present constant energy density plus earlier phantom-has been shown to be consistent with the Planck-combined datasets [13]. In the next section, we update that previous work by including more recent stellar mass density measurements.
Dynamic Information Energy: Time History
In order to include the information energy of stellar heated gas and dust in the accounting of the universe's energy, we need to describe its variation over time by identifying how total bit number, N(a), and typical temperature, T(a), vary as a function of the universe scale size, a, related to redshift, z, by a = 1/(1 + z).
Firstly, we assume that within any sufficiently large volume, the average temperature, T(a), representative of the stellar heated gas and dust, varies in proportion to the fraction, f(a), of baryons that have formed stars up to that scale size. We can determine the history of f(a) by the plotting in Figure 1, a survey of measured stellar mass densities per co-moving volume as a function of scale size, a.
determine the history of f(a) by the plotting in Figure 1, a survey of measured stellar mass densities per co-moving volume as a function of scale size, a.
In Figure 1, there is a significant change around redshift, z∼1.35, from a steep gradient in the past to a weaker gradient in recent times. Fitting power laws to data points on either side of z = 1.35, we find power laws of a +1.08 ± 0.16 for z < 1.35, and a +3.46 ± 0.23 for z > 1. 35. Then, we assume the average stellar heated gas and dust baryon temperature, T, proportional to the fraction of baryons in stars, f(a), also varied as a +1.08 ± 0.16 for z < 1.35, and a +3.46 ± 0.23 for z > 1. 35. Measured mean galactic electron temperatures over the range 0 < z < 1 [58] show a similar temperature time variation as Figure 1, supporting our use of stellar mass densities as a proxy for the gas and dust temperature time variation.
In Figure 1, there is a significant change around redshift, z∼1.35, from a steep gradient in the past to a weaker gradient in recent times. Fitting power laws to data points on either side of z = 1.35, we find power laws of a +1.08±0.16 for z < 1.35, and a +3.46±0.23 for z > 1. 35. Then, we assume the average stellar heated gas and dust baryon temperature, T, proportional to the fraction of baryons in stars, f(a), also varied as a +1.08±0.16 for z < 1.35, and a +3.46±0.23 for z > 1. 35. Measured mean galactic electron temperatures over the range 0 < z < 1 [58] show a similar temperature time variation as Figure 1, supporting our use of stellar mass densities as a proxy for the gas and dust temperature time variation.
We consider two possibilities for the time variation of total stellar heated gas and the dust bit number, N(a). In the first case, we assume that N(a) simply varies, directly proportional to volume, as a 3 . In the second case, we assume that the total bit number of any large co-moving volume is governed by the Holographic Principle [59][60][61] and varies with the volume's bounding area as a 2 . While the Holographic Principle is generally accepted for black holes at the holographic bound, the holographic bound of the universe is ∼10 123 bits, and the general principle remains only a conjecture for universal application to cases well below the holographic bound [61].
We wish to compare the time variation of these information energy models against that of the cosmological constant. The Friedmann equation [62] expresses the Hubble parameter, H(a), in terms of its present value, the Hubble constant, H 0 , and dimensionless energy density parameters, Ω, expressed as a fraction of today's total energy density. Assuming that the curvature term is zero, and that the radiation term has for some time been negligible compared to the other terms, the ΛCDM model is described simply by Equation (4).
for present energy fractional contributions, Ω tot from all matter (dark+baryons), and Ω Λ from the cosmological constant. Total information equivalent energy, given by N k B T ln (2), is proportional to both N(a) and T(a); it is thus proportional to a 3 f(a) in the volume model, and proportional to a 2 f(a) in the holographic model. These correspond to the information energy density terms Ω IE (f(a)/f (1)) and Ω IE (f(a)/f(1)) a −1 , respectively. Then, if the cosmological constant is negligible and information energy provides the sole source of dark energy, we obtain Equations (5) and (6).
Information-Holographic model: In Figure 2, we compare the effects of these two models for an information energy source of dark energy against the cosmological constant using the present ΛCDM values, setting Ω tot = 0.32 and Ω Λ = Ω IE = 0.68, and applying the power-law fits in Figure 1 for f(a).
We can see from the upper plot of Figure 2 that the total energy density of the holographic model and that of the cosmological constant nearly coincide, while that for the volume model clearly predicts significantly different total energy densities. The lower plot of Figure 2 emphasizes these differences by plotting the percentage difference in the expected Hubble parameter for the information energy models relative to that of the cosmological constant model. The volume model differs significantly from the cosmological constant, peaking at 7% around a = 0.67. Such a difference, at z = 0.5, from that expected for a cosmological constant should have easily been observed directly by existing expansion measurements, and for this reason, we hereafter concentrate on the holographic model. The holographic model difference is less than 1% for a > 0.4 and for a < 0.2, peaking at only 1.8% around a = 0.33.
The time variation of a dark energy density is proportional to a −3(1+w) , where w is the equation of the state parameter for that dark energy. Recently, z < 1.35, T(a) varied as a +1.08±0. 16 , N(a) as a +2 , hence total stellar heated gas and dust information energy varied as a +3.08±0. 16 , providing a near-constant energy density varying only as a +0.08±0.16 , corresponding to the equation of the state parameter, w = −1.03 ± 0.05. Then, the information energy of stellar heated gas and dust in the recent period has the characteristics of dark energy, since f(a) closely follows the a +1 gradient that would lead to a near-constant information energy density and emulate a cosmological constant, w = −1. Thus, information energy can provide a quantitative account of dark energy, accounting for both the present energy value, ∼10 70 J, and the recent period of near-constant energy density.
In comparison, during the earlier period, z > 1.35, T varied as a +3.46±0.23 , total information energy varied as a +5.46±0.23 , providing a phantom energy with an energy density increasing as a +2. We can see from the upper plot of Figure 2 that the total energy density of the holographic model and that of the cosmological constant nearly coincide, while that for the volume model clearly predicts significantly different total energy densities. The lower plot of Figure 2 emphasizes these differences by plotting the percentage difference in the expected Hubble parameter for the information energy models relative to that of the cosmological constant model. The volume model differs significantly from the cosmological constant, peaking at 7% around a = 0.67. Such a difference, at z = 0.5, from that expected for a cosmological constant should have easily been observed directly by existing expansion measurements, and for this reason, we hereafter concentrate on the holographic model. The holographic model difference is less than 1% for a > 0.4 and for a < 0.2, peaking at only 1.8% around a = 0.33.
The time variation of a dark energy density is proportional to a −3(1+w) , where w is the equation of the state parameter for that dark energy. Recently, z < 1.35, T(a) varied as a +1.08±0. 16 , N(a) as a +2 , hence total stellar heated gas and dust information energy varied as a +3.08 ± 0.16 , providing a near-constant energy density varying only as a +0.08 ± 0.16 , corresponding to the equation of the state parameter, w = −1.03 ± 0.05. Then, the information energy of stellar heated gas and dust in the recent period has the characteristics of dark energy, since f(a) closely follows the a +1 gradient that would lead to a near-constant information energy density and emulate a cosmological constant, w = −1. Thus, information energy can Difference in the Hubble parameter, H(a), to be expected from an information energy source of dark energy relative to that resulting from a cosmological constant. Both plots assume the power-law fits in Figure 1 data.
Information Energy Can Account for H 0 Tension
Results of experiments to measure the dark energy equation of state, w, often assume a simple shape for the w(a) timeline, using a minimum number of parameters. Most astrophysical datasets, including Planck data releases [1,[63][64][65], have been analyzed to deduce cosmological parameters using the Chevalier, Polarski, Linder, CPL description [66]: w(a) = w 0 + (1 − a)w a . This assumes a smooth variation of w(a) from w o + w a at very early times, a << 1, through to w o today (a = 1).
The 2013-2018 Planck data releases include several dataset combinations, where Planck data have been combined with other types of measurement and analyzed using the CPL parameters. Although the resultant likelihood regions of w o − w a space always include the cosmological constant, consistent with ΛCDM, there is a clear overall bias towards an early phantom dark energy (Figure 36 of [63], Figure 28 of [64], Figure 30 of [1]). Most of the likelihood area is located in the space where w o + w a < −1, the phantom shaded area of Figure 30 in [1].
The information energy equation of the state parameter values, w = −1.03 ± 0.05, z < 1.35, and w = −1.82 ± 0.08, z > 1.35, correspond to the CPL parameters, w 0 = −1.03 ± 0.05, w a = −0.79 ± 0.08, located close to the center of these maximum likelihood regions in w o − w a space. While the volume model would lead to easily identifiable differences from ΛCDM at low z, the holographic model emulates a cosmological constant at low z, and for most phenomena, it would be indistinguishable from ΛCDM. The difference between the information energy CPL parameters and those for Λ, w 0 = −1, w a = 0, is then given by ∆w 0 = −0.03 ± 0.05 and ∆w a = −0.79 ± 0.08. These parameter differences are significant as they closely match the differences previously considered as the possible means by which a dynamic dark energy could account for the H 0 tension. A dynamic dark energy differing from Λ by ∆w 0 = −0.08 and ∆w a = −0.8 has been shown to be capable of accounting for much of the 'H 0 tension', from Figure 4 in [4]. Therefore, information dark energy could quantitatively account for the 'H 0 tension'.
Note that CPL parameters fit the information w(a) values, both today and very early, but information energy exhibits a much sharper transition at z∼1.35 than can be faithfully described by CPL. Clearly, the best fit would be provided by the simple sharp transition description: w = w o = −1.03 for z < 1.35, and w = w o + w a = −1.82 for z > 1.35. At z > 2, dark information energy is negligible, less than 3 percent of total matter energy density; however, it increases rapidly to a near-constant energy density by z∼1. 35. Such a transitional dark energy, with a sharp change in w in the range of 1 < z < 2, with a negligible dark energy density at z > 2, has previously been shown to be capable of largely accounting for both the 'H 0 tension' and also the 'σ 8 tension' between early and late universe values of the matter fluctuation amplitude [67].
Information Dark Energy Is Falsifiable by Experiment
The dark energy properties of the information energy identified above might still be just a fortuitous coincidence, and in order to confirm or refute this proposed source of dark energy, we need to predict the value of some future measurement(s).
The main detectable effect of dark energy is the resulting accelerating expansion of the universe. Unfortunately, as information dark energy has closely emulated a cosmological constant in recent times, any differences will be hard to measure. Nevertheless, the clearest verification of information energy's role as the source of dark energy would be provided by measuring the expected small difference in the Hubble parameter from that of the cosmological constant (Figure 2, lower plot). This difference is a direct result of the earlier phantom period of information energy caused by the steeper stellar mass density gradient at z > ∼1. 35. This small reduction in H(a) is bounded at low redshift by the location of the change of gradient in the Figure 1 measurements, and at higher redshift by the much higher matter energy densities swamping any dark energy contribution.
Power laws were used above primarily to facilitate estimates of the equation of state parameters. Now, we wish to avoid the possibility that the expected difference is an artefact of fitting power laws. Accordingly, in Figure 3, we also apply a sliding average of the The dark energy properties of the information energy identified above might still be just a fortuitous coincidence, and in order to confirm or refute this proposed source of dark energy, we need to predict the value of some future measurement(s).
The main detectable effect of dark energy is the resulting accelerating expansion of the universe. Unfortunately, as information dark energy has closely emulated a cosmological constant in recent times, any differences will be hard to measure. Nevertheless, the clearest verification of information energy's role as the source of dark energy would be provided by measuring the expected small difference in the Hubble parameter from that of the cosmological constant ( Figure 2, lower plot). This difference is a direct result of the earlier phantom period of information energy caused by the steeper stellar mass density gradient at z > ∼1. 35. This small reduction in H(a) is bounded at low redshift by the location of the change of gradient in the Figure 1 measurements, and at higher redshift by the much higher matter energy densities swamping any dark energy contribution.
Power laws were used above primarily to facilitate estimates of the equation of state parameters. Now, we wish to avoid the possibility that the expected difference is an artefact of fitting power laws. Accordingly, in Figure 3, we also apply a sliding average of the Figure 1 measurements (Figure 1, grey line) to generate a data-driven f(a), independent of fitted function. In Figure 3, both methods predict similar values for the signature that would be produced by an information dark energy. The sliding average predicts a maximum difference of −2.2% at z∼1.7, while the power-law fits predict a maximum of −1.8% at z∼2. There is a clear prediction of a measurable reduction in H(a) relative to the cosmological constant over a specific limited redshift range, and hence constitutes a means by which information dark energy can be falsified experimentally. In Figure 3, both methods predict similar values for the signature that would be produced by an information dark energy. The sliding average predicts a maximum difference of −2.2% at z∼1.7, while the power-law fits predict a maximum of −1.8% at z∼2. There is a clear prediction of a measurable reduction in H(a) relative to the cosmological constant over a specific limited redshift range, and hence constitutes a means by which information dark energy can be falsified experimentally.
Discussion
The present information dark energy value, obtained directly from realistic estimates of bit numbers and temperatures, could account for the accelerating expansion. The late time near-constant energy density follows directly from the measured stellar mass density gradient, providing a T(a) variation closely proportional to a +1 , combined with the total N(a) proportional to a +2 , assuming the general holographic principle. Strong additional support for an information dark energy is provided by its ability to resolve significant problems or tensions that otherwise remain unexplained and incompatible with the standard ΛCDM model:
H 0 and σ 8 Tensions
In Section 4, we saw that an information dark energy can account for much of the 'H 0 tension'. The closest CPL parameter description for information energy is identical to the values suggested by Reiss et al. [4] for a dynamic dark energy explanation. A more appropriate description of information energy, accounting for the sharp transition around z∼1. 35, is identical to the transitional dark energy model [67] that can account for both the H 0 tension and the σ 8 tension.
Cosmological Constant Problem
Theoretical estimates for a non-zero value of Λ differ by a massive factor of ∼10 120 from the actual value required to account for observed accelerating expansion. Despite the lack of any quantitative physical explanation, Λ has been accepted hitherto primarily because of its simplicity and ability to fit the data [68]. Before the expansion of the universe was found to be accelerating, Weinberg [69] considered the most likely value for Λ to be zero. Accounting for all dark energy with information energy would allow Λ to take that preferred zero value, and the information dark energy would effectively resolve the 'cosmological constant problem'.
Cosmological Coincidence Problem
Star formation needed to have advanced sufficiently for information dark energy to be strong enough to initiate accelerating expansion. Star formation also needed to have advanced sufficiently for there to be a significant likelihood of intelligent beings evolving to observe this acceleration. Therefore, it does not seem to be such a coincidence that we are living when the expansion is accelerating. This effectively removes the 'why now?' of the 'cosmological coincidence problem' (for example, see [70]).
Cosmic Isotropy
We expect information energy, which is dependent on structure and star formation, to be both temporal and spatially dynamic. Above, we used the universe averaged temporal variation to determine equation of state parameters. Recently, large 5σ significance directional anisotropies have been observed in the value of H 0 [71], calling into question the cosmic isotropy assumption of the cosmological principle. Such directional anisotropies should be expected from the spatial dynamic aspect of an information dark energy located in the stellar heated gas and dust of structures.
Falsifiable
Note that the predicted ∼2% difference in the curve of H(a) at z∼2, as shown in Figure 3, is close to the detection limit of next generation instruments. For example, the ESA Euclid science requirement document [72] states that the aim is to measure H(a) down to an accuracy of 1-2% in the range of 0.5 < z < 2. Notwithstanding the resolution limits of present instrument designs, this clear prediction will still enable information dark energy to be falsified experimentally in the near future.
Instead of waiting for sufficiently high-resolution measurements of H(a) to become available, another method of verifying the role of information dark energy would be to determine whether the observed directional anisotropies in H 0 [71] are related to the distribution of stellar heated gas and dust in the structures of the universe.
Information Dark Energy Compared to Λ and Quintessence
In the discussions above, we have shown that information energy in the late universe closely mimics a cosmological constant. Information energy, IE, can just replace Λ in the ΛCDM model, effectively creating an IECDM model. Then, as the only observable effect of dark energy is via the accelerating expansion, this model should be as consistent with other phenomena as ΛCDM, while also resolving the H 0 and σ 8 tensions, the cosmological constant and cosmological coincidence problems, and removing the cosmic isotropy assumption of ΛCDM. Table 2 summarizes and shows that information dark energy compares favourably against the two main dark energy theories: the cosmological constant and scalar fields/quintessence. Table 2 shows that the equivalent energy of the information/entropy associated with stellar heated gas and dust has many of the characteristics required to be the source of dark energy. In our modified Friedmann Equations (5) and (6), this energy equivalence of information is used in the same way as, and alongside, the mc 2 energy equivalence of matter. We have not needed to identify specific processes nor required information to be destroyed any more than matter needs to be destroyed in order to consider these equivalent energies.
If information dark energy is indeed found to account for the accelerating expansion, then three further aspects should also be considered:
Constant Information Energy Density from Feedback?
The advent of accelerating expansion has been associated with directly causing a general reduction in galaxy merging and a reduction in the growth rate of structure and the rate of star formation [73,74]. This effect is evident in Figure 1 in the clear change of the stellar mass density gradient at z∼1.35, from a +3.46±0.23 to a +1.08±0. 16 , coincident with the start of dark energy acceleration effects. Assuming dark energy is information energy, once the information energy density, which increased with star formation, was strong enough to initiate acceleration, the acceleration in turn slowed down star formation, acting as a feedback that directly limited the growth rate of information energy itself. The resulting a +1.08±0.16 gradient that we observe in Figure 1 is then significant as this range encompasses the specific gradient of a +1 , which should be the natural feedback limited stable value expected from our information energy explanation for dark energy. The constant information energy density at z < 1.35, mimicking a cosmological constant, is then a direct result of this feedback limiting. Moreover, in order for feedback to operate in this way, information energy would need to be the major, or sole, source of dark energy.
Can Information Energy Also Emulate Dark Matter Effects?
In this work, we have concentrated on considering the dark energy aspects of information energy. However, another aspect that should be considered is that information energy might contribute to some effects previously attributed to dark matter. We have shown that information energy from stellar heated gas, primarily located around structures, has an energy density at a similar order of magnitude to total matter. Now space-time will be equally distorted by accumulations of matter and equivalent accumulations of energy. Then, information energy will distort space-time, adding to some extent a local attractive force emulating that of gravity from an unseen mass. While on the scale of the universe, total information energy as a dark energy is effectively repulsive thus causing the expansion to accelerate, any extra local distortions to space-time around structures caused by information energy will be effectively attractive and mimic dark matter. Then, by the nature and location of information energy in stellar heated gas and dust, it will be hard to distinguish such effects from those usually attributed to dark matter.
A high correlation has been found in [75,76], showing that dark matter effects in a range of galaxies are fully specified by the location of the baryons. This observation is considered difficult to reconcile with ΛCDM, and the Modified Newtonian Dynamics, MOND, has been suggested as a possible explanation. Equally, this observation might be explained by the information energy of stellar heated gas and dust, contributing effects similar to those produced by dark matter. The strongest dark matter effects in clusters of galaxies are found in the brightest and therefore highest-temperature galaxy [77], again consistent with the highest information energy densities located where stellar heated gas and dust occur at high temperatures and densities.
Clusters of colliding galaxies are often considered to provide some of the strongest evidence for the existence of dark matter. Optical observations show stars pass through the collision largely unhindered, whereas X-ray observations show the galactic gas clouds colliding, slowing down, or even halting. The location of dark matter is then identified from the use of lensing measurements [78][79][80]. A study of the Bullet cluster, and of a further 72 mergers, both major and minor, finds no evidence for dark matter deceleration, with the dark mass remaining closely co-located with the stars and structure. Information energy could equally explain these effects attributed to dark matter, as information energy from stellar heated gas and dust passes, along with the stars, straight through the collision of galaxies. Any contribution of information energy to dark matter effects could be determined by identifying whether the location of the stellar heated gas and dust within galaxies is related to the distribution of dark matter effects observed within those galaxies. New weaklensing measurements of galaxies [81] promise to measure such effects and distinguish between the various proposed causes.
A Different Future?
The ΛCDM model assumes that universe expansion will continue accelerating forever in this dark energy-dominated epoch. A dark energy provided by the information energy of stellar heated gas and dust suggests a different future for the universe. The fraction of baryons in stars must stop increasing at some time, since f(a) < 1 by definition. Eventually, f(a) will decrease as more stars die out than new ones are formed. It is estimated that the future maximum star formation might be as little as only 5% higher than today [82]. At some point, the information dark energy density will fall, and the expansion of the universe will cease accelerating and revert back to deceleration.
Summary
The approach employed in this work has emphasized the two preferred requirements of cosmology [68]: simplicity (wielding Occam's razor), and naturalness (relying on mostly proven physics, with a strong dependence on empirical data). The information energy of stellar heated gas and dust could provide a dynamic dark energy that overcomes several of the problems and tensions of ΛCDM. It is therefore important to consider performing the falsification measurements suggested above so that such a simple concept can be confirmed, or refuted.
Funding: This research received no external funding. | 8,069.6 | 2022-03-01T00:00:00.000 | [
"Physics"
] |
Visualizing Local Electrical Properties of Composite Electrodes in Sulde All-Solid-State Batteries by Scanning Probe Microscopy
Studies on local conduction paths in composite electrodes are essential to the realization of high-performance sulde all-solid-state lithium batteries. Here, we directly evaluate the electrical properties of individual LiNi 1/3 Mn 1/3 Co 1/3 O 2 (NMC) electrode active material particles in composite positive electrodes by scanning probe microscopy (SPM) techniques. Kelvin probe force microscopy (KPFM) and scanning spreading resistance microscopy (SSRM) were combined. The results indicated that all NMC particles exhibit a charged state with increasing potential, but low electronic conduction paths exist at point contacts of some NMC particles. Furthermore, the I-V characteristics measured by conductive-atomic force microscopy (C-AFM) suggest that these specic NMC particles show low charge-discharge reactivity. The results of the SPM techniques indicate that poor conduction locally limits the charge-discharge reactivity of electrode active materials, leading to the degradation of battery performance. Such SPM combination accelerates the morphological optimization of composite electrodes by facilitating the investigation of the intrinsic electrical properties of the electrodes.
Introduction
Studies on local conduction paths in composite electrodes are essential to the realization of highperformance sul de all-solid-state lithium batteries. Here, we directly evaluate the electrical properties of individual LiNi 1/3 Mn 1/3 Co 1/3 O 2 (NMC) electrode active material particles in composite positive electrodes by scanning probe microscopy (SPM) techniques. Kelvin probe force microscopy (KPFM) and scanning spreading resistance microscopy (SSRM) were combined. The results indicated that all NMC particles exhibit a charged state with increasing potential, but low electronic conduction paths exist at point contacts of some NMC particles. Furthermore, the I-V characteristics measured by conductive-atomic force microscopy (C-AFM) suggest that these speci c NMC particles show low charge-discharge reactivity. The results of the SPM techniques indicate that poor conduction locally limits the chargedischarge reactivity of electrode active materials, leading to the degradation of battery performance. Such SPM combination accelerates the morphological optimization of composite electrodes by facilitating the investigation of the intrinsic electrical properties of the electrodes.
All-solid-state lithium batteries (ASSLBs) are promising power sources for next-generation low-carbon societies 1,2 . One of their primary advantages is their high safety factor as they replace ammable organic liquid electrolytes with non-ammable inorganic solid electrolytes (SEs). Recently, Kato et al. developed sul de super-ionic conductors with a higher Li + ionic conductivity (25 mS cm -1 ) than conventional organic liquid electrolytes 3 . Bulk-type ASSLBs contain composite electrodes consisting of electrode active materials and SEs to form electron-and Li + -ion conduction paths. However, compared to conventional liquid electrolytes, it is di cult to generate a large number of conduction paths in ASSLBs because solidsolid interfaces tend to cause contact loss, resulting in inadequate electrochemical reactions.
To investigate the ionic and electronic conductivity of composite electrodes, Siroma et al. designed an AC impedance technique based on the transmission-line model 4 . Our group prepared ion-blocking and electron-blocking cells and used both AC and DC techniques for composite positive electrodes consisting of LiNi 1/3 Mn 1/3 Co 1/3 O 2 (NMC) positive electrode active materials and Li 3 PS 4 glass SEs of various compositions at 0% and 50% state-of-charge (SOC) 5 . At SOC 0%, the observed ionic conductivity was higher than the electronic conductivity. At SOC 50%, electronic conductivity increased by more than one order of magnitude compared to that at SOC 0% and was higher than or equal to the ionic conductivity of the sample.
To analyze conduction networks in electrodes in detail, it is important to study local electrical properties in composite electrodes exhibiting complex morphologies. Investigating the reaction distribution in electrodes enables us to estimate local electrical properties indirectly. Recently, the SOC and Li distributions in composite positive electrodes in sul de ASSLBs were studied by X-ray absorption spectroscopy 6 and particle-induced X-ray emission/particle-induced gamma-ray emission 7,8 , respectively.
We also demonstrated the reaction uniformity of composite electrodes by Raman imaging [9][10][11][12] . It could be inferred that charge-discharge reactions are not adequate in aggregated electrode active materials due to their small number of Li + -ion conduction paths.
Although scanning probe microscopy (SPM) can be used to directly investigate the electrical properties of composite electrodes [13][14][15][16][17] , it has not been applied to study bulk-type sul de ASSLBs because of challenges associated with handling sul de SEs in air. Masuda et al. conducted operando Kelvin probe force microscopy (KPFM) on bulk-type oxide ASSLBs 18,19 . They investigated the internal electrical potential distribution in composite positive electrodes and proved that the conduction paths in these electrodes were disconnected in the rst charging process.
Scanning spreading resistance microscopy (SSRM) and conductive-atomic force microscopy (C-AFM), which measure current and resistance distributions, respectively, are also based on SPM. These techniques have been used for investigating high-resistance areas to understand the degradation in electrode active materials 15,20 . Zhu et al. 21 and Yang et al. 22 used a C-AFM technique of measuring I-V characteristics to investigate Li-ion diffusion energy barriers related to charge-discharge reactions in electrode active materials. They reported that the grain boundaries exhibited a lower barrier than did grain interiors in thin-lm positive electrodes. To investigate the local electrical properties of composite electrodes in detail, SPM techniques such as KPFM, SSRM, and C-AFM should be employed simultaneously.
Herein, we showed for the rst time monitoring of local electrical properties of sul de composite positive electrodes in bulk-type ASSLBs by using various SPM techniques in a high vacuum state (~10 -5 Pa). The same areas in the electrodes composed of both NMC and Li 3 PS 4 SEs were evaluated by KPFM, SSRM, C-AFM, and scanning electron microscopy-energy dispersive X-ray spectroscopy (SEM-EDX). We directly investigated the potential distribution in the electrodes with and without initial charging by KPFM to discuss about SOC distributions in each NMC particle. The distribution of local resistance in the composite electrodes was examined by SSRM. We evaluated changes in the local conductivity before and after charging from the results of resistance changes and compared them with previously reported ndings on the variation in electronic conductivity 5 . Current distribution maps and I-V curves of different NMC particles and the SE were obtained via C-AFM. We examined the reactivity of the NMC particles with higher and lower resistances in terms of their I-V characteristics. Furthermore, voltage and resistance changes in the NMC particles were evaluated using KPFM and SSRM before and after the charge test. We also discussed the correlation between these changes and the morphology of the composite electrodes.
The combination of these different SPM techniques allowed us to successfully detect NMC particles whose electrical properties differed from those of the others; such particles are one of the main causes of the poor conductivity observed in all-solid-state batteries.
Results
SPM analysis for composite positive electrodes. SPM measurements were conducted on composite positive electrodes consisting of NMC and Li3PS4 glass SEs before and after the initial charge test. Figure S1 (Supplementary information) shows the initial charge curve of the all-solid-state cell, which was charged to 4.4 V (vs. Li+/Li) at 25 °C at a current density of 0.13 mA cm⁻²; it exhibited an initial charge capacity of 164 mAh g⁻¹. Before the SPM measurements, we conducted ion milling on the cells to prepare flat samples. Subsequently, KPFM, C-AFM, SSRM, and SEM-EDX were conducted sequentially on the composite positive electrodes (Figure 1). An air-protected sample holder was used to transfer samples between the ion milling apparatus, SPM, and SEM. The negative electrode side was placed in contact with an insulating tape on the sample holder (details of the measurement conditions are included in the Methods section). Figure S2 shows an SEM image of the composite positive electrode after ion milling. SPM and SEM-EDX measurements were then conducted on the same areas of the electrodes near the SE layer. The potential, current, and resistance distributions were measured by KPFM, C-AFM, and SSRM, respectively, in the composite positive electrodes. In the setup used for both C-AFM and SSRM, a bias voltage of -2 V was applied to the sample holder. In general, the SSRM technique covers a wide measurement range of seven orders of magnitude, thereby enabling the analysis of composite materials exhibiting large differences in resistance. Meanwhile, by sweeping the bias voltage, C-AFM yields the I-V characteristics of the composite electrodes and SEs. We investigated the I-V characteristics of the SE separator layer by C-AFM and confirmed that the bias voltage of -2 V did not induce SE decomposition (Figure S3; see details in the Methods section). A bias voltage of -2 V was thus found to be suitable for detecting small currents (10 pA) and high resistances (10⁹ Ω) in composite electrodes.

Figures 2 and 3 show the SEM image, EDX mappings, and SSRM, C-AFM, and KPFM images of the composite electrode before cell operation and after the initial charge test, respectively. S and Ni could be detected in the EDX maps of the SE and NMC, respectively. Furthermore, all SSRM, C-AFM, and KPFM images overlapped both the NMC and SE areas, indicating that all measurements could be carried out successfully in the same area. The SSRM, C-AFM, and KPFM images of the electrode captured before the charge test indicated minimal differences between individual NMC particles (Figure 2(d)-(f)). In the SSRM image, the resistance at the center of the NMC particles and at the SE/NMC interfaces was lower than that in other NMC areas. However, the range of these values was 1.8-2.2 × 10⁹ Ω, indicating that all areas in the composite electrodes showed a similar resistance before charging. In contrast, the resistance of some delithiated NMC particles differed from that of other NMC particles in the charged sample (Figure 3(d)). Most of the NMC particles contacted each other sufficiently and exhibited a resistance of 10⁷ Ω, while some NMC particles 'point-contacted' other particles, as marked with a yellow broken line in the SEM image (Figure 3(a)); these particles exhibited a higher resistance of 10⁹ Ω. Such higher resistance may be attributed to poor electronic conduction due to inadequate contact among these NMC particles.
In the C-AFM image (Figure 3(e)), the contrast of these NMC particles and the SE was similar, indicating that NMC particles in minimal contact with other particles exhibited lower electronic conductivity owing to their higher resistance. Meanwhile, the resistance of the SE remained constant at 10⁹ Ω after charging.
The contact potential difference (V_CPD) between the tip and the positive electrode was measured by KPFM.
As shown in Figure 1, the negative electrode side was opened and the positive electrode was in electrical contact with the sample holder. To compare KPFM images before and after the charge test, V_CPD was converted into V_CPD' by adding the open circuit voltage (OCV), as described in the Methods section.
Hereafter, we shall discuss KPFM results in terms of V_CPD' values. In the composite electrodes before charging, the V_CPD' values of NMC and SE were 2.06-2.12 and 2.22 V, respectively; these values increased to 3.11-3.29 and 2.76 V, respectively, after charging. Compared with the SE, NMC exhibited a larger difference in V_CPD' (1.1 V) before and after charging. Although V_CPD' does not correspond with the cell voltage quantitatively, we considered that KPFM measurements can be used to evaluate potential changes qualitatively, as the V_CPD' of NMC increased after charging in response to an increase in the cell voltage. In the present study, we discuss the potential distribution in each electrode before and after the charge test.
I-V characteristics of electrodes measured by C-AFM. The I-V characteristics of NMC particles and the SEs were compared using C-AFM (Figure 4). Figure 4(b) shows the I-V curves of NMC ((a1) and (a2)) and SE ((a3) and (a4)) from the C-AFM image before charging (Figure 4(a)). The I-V curves of active electrode materials yield information about their electrical properties at the measuring point 21,22. Although this experimental technique is typically applied to thin-film electrodes, we assumed that it could be applied to bulk-type ASSLBs in order to investigate the local charge-discharge properties of a single electrode active material particle in the presence of the SE. While the current in the SE was ~0 A, the NMC particles displayed different I-V curves before charging owing to their different charge-discharge reactivities at each single point; NMC (a1) responded with a higher current than (a2). As shown in Figure 4(b), the I-V characteristics are locally different even before charging. In the composite electrodes after charging, NMC particles exhibiting higher resistance ((c1) and (c2)) show completely different I-V curves compared to those exhibiting lower resistance ((c3) and (c4)). The latter show higher current responses due to their lower resistances, suggesting that charge-discharge reactions occurred easily at these NMC particles.
Investigation of the I-V characteristics of electrodes by C-AFM can help us understand the local reactivities in charge-discharge reactions.
Potential and resistance distributions of individual NMC particles measured by KPFM and SSRM. We evaluated the electrical properties of individual NMC particles. We selected 14 NMC particles from the KPFM and SSRM images as shown in Figures 5(a)-(d) and evaluated the average V_CPD' and resistance of each particle; the obtained results are shown in Figure 5(e). Before charging, the V_CPD' and resistance values of all the NMC particles were ~2.1 V and 2.1 × 10⁹ Ω, respectively. After charging, the value of V_CPD' increased to 3.1-3.3 V, suggesting the occurrence of delithiation in all NMC particles. The resistance of most of the NMC particles decreased to ~10⁷-10⁸ Ω, indicating an increase in their electronic conductivity. This behavior corresponded with our previous observation that the electronic conductivity of composite positive electrodes increased by 1-2 orders of magnitude after charging 5. However, the resistances of three NMC particles, numbered 03, 08, and 10, remained at 10⁹ Ω even after charging. As described earlier, these particles had minimal contact with other NMC particles. It can be inferred from the V_CPD' results that delithiation occurred in each NMC particle in the charged electrode, because the V_CPD' of the observed NMC particles increased after charging. However, the local electronic conductivity was not uniform, as some NMC particles showed higher resistance after charging. This phenomenon was prominent for NMC particles in 'point' contact with other particles in the absence of any conductive additive. At high current densities, it is likely that inhomogeneous electronic conduction degrades battery performance through current concentration. Increasing the amount of electrode active materials 5 and tailoring their sizes 23,24 can improve the electronic conductivity of carbon-free composite electrodes. However, electrode utilization is limited when there is no or an insufficient quantity of conductive additives. Moreover, some reports state that all-solid-state cells degrade in the presence of carbon 25, which necessitates the optimization of these composite electrodes. The information on local electronic conduction provided by SSRM makes it a powerful tool for optimizing electrode design. Our previous studies using Raman imaging indicated that aggregated electrode active materials show a low SOC as they contain only a small number of ionic conduction paths 9. In contrast, in the present study, SSRM showed that minimal contact between NMC particles resulted in a low local electronic conductivity.
Subsequently, we analyzed the resistance and V_CPD' distributions in the cross-sectional direction. The NMC particles are numbered from the current collector (CC) side toward the SE layer, as shown in Figures 5(a)-(d). Figure 5(f) shows the variation in the resistance of the NMC particles before and after the charge test. As described earlier, all the NMC particles in the measurement zone exhibited a similar resistance before charging. At the end of the charge test, the resistance of most of the NMC particles had decreased, except for the particles numbered 03, 08, and 10, which did not have much contact with other NMC particles. Figure 5(f) indicates that there was no resistance gradient in the cross-sectional direction. However, the V_CPD' distribution differed from the resistance distribution of the NMC particles. The V_CPD' values of all the NMC particles before and after the charge test are plotted in Figures S4(a) and (b), respectively. Before charging, there was no gradient in V_CPD'; in contrast, after charging, it increased gradually from the SE-layer side to the CC side, except for particles 03 and 08, which exhibited a higher resistance. The difference in V_CPD' values between the SE-layer and CC sides was ~0.2 V (except for particles 03 and 08), indicating a potential gradient in the cross-sectional direction after the charge test. In our previous studies using Raman imaging and optical microscopy, we observed inhomogeneous SOC distributions in composite electrodes after charging 9,10,12,26,27. Those results showed that charge-discharge reactions proceeded preferentially from the SE-layer side because the rate-determining step was related to Li+-ion conduction, indicating that the potential of NMC increased from the SE-layer side. In contrast, the results in Figure S4(b) indicate that NMC particles on the SE-layer side exhibit a lower potential than those on the CC side. In this study, the measurement area was a 15 μm² region near the SE layer of the 50 μm thick electrode layer (Figure S2). Therefore, the KPFM analysis indicates a local potential gradient. To further investigate the presence of a potential gradient across the entire composite electrode, in situ and wide-range measurements are required. Figure S4(b) indicates that a potential gradient remained after the charge test in the all-solid-state cells. Moreover, Tanida et al. reported that after relaxation in charged cells with an organic liquid electrolyte, the SOC of LiCoO2 electrodes became uniform, with a local potential difference as the driving force 28. The behavior of all-solid-state cells is different from that of conventional batteries with an organic liquid electrolyte, possibly because the former contain fewer conductive networks 19,23. From Figures 5 and S4, it could be inferred that the inhomogeneous resistance and potential distributions are likely to depend on the percolation of NMC particles and their distance from the SE layer, respectively.
Discussion
To investigate the electrical properties of individual NMC particles in composite positive electrodes, we conducted SPM and SEM-EDX analyses on the same areas in the electrodes of sulfide bulk-type ASSLBs before and after the charge test. The KPFM results indicated that the potentials of all NMC particles in the measurement area increased after the charge test, suggesting the delithiation of NMC. The resistance distribution measured by SSRM showed that the resistances of almost all the NMC particles decreased by more than one order of magnitude. This result corresponded with the previous report indicating that the electronic conductivity of composite electrodes increased after charging. However, some NMC particles, which had little contact with other NMC particles, showed high resistance after charging. Therefore, the combination of KPFM and SSRM revealed that charge reactions occurred in the composite electrode, but poor electronic conduction paths were formed locally. These specific low electronic conduction paths are likely to induce battery degradation at high current density. Recent papers have discussed the particle sizes of electrode active materials and the necessity of conductive additives in terms of electronic conduction paths in composite electrodes. Investigations of local electronic conduction by SSRM and of potential distributions by KPFM contribute to such optimization of composite electrodes.
Furthermore, for the first time, we employed C-AFM on composite electrodes in bulk-type ASSLBs to investigate their I-V characteristics. The SE showed a small current response at high bias voltages, suggesting low electronic conductivity and oxidative decomposition. The reactivity of the charge-discharge reactions of NMC was investigated by comparing the I-V characteristics of individual NMC particles. In the NMC particles for which SSRM measured a lower resistance, higher current responses were recorded, suggesting their high preference for charge-discharge reactions. Therefore, the combination of C-AFM and SSRM demonstrated that decreasing the number of NMC particles exhibiting high resistance significantly increased the charge-discharge reactivity of the electrode.
Finally, combining different SPM techniques enables us to directly investigate the electrical properties of electrode active materials in composite electrodes, such as their potential, current, and resistance distributions and their I-V characteristics, and to diagnose poor conduction paths. This may help in formulating guidelines to improve electronic conduction in composite electrodes for bulk-type ASSLBs. For a further understanding of how resistance and potential distributions form during charge-discharge reactions, in situ measurements will be the subject of our future research. Moreover, combining other measurements, such as Raman imaging, will help us to establish a direct relationship between the SOC and the electrical properties of electrode active materials.
Methods
Fabrication of all-solid-state lithium cells. 75Li2S·25P2S5 (mol%) glass SEs were prepared by mechanical milling of Li2S (Mitsuwa Chemicals Co., Ltd, 99.9%) and P2S5 (Aldrich, 99%) in a dry Ar atmosphere, as described in a previous study 29.

Ion milling of all-solid-state cells. The pristine and charged cells were cut with razors and set in an air-protected sample holder in an Ar atmosphere. To obtain flat cross-sections, ion milling (ArBlade5000; Hitachi High-Tech Corp.) was conducted on the cells with cooling at -100 °C. We confirmed that the OCV of the cells did not change before and after ion milling.
KPFM analysis. After ion milling, KPFM was conducted on the NMC composite positive electrodes using a scanning probe microscope (AFM5300E; Hitachi High-Tech Corp.). All SPM measurements were carried out without dismounting the cells from the sample holder in a high vacuum (~10⁻⁵ Pa). For the KPFM analysis, the negative electrode side was opened and the positive electrode was in electrical contact with the sample holder. This is because a wide flat sample can be obtained when ion milling is performed from the positive electrode side, which is in direct contact with the sample holder without an insulating tape. In the present setup (Figure 1), the positive electrode was connected to the ground, and -V_OCV of the cell was expected on the negative electrode side. Generally, KPFM measures V_CPD between the sample positive electrode and the probe tip. In the case of cells displaying an electromotive force, the observed KPFM voltage includes the cell voltage as well. Note that the measurement areas in the positive electrodes were near ground in this experimental setup, owing to which it was difficult to examine voltage changes before and after the charge test. To overcome this issue, V_CPD was converted to V_CPD' (= V_CPD + V_OCV) by adding V_OCV before and after the charge test (1.16 and 3.74 V, respectively) to V_CPD, while assuming that the negative electrode was grounded and the positive electrode was opened. We used a Rh-coated Si cantilever (SI-DF3-R; Hitachi High-Tech Corp.) with a resonance frequency of 23-31 kHz and a spring constant of ~1.5 N m⁻¹; furthermore, an AC voltage of 0.5 V was applied at a frequency of 26.0 kHz between the probe and the sample holder. We evaluated the V_CPD' of each NMC particle in the observation area by selecting each individual particle from the KPFM images, generating the corresponding V_CPD' histogram, and evaluating V_CPD' at the peak of the histogram.
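As a rough illustration of this conversion and the histogram-peak evaluation, the following Python sketch applies V_CPD' = V_CPD + V_OCV to a KPFM map and extracts a per-particle value. The map array, particle mask, bin count, and all numerical inputs other than the two V_OCV values quoted above are hypothetical placeholders, not the authors' processing pipeline.

# Sketch of the V_CPD correction and per-particle histogram-peak evaluation.
import numpy as np

V_OCV_BEFORE, V_OCV_AFTER = 1.16, 3.74   # open-circuit voltages [V] (Methods)

def v_cpd_prime(v_cpd_map, v_ocv):
    """Convert a measured V_CPD map to V_CPD' = V_CPD + V_OCV."""
    return v_cpd_map + v_ocv

def particle_potential(v_map, mask, bins=100):
    """Evaluate V_CPD' of one particle as the peak of its value histogram."""
    values = v_map[mask]
    counts, edges = np.histogram(values, bins=bins)
    peak = np.argmax(counts)
    return 0.5 * (edges[peak] + edges[peak + 1])   # bin-center at the peak

# Usage with toy data: a 256x256 KPFM map and a boolean particle mask.
v_map = v_cpd_prime(np.random.normal(-1.0, 0.05, (256, 256)), V_OCV_AFTER)
mask = np.zeros((256, 256), dtype=bool)
mask[100:140, 100:140] = True
print(f"V_CPD' = {particle_potential(v_map, mask):.2f} V")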
C-AFM, SSRM, and SEM-EDX analyses. After the KPFM evaluation, C-AFM was carried out at a bias voltage of -2 V on the same areas in the composite electrodes. The Sampling Intelligent Scan (SIS) mode 30 was used for the SSRM and C-AFM measurements. A boron-doped diamond-coated Si cantilever (SI-DF40-CD; Hitachi High-Tech Corp.) was used for these studies. The I-V curves of individual NMC particles and the SE in the composite electrodes and the SE layer were recorded in the bias voltage sweep range of -2 to 2 V at a scan rate of 4 V s⁻¹. The I-V characteristics of the SE separator layer were investigated by C-AFM (Figure S3) to select a suitable bias voltage. A bias voltage was applied at four points on the SE separator layer and swept from -2 to 2 V; Figure S3(b) shows the resultant changes in current. In the range of -2 to 1 V, there was almost zero current, after which it gradually increased to 1 pA during the subsequent voltage sweep to 2 V. These changes indicated that the SE was decomposed by oxidation. We therefore selected a bias voltage of -2 V for the C-AFM and SSRM measurements to prevent SE decomposition.
After the C-AFM analysis, SSRM tests were conducted on the same areas with the same cantilever and measurement setup as those in the C-AFM scheme. We investigated the resistance of each NMC particle in the observation area in a manner similar to that used for evaluating the V_CPD' of the NMC particles.
After the SSRM testing, the morphology of the electrodes was observed by SEM-EDX (FE-SEM: Regulus8100, Hitachi High-Tech Corp.; EDX: Ultim Max (100 mm²), Oxford Instruments). In all processes (ion milling to SEM-EDX), the same air-protected sample holder with an O-ring was used.
Figure captions. Figure 1: Schematic illustration of the ion milling treatment and the KPFM, C-AFM, SSRM, and SEM-EDX analyses. Samples were transferred in an air-protected sample holder, and the same areas were analyzed by KPFM, C-AFM, SSRM, and SEM-EDX. Figure 5(e): V_CPD' and resistance of each NMC particle before (•) and after (■) the charge test, measured by KPFM and SSRM; the inset shows an enlarged view of the NMC particles before charging. The numbers 03, 08, and 10 correspond to the NMC particles in Figure 5.
| 6,007.4 | 2020-08-24T00:00:00.000 | [
"Materials Science",
"Engineering",
"Physics"
] |
Reduction of the Effect of Heat Transmission by the Heat Capacity of Building Walls in Summer
For a wall model of a building affected by solar radiation, a one-dimensional transient thermal conduction analysis was conducted. The purpose of the analysis was to examine the effect of wall thickness and heat capacity on heat transfer. In the westward wall in summer, the temperature distribution inside the wall became parabolic. Even after the evening, heat flowed from within the wall both toward the outdoors and toward the indoors, even under conditions where the sol-air temperature was higher than the indoor temperature. Re-emission from the outside surface continued from the evening until the morning of the next day. In the daytime, the heat quantity that entered the wall body from the outdoor air did not all flow into the room; part of it was re-emitted to the outdoors. Particularly for materials with low thermal conductivity and high volumetric specific heat, the re-emission effect was remarkable. Regarding the amount of re-emission, a woody material with a large volumetric specific heat was compared with glass wool, which has a small volumetric specific heat. It was suggested that the heat capacity can reduce the heat flux.
Introduction
Against a backdrop of climate change, the decarbonization of buildings is strongly required as a corporate social responsibility in the sector. In Japan, highly insulated buildings are becoming more common owing to the Act on the Rational Use of Energy [1].
The evaluation of the thermal insulation performance of a thermal envelope under the Act on the Rational Use of Energy [2] is essentially based on the thermal transmittance. The thermal transmittance is derived from the heat balance equation. To simplify the equation, the time change of temperature in the heat balance equation is set to zero. Through this procedure, the time change of temperature and the heat capacity are eliminated from the heat balance equation, and the following characteristics result.
a) The heat flux is obtained by multiplying the indoor-outdoor temperature difference by the heat transmission coefficient (a minimal sketch follows this passage).
b) The value and direction of the heat flux are constant in every part of the wall.
c) The direction of the heat flux is determined by the sign of the temperature difference between the indoor and outdoor sides of the wall.
d) The volumetric specific heat and heat capacity do not affect the heat flux.
e) The wall surface and internal temperatures quickly reach values proportional to the thermal resistance.

These assumptions can be applied under conditions of small temperature changes or thin walls. However, a previous field survey reported that the heat insulation performance cannot always be explained by the thermal transmittance alone. In an unsteady environment where the time variation of the surface heat transfer is large, it is necessary to consider the influence of the steady-state assumption for materials with large heat capacity. The influence of the heat capacity of the wall on the indoor thermal environment has been investigated. Natural materials such as earth and wood have high volumetric specific heat and have been used as building materials around the world [3][4][5]. In Japan, wood and mud have long been used as building materials [6,7]. Wood is the most common building material. Owing to advances in cross-laminated timber [8] technology in recent years, the amount of wood used for building construction is increasing.
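The sketch below illustrates the steady-state relation in characteristics a)-e): the heat flux follows from the thermal transmittance and the temperature difference alone, with no role for the heat capacity. The layer values are illustrative and are not taken from the Act's calculation tables.

# Minimal sketch of the steady-state simplification: q = U x dT.
def thermal_transmittance(layers, h_out=23.0, h_in=9.0):
    """U-value [W/(m2 K)] of a multilayer wall.

    layers: list of (thickness [m], thermal conductivity [W/(m K)]) tuples.
    """
    r_total = 1.0 / h_out + sum(d / lam for d, lam in layers) + 1.0 / h_in
    return 1.0 / r_total

# 100 mm of glass wool; heat flux for a 10 K indoor-outdoor difference.
U = thermal_transmittance([(0.10, 0.038)])
q = U * (34.0 - 24.0)   # characteristic a): independent of heat capacity
print(f"U = {U:.2f} W/(m2 K), q = {q:.1f} W/m2")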
In previous research, the thermal conductivity [9] and thermal transmittance [10] were measured, and it was reported that wood has a high thermal diffusion coefficient. Furthermore, a mud wall is composed of mud, straw, bamboo, and wood and has a large heat capacity [8,9]. In recent years, natural materials have attracted attention owing to their carbon-fixing functions and large heat and moisture capacities. Therefore, the development of thermal insulating materials derived from natural materials is being studied. Fukuta [11] developed mats using wood shavings, a waste product of sawing. The mat achieved thermal insulation and sound absorption performance through thermoforming and the addition of kenaf fiber as an auxiliary raw material to the scrap main raw material. Nakaya [12] measured the thermal conductivity and volumetric specific heat of the wood shaving mat. The thermal conductivity was approximately 0.065 W/(mK) and was influenced by the material compounding and mat density. The study also showed that the volumetric specific heat was higher than that of mineral fiber insulation. Indoor surveys of buildings with large heat capacities have reported a phase delay and leveling of room temperature [13][14][15][16]. Nakaya [17] measured the thermal environment of three experimental buildings with different wall types and compared the measured values among them. The indoor thermal environment and energy consumption could not be explained only by the relative magnitudes of the thermal resistance, and the influence of the heat capacity was confirmed. However, these previous studies only suggest the influence of the heat capacity through comparisons of measured data.
In the thermal envelope of a building, heat flows along the path of outdoor surface heat transfer, heat conduction inside the wall body, and indoor surface heat transfer. On the outdoor surface of the wall, the influence of the time fluctuation of short-wave radiation is large. Therefore, the time variation of the outdoor surface temperature is large. Heat conduction inside the wall is a diffusion phenomenon, so the temperature responds more slowly there than at the surface. On the outdoor side of walls affected by solar radiation, the surface temperature changes significantly with time. The rate of the temperature response differs between the wall surface and the wall interior. For this reason, the difference in temperature response between the surface and the interior is noticeable for thick walls with large volumetric specific heat. That is, in thermally thick walls, the temperature distribution is not proportional to the thermal resistance. Thus, not only the thermal resistance but also the heat capacity can affect the heat quantity flowing through the wall in an unsteady environment.
In this study, a one-dimensional thermal conduction analysis was carried out on a western wall in the summer season, the wall temperature distribution was calculated, and the surface heat flux was obtained. Then, the influence of the heat capacity on the heat quantity flowing through the wall was examined. Regarding the amount of re-emission, a woody material with a large volumetric specific heat was compared with glass wool, which has a small volumetric specific heat.
Particularly for materials with low thermal conductivity and large volumetric specific heat, such as wooden walls and wood fiber mats, the re-emission effect was remarkable.
Method
The analysis was carried out under assumptions representing building walls in summer. The one-dimensional non-steady heat conduction equation was approximated by the backward finite difference method and solved. The heat flux was estimated by multiplying the difference between the air temperature and the wall surface temperature by the heat transfer coefficient. The heat transfer coefficient was assumed to be constant: 23 W/(m²K) on the outdoor side and 9 W/(m²K) on the indoor side. The program was written in Visual Basic for Applications (VBA) for Microsoft Excel 2003. The input items were the indoor and outdoor temperatures, the calculation time interval, the thermal properties (thermal conductivity and volumetric specific heat), and the wall thickness, height, and width of each layer of the multilayer wall. The indoor and outdoor temperatures were measured data from the experimental house. The outdoor temperature was the sol-air temperature, for which measured data of the surface temperature of the heat insulating material facing west were used. The indoor temperature was the air temperature at the center of the room, controlled at 24°C. The calculation time interval was 10 min, and the spatial division thickness was 1 mm. The output items were the temperature and the heat flux. The heat flux was calculated by multiplying the difference between the air temperature and the surface temperature by the surface heat transfer coefficient. The heat quantity was obtained by integrating the heat flux over time; the integration time was 24 h, from 00:00 to 24:00. The height and width of the calculated wall body were set to give unit area. The wall model was calculated with a single material to clarify the thermal characteristics of that material.

Table 1 presents the thermophysical properties of the materials. The materials were wood, soil, and insulation (wood fiber mat and high-performance glass wool); in this paper, they are referred to as Wood, Mud, Wood Mat, and Glass Wool. Wood and soil have a higher volumetric specific heat and a higher thermal conductivity than the heat insulating materials. Wood fiber mats have a higher volumetric specific heat and a slightly higher thermal conductivity than glass wool; the volumetric specific heat of the wood fiber insulation is approximately 18 times that of glass wool. The wall thickness was calculated at 10 mm intervals in the range 10-100 mm, 50 mm intervals in the range 100-500 mm, and 100 mm intervals in the range 500-1000 mm. The sol-air temperature varied from 19.5 to 62.4°C and was higher than the indoor air temperature all day. If the heat capacity is assumed to be zero, the heat flux direction is from the outdoors to the indoors. The outdoor surface temperature of each wall reached its maximum value at 15:10. Along with the decrease in the short-wave solar radiation, the outdoor surface temperature of the wall declined sharply from evening to night. For example, the outdoor surface temperature of the wood wall decreased by 25 K in the 2 h from 16:00 to 18:00.
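A minimal Python sketch of this scheme is given below (the paper's own program was written in VBA, and its implementation details are not reproduced here). It advances a single-layer wall by one backward-difference (implicit) time step with convective boundaries; the material values in the usage example are placeholders, not the Table 1 data.

# Backward-difference (implicit) step for 1-D transient wall conduction.
import numpy as np

def step_wall(T, dt, dx, lam, c_vol, h_out, h_in, T_solair, T_room):
    """Advance the nodal temperature profile by one implicit time step.

    T      : nodal temperatures [degC], node 0 = outdoor surface
    lam    : thermal conductivity [W/(m K)]
    c_vol  : volumetric specific heat [J/(m3 K)]
    """
    n = T.size
    r = lam * dt / (c_vol * dx**2)            # mesh Fourier number
    A = np.zeros((n, n))
    b = T.copy()
    # Interior nodes: -r T_{i-1} + (1 + 2r) T_i - r T_{i+1} = T_i^old
    for i in range(1, n - 1):
        A[i, i - 1] = -r
        A[i, i] = 1 + 2 * r
        A[i, i + 1] = -r
    # Convective boundaries via implicit half-cell energy balances.
    bo = h_out * dt / (c_vol * dx / 2)
    bi = h_in * dt / (c_vol * dx / 2)
    A[0, 0] = 1 + 2 * r + bo
    A[0, 1] = -2 * r
    b[0] += bo * T_solair
    A[-1, -1] = 1 + 2 * r + bi
    A[-1, -2] = -2 * r
    b[-1] += bi * T_room
    return np.linalg.solve(A, b)

# Example: 100 mm wall, 1 mm cells, 10 min steps, 24 h of constant forcing;
# lam and c_vol are placeholder properties, not the Table 1 values.
T = np.full(101, 24.0)
for _ in range(144):
    T = step_wall(T, dt=600.0, dx=1e-3, lam=0.6, c_vol=1.5e6,
                  h_out=23.0, h_in=9.0, T_solair=40.0, T_room=24.0)
q_in = 9.0 * (T[-1] - 24.0)   # indoor surface heat flux [W/m2]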
Wall Temperature Distribution and Surface Heat Flux
The time change of the outdoor-side surface temperature was mostly governed by radiative heat transfer. Therefore, the surface temperature responded almost instantaneously to short-wave solar radiation fluctuations. The temperature fluctuation on the outdoor-side surface took time to affect the internal temperature. The outdoor surface temperature decreased more rapidly than that of the central part of the wall. Therefore, the temperature distribution was parabolic, with a vertex. The point where the temperature gradient inside the wall becomes zero is indicated by ▼. On the outdoor side of the zero-gradient position, the temperature gradient was negative; therefore, the heat flux was directed toward the outdoor side. The range where the temperature gradient is negative contributes to the re-emission of heat to the outdoor side. Furthermore, if the temperature at the zero-gradient position is high, the re-emission heat flux is large. The temperature at the zero-gradient position of the wood and mud walls at 16:00 was higher than that of the wood mat and glass wool walls, and the temperature gradient was large. Therefore, the wood and mud walls generated a large heat flux toward the outdoor surface. The temperatures at the zero-gradient positions of the wood mat wall and the glass wool wall at 18:00 were compared. For the wood mat wall, the zero-gradient position lay 35 mm from the outdoor surface at a temperature of 35°C; for the glass wool wall, it lay 5 mm from the outdoor surface at 25°C. The wood mat wall thus had its zero-gradient position deeper inside the wall body than the glass wool wall, and the range of re-emission heat flux was wide. The internal temperature distribution of the glass wool wall was linear and close to the steady state.

Figure 2 shows the calculation results for the surface heat flux. The outdoor-side surface heat flux of each wall increased from sunrise to noon owing to the influence of the sun. The magnitude of the heat flux varied with the wall material: the outdoor-surface heat flux was highest for the mud wall, followed by the wood wall, wood mat wall, and glass wool wall. The heat flux input to the outdoor surface decreased as the thermal conductivity decreased. The heat flux on the outdoor surface of each wall became negative after 17:00, consistent with the time when the parabolic temperature distribution appeared in Figure 1. In summary, the surface temperature of the west-facing wall in summer fluctuates greatly owing to solar radiation, and the rate of temperature change differs between the outdoor surface and the interior of the wall body. Therefore, when the surface temperature decreases, the internal temperature distribution becomes parabolic and a re-emission flux is generated.

Figure 3 shows the time-series changes of the heat flux and the heat transfer on the wall surface. The data were obtained from the analysis of the mud wall with a thickness of 100 mm. The heat flux is shown for the outdoor and indoor surfaces of the wall. The surface heat flux was calculated by multiplying the difference between the outdoor air temperature (or room air temperature) and the surface temperature by the surface heat transfer coefficient. For the wall model, the one-dimensional heat conduction equation was solved with the backward difference method.
The indoor and outdoor temperatures of the building were measured data for Anjo-city, Aichi, Japan. The outdoor temperature was the westward sol-air temperature, while the indoor temperature was the room temperature of the experimental house, set to 24°C. The unit of heat quantity flowing through the wall in one day is kJ/(m²·day). To obtain a breakdown of the flow-through heat quantity, the heat flux over 24 h was integrated separately on the plus side and the minus side. The breakdown of the heat quantity of the wall is as follows: the plus component on the outdoor side is the outdoor inflow (A), the minus component on the outdoor side is the outdoor outflow (B), the plus component on the indoor side is the indoor inflow (a), and the minus component on the indoor side is the indoor outflow (b). In addition, the net heat quantity on the outdoor side is the outdoor flow-through heat quantity (C), and that on the indoor side is the indoor flow-through heat quantity (c). In this study, to consider unsteady heat transfer, the re-emission rate is defined by Equation (1) and the reduction rate by Equation (2):

Re-emission rate = B/A (1)
Reduction rate = c/C (2)
Relationship Between Wall Thickness and Heat Flux
Figure 4 shows the heat quantity at the wall surfaces. Comparing the outdoor outflow heat quantity of each material, the largest was mud, followed by the wood wall, wood mat wall, and glass wool wall. This order matches the order of the magnitudes of the heat capacities, indicating that the re-emission quantity increases with the heat capacity. The outdoor flow-through heat quantity (C) and the indoor flow-through heat quantity (c) were compared. Since the indoor flow-through heat quantity (c) was smaller than the outdoor flow-through heat quantity (C), the heat quantity flowing into the room was reduced. From these results, it was confirmed that re-emission, which releases part of the input heat quantity to the outdoors at night, occurs at walls in summer. Furthermore, the thermophysical properties of the wall influenced the magnitude of the re-emission.

Figure 5 shows the temperature distribution for each thickness of the wood wall. When the wall is thick, the time change of the temperature inside the wall is small. In particular, at a wall thickness of 1000 mm, the temperature distribution was almost constant deeper than 300 mm from the outdoor surface. The point where the temperature gradient becomes zero is important because the heat flux is directed both ways from it. At a wall thickness of 100 mm, the zero-gradient point at 18:00 was at a depth of 30 mm from the outdoor surface at a temperature of 35°C. At a wall thickness of 500 mm, the zero-gradient point was at 40°C at 50 mm, and at a wall thickness of 1000 mm it was at 42°C at 50 mm. As the wood wall was thickened, the zero-gradient point became hotter and lay deeper from the outdoor surface. In the range from the zero-gradient point to the outdoor surface, the heat flux is directed from the wall to the outdoors; this range therefore contributes to the re-emission of the heat quantity input to the wall.

Figure 6 shows the heat quantity at the wall surfaces at a thickness of 1000 mm. A comparison of the heat quantities of the respective surfaces shows that the outdoor inflow (A) and outdoor outflow (B) on the outdoor surface were larger than the indoor inflow (a) and indoor outflow (b) on the indoor surface. This was particularly remarkable for the mud, wood, and insulation (Wood Mat) walls with large volumetric specific heats. In the walls with a large heat capacity, heat input and re-emission occurred near the outdoor surface of the wall, which reduced the heat flux reaching the indoor space.

Figure 7 shows the re-emission rate, while Figure 8 shows the reduction rate. The heat conduction analysis determined the heat quantities at the outdoor and indoor surfaces. The analysis factors were the material type and the wall thickness: two wall types and 23 wall thicknesses from 10 to 1000 mm, for a total of 46 conditions. As the re-emission rate increases, the re-emitted quantity is subtracted from the heat quantity input at the outdoor surface, which contributes to the reduction of the heat quantity flowing through the wall. As the reduction rate decreases, the flow-through heat quantity decreases. The re-emission rate increased rapidly for wall thicknesses of 100 to 200 mm or more. The re-emission rate reached 50% at a wood wall thickness of 137 mm and a mud wall thickness of 223 mm.
When the reduction rate reached 50%, the thickness of the wood wall was 147 mm and that of the mud wall was 243 mm. To examine the upper limits of the re-emission rate and the reduction rate, the analysis was carried out with a wall thickness of 1000 mm. The re-emission rate was 89% for the wood wall and 83% for the mud wall, and the reduction rate was 19% for both walls. From the above, the upper limit of the effect of the heat capacity in reducing the heat quantity flowing through the wall is considered to be approximately 80% of the input heat quantity.
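The bookkeeping behind Equations (1) and (2) can be sketched in a few lines of Python. The flux series below are toy sine waves standing in for the computed surface heat fluxes, with positive values taken as heat flowing into the wall (outdoor side) and into the room (indoor side); the real inputs would come from the conduction analysis.

# Sketch of the heat-quantity breakdown and the two rates.
import numpy as np

dt = 600.0                                          # sampling interval [s]
t = np.arange(0, 24 * 3600, dt)
q_out = 20 * np.sin(2 * np.pi * t / 86400)          # toy outdoor-surface flux [W/m2]
q_in = 5 * np.sin(2 * np.pi * (t - 7200) / 86400)   # toy indoor-surface flux [W/m2]

def split_heat(q, dt):
    """Integrate positive and negative flux components into heat quantities [kJ/m2]."""
    plus = q[q > 0].sum() * dt / 1000.0
    minus = -q[q < 0].sum() * dt / 1000.0
    return plus, minus

A_out, B_out = split_heat(q_out, dt)   # outdoor inflow (A) and outflow (B)
a_in, b_in = split_heat(q_in, dt)      # indoor inflow (a) and outflow (b)
C = A_out - B_out                      # outdoor flow-through heat quantity
c = a_in - b_in                        # indoor flow-through heat quantity
re_emission_rate = B_out / A_out       # Equation (1)
reduction_rate = c / C                 # Equation (2)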
The Thermal Resistance Equivalent to the Thermal Resistance of the Glass Wool

Figure 9 shows the magnitude of the heat quantity flowing through the wall for each material. As the wall thickness increased, the heat quantity flowing through the wall decreased exponentially; this exponential decline was observed for all material types. Figure 10 shows the relationship between the wall thicknesses at which the heat quantity flowing through the wall is equal. To investigate the influence of the heat capacity on the heat flux reduction, glass wool was compared with the other materials. The wall thickness of each material was estimated such that its flow-through heat quantity equaled that of glass wool. The horizontal axis is the thickness of each material (Mud, Wood, Wood Mat), and the vertical axis is the thickness of glass wool. For the relationship between the glass wool thickness and the wall thickness of each material, the linear regression equations described below were obtained.
t_GW = 0.67 × t_woodmat (5)

where t is the thickness [mm]. The wall thicknesses at which the flow-through heat quantities become equal show a linear relation between each material and the material with small heat capacity (Glass Wool). The soil wall thickness corresponding to the flow-through heat quantity of Glass Wool (t = 100 mm) was 956 mm, that of the wood wall was 248 mm, and that of the Wood Mat was 148 mm. Figure 11 shows the relationship between the wall thermal resistances at which the flow-through heat quantity is equal. The thicknesses given by Equations (3) to (5) were divided by the thermal conductivity of each material, and the following formulas were obtained.
The thermal resistances at which the heat quantity flowing through the wall is the same were compared for glass wool and the other materials. The flow-through heat quantity was equal when the thermal resistance of mud was 59% of the glass wool thermal resistance and when the thermal resistance of wood was 53% of the glass wool thermal resistance. For the wood fiber mat, the flow-through heat quantity was equal at 91% of the thermal resistance of the glass wool. This suggests that, in a westward wall in summer, a material with a large heat capacity may reduce the flow-through heat quantity compared with a material with a small heat capacity.
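The equivalent-thickness estimation behind regressions such as Equation (5) can be reproduced as follows. The two heat-quantity curves are toy exponential decays standing in for the computed flow-through heat quantities, so the fitted coefficient is illustrative only, not the paper's value.

# Sketch of the equal-heat-quantity thickness matching and regression.
import numpy as np

thk = np.array([10, 50, 100, 200, 500, 1000], dtype=float)   # [mm]
Q_gw = 4000 * np.exp(-thk / 150)    # toy flow-through heat, glass wool
Q_mat = 3500 * np.exp(-thk / 220)   # toy flow-through heat, other material

# For each material thickness, find the glass-wool thickness giving equal Q.
# np.interp needs increasing x, so interpolate on the reversed (Q, thk) table.
t_gw_equiv = np.interp(Q_mat, Q_gw[::-1], thk[::-1])

# Linear regression through the origin, as in Equation (5): t_GW = k x t_mat.
k = np.sum(t_gw_equiv * thk) / np.sum(thk**2)
print(f"t_GW = {k:.2f} x t_mat")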
Conclusion
A one-dimensional transient thermal conduction analysis of a wall model of a building affected by solar radiation was conducted. The purpose of the analysis was to examine the effect of heat capacity on heat transfer. The heat flux and heat quantity were obtained from the temperature distribution data of the analysis. The indoor and outdoor temperatures of the building were measured data from Anjo-city, Aichi, Japan. The outdoor temperature was the westward sol-air temperature, while the indoor temperature was the room temperature of the experimental house, set at 24°C. The following results were obtained from the calculation.
The wall model was analyzed and the temperature distribution inside the wall was calculated. As the short-wave solar radiation declined in the evening, the outdoor surface temperature decreased. However, the interior of the wall takes longer to cool than the outdoor surface. Therefore, the temperature distribution inside the wall was parabolic. Even under conditions where the sol-air temperature (SAT) was higher than the room temperature, heat flowed from the wall toward the outdoor air. This re-emission at the outdoor wall surface continued from the evening until the morning of the next day. For a calculated wall thickness of 100 mm, the 24-h re-emitted heat quantities of the materials were compared. The order was mud > wood > wood fiber mat > glass wool, which matches the order of the volumetric heat capacities. In addition, since glass wool has a low heat capacity, re-emission at its outdoor wall surface hardly occurred, and its temperature distribution was almost proportional to the thermal resistance.
The value obtained by dividing the heat re-emitted at the outdoor wall surface by the heat quantity entering the wall from the outdoors is defined as the re-emission rate. Under the steady-state assumption, the re-emission rate is 0%. For a wood wall thickness of 50 mm, the re-emission rate was approximately 10%. The re-emission rate increased as the wall thickened: 35% at a thickness of 100 mm and 65% at a thickness of 200 mm. The re-emission rate was strongly influenced by the wall thickness. The reduction rate is defined as the value obtained by dividing the heat quantity entering the room from the wall by the heat quantity entering the wall from the outdoors. The period for integrating the heat quantity is one day, from 00:00 to 24:00. Under the steady-state assumption, this rate is 100%. In terms of the heat flux into the room, the thermal resistance equivalent to that of glass wool was calculated: compared with glass wool, the value for wood was 53%, for soil 59%, and for the wood fiber mat 90%.
This study used only the results of numerical analysis. In reality, many factors, such as the moisture content and surface reflectance, also have an effect. It is necessary to confirm the surface heat flux and the internal temperature distribution of the wall by outdoor experiments. | 5,675.6 | 2018-08-23T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Spin Selectivity Damage Dependence of Adsorption of dsDNA on Ferromagnets
The adsorption of oxidatively damaged DNA onto ferromagnetic substrates was investigated. Both confocal fluorescence microscopy and quartz crystal microbalance methods show that the adsorption rate and the coverage depend on the magnetization direction of the substrate and the position of the damage site on the DNA relative to the substrate. SQUID magnetometry measurements show that the subsequent magnetic susceptibility of the DNA-coated ferromagnetic film depends on the direction of the magnetic field that was applied to the ferromagnetic film as the molecules were adsorbed. This study reveals that (i) the spin and charge polarization in DNA molecules is changed significantly by oxidative damage in the G bases and (ii) the rate of adsorption on a ferromagnet, as a function of the direction of the magnetic dipole of the surface, can be used as an assay to detect oxidative damage in the DNA.
■ INTRODUCTION
The central dogma of biology holds that the nucleobase sequence of DNA carries the specific genetic information that is translated to RNA and proteins, which manifests in an organism's phenotype. While sequence preservation is important, some amount of mutation and/or damage is essential for natural selection and evolution. The base pairs are susceptible to damage arising from cellular respiration, environmental exposure to free radicals, and other factors. 1 An example of such damage is 7,8-dihydro-8-oxoguanine (OG), or other oxidized guanine products, which commonly results because of the guanine base's low oxidation potential. Understanding the cellular mechanism for detecting and repairing such damage, to inhibit mutagenesis, is of great interest. 2 Because OG has a minor effect on the structure and stability of DNA, the mechanism through which repair enzymes locate it remains unknown.
We hypothesize that spin polarization differences can be used to distinguish OG DNA from its undamaged form. This hypothesis is motivated by three different observations. First, studies have demonstrated that OG damage improves spin-selective electron transmission through DNA. 3 Second, it has been shown that spin polarization in chiral biomolecules (nucleic acids, peptides, and amino acids) affects their interaction with ferromagnetic surfaces. 4,5 Third, the interaction between oligopeptides and double-stranded DNA (dsDNA) molecules, adsorbed on ferromagnetic surfaces, is spin-dependent. 6 To test this hypothesis, we investigated the adsorption of a double-stranded DNA on ferromagnetic (FM) films and compared it to the case of OG-damaged DNA.
We examined the adsorption of four DNA duplexes on Ni/Au surfaces: one is an unmodified DNA duplex, and the other three are different OG-damaged DNA duplexes on FM surfaces. In these experiments, the damaged DNA duplexes differ by the location of an OG damage site, which was systematically varied along the duplex DNA's helix; the undamaged DNA duplex serves as a control system. The FM substrate, a Ni/Au film, was magnetized perpendicular to the substrate plane, and the adsorption was monitored in two different ways, confocal fluorescence microscopy and quartz crystal microbalance (QCM) measurements. Previously, we used QCM to show that the adsorption kinetics of a chiral amino acid, cysteine, on an FM surface is enantiospecific with the FM electrode's magnetization direction along the surface normal. 7 In this work, confocal laser scanning microscopy (CLSM) and QCM studies show that the adsorption rate and the total coverage of DNA on ferromagnetic substrates depend on both the magnetization direction of the FM and the presence of OG damage, as well as its position in the DNA duplex. These findings are rationalized by considerations arising from the chiral-induced spin selectivity (CISS) effect. 8

■ EXPERIMENTAL METHODS

Ferromagnetic Substrate Preparation. The ferromagnetic substrates were prepared by the deposition of a 100 nm thick Ni layer on a p-type (boron-doped) Si ⟨100⟩ ± 0.9° wafer in an e-beam evaporator. An 8 nm Ti layer was deposited between the Si and Ni as the adhesion layer. The nickel layer was coated with a 5 nm thin layer of Au. Previous reports described the fabrication of the surface and its use in CISS effect applications. 9,10 The chamber was maintained at a high vacuum (<10⁻⁷ Torr) and at ambient temperature during the deposition of the metallic layers. For the quartz crystal microbalance (QCM) measurements, 10 nm of Au was coated on the 100 nm Ni layer. The substrates were diced into 23 × 23 mm² squares for all of the confocal experiments. Before evaporation, the substrate pieces were cleaned by boiling in acetone and ethanol, each for 10 min.
DNA Hybridization. DNA with the fully matched sequence and all of the complementary strands tagged with cyanine3 (cy3) dye were purchased from Integrated DNA Technologies (IDT Synthezza, HPLC purified with mass spectroscopy certificate of analysis). The cy3 dye was covalently attached to the 3′ end of the complementary strand. We performed the DNA hybridization as reported elsewhere. 11 Figure S3 shows the CD spectra and the corresponding UV−vis spectra of all of the double-stranded DNA molecules. The concentration was determined from the signature absorption intensity of the double helix DNA at 260 nm using a Thermo Scientific Nanodrop ONE C UV−vis spectrometer. Then, it was diluted with 0.4 M phosphate buffer (pH 7.2) to a final concentration of 0.5 μM. CD spectroscopy was used to confirm the hybridization of the DNA molecules.
Adsorption Kinetics of DNA on Gold-Coated Ferromagnetic Substrates. To verify the importance of the electrons' spin in the adsorption of the DNA molecules, we measured the dependence of the rate of adsorption on the ferromagnetic (FM) substrate when the FM layer was magnetized perpendicular to the surface, directed either away from the surface (Up) or into the surface (Down) by using a permanent 0.42 T magnet. The adsorption occurred through a strong covalent Au−S bond between the gold layer and the thiol-functionalized DNA molecules. All of the OG DNA strands, used in this work, consisted of an identical sequence of base pairs differing only at the location of the OG.
We carried out the adsorption of the molecules using a 0.5 μM solution of the dsDNA in 0.4 M phosphate buffer (pH = 7.2). A 110 μL aliquot of the DNA solution was drop-cast at the center of the MAKTEK glass-bottom-well Petri dish, which was then placed on the microscope stage. The magnet was placed precisely above the ferromagnetic substrate, with the ferromagnetic layer side facing the dsDNA-containing buffer solution. The timer was instantly set, and the images were collected at different time intervals of up to 20 min, with the surface magnetized with the North pole of the permanent magnet either Up (north) or Down (south).
Microscope Setup and Data Analysis. We performed the fluorescence imaging experiments using a ZEISS LSM 800 confocal laser scanning microscope aligned in an inverted fashion. For the current experiments, we used a 561 nm (10 mW) diode laser. The laser beam was focused using a 10× objective lens (EC Plan-Neofluar, N.A. 0.3). We illuminated the sample with 1.5% of the laser power. The emitted fluorescence was collected by the same objective lens and was separated from the excitation beam by placing two dichroic beam splitters in the optical path. It was then routed to an avalanche photodiode (APD) for fluorescence imaging. Before entering the APD, the luminescence beam passed through a narrow pinhole that blocked all of the stray light or fluorescence coming from the out-of-focus planes of the substrate−solution specimen. The emission was collected from 571 to 700 nm. Here, we focused the z-plane at the surface−solution interface, and it was fixed for all of the measurements. The x−y coordinate of the stage was also fixed.
The snapshots were acquired using ZEN 2.3, processed with ImageJ, and then analyzed in Origin software. We selected the same region from each image, with a size of 400 pixels × 400 pixels. The mean fluorescence intensity values were analyzed using identical operations. Each experiment was repeated three times to ensure the reproducibility of the results. The errors shown in the plots were calculated as the standard deviation from the mean.
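A Python sketch of this analysis is given below for illustration (the authors used ZEN, ImageJ, and Origin, not this code); the file names, region origin, and frame count are hypothetical placeholders.

# Sketch: mean fluorescence from a fixed 400x400-pixel region, over repeats.
import numpy as np
from PIL import Image

def mean_intensity(path, x0=0, y0=0, size=400):
    """Mean pixel value in a size x size region of a grayscale image."""
    img = np.asarray(Image.open(path).convert("F"))
    return img[y0:y0 + size, x0:x0 + size].mean()

# 3 repeats, frames taken over 0-20 min (hypothetical file-naming scheme).
repeats = [[mean_intensity(f"run{r}_t{t}.tif") for t in range(21)]
           for r in range(3)]
I = np.array(repeats)
mean_curve = I.mean(axis=0)
err = I.std(axis=0)                       # standard deviation from the mean
F_over_F0 = mean_curve / mean_curve[0]    # normalization as in Figure 1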
Open Circuit and Contact Potential Difference Measurements in the Quartz Crystal Microbalance. The open circuit and contact potential difference experiments were performed using a 7.9995 MHz quartz crystal with an EQCM cell attachment and a 430A potentiostat (CH Instruments). The surface area of the crystal was 0.205 cm², and it was coated with 100 nm of nickel and 10 nm of polycrystalline gold as the working electrode. The counter electrode was a Pt wire, and the reference electrode was Ag/AgCl (saturated). The sample was first scanned from −0.4 to −0.9 V versus saturated Ag/AgCl at a scan rate of 25 mV/s for 10 cycles to allow the electrochemical setup to equilibrate and give consistent results (shown in Figure S2). Then, the potential was held at −0.9 V for 1 min to fully desorb the DNA. Each sample was measured 6 times.
For the contact potential difference measurements, the electrode was first incubated in the 0.5 μM fully matched DNA solution for 30 min to allow for DNA adsorption on the surface and for system equilibration. Then, the sample was scanned from 0 to −1 V versus saturated Ag/AgCl at a scan rate of 25 mV/s.

The adsorption of full-match dsDNA molecules on Ni/Au substrates was analyzed by XPS measurements using a Kratos Axis Ultra DLD spectrometer equipped with a monochromatic Al Kα X-ray source (hν = 1486.6 eV) operating at 75 W. Measurements were performed at a 0° emission angle with respect to the surface normal. The elemental concentrations of S 2p and N 1s were measured from the relative intensities for the surface-adsorbed dsDNA molecules.
Superconducting Quantum Interference Device (SQUID) Measurements. Magnetic measurements of the (Ni/Au) layer were performed using an MPMS3 SQUID magnetometer (LOT-Quantum Design Inc.). The ferromagnetic layer (Ti 8, Ni 100, Au 5 nm) was coated on a 4 mm × 4 mm sized Si substrate. A magnetic field of up to 6 T was applied out-of-plane to the substrate. We measured the adsorption of full-match DNA and the central OG DNA on the Ni/Au substrate magnetized with the external magnet pointing to the North or South pole for up to 20 min.
■ RESULTS
The real-time adsorption of duplex DNA on an FM substrate was followed by monitoring the time-dependent fluorescence from cy3 dye-tagged dsDNA molecules. The substrate was silicon coated with a Ti/Ni/Au (8/100/5 nm) film, and it was magnetized with the North pole of the magnetization pointing either toward or away from the adsorbed layer. A permanent magnet was used for magnetizing the Ni/Au film, and the magnet was maintained in proximity to the surface for the duration of the experiments (see Figures 1A and 2A) with its North or South pole pointing toward the surface. The DNA was adsorbed from a 0.5 μM solution of the dsDNA in 0.4 M phosphate buffer (pH ∼ 7.2). The dsDNA molecules contained OG damage at three different locations, namely distal, central, and proximal with respect to the cy3 at the 3′ end of the secondary strand (see Figure 1A and ref 6).
The fluorescence (F) was monitored for up to 20 min, starting when the substrate was dipped into the solution containing the DNA in a bottom-well Petri dish. The signal is normalized to the background fluorescence, F_0, prior to any significant amount of DNA adsorption on the surface. The time was measured from the insertion of the ferromagnetic film-coated substrate into the solution. It was found that the adsorption of all of the DNA duplexes was higher with the South pole of the magnetic field pointing toward the solution than with the North pole oriented toward the solution, and that the asymptotic value of the fluorescence intensity for the fully matched, undamaged DNA (Figure 1B) was higher than that of the damaged DNAs (Figure 1C−E). The observed difference in the fluorescence intensity as a function of the direction of the magnetic field indicates a spin-dependent interaction. In contrast to the earlier work, 8 which examined the enantiospecific interaction between chiral molecules and an FM substrate for a short time, the studies reported here are extended to long times, where the coverage approaches an asymptotic limit.
The difference in the coverage (amount) of adsorbed DNA molecules, as a function of the substrate's magnetization direction, was quantified by X-ray photoelectron spectroscopy (XPS). The elemental peaks of N 1s and S 2p confirmed that the DNA molecules were chemisorbed on the surface for both directions of the magnetic field. The limited signal-to-noise ratio caused the elemental S 2p signal at 162 eV to be inconclusive, and the changes in the atomic percentage from the elemental N 1s signal at a 401 eV binding energy (Figure S1) showed only a very small difference in intensity. We normalized the intensities of each peak by the Au 4f signal, and for each sample, we measured the intensity at two random points on the surface. Table S1 shows the elemental atomic percentages and normalized percentages that were obtained by this process. Figure 1C−E shows the time-dependent chemisorption for distal, central, and proximal OG-damaged DNAs. The adsorption curves reach different asymptotic intensities based on the location of the damage in the helix. The intensity decreases from distal to central under South magnetization; however, no change from central to proximal OG is evident. As with the undamaged DNA, the difference in the adsorption as a function of the direction of the magnetic field indicates spin selectivity for the adsorption. The effect is largest in the case of the central OG DNA and smallest for the distal and proximal OG.
The findings from the fluorescence studies are corroborated by quartz crystal microbalance (QCM) studies. Figure 2A shows a schematic diagram for the electrochemical QCM measurement, which reports the mass of the adsorbed molecules as a function of time. Note that the time scale of the signal is somewhat different from that in Figure 1 because the geometry of the adsorption cell is different. Namely, here the surface on which the adsorption was measured was larger than in the case of the fluorescence studies (0.205 vs 0.002 cm², respectively), while the volumes of the solution were 2 mL and 110 μL in the QCM and fluorescence studies, respectively. These differences arise from constraints of the experimental apparatuses.
As shown by Figure 2 and Table 1, the mass changes for the adsorption of all four different DNAs are around 30−40 ng within 300 s. As in the case of the fluorescence studies, the adsorption rate and the amount of molecules adsorbed on the surface are higher when the magnetic South pole is pointing toward the molecules. In addition, the differences in the coverage between the North and South pole orientations follow the same trend as in the fluorescence studies. The correlation between the fluorescence and the QCM data indicates that the adsorption indeed depends on the direction of the magnetic field acting on the FM substrate and on the position of the damage in the DNA. In contrast to earlier studies, which found that the rate of chiral molecule adsorption changes on FM substrates but its total coverage does not, 8 we find that both the rate of adsorption and the total coverage of DNA depend on the magnetization of the FM electrodes. When comparing the ratio between the signals observed for the South- and North-magnetized FM films (see Table 2), the two methods show the same trends; namely, the largest ratio is obtained for the central OG, while the smallest ratio is obtained for the distal OG. However, the ratios are consistently larger for the fluorescence than for the QCM studies. A plausible explanation for this difference is a more efficient quenching of the fluorescence from the dye when the North magnetic pole is pointing toward the adsorbed molecules. Spin-dependent electron-transfer quenching of fluorescence in chiral assemblies 12,13 and spin-dependent photocurrent from a dye chromophore through a chiral bridge on electrodes 14 have been reported before. This behavior can be rationalized by an electron-transfer-mediated quenching mechanism, in which the DNA preferentially transmits one electron spin direction into the FM over the other, as set by the direction of the magnetic dipole of the FM. Hence, spin-dependent fluorescence quenching by electron transfer to the substrate explains why the adsorption differences with magnetization direction appear with higher contrast in the fluorescence measurements than in the mass-change measurements.
Note that the observed differences in adsorption rates, as a function of the magnetization direction of the substrates, cannot stem from a magnetic force acting on the diamagnetic DNA, because the observations here depend on the sign of the magnetic field, whereas the Kelvin force does not.
■ DISCUSSION
The duplex DNAs that were studied have identical base pair sequences and differ only in the presence and/or location of the OG defect. Thus, the difference in the intensity and the dependence on the magnetic field must originate from the difference in the location of the OG. Because the DNA duplexes are chiral, they should become charge- and spin-polarized as they approach and bind to the metal surface, according to the CISS effect. 15 Because of the transient spin polarization in the molecule, a spin exchange interaction manifests between the thiol group on the DNA and the magnetized FM substrate, leading to DNA chemisorption that depends on the FM substrate's magnetization, i.e., spin alignment. Namely, the adsorption will be faster for the case where most of the spins in the Ni layer are aligned antiparallel to the polarized spin at the molecule's binding site, and the difference in the rate of adsorption should correlate with the extent of charge and spin polarization in the DNA. 8 It is interesting that the preferred magnetic field direction, as observed in the current studies, is opposite to that obtained in reference 8. To verify the reason for it, we repeated the experiment with the same DNA sequence used in ref 8 and found that the preferred magnetic orientation is South, as observed in the present study (see Figure S4). In ref 8, the adsorption was controlled by the kinetics, and it was performed under different conditions from those in the present study. Here, we worked at much lower concentrations and followed the adsorption in situ; hence, the system reached thermodynamic equilibrium, as discussed below. We propose that this is the reason for the discrepancy in the results. However, the subject of kinetics vs thermodynamics is the focus of future studies. The damage in the DNA inhibits the charge polarization; however, it enhances the spin polarization of the electrons that succeed in passing through it. 16 Thus, the trend in the observations with the OG defect position results from the combination of two counteracting effects. The charge polarization is largest when the damage is farthest from the substrate and smallest when the damage is closest to it; see Figure 3A. In contrast, the spin polarization is smallest when the damage is farthest from the substrate and highest when the damage is located closest to the surface. 16 We posit that the difference in the selectivity of the adsorption rate, ΔR, is given by ΔR = k·S·C, where k is a proportionality constant, C is the amount of charge polarization, and S = (N+ − N−)/(N+ + N−) is the spin polarization, with N+ and N− being the numbers of electrons polarized with spin parallel or antiparallel to the direction of polarization. As shown schematically in Figure 3A, the maximum of the differential adsorption rate is obtained when the product of the charge polarization and the spin polarization is the largest, which is expected for the case of the OG defect located near the center of the duplex.
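To make the counteracting-trends argument concrete, the following toy calculation (with arbitrary, assumed polarization values, not data from this study) shows how a decreasing C and an increasing S produce a maximum of ΔR = k·S·C at the central position:

```python
# Toy illustration (hypothetical numbers): charge polarization C decreases and
# spin polarization S increases as the OG defect moves closer to the substrate,
# so their product, Delta_R = k * S * C, peaks for a centrally located defect.
k = 1.0  # proportionality constant (arbitrary)
C = {"distal": 0.9, "central": 0.6, "proximal": 0.2}  # assumed charge polarization
S = {"distal": 0.2, "central": 0.6, "proximal": 0.9}  # assumed spin polarization

for pos in ("distal", "central", "proximal"):
    delta_R = k * S[pos] * C[pos]
    print(f"{pos:8s}  Delta_R = {delta_R:.2f}")
# distal: 0.18, central: 0.36, proximal: 0.18 -> maximum at the central OG
```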
In addition to the differences in the adsorption rates, the final coverage of the adsorbed molecules changes with the OG damage location and the magnetization direction of the FM substrate. If the effect of the spin-selective adsorption were purely kinetic, then one would expect the adsorption to result in the same coverage for all cases at long enough times. The difference in total coverage implies that the Gibbs energy for adsorption changes for the two magnetization directions. To probe the possible reason for the differences in coverage, we used a SQUID magnetometer to measure the magnetic coercivity of the substrates after the adsorption experiments in solution were completed. Figure 3B,C shows the magnetic moment as a function of the magnetic field for surfaces prepared when the substrate was magnetized in one direction or the other. For both the full-match and central OG DNAs, the magnetic coercivity is larger for the case in which a North-magnetized FM layer was used to adsorb the DNA, i.e., the lower coverage condition, and is smaller for the South magnetization condition.
For the central OG DNA, the magnetic moment increases more sharply with the applied magnetic field and reaches a somewhat larger saturation magnetization when the film was prepared with a North-magnetized FM layer, i.e., lower coverage (shown in Figure S6B), than with a South-magnetized layer. In addition, a decrease in the magnetic moment is observed as the applied field is increased further, indicating a diamagnetic contribution. The magnetometry results can be explained if we assume that the adsorbed layer has two effects. First, it increases the anisotropy of the potential affecting the electrons' spins at the interface, owing to charge transfer between the substrate and the adsorbed DNA, and thereby increases the magnetic moment measured at room temperature. Second, it has, by itself, diamagnetic properties.
At lower coverage, the anisotropy effect matters more than the DNA film's diamagnetism, and the magnetic moment increases relative to the bare surface, but for the higher coverage condition, the contribution of the diamagnetism matters more and decreases the magnetic moment back toward that of the bare surface (shown in Figure S6).
To verify the role of the charge transfer, the contact potential difference was measured by cyclic voltammetry for fully matched DNA adsorbed on a surface that was not magnetized, magnetized North, or magnetized South. The measurements were repeated four times, and in each trial the data were collected in the order of no magnet, North pole, and South pole pointing toward the adsorbed molecules. The contact potential results are shown in Table 3. The differences between trials may arise from slight variations in the concentration of DNA in solution. Although these differences are quite small, within each trial the data are consistent with the contact potential becoming more negative for the higher coverage surface (South pole). This finding is consistent with more charge moving from the surface to the adsorbed layer when the North pole is aligned toward the molecules, relative to the South pole. This interpretation is consistent with former studies indicating that, for chiral molecules, the amount of charge injected into a layer adsorbed on a ferromagnetic surface depends on the direction of the magnetization of the layer. 5 These findings show that the difference observed in the coverage correlates with differences in the total charge and the magnetic properties of the DNA layer.
■ CONCLUSIONS
This study reveals that the adsorption of OG-damaged DNA on ferromagnetic substrates is spin-selective and depends on the location of the damage in the DNA helix. For three of the different types of duplexes studied, the adsorption rate was faster and the coverage was higher on a South-magnetized FM surface than on a North-magnetized surface. For the distal OG-damaged DNA, the differences were less significant (similar rates, but somewhat higher coverages in the fluorescence experiments). The dependence on the magnetic field was rationalized in terms of a coupling between charge polarization and spin polarization in the DNA duplex, and the dependence of the adsorption asymmetry on the position of the OG damage was rationalized by differences in the charge polarization through the molecular monolayers. Although these studies were performed on an artificial system, they suggest a new contribution to the interactions between chiral molecules; e.g., when a protein (or enzyme) interacts with DNA, their charge polarizations are accompanied by spin polarizations that can change their interaction strength. Moreover, the strength of the interaction will be sensitive to damage to the DNA and to its location.
"Physics"
] |
Mergeomics: integration of diverse genomics resources to identify pathogenic perturbations to biological systems
Mergeomics is a computational pipeline (http://mergeomics.research.idre.ucla.edu/Download/Package/) that integrates multidimensional omics-disease associations, functional genomics, canonical pathways and gene-gene interaction networks to generate mechanistic hypotheses. It first identifies biological pathways and tissue-specific gene subnetworks that are perturbed by disease-associated molecular entities. The disease-associated subnetworks are then projected onto tissue-specific gene-gene interaction networks to identify local hubs as potential key drivers of pathological perturbations. The pipeline is modular and can be applied across species and platform boundaries, and uniquely conducts pathway/network level meta-analysis of multiple genomic studies of various data types. Application of Mergeomics to cholesterol datasets revealed novel regulators of cholesterol metabolism.
Most non-communicable diseases stem from a complex interplay between multiple genes and cumulative exposure to environmental risk factors 1 . An emerging hypothesis regarding the underlying pathogenic processes is that exposure to genetic or environmental risk factors results in progressive and chronic regulatory perturbations to molecular and cellular processes that would otherwise maintain normal homeostasis. In recent years, the advance of omics technologies has greatly enhanced our ability to test this hypothesis with genome-scale molecular datasets that are also publicly available to the scientific community.
Large-scale data repositories such as dbGaP for population-based genetic datasets 2 and Gene Expression Omnibus and ArrayExpress for gene expression and epigenomics datasets 3,4 are continuously expanded with new experiments, and data acquisition projects such as ENCODE and GTEx are generating multidimensional coherent datasets and the necessary basic framework to bridge the gaps between diverse genomics datasets 5,6,7 .
An isolated omics study can provide only a partial view of the biological system. For example, a genome-wide association study (GWAS) can reveal the statistical associations between genetic loci and disease status and imply causal effects, but pinpointing the causal genes, their corresponding causal variants and mechanisms has proven challenging 8,9 . Additionally, evolutionary constraints restrict the ability of GWAS to detect central regulatory genes 10 , and directly translating genetic associations of common variants with subtle effects may miss novel therapeutic targets. On the other hand, gene expression or epigenomic profiling can detect associations between disease and genes or epigenomic markers, but these associations are correlative in nature. By integrating different types of data, it becomes possible to circumvent the limitations of individual studies and better identify disease-causing DNA variants and their downstream molecular targets. For instance, when DNA and RNA are measured simultaneously, it is possible to determine whether a particular genetic variant affects the downstream expression of a gene in a genetics of gene expression or eQTL analysis 11,12 . Furthermore, if a genetic variant resides in a functional site associated with transcription factor binding, epigenetic modification, or protein regulation, as revealed by the ENCODE project 13,14 , it becomes possible to narrow in on the potential targets.
In parallel to large-scale genomic projects, new computational tools are required to convert massive genomics data into biological insights that can lead to novel mechanistic hypotheses. Pathway-based analysis tools such as MAGENTA 15 and i-GSEA4GWAS 16 , which integrate GWAS with curated pathways, have been developed, along with methods that additionally take eQTL information into consideration 17 . Beyond knowledge-based pathway analysis, data-driven approaches utilizing gene regulation and protein-protein interaction networks have been developed to identify the most likely pathological perturbations and target genes for disease-associated loci. Network biology aims to identify high-level regulatory patterns that characterize systemic functions, with nodes representing genes or other molecular entities and edges representing the relations between nodes 18,19,20 . Networks can be classified into those based on curated knowledge (e.g. metabolic pathways), direct experimental evidence (e.g. protein binding from yeast two-hybrid systems), and statistical models of high-throughput omics data (e.g. gene co-expression). Tools for network modelling of genetic data such as dmGWAS and EW_dmGWAS 21 , DAPPLE 22 and "guilt by association" frameworks 23 are powerful extensions to the basic genetics toolbox. Furthermore, extensive tissue-specific network resources, such as the GIANT database 24 , enable biologists to query the regulatory network context for genes of interest.
Applications of network methods to multiple complex traits such as obesity 25 , type 2 diabetes 26,27,28 , coronary artery disease 29,30 and Alzheimer's disease 19,31 have led to successful identification of gene subnetworks (i.e. specific parts of the full network) of highly interconnected genes that represent pathogenic processes, and their central hubs or key drivers as potential points for intervention.
Despite the above methodological advances, several gaps remain to be addressed. The available methods are typically tailored for a particular combination of datasets (e.g. human genetics with gene expression, or human genetics with pathways or protein-protein interactions), thus lacking the flexibility to accommodate additional data types and multiple datasets from one or more species, tissues and platforms. Additionally, network approaches such as WGCNA 32 and postgwas 33 emphasize the detection of modules of co-operating genes, but validation experiments in the wet lab and therapeutic target selection require narrowing in on strong driver genes at the center of the module. Furthermore, the majority of the network tools start from a limited set of known top loci or genes and focus on ranking candidate genes based on network topology. One example is the GIANT database 24 and the NetWAS tool within, which provide convenient online query tools for such analyses. However, in most cases a full genomic analysis capable of extracting true subtle signals that are well below a significance cutoff from random noise is more powerful to achieve a comprehensive understanding of disease pathogenesis. Therefore, there remains a need for easily usable open source software that is designed for diverse types of genomic data to identify pathways, to model gene networks of diseases, and to pinpoint the key driver genes for further experiments in a streamlined and high-throughput manner.
To meet the challenge, we introduce Mergeomics, a flexible pipeline that integrates genomic associations, tissue-specific functional genomics resources, canonical pathways and weighted gene-gene interaction networks to identify pathogenic subnetworks and their key driver genes. The main components of Mergeomics are designed to 1) identify disease-associated subnetworks by aggregating genomic marker associations over functionally related or co-regulated genes; 2) perform pathway- and network-level meta-analysis across studies of different design, data type, platform, and species; 3) determine network key drivers by projecting disease subnetwork genes onto one or more system-scale gene or protein interaction networks.
Here, we describe the methodology in detail and introduce new algorithms for pathway and network analyses.
We also report the results from extensive testing of technical aspects such as parameter selection and data preprocessing based on simulated and empirical datasets to demonstrate that Mergeomics is statistically robust and outperforms previous methods. Finally, we applied Mergeomics to circulating cholesterol datasets, a clinically relevant human trait that is a major risk factor for cardiovascular disease. The most distinct aspects of Mergeomics lie in its applicability to both human and animal model studies, and its adaptability to various types of association studies, from GWAS, mutation burden from exome sequencing studies, and transcriptome-wide association studies (TWAS) to epigenome-wide association studies (EWAS) and metabolite- or proteome-wide association studies. The source code for Mergeomics is released as an R package (http://mergeomics.research.idre.ucla.edu/Download/Package/).
Overview of Mergeomics
Figure 1 shows the information flow within the Mergeomics pipeline. The Marker set enrichment analysis (MSEA), depicted on the left, combines disease association data (e.g., GWAS, EWAS, TWAS), functional genomics data from projects such as GTEx and ENCODE, and functional gene sets such as metabolic and signalling pathways and co-regulated gene modules. MSEA is based on the notion that while it is difficult to say which marker is causal for a disease, if the markers associated with a biological process (via their putative target genes selected based on functional evidence) are enriched for disease association signals, then it is plausible that at least some of those markers and their target genes are involved in causal disease mechanisms. The output from MSEA is a ranked list of gene sets that are significantly enriched for disease markers. We collectively denote these gene sets, which can be pathways, co-expression modules or gene signatures, as disease-associated gene sets. When multiple datasets of the same data type or different data types are available for a given disease or phenotype, the meta-MSEA component, which is based on the same principles as MSEA but performs meta-analysis at the pathway or network level, can be utilized.
The weighted Key Driver Analysis (wKDA, right side in Fig. 1) component of Mergeomics identifies local hubs that are central to the disease-associated gene sets by taking into consideration the network topology and the edge weight information between network nodes. This is accomplished by projecting the disease-associated gene sets from MSEA or meta-MSEA onto one or more types of gene or protein interaction networks containing detailed topology information, and then testing if the network neighbourhood of a particular hub shows over-representation of disease-associated genes. Hubs that demonstrate significant enrichment are defined as key drivers of the disease-associated gene sets.
Although MSEA (or meta-MSEA) and wKDA are introduced as sequential steps in Mergeomics, they can be performed independently. MSEA or meta-MSEA can be performed without continuing to wKDA, and wKDA can be performed on pre-defined disease-associated genes without running MSEA or meta-MSEA.
Calibration of MSEA
MSEA first converts a gene set from pre-defined functional pathways or co-regulated genes into a set of markers that are likely to perturb the function of the genes based on functional genomics data such as eQTLs and ENCODE information. The disease association P-values for this set of markers are then extracted from the summary statistics of a disease association study of interest. If there are a large number of small P-values in the marker set compared to what can be expected by chance, we conclude that the gene set we started from is enriched for disease associations. Key features of MSEA include: 1) it provides flexibility to accommodate association studies of different types or species, as long as the corresponding association statistics, marker-gene mapping, and pathway or gene set files are available; 2) it allows flexibility in gene-marker mapping to incorporate appropriate functional genomics information specific to the marker type (e.g., eQTL information between SNPs in GWAS and genes); 3) it allows marker filtering based on dependency measures between markers to select independent markers for statistical testing (e.g., linkage disequilibrium or LD information can be used to correct for linked SNPs in GWAS); 4) it utilizes a new test statistic with multiple quantile thresholds to automatically adapt to different association study datasets involving different sample sizes and statistical power; 5) it implements both marker-based and gene-based permutation strategies to estimate null distributions, with the latter adjusting for shared markers between genes and for gene size.
To test the performance of MSEA, we performed simulation tests based on three cholesterol GWAS of varying sample sizes (a Finnish study of 8,330 individuals 34 , the Framingham Heart Study with 7,572 participants 35 , and the Global Lipid Genetics Consortium or GLGC with 100,184 people 36 ) and a set of known causal lipid homeostasis genes from the Reactome pathway R-HSA-556833, "the metabolism of lipids and lipoproteins". We resampled genes from this pathway into 100 sets each of 25, 100 and 250 genes to simulate 300 positive control signals of different magnitudes. Simultaneously, 300 corresponding random gene sets were generated as negative controls. This procedure was repeated 100 times to produce stable statistics, and performance was evaluated as sensitivity, specificity and positive likelihood ratio (details in Methods).
We identified several important parameters that affect the performance of MSEA based on the three cholesterol GWAS datasets, including the percentage of top markers used, the linkage disequilibrium cutoff used to filter SNPs, and the permutation type for null distribution estimation (Supplementary Table 1). First, the signal-to-noise ratio typically improved when genetic loci with relatively stronger associations, rather than the full GWAS associations, were used (Supplementary Fig. 1). This confirms previous findings for complex traits that heritability is maximally explained by the top portion of the GWAS SNPs 37 . Second, the effect of LD correction depended on the permutation type, the percentage of markers included, and the power of the association study under testing (Supplementary Fig. 1). In the marker-permuted MSEA, the marker labels are permuted to estimate the null distribution of the enrichment score for random expectation; in the gene-permuted MSEA, gene labels are permuted while the links between markers and genes are kept intact, which is more consistent with the hierarchical marker-gene-pathway cascade (Supplementary Fig. 2). In general, gene-based permutation is less sensitive to LD for better-powered studies such as GLGC but performs better under a higher LD cutoff for smaller studies such as Framingham; marker-based permutation is more sensitive to the percentage of markers used in MSEA, particularly for smaller studies (Supplementary Fig. 1). In addition, the assignment of markers to their putative target genes can be defined in multiple ways: chromosomal distance-based assignment based on the locations of gene regions, or functional information-based mapping determined by empirical data such as tissue-specific eQTLs or ENCODE information.
Although empirical data are biologically more meaningful, to allow comparisons with other methods, which mostly implement distance-based mapping, in our simulation analysis we used a 20 kb window to map SNPs to genes by chromosomal location.
Overall, the MSEA algorithm that relied on gene-based permutations demonstrated consistently high sensitivity, specificity and positive likelihood ratio, with smaller parameter-dependent fluctuations than the marker-based version (Supplementary Table 1, Supplementary Fig. 1). Based on performance testing, we chose to use the top 50% of GWAS loci, an LD cutoff of r² < 0.5, and gene permutation as the default setting. Of note, the differences due to datasets were typically larger than those due to parameters when using the gene-permuted MSEA (Supplementary Fig. 3).
Performance comparison between MSEA, MAGENTA and i-GSEA4GWAS
MAGENTA 15 and i-GSEA4GWAS 16 are two widely used GWAS pathway analysis tools that are built upon an established gene set enrichment analysis 38 . Both tools estimate the genetic associations for each gene, and then test if the aggregate gene score for a pathway is higher than expected. MAGENTA identifies the peak disease-associated SNP for each gene, and then adjusts the statistical significance of the peak SNP according to the size of the gene, LD and other potential confounders to produce the gene score. i-GSEA4GWAS uses a similar approach where a gene is considered significant if it contains any of the top 5% SNPs, and the pathway score is estimated by comparing the observed ratio of significant genes within the pathway against the expected ratio in the full set of genes that were covered by the GWAS. Compared to these methods, MSEA differs in test statistics, confounder adjustment, and flexibility in data accommodation.
The same simulated positive and negative control pathways that were used for calibrating MSEA were also used to compare the three different methods. Since this approach may give an unfair advantage to MSEA due to optimized calibration towards the positive controls, we also performed additional tests with 1,346 canonical pathways curated by Reactome 39 , BioCarta (http://cgap.nci.nih.gov/Pathways/BioCarta_Pathways) and KEGG 40 . All tests produced similar results: i-GSEA4GWAS lacked specificity and MAGENTA lacked sensitivity, whereas the gene-permuted MSEA provided the best balance and receiver operating characteristics.
The results for the simulated positive and negative controls are depicted in Fig. 2, and the results from the additional tests with the canonical pathways are in Supplementary Table 2. Notably, the superior performance of MSEA over the other two established methods is more obvious when the GWAS involved smaller sample size and heterogeneous population.
Meta-MSEA: pathway-level meta-analysis of multiple association studies
Various factors determine the quality of an association study. For example, a GWAS result depends on the sample size, study design, accuracy of the phenotype, ethnicity and the coverage of the genotyping platform.
Among the cholesterol GWAS we chose, the cholesterol level in the Finnish study was measured with a high-sensitivity NMR instrument in an ethnically homogeneous population, whereas the Framingham study relied on standard assays in a more mixed population. These differences may explain the lack of signals at FDR < 25% from the Framingham study in Fig. 2. However, the lipoprotein pathway was at the top of the list for all three GWAS, so a pathway-level meta-analysis can potentially boost weak but consistent signals.
Importantly, unlike the traditional approach where meta-analysis is done at the marker level, this pathway-level analysis can bypass the need to match ethnicity or genotyping platforms, an advantage not present in the previous methods.
Mergeomics was specifically designed to produce output that is suitable for pathway-level meta-analysis (meta-MSEA) because pathway enrichment P-values are estimated from null distributions by parametric models (detailed in Methods). This ensures that the reported P-values are always greater than zero and can be converted back to Z-scores by using the inverse of the Gaussian cumulative distribution function. i-GSEA4GWAS is an example where this procedure is difficult, since only frequency-based P-values are estimated and highly significant signals can be set at P = 0. Table 1 lists the top pathways from meta-MSEA, and the full results for pathway-level meta-analysis are available in Supplementary Data 1. The pathway-level meta-MSEA not only accurately identified major lipoprotein and lipid transport pathways and the receptors that mediate lipid transfer to and from lipoprotein particles, but also yielded more significant P-values than those obtained from the pathway analysis of a conventional meta-GWAS conducted at the SNP level. Such superiority of meta-MSEA was consistently observed using simulated gene sets (Supplementary Fig. 4). These results demonstrate that pathway-level meta-analysis is more powerful than the traditional SNP-centric approach to meta-analysis when investigating the genetic perturbations to biological processes, and we have accordingly incorporated the support for multiple association studies in the Mergeomics pipeline. Importantly, this pathway-level meta-analysis feature allows integration of different types of omics association datasets. For example, association studies for a particular disease done at the genetic, gene expression, epigenetic, and metabolite levels can be meta-analyzed after conducting MSEA on each association dataset, allowing the detection of functional pathways or networks that are perturbed by different types of molecular entities.
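A minimal sketch of this pathway-level combination, assuming one-sided enrichment P-values and an unweighted Stouffer scheme (the weighting and implementation details of the actual R package may differ):

```python
# Sketch: combine per-study pathway enrichment P-values at the pathway level.
from scipy.stats import norm

def meta_pathway_p(pvalues):
    """Convert one-sided enrichment P-values to Z-scores and combine them."""
    z = [norm.isf(p) for p in pvalues]      # inverse Gaussian CDF: P -> Z
    z_meta = sum(z) / len(z) ** 0.5         # unweighted Stouffer combination
    return norm.sf(z_meta)                  # combined P-value, always > 0

# Example: three individually weak signals combine to a significant one.
print(meta_pathway_p([0.04, 0.06, 0.05]))   # ~0.002
```

In the example, three per-study P-values around 0.05 combine to roughly P = 0.002, illustrating how weak but consistent pathway signals are boosted.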
Weighted key driver analysis (wKDA) to detect disease regulators
The MSEA or meta-MSEA component of Mergeomics identifies pathways or co-regulated gene sets that are perturbed in a disease. However, the interactions between genes within these disease-associated gene sets are not evident. To this end, a key driver analysis (KDA) was previously developed to detect important hub genes, or key drivers, whose network neighbourhoods are over-represented with disease-associated genes 29 , together with their respective co-hubs. The co-hub concept is a useful qualitative measure when selecting the most promising subnetworks and key drivers for experimental validation. For instance, if a key driver has co-hubs with known functions, these can give clues as to the role of poorly understood genes. On the other hand, if a key driver is to be perturbed in an experiment, it may be important to incorporate the co-hubs as integral parts of the experimental design.
The main difference between KDA and wKDA involves the counting of the subnetwork members around the hub. In KDA, each node is treated equally without consideration of the edges, and the enrichment is based on the excess proportion of disease genes within the hub neighbourhood. The new wKDA assigns a larger weighting coefficient to a node with higher edge weights than to other nodes in a network neighbourhood (details in Methods). Therefore, if the disease genes have higher edge weights to a hub or its neighbours, the enrichment score will be higher. From a practical perspective, the previous KDA detects key drivers that are connected to a large proportion of the disease-associated subnetwork genes, whereas wKDA tends to detect key drivers that have high-weight edges to disease genes. In addition to identifying key drivers and co-hubs, wKDA also outputs Cytoscape input files for the key drivers and their local subnetworks, with disease genes highlighted, which can be visualized in the Cytoscape software 42 .
Performance of wKDA and comparison with KDA
To evaluate the performance of wKDA in comparison to the unweighted KDA, we first set up three disease-associated gene sets to test against four gene regulatory networks. The three gene sets included two lipid subnetworks (denoted as Lipid I and Lipid II) derived from our previous study 29 (Supplementary Table 3). We organized these networks into two independent weighted adipose networks and two independent weighted liver networks using non-overlapping datasets, where edge weight represents the estimated reliability of an edge between genes.
We used the overlap ratio (defined in Methods) of the identified key driver genes between the two independent networks of the same tissue to assess the prediction accuracy of wKDA and KDA. As shown in Fig. 4, the new wKDA outperformed the previous unweighted KDA for all three gene sets against independent networks in both tissues. To test the sensitivity of the key driver approach, we also partially randomized the adipose and liver networks as a model of topological noise. As expected, when some of the edges were randomly rewired, the number of consistent key drivers between two independent networks of the same tissue declined, and when all edges were rewired, no consistent key drivers were detected (Fig. 4).
Notably, wKDA was able to detect consistent signals even when half the network was rewired, thus demonstrating the inherent robustness of the wKDA concept compared to the unweighted version.
Importantly, because wKDA was specifically designed for weighted networks, whereas the unweighted KDA mainly focuses on the network topology without considering weight information, key drivers with high-weight (i.e., high reliability) edges between subnetwork genes were preferred by wKDA. This difference likely explains the better reproducibility of wKDA compared to the unweighted KDA (Fig. 4).
Case study: application of Mergeomics to circulating cholesterol datasets
The cholesterol in low-density lipoprotein particles has been established as a causal biomarker for cardiovascular disease 43 . Multiple large GWAS have revealed a complex genetic regulation of circulating cholesterol that is likely to involve multiple genes 36 , making cholesterol genetics an interesting test case for our pipeline. Furthermore, decades of research have catalogued the features of cholesterol biosynthesis and lipoprotein transport at the pathway level, which makes it easier to verify that our methodology produces meaningful biological results.
Significant pathways related to total cholesterol were already identified in the aforementioned MSEA and meta-MSEA analyses (Supplementary Data 1). Table 1 lists the top 15 pathways that were genetically perturbed by cholesterol-associated loci, based on the ranks in the meta-MSEA results. Aside from the expected hits for lipoprotein transport, several pathways related to cellular lipid trafficking (scavenging receptors and ATP-binding cassette transporters of class A) and lipid metabolism (such as fatty acid, triacylglycerol and ketone body metabolism) were identified. Interestingly, the top hits included 'Cytosolic tRNA aminoacylation' and 'PPAR-alpha activates gene expression', which suggests that these transcriptional regulatory processes are intrinsically intertwined with the traditional concepts of enzyme-driven metabolic pathways in cholesterol biosynthesis and transport.
Because of the overlaps in gene membership between certain curated pathways, we merged 82 overlapping pathways with meta-MSEA P-value < 0.05 into 43 non-overlapping gene "subnetworks" at a maximum allowed overlap ratio of 0.20, and performed a second run of meta-MSEA using these merged subnetworks to retrieve the top six subnetworks (Supplementary Table 5). Using wKDA, we identified candidate key drivers in the liver and adipose tissues for each of the top six cholesterol-associated subnetworks (the top five, along with co-hubs, are listed in Table 2, and the full list in Supplementary Data 2). As exemplified in Fig. 5a, the top adipose key driver for Subnetwork 2 is the very long chain acyl-CoA dehydrogenase (ACADVL), which catalyzes the first step in mitochondrial beta-oxidation. Notably, the two co-hubs for ACADVL (PPARA and CIDEA) are also highly relevant genes for maintaining lipid homeostasis: PPARA is one of the master regulators of lipid metabolism, with a clinically approved class of drugs (fibrates) already in use; CIDEA has been linked to apoptosis, and mouse knock-outs have demonstrated significant effects on the metabolic rate and lipolysis 44 . In liver (Fig. 5b), the top key driver of Subnetwork 2 is fatty acid synthase (FASN), which is a key driver in adipose tissue as well. The second top key driver, squalene epoxidase (SQLE), and its co-hubs (FDFT1, IDI1, MSMO1, NSDHL, HMGCS1, ALDOC) either catalyze or regulate cholesterol biosynthesis. HMGCR, although not among the top five key drivers, is a highly significant key driver (P < 10^-14) and a co-hub of MMT00007490. Subnetwork 2 and Subnetwork 6 shared multiple common key drivers in the adipose network (Fig. 5a). These included aconitase 2 (ACO2), an enzyme that catalyzes citrate to isocitrate in the mitochondrion, as well as ACADVL and its co-hubs.
Discussion
The explosion of genomics data provides unprecedented opportunities to identify important mechanisms of disease across studies. Here we introduce a standardized pipeline to connect disease association studies with functional data and curated knowledge, and apply it to the genetics of cholesterol. We used multiple independent datasets to show how the Mergeomics components (MSEA and wKDA) outperformed previous methods in sensitivity and specificity. The generic nature of MSEA makes it straightforward to apply it to individual genomic association studies in different species and different omics data types. The unique pathway-level meta-analysis feature makes it highly powerful in overcoming population and study design differences to integrate diverse data sources to accurately identify shared biological processes across studies.
The weighted network algorithm for wKDA is equally flexible: it can be applied to diverse biological networks and provides statistical and qualitative information on the key regulating genes in a tissue- and network type-specific fashion. Our case study suggested that there is an underlying gene regulation pattern, involving existing drug targets (such as PPARA and HMGCR) as well as less known genes (such as ACADVL and collagen genes), that can help explain the complex signals from cholesterol genetic studies and guide the development of novel hypotheses and wet lab experiments. With the release of the R library, we provide the scientific community with easy-to-use tools to make sense of the existing mass of genomics resources.
Marker set enrichment analysis
The default setting of MSEA takes as input 1) summary statistics from genomic association studies, 2) measurement of relatedness or dependency between genomic markers, 3) functional mapping between markers and genes, and 4) functionally defined gene sets (e.g., biological pathways or co-regulated genes).
For GWAS, SNPs are first filtered based on the LD structure to select only SNPs that are relatively independent given an LD threshold 29 . For other types of association studies, correlations between co-localized markers may be used. For a given gene set, gene members are first mapped to markers based on the functional mapping file, and then the disease association P-values of the corresponding markers are extracted to test for enrichment of association signals. To test enrichment, both a gene-based analysis and a marker-based analysis are implemented in MSEA.
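The pre-processing flow just described can be sketched as follows; the data structures (dictionaries keyed by marker and gene) and the greedy pruning order are illustrative assumptions, not the package's actual implementation:

```python
# Sketch of the MSEA pre-processing flow (data structures are assumptions):
# 1) LD-prune markers, 2) map a gene set to its markers, 3) extract P-values.

def ld_prune(markers, ld, r2_cutoff=0.5):
    """Greedy pruning: keep a marker unless it is in LD (r^2 >= cutoff) with
    one already kept; `ld` maps (marker, marker) pairs to r^2 values."""
    kept = []
    for m in markers:
        if all(ld.get((m, k), ld.get((k, m), 0.0)) < r2_cutoff for k in kept):
            kept.append(m)
    return kept

def markers_for_gene_set(gene_set, gene_to_markers, pruned_markers):
    """Collect the distinct pruned markers mapped to any gene in the set."""
    pruned = set(pruned_markers)
    return {m for g in gene_set for m in gene_to_markers.get(g, ()) if m in pruned}

def pvalues_for(marker_set, assoc_p):
    """Extract the association P-values to be tested for enrichment."""
    return [assoc_p[m] for m in marker_set]
```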
The null hypothesis for the enrichment of association signals within a gene set can be defined as follows.
Marker-based H 0 : Given a set of M distinct markers, these markers contain an equal proportion of positive association study findings when compared to a set of M random markers.
We only focus on distinct markers to reduce the effect of shared markers among gene families that are both close in the genome and belong to the same pathway (and presumably have overlapping functionality). Furthermore, our software has a feature that merges genes with shared markers before analysis to further reduce artifacts from shared markers. The expected distribution of the test statistic under the null hypothesis can be estimated empirically by randomly shuffling the gene or marker labels (Supplementary Fig. 2). The gene-based approach is robust against LD and other artifacts. The marker-based approach is more sensitive; however, it requires substantial correction for LD to be reliable and may suffer from artifacts due to the non-random positional patterns of gene regions in the genome.
To avoid assessing enrichment based on any pre-defined association study P-value threshold (e.g., P < 0.05), which can correspond to different association strengths in studies of varying sample size and power, we developed a new test statistic with multiple quantile thresholds to automatically adapt to any dataset:

χ = Σ_{j=1..n} (O_j − E_j) / √(E_j + κ)

In the formula, n denotes the number of quantile points, O_j and E_j denote the observed and expected counts of positive findings (i.e. signals above the j-th quantile point), and κ = 1 is a stability parameter to reduce artefacts from low expected counts for small gene sets.
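A sketch of this statistic as reconstructed above; the specific quantile points and the convention that "positive findings" are P-values in the lower (significant) tail are assumptions for illustration:

```python
# Sketch of the multi-threshold enrichment statistic reconstructed above:
# chi = sum_j (O_j - E_j) / sqrt(E_j + kappa), with kappa = 1 for stability.
import math

def msea_statistic(set_pvals, all_pvals,
                   quantiles=(0.01, 0.05, 0.10, 0.25, 0.50), kappa=1.0):
    """set_pvals: marker P-values mapped to the gene set;
    all_pvals: study-wide marker P-values defining the quantile cut points."""
    all_sorted = sorted(all_pvals)
    chi = 0.0
    for q in quantiles:
        cut = all_sorted[int(q * (len(all_sorted) - 1))]  # q-th quantile of all P
        observed = sum(p <= cut for p in set_pvals)       # O_j: hits in the top tail
        expected = q * len(set_pvals)                     # E_j under the null
        chi += (observed - expected) / math.sqrt(expected + kappa)
    return chi
```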
MSEA performance evaluation
The MSEA within Mergeomics can be reconfigured depending on the type of dataset and study design. We identified several parameters that could affect the performance of the pipeline, such as marker filtering that includes top associated markers based on a percentage cutoff, a dependency or relatedness (such as LD) cutoff for pruning redundant markers, and the mapping between genes and markers. Here we focus on the marker filtering percentage and the LD cutoff, as they represent the two key technical challenges. Of note, the mapping between genes and markers can be defined empirically 11,13 , but we used a chromosomal distance-based approach for testing to make Mergeomics consistent with most of the current pathway enrichment tools. In fact, for GWAS, the assignment of SNPs to their target genes based on chromosomal location is the approach commonly adopted in other methods, whereas Mergeomics allows users to apply any available assignment method, including data from tissue-specific eQTL studies and ENCODE.
High cholesterol is a major risk factor for cardiovascular disease, and cholesterol metabolism and transport is one of the most studied and best understood areas of human biology, which makes cholesterol GWAS well suited for benchmarking 36 . The GLGC contains the two smaller studies, but as the total overlap between the datasets was less than 10%, we assume that the three GWAS are independent for the purposes of this study. All participants were of predominantly Caucasian descent, and we used the corresponding LD data from HapMap 47 and the 1000 Genomes Project 48 in our analyses.
We simulated true positives and true negatives to determine a suitable combination of parameters and to compare the performance of different methods. We collected genes from the Reactome pathway R-HSA-556833, "the metabolism of lipids and lipoproteins", treating these genes as true signals related to cholesterol and lipid metabolism. These genes were randomly grouped into 300 positive control pathways: 100 of size 25, 100 of size 100, and 100 of size 250. Simultaneously, 300 negative control pathways with the same size distribution as the positive control pathways were generated by randomly selecting genes from the non-cholesterol gene pool, which consists of 8,633 genes from the pathway databases.
wKDA takes as input the network topology information and the edge weight information, when available. In wKDA, the network topology is first screened for suitable hub genes whose degree (number of genes connected to the hub) is in the top 25% of all network nodes (Supplementary Fig. 5, middle box on the left). We further classify these genes as either independent hubs or co-hubs, where a co-hub is defined as a gene that shares a large proportion of its neighbours with an independent hub. First, the candidate independent hubs are sorted according to node degree, from low to high. This is to ensure that we capture local structures rather than one master hub that covers the majority of the network (e.g. housekeeping genes would make poor drug targets due to global side effects). Next, the sorted hubs are tested one by one for neighbourhood overlaps with the already accepted hubs. If sufficient overlap (as defined under section "Definition of overlap between two gene sets" below; the default value is 33%) is detected, the current hub is assigned as a co-hub of the previously accepted overlapping hub.
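A sketch of this hub screening and co-hub assignment, assuming the network is given as adjacency sets; the overlap ratio follows the Methods definition as reconstructed below:

```python
import math

def overlap_ratio(a, b):
    """Symmetric overlap ratio between two sets (cf. the Methods definition)."""
    return len(a & b) / math.sqrt(len(a) * len(b))

def find_hubs_and_cohubs(neighbours, degree_top=0.25, overlap_cut=0.33):
    """neighbours: dict mapping each node to the set of its adjacent nodes."""
    degrees = {n: len(adj) for n, adj in neighbours.items()}
    cutoff = sorted(degrees.values())[int((1 - degree_top) * (len(degrees) - 1))]
    candidates = sorted((n for n, d in degrees.items() if d >= cutoff),
                        key=lambda n: degrees[n])          # low degree first
    hubs, cohubs = [], {}
    for cand in candidates:
        for hub in hubs:
            if overlap_ratio(neighbours[cand], neighbours[hub]) >= overlap_cut:
                cohubs.setdefault(hub, []).append(cand)    # assign as co-hub
                break
        else:
            hubs.append(cand)                              # independent hub
    return hubs, cohubs
```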
wKDA statistics
Once the hubs and co-hubs have been defined, the disease-associated gene sets that were discovered by the MSEA are overlaid onto the network topology to see if a particular part of the network is enriched for the potential disease genes. First, the edges that connect a hub to its neighbours are simplified into node strengths (strength = sum of adjacent edge weights) within the neighbourhood (Supplementary Fig. 5, Plots B-D), except for the hub itself. For example, the top-most node in Plot C has three edges that connect it with the other neighbours, with weights that add up to 7 in Plot D. By definition, the hub at the center will have a high strength, which would skew the results, so we use the average strength over the neighbourhood for the hub itself. The reduction of the hub neighbourhood into locally defined node strengths improves the speed of the algorithm and makes it easier to define an enrichment statistic that takes into account the local interconnectivity. In particular, the weighting of the statistic with the node strengths emphasizes signals that involve locally important genes over isolated peripheral nodes. In Plot D of Supplementary Fig. 5, the overlap between the hub neighbourhood and a hypothetical disease-associated gene set is indicated by the circles around the top three nodes. The sum of the respective strengths is 15, which represents 57% of the total sum of 26.4 in the neighbourhood (pie chart in Plot D). The final enrichment score is then estimated from this strength ratio.
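The strength reduction and the observed ratio from this worked example can be sketched as follows, assuming edge weights are stored per node pair; whether the hub's own averaged strength enters the totals is an implementation detail assumed here:

```python
# Sketch: reduce a hub neighbourhood to node strengths and compute the observed
# disease-gene strength ratio (cf. the 15 / 26.4 ~ 57% worked example above).

def neighbourhood_strengths(hub, neighbours, weight):
    """weight: dict mapping frozenset({u, v}) to edge weight (assumed format)."""
    nb = neighbours[hub]
    strengths = {node: sum(weight.get(frozenset((node, other)), 0.0)
                           for other in (nb | {hub}) - {node})
                 for node in nb}
    # The hub itself would dominate, so it is given the neighbourhood average.
    strengths[hub] = sum(strengths.values()) / len(strengths)
    return strengths

def observed_ratio(strengths, disease_genes):
    """Share of total neighbourhood strength carried by disease genes."""
    total = sum(strengths.values())
    return sum(s for n, s in strengths.items() if n in disease_genes) / total
```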
The null hypothesis for the enrichment of disease genes within a subnetwork can be expressed as follows. Weighted key driver H0: Given the set of nodes adjacent to a key driver, and with each node having a local strength as estimated by their mutual connectivity, the ratio of the disease-gene-member sum of strengths to the total sum of strengths is equal to the ratio for a randomly selected gene set that matches the number of disease genes.
The expected ratio is estimated based on the hub degree N_k, the pathway size N_p and the order of the full network N, with the implicit assumption that the weight distribution is isotropic across the network.
Statistical significance of the disease-enriched hubs, henceforth key drivers, is estimated by repeatedly permuting the gene labels and estimating the P-value based on the simulated null distribution. To control for multiple testing, we perform adjustments in two tiers. First, the P-values for a single subnetwork are multiplied by the number of independent hubs (Bonferroni adjustment). All hubs with adjusted P > 1 are discarded. For random data, the truncated results will be uniformly distributed between 0 and 1, and hence they can be treated as regular P-values. In the second stage, all the P-values for the subnetworks are pooled and the final false discovery rates are estimated by the Benjamini-Hochberg method 49 .
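A sketch of this two-tier adjustment (per-subnetwork Bonferroni with truncation, then pooled Benjamini-Hochberg); the tabular data structure is an assumption:

```python
# Sketch of the two-tier multiple-testing adjustment described above.

def two_tier_fdr(pvalues_by_subnetwork):
    """pvalues_by_subnetwork: dict mapping subnetwork -> list of hub P-values."""
    pooled = []
    for sub, pvals in pvalues_by_subnetwork.items():
        n_hubs = len(pvals)
        for i, p in enumerate(pvals):
            p_adj = p * n_hubs                  # Bonferroni within the subnetwork
            if p_adj <= 1.0:                    # discard hubs with adjusted P > 1
                pooled.append((sub, i, p_adj))
    # Benjamini-Hochberg across the pooled, truncated P-values.
    pooled.sort(key=lambda t: t[2])
    m = len(pooled)
    fdrs, running_min = [], 1.0
    for rank in range(m, 0, -1):                # step-up from the largest P
        sub, i, p = pooled[rank - 1]
        running_min = min(running_min, p * m / rank)
        fdrs.append((sub, i, running_min))
    return list(reversed(fdrs))                 # ascending P order
```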
Performance assessment of wKDA
In this study, we use the Bayesian networks 50,51 constructed from published genomic studies where both DNA and RNA were collected from adipose and liver tissue samples (Supplementary Table 3). In such networks, genetic variation can help orient the direction of regulation between co-expressed genes. However, if the genes are each regulated by the same SNP, it may be the sign of an incidental co-expression without a direct causal relationship. In a Bayesian model, the uncertainty over the causality is estimated by conditional probabilities between co-expressed genes, and the structure of the resulting network is further constrained to an acyclic topology to ensure computational feasibility 50,51 . We organized the individual Bayesian networks into two independent weighted adipose networks and two independent weighted liver networks from non-overlapping datasets (Supplementary Table 3), where edge weight represents the estimated reliability of a connection, or edge, between genes based on the consistency of the edge between datasets. Using these networks and three test gene sets related to lipid metabolism as inputs, we ran wKDA and the previously developed unweighted KDA to identify liver and adipose key drivers of the lipid gene sets. To assess the prediction accuracy of wKDA and KDA, we used the overlap ratio of the identified key driver genes (as defined under section "Definition of overlap between two gene sets" below) between the two independent networks of the same tissue.
Adaptive Gaussian approximation for estimating P-values in MSEA and wKDA
The exact shape of the null distribution is dependent on the size of the gene set and on the mapping between the genes and the markers (MSEA) or on the size and topology of the gene network (wKDA). To estimate the P-value from these various permutation approaches, we created a generic algorithm for a parametric approximation using the Gaussian function. In the range where a direct frequency-based P-value is accurate (i.e. with 10,000 permutations it is possible to accurately estimate P-values above ~0.001), we found that the Gaussian approximation was highly concordant. For P < 0.001, we found that the Gaussian model produced biologically plausible rankings of statistical significance. We tested other models, but found that the potential benefit from using more long-tailed distributions was outweighed by the difficulties in applying them in practice. For instance, the t-distribution was more conservative than the Gaussian estimate, but assigning an appropriate degree of freedom was problematic given the diverse nature of the null hypotheses.
The parameters from Steps 1-4 can be saved and reapplied to new data, which makes it possible to determine the transformation exclusively based on simulated statistics and then apply it to the observed test statistic to yield the parametric enrichment score. The rationale for the Gaussian approximation is based on the attractive analytical properties of Gaussian distributions. Nevertheless, if the approximation is inaccurate, the results can be biased and lead to erroneous conclusions. In particular, any dependencies between markers tend to elongate the tails of the "true" distribution when using marker permutations for the MSEA. For this reason, we also report the raw frequency of false positive findings from the permutation analysis for each gene set.
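A minimal sketch of the idea, assuming a plain Gaussian fit to the permutation null (the original's Steps 1-4 may include transformations not reproduced here):

```python
# Sketch: parametric (Gaussian) P-value from a permutation null distribution.
import statistics
from scipy.stats import norm

def gaussian_p(observed, null_scores):
    """Fit a Gaussian to the permuted scores and read off the tail probability."""
    mu = statistics.mean(null_scores)
    sd = statistics.stdev(null_scores)
    return norm.sf(observed, loc=mu, scale=sd)   # always > 0, unlike frequency P

def frequency_p(observed, null_scores):
    """Frequency-based P for comparison; inaccurate below ~1/len(null_scores)."""
    hits = sum(s >= observed for s in null_scores)
    return (hits + 1) / (len(null_scores) + 1)
```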
Definition of overlap between two gene sets
We define the overlap between two sets A and B as

r = N_{A∩B} / √(N_A · N_B),

where N denotes the number of items in a set. The ratio is zero when there are no shared genes and one when A = B.
Importantly, the ratio is symmetric for two sets of different sizes, i.e., the labels A and B can be swapped without affecting the value of r. This definition is used for calculating the overlap ratio when merging overlapping significant gene sets from MSEA, for determining hub-co-hub relationships, and for evaluating the consistency of key drivers identified in independent Bayesian networks.
Availability
Mergeomics is available as a freely downloadable R package (http://mergeomics.research.idre.ucla.edu/Download/Package/). The package supports full Mergeomics functionality, plus the option to generate Cytoscape input files for quick network visualization. At the download site, sample omics datasets, network models, and a standalone C++ program for performing marker dependency filtering are also provided.
Competing interests
The authors have no competing interests.
Figure 4 caption: Overlap between the tissue-specific key driver signals across two independent regulatory networks (Supplementary Table 3), defined according to the formula in Methods; overlap ratios were calculated for the original networks and for networks rewired at 25%, 50%, 75% and 100%.
Figure 5 caption: Independent key regulatory genes (genes whose neighbourhood has less than 25% overlap with the neighbourhoods of other independent hubs) for Subnetwork 2 and Subnetwork 6. Subnetwork member genes are denoted as medium-size nodes and non-member genes as small nodes; top co-hubs (co-hubs with FDR < 1e-10 in wKDA) are highlighted by yellow circles. Only edges supported by at least two studies were included.
"Biology",
"Computer Science",
"Medicine"
] |
Improving Reliability of Cloud-Based Applications
With the increasing availability of various types of cloud services many organizations are becoming reliant on providers of cloud services to maintain the operation of their enterprise applications. Different types of reliability strategies designed to improve the availability of cloud services have been proposed and implemented. In this paper we have estimated the theoretical improvements in service availability that can be achieved using the Retry Fault Tolerance, Recovery Block Fault Tolerance and Dynamic Sequential Fault Tolerance strategies, and we have compared these estimates to experimentally obtained results. The experimental results obtained using our prototype Service Consumer Framework are consistent with the theoretical predictions, and indicate significant improvements in service availability when compared to invoking cloud services directly.
Introduction
With the increasing use of cloud services, the reliability of enterprise applications is becoming dependent on the reliability of consumed cloud services. In the public cloud context, service consumers do not have control over externally provided cloud services and therefore cannot guarantee the levels of security and availability that they are typically expected to provide to their users [1]. While most cloud service providers make considerable efforts to ensure the reliability of their services, cloud service consumers cannot assume continuous availability of cloud services, and are ultimately responsible for the reliable operation of their enterprise applications. In response to such concerns, hybrid cloud solutions have become popular [2]; according to the Gartner Special Report on the Outlook for Cloud [3], half of large enterprises will adopt and use the hybrid cloud model by the end of 2017. Hybrid cloud solutions involve on-premise enterprise applications that utilize external cloud services, for example the PayPal Payment Gateway (www.paypal.com), the cloud storage service Amazon S3 (aws.amazon.com/s3), or entire SaaS (Software as a Service) applications. With a hybrid delivery model, where enterprise applications are partially hosted on premise and partially in the cloud, enterprises can balance the benefits and drawbacks of both approaches, and decide which applications can be migrated to the cloud and which should be deployed locally to ensure high levels of data security and privacy. However, from the reliability point of view, hybrid cloud introduces a number of significant challenges as IT (Information Technology) infrastructure and enterprise applications become fragmented over multiple environments with different reliability characteristics.
Another reliability challenge concerns service evolution, i.e. changes in functional attributes of services that may impact existing consumer applications. Services are often the subject of uncontrolled changes as service providers implement functional enhancements and rectify defects, with service consumers unable to predict when or how services will change [4]. Consequently, service consumers suffer service disruptions, often without any notification, and are forced to upgrade their applications to maintain compatibility with new versions of cloud services. As the complexity of service-oriented applications grows, it is becoming imperative to develop effective methods to manage service evolution and to ensure that service consumers are protected from service changes.
In this paper we describe the reliability features of the Service Consumer Framework (SCF), designed to improve the reliability of cloud-based enterprise applications by managing service outages and service evolution. In the next section (section 2) we review related literature dealing with the reliability of cloud-based solutions. In section 3 we describe three reliability strategies (Retry Fault Tolerance, Recovery Block Fault Tolerance, and Dynamic Sequential Fault Tolerance) and calculate their expected theoretical impact on the probability of failure and response time. In section 4 we discuss how these reliability strategies are implemented using the SCF framework. Section 5 describes our experimental setup and gives a comparison of the theoretical results calculated in section 3 with the experimental measurements of availability and response time. Section 6 contains our conclusions and proposals for future work.
Related work
Traditional approaches to developing reliable, fault-tolerant on-premise SOA (Service Oriented Architecture) applications include fault prevention and forecasting. For example, Tsai, Zhou [5] propose a SOA testing and evaluation framework that implements group testing to enhance test efficiency. Other work identifies the most critical components of cloud applications and then determines an optimal fault-tolerance strategy for these components. Based on this work, Reddy and Nalini [7] propose FT2R2Cloud, a fault-tolerant solution using time-out and retransmission of requests for cloud applications. FT2R2Cloud measures the reliability of software components in terms of the number of responses and the throughput. The authors propose an algorithm to rank software components based on their reliability, calculated using the number of service outages and service invocations.
In recent research, Zhengping, Nailu [2] propose the S5 system accounting framework to maximize the reliability of cloud services. The framework consists of five layers: service existence examination, service availability examination, service capability and usability examination, a service self-healing layer, and a system accounting user interface. The authors also propose a new definition of quality of reliability for cloud services. In another work, Adams, Bearly [8] describe fundamental reliability concepts and a design-time reliability process for organizations. The authors provide a guideline for IT architects to improve the reliability of their services and propose processes that architects can use to design cloud services that mitigate potential failures. More recently, Zheng and Lyu [9] identified major problems in developing fault tolerance strategies and introduced the design of static and dynamic fault tolerance strategies. The authors identify significant components of complex service-oriented systems, and investigate algorithms for optimal fault tolerance strategy selection. A heuristic algorithm is proposed to efficiently solve the problem of selecting a fault tolerance strategy. The authors describe an algorithm for component ranking, aiming to provide a practical fault-tolerant framework for improving the reliability of enterprise applications. Zheng, Lyu [10] describe Retry Fault Tolerance (RFT), which involves repeated service invocations with a specified delay interval until the service invocation succeeds. This strategy is particularly useful in situations characterized by short-term outages.
Focusing on improving the reliability of cloud computing, Chen, Jin [11] present a lightweight software fault-tolerance system called SHelp, which can effectively recover programs from different types of software faults. SHelp extends ASSURE [12] to improve its effectiveness and efficiency in cloud computing environments. Zhang, Zheng [13] propose a novel approach called Byzantine Fault Tolerant Cloud (BFT-Cloud) to manage different types of failures in voluntary resource clouds. BFT-Cloud deploys replication techniques to overcome failures using a broad pool of nodes available in the cloud. Moghtadaeipour and Tavoli [14] propose a new approach to improve load balancing and fault tolerance using work-load distribution and virtual priority.
Another aspect of cloud computing that impacts reliability involves service evolution. Service evolution has been the subject of recent research interest [4, 15-19]; however, the focus of these activities so far has been mainly on developing methodologies that help service providers manage service versions and deliver reliable services. In cloud computing environments where services are provided externally by independent organizations (cloud service providers), a consumer-side solution is needed to ensure the reliability of cloud-based service-oriented applications [10].
Reliability Strategies
In this section we discuss three reliability strategies that are implemented in the SCF framework: Retry Fault Tolerance (RFT), Recovery Block Fault Tolerance (RBFT), and Dynamic Sequential Fault Tolerance (DSFT). As noted in section 2, these strategies have been described in the literature for on-premise systems [6], [7], [10]. We have adapted the RFT, RBFT and DSFT reliability strategies for cloud services to address short-term and long-term service outages, and issues arising from service evolution. Short-term outages are situations where services become temporarily inaccessible, for example as a result of the loss of network connectivity; automatic recovery typically restores the service following a short delay. Long-term service outages are typically caused by scheduled and unscheduled maintenance or system crashes that require service provider intervention to recover the service. Service evolution involves changes in the functional characteristics of services associated with functionality enhancements and changes aimed at improving service performance. Service evolution may involve changes to service interfaces, service endpoints or security policy, or may involve service retirement. Most cloud service providers maintain multiple versions of services to limit the impact of such changes on service consumers, and attempt to ensure backward compatibility between service versions. However, in practice it is not always possible to avoid breaking consumer applications, resulting in situations where service consumers are forced to modify their applications to ensure compatibility with the new version of the service. Finally, service overload occurs when the number of service requests in a given time period exceeds the provider limit.
Retry Fault Tolerance
Retry Fault Tolerance (Figure 1) is a relatively simple strategy commonly used in enterprise applications. Under this strategy, a cloud service is repeatedly invoked, following a delay period, until the service invocation succeeds. RFT helps to improve reliability, in particular in situations characterized by short-term outages. Assuming m retry attempts after the initial invocation, the overall probability of failure can be calculated as $P_{RFT} = (P_F)^{m+1}$, where $P_F$ is the probability of failure of the service. While RFT reduces the probability of failure, it may increase the overall response time due to delays between consecutive service invocations. The total time can be estimated as $T_{RFT} = \sum_{i=1}^{m+1} t_i + m \cdot D$, where D is the delay between retry attempts and $t_i$ is the response time of the i-th invocation. The above calculations assume independent modes of failure of subsequent invocations; this assumption only holds in situations where the delay D is much greater than the duration of the outage, i.e. for long-duration outages the invocation will fail repeatedly, invalidating the assumption of independence of failures of subsequent invocations.
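To make the calculation concrete, here is a minimal sketch in Python of the two RFT formulas above; it assumes that m counts retries after the initial attempt and that failures are independent, as stated.

```python
def rft_failure_probability(pf, m):
    """Probability that the initial invocation and all m retries fail,
    assuming independent failures across attempts."""
    return pf ** (m + 1)

def rft_total_time(response_times, delay):
    """Sum of per-attempt response times plus the fixed delay D
    inserted before each retry."""
    return sum(response_times) + (len(response_times) - 1) * delay

# PF = 0.1 with three retries: the failure probability drops to 1e-4.
print(rft_failure_probability(0.1, 3))
print(rft_total_time([1.2, 1.1, 1.3], delay=5.0))  # 2 retries, D = 5 s
```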
Recovery Block Fault Tolerance
Recovery Block Fault Tolerance (Figure 2) is a widely used strategy that relies on service substitution using alternative services invoked in a specified sequence.
It is used to improve the availability of critical applications. The failover configuration includes a primary cloud service used as the default (active) service, and stand-by services that are deployed in the event of the failure of the primary service, or when the primary service becomes unavailable because of scheduled or unscheduled maintenance. Assuming independent modes of failure, the overall probability of failure for n services combined can be computed as $P_{RBFT} = \prod_{i=1}^{n} P_{F_i}$, where n is the total number of services and $P_{F_i}$ is the probability of failure of the i-th service. The expected overall response time can be calculated as $T_{RBFT} = t_1 + \sum_{i=2}^{n} \left( \prod_{k=1}^{i-1} P_{F_k} \right) t_i$, where $t_1$ is the response time of the first service invocation and $t_i$ is the response time of the i-th alternative service invocation, weighted by the probability that all preceding services have failed. In the online shopping scenario illustrated in Figure 3, the composite payment service uses the eWay payment service as an alternative (stand-by) service for the PayPal (primary) service. Assuming that the availability of both the PayPal and eWay services is 99.9% (corresponding to an outage of approximately 9 hours per year), i.e. a probability of failure $P_F = 0.001$ for each service, the overall RBFT probability of failure is $10^{-6}$ and the overall availability is 99.9999% (corresponding to an outage of approximately 30 seconds per year).
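The worked PayPal/eWay example can be reproduced with a few lines; a minimal sketch under the same independence assumption:

```python
from math import prod

def rbft_failure_probability(failure_probs):
    """All n services (primary plus alternatives) must fail for the
    composite invocation to fail, assuming independent failures."""
    return prod(failure_probs)

# PayPal and eWay, each with PF = 0.001 (99.9% availability):
pf = rbft_failure_probability([0.001, 0.001])
print(pf)                     # 1e-06, i.e. 99.9999% availability
print(pf * 365 * 24 * 3600)   # expected outage: ~32 seconds per year
```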
Dynamic Sequential Fault Tolerance
The Dynamic Sequential Fault Tolerance strategy (Figure 4) is a combination of the RFT and RBFT strategies. When the primary service fails following RFT retries, the dynamic sequential strategy deploys an alternative service. The overall probability of failure for the n services combined is given by $P_{DSFT} = \prod_{i=1}^{n} P_{RFT,i}$, where $P_{RFT,i}$ is the probability of failure of the i-th service under the RFT strategy, calculated from equation (1). The expected overall response time can be calculated as $T_{DSFT} = T_{RFT,1} + \sum_{i=2}^{n} \left( \prod_{k=1}^{i-1} P_{RFT,k} \right) T_{RFT,i}$, where $T_{RFT,1}$ is the response time of the first service under the RFT strategy from equation (2), $T_{RFT,i}$ is the response time of the i-th alternative service calculated from equation (2), and $P_{RFT,k}$ is the probability of failure of the k-th service under the RFT strategy calculated from equation (1). Table 1 indicates the suitability of the RFT, RBFT, and DSFT strategies to different types of reliability challenges.
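A sketch of the combined DSFT estimate, again assuming independent failures; the function simply composes the RFT and RBFT formulas above:

```python
from math import prod

def dsft_failure_probability(failure_probs, m):
    """DSFT = RBFT over services whose individual failure probabilities
    are first reduced by m RFT retries each; independence is assumed."""
    return prod(pf ** (m + 1) for pf in failure_probs)

# PayPal and eWay with PF = 0.001 and m = 3 retries each: the theoretical
# failure probability becomes vanishingly small; in practice correlated
# outages dominate, as the experimental results below show.
print(dsft_failure_probability([0.001, 0.001], m=3))  # 1e-24
```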
Implementation of Reliability Strategies using the SCF
The SCF framework is designed to manage hybrid cloud environments and aims to address the main issues that impact the reliability of enterprise applications. The SCF framework implements the RFT, RBFT and DSFT strategies and is briefly described in the following sections. The framework consists of four main components: Service Repository, Workflow Engine, Service Adaptors and a Notification Centre. A detailed description of the SCF framework can be found in [19]. Figure 5 illustrates how service adaptors and the workflow engine can be configured to implement the various reliability strategies.
Service repository
The service repository maintains information about the available services and adaptors, including metadata that describes functional and non-functional attributes of certified services. The information held in the service repository is used to manage services and to design reliable applications. The functional and non-functional QoS (Quality of Service) attributes held in the service repository enable the selection of suitable services by querying the repository with the desired service attributes. Services with identical (or similar) functionality are identified to indicate that these services can be used as alternatives to the primary service to implement the RBFT strategy.
Service adaptors
Service adaptors are connectors that integrate software services with enterprise applications. Each cloud service recorded in the repository is associated with a corresponding service adaptor. Service adaptors use a native interface to transform service requests into requests that are compatible with the current version of the corresponding cloud service, maintaining compatibility between enterprise applications and external services. The function of a service adaptor is to invoke a service and keep track of service status.
Workflow engine
The workflow engine implements service workflows, facilitating service failover and the composition of services. The workflow engine executes workflows and routes requests to the corresponding cloud services. Workflows can be configured to implement the RBFT strategy by using a number of alternative services redundantly. Another important function of the workflow engine is load balancing. Service adaptors can be configured as active or stand-by. By default, active service adaptors are used to process requests, and stand-by adaptors are deployed in situations where the primary (active) adaptor's requests fail or the primary adaptor becomes overloaded.
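The failover behaviour described above can be pictured as an ordered loop over adaptors. The sketch below is illustrative only; the adaptor interface and exception type are assumptions, not the actual SCF API.

```python
class ServiceUnavailableError(Exception):
    """Raised by an adaptor when its cloud service cannot be reached."""

def invoke_with_failover(adaptors, request):
    """Try the active adaptor first, then each stand-by in order (RBFT).

    `adaptors` is an ordered list of objects exposing invoke(request);
    this interface is a hypothetical stand-in for the SCF adaptor contract.
    """
    if not adaptors:
        raise ValueError("no adaptors configured")
    last_error = None
    for adaptor in adaptors:
        try:
            return adaptor.invoke(request)
        except ServiceUnavailableError as err:
            last_error = err  # fall through to the next adaptor
    raise last_error
```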
Notification Centre
The SCF framework maintains execution logs and updates service status records in the service repository. When service faults occur, the notification centre notifies application administrators so that a recovery action can take place. Administrators are thus able to react rapidly to service failures and maintain application availability, minimizing downtime. In addition, the execution logs are used to monitor services and to analyze QoS attributes.
Experimental Verification of Reliability Strategies
Figure 6 (Experimental Configuration) illustrates the experimental setup that was used to verify the theoretical calculations in section 3. The setup consists of two servers that host the SCF framework and a separate Monitoring Server. Both SCF servers implement the payment scenario illustrated in Figure 3 using the PayPal Pilot service (pilot-payflowpro.paypal.com) and the eWay Sandbox (https://api.sandbox.ewaypayments.com). The payment requests are randomly generated and sent to the PayPal and eWay payment servers from two different locations. The US West server uses Amazon Web Services (AWS) cloud-based infrastructure located on the West Coast of the United States and is a high-quality server with a reliable network connection. The Sydney server is a local server in Sydney, Australia, with a less reliable Internet connection.
Experimental Setup
We have collected experimental results from both servers for a period of thirty days, storing the data in logs on the Monitoring Server deployed on AWS. The log records were analyzed to compute the experimental values of availability and response time for the composite payment service under the different reliability strategies. Both SCF servers generate payment requests at intervals varying randomly between 5 and 10 seconds, and use the following four strategies:
Strategy 1: Payment requests are sent directly to the payment service without applying any reliability strategy.
Strategy 2: Payment requests are sent to the payment service using the RFT strategy with three retry attempts (R=3) and a delay of five seconds (D=5).
Strategy 3: Payment requests are sent to a composite payment service using the RBFT strategy.
Strategy 4: Payment requests are sent to a composite payment service using the DSFT strategy, a combination of RBFT and RFT, with PayPal (R=3, D=5) and eWay (R=3, D=5).
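A minimal sketch of the request generator described above; `send_payment_request` is a placeholder callable, not part of the actual SCF code.

```python
import random
import time

def generate_requests(send_payment_request, duration_seconds):
    """Issue payment requests at random 5-10 second intervals,
    mirroring the experimental setup described above."""
    deadline = time.monotonic() + duration_seconds
    while time.monotonic() < deadline:
        send_payment_request()
        time.sleep(random.uniform(5.0, 10.0))
```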
Experimental Results
We have collected the payment transaction data independently of the values available from the cloud service providers, storing this information in log files on the Monitoring Server (Table 2 shows a fragment of the response time measurements). The use of two separate servers in two different locations enables the comparison of availability and response time information collected under different connection conditions. As shown in Table 3, using Strategy 1 (i.e. without deploying any reliability strategy), the availability of the PayPal and eWay services on the US West server is 90.4815% and 93.8654%, respectively. Deploying the RFT strategy (Strategy 2), the availability increases to 97.9033% and 97.2607% for the PayPal and eWay services, respectively. Using the RBFT strategy (Strategy 3), the availability of the composite service (PayPal and eWay) increases to 99.8091%, and finally, using the DSFT strategy (Strategy 4), the availability of the composite service (PayPal + eWay) increases further to 99.9508%. The theoretical values obtained in section 3 are slightly higher than the experimental values; this can be explained by noting that connection issues may affect both the PayPal and eWay services concurrently, invalidating the assumption of independent modes of failure. Table 4 shows the average response time of the PayPal and eWay services under the different reliability strategies during the period from March 15th to April 15th 2016. The average response time of the US West server is considerably lower than that of the Sydney server when connecting to the PayPal service in the US. However, for the eWay service (https://www.eway.com.au/), which is located in Australia, the response time of the Sydney server is slightly better than that of the US West server. The bar charts in Figures 7 and 8 compare the availability values for the various reliability strategies over the period from March 15th to April 15th 2016. As the figures illustrate, the availability of the PayPal and eWay services using any of the reliability strategies is significantly higher than without deploying a reliability strategy. During the measurement period the availability of the PayPal service varied between 88% and 92%, but the availability of the combined PayPal-eWay services using the DSFT strategy remained above 99.9%.
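Availability figures like those in Table 3 reduce to a success ratio over the logged transactions. A minimal sketch, assuming a simple log record schema (the real Monitoring Server schema may differ):

```python
def availability(log_records):
    """Percentage of successful invocations in the measurement window.

    Each record is assumed to be a dict with a boolean 'success' field;
    this schema is illustrative, not the actual log format.
    """
    successes = sum(1 for r in log_records if r["success"])
    return 100.0 * successes / len(log_records)

logs = [{"success": True}] * 9048 + [{"success": False}] * 952
print(round(availability(logs), 4))  # 90.48, cf. PayPal under Strategy 1
```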
CONCLUSIONS
With the increasing availability of various types of cloud services many organizations are becoming reliant on providers of cloud services to maintain the operation of their enterprise applications.Different types of strategies designed to improve the availability of cloud services have been proposed and implemented.These reliability strategies can be used to improve availability of cloud-based enterprise applications by addressing service outages, service evolution, and failures arising from overloaded services.
In this paper we have estimated the theoretical improvements in service availability that can be achieved using the Retry Fault Tolerance, Recovery Block Fault Tolerance, and Dynamic Sequential Fault Tolerance strategies, and compared these values to experimentally obtained results. The experimental results obtained using the SCF framework are consistent with the theoretical predictions, and indicate significant improvements in service availability when compared to invoking cloud services directly (i.e. without deploying any reliability strategy). In the specific case of payment services, availability increased from 90.4815% and 93.8654% for the PayPal and eWay services, respectively, under direct service invocation, to 97.9033% and 97.2607%, respectively, when the RFT strategy was used. Using the RBFT strategy, the availability of the composite service (PayPal + eWay) increased to 99.8091%, and using the DSFT strategy it increased further to 99.9508%.
Deploying multiple alternative services using the RBFT strategy also alleviates issues arising from service evolution that result in incompatible versions of services being released by service providers. We are currently extending the functionality of the SCF framework to detect such situations and automatically redirect requests to alternative services.
Fig. 3. Online shopping scenario using a composite payment service
Table 1. Suitability of reliability strategies
Table 2. Consumer Service Transaction Logs
Table 3. Availability of payment services
Table 4. Response time of payment services in seconds | 4,550 | 2016-09-05T00:00:00.000 | [
"Computer Science"
] |
Gene Therapy with MiRNA-Mediated Targeting of Mcl-1 Promotes the Sensitivity of Non-Small Cell Lung Cancer Cells to Treatment with ABT-737
Background: Despite the dramatic efficacy of ABT-737, a large percentage of cancer cells ultimately become resistant to this drug. Evidence shows that over-expression of Mcl-1 is linked to ABT-737 resistance in NSCLC cells. The aim of this study was to investigate the effect of miRNA-101 on Mcl-1 expression and on the sensitivity of A549 NSCLC cells to ABT-737. Methods: After miRNA-101 transfection, Mcl-1 mRNA expression levels were quantified by RT-qPCR. Trypan blue staining was used to explore the effect of miRNA-101 on cell growth. The cytotoxic effects of miRNA-101 and ABT-737, alone and in combination, were measured using the MTT assay. The effect of the drug combination was determined using the method of Chou-Talalay. Cell death was assessed using a cell death detection ELISA assay kit. Results: miRNA-101 markedly suppressed the expression of Mcl-1 mRNA in a time-dependent manner, which led to inhibition of A549 cell proliferation and enhancement of apoptosis (p < 0.05, relative to blank control). Pretreatment with miRNA-101 synergistically decreased the cell survival rate and lowered the IC50 value of ABT-737. Furthermore, miRNA-101 dramatically enhanced the apoptotic effect of ABT-737. Negative control miRNA had no notable effect on cellular parameters. Conclusions: Our findings suggest that suppression of Mcl-1 by miRNA-101 can effectively inhibit cell growth and sensitize A549 cells to ABT-737. Therefore, miRNA-101 can be considered a potential therapeutic target in patients with non-small cell lung cancer.
ABT-737 is a potent and specific inhibitor of Bcl-xL, Bcl-2 and Bcl-w, which has shown single-agent activity in several cancer types, including lung cancer (Avsar Abdik et al., 2019; Shen et al., 2019; Wang and Hao, 2019). However, multiple studies have documented that over-expression of Mcl-1 confers resistance to ABT-737. Concordantly, down-regulation of Mcl-1 by pharmacologic or genetic strategies sensitizes malignant cells to the compound. Therefore, the combination of Mcl-1 targeting and ABT-737 appears to be an efficient means of triggering apoptosis in various tumor types (Dai and Grant, 2007; Quinn et al., 2011).
MicroRNAs (miRNAs) are a family of non-coding RNAs 18-25 nucleotides long, which bind to the 3'-untranslated regions (3'-UTRs) of target transcripts to regulate gene expression, either via mRNA degradation or translational inhibition (Hu et al., 2018; Rezaei et al., 2019; Alamdari-Palangi et al., 2020). It has been reported that miRNAs participate in various biological and pathological processes, such as cell differentiation, cell proliferation, cell growth and cell death. Aberrant expression of particular miRNAs is a hallmark of various cancer cells (Amri et al., 2019b). For example, miRNA-143 expression is down-regulated in NSCLC, causing elevated c-Myc expression and increased tumor cell growth, migration and metastasis. In contrast, over-expression of miRNA-21 suppresses Bcl-2, inhibits apoptosis, enhances metastasis and confers multidrug resistance (Ricciuti et al., 2014; Zhang et al., 2014; MacDonagh et al., 2015; Amri et al., 2019a). In lung cancer, miRNAs are emerging as potential markers for chemoresistance and prognosis.
MiRNA-101, a tumor-suppressive miRNA, is under-expressed in various types of tumor tissues and cell lines, including lung cancer, and exerts inhibitory effects on cell migration, proliferation and invasion while promoting apoptosis (Luo et al., 2012; Zheng et al., 2015). Moreover, it has been shown that up-regulation of miRNA-101 inhibits tumor progression, at least in part, by targeting Mcl-1, whereas its down-regulation is associated with poorer prognosis (Su et al., 2009; Wang et al., 2010; Chen et al., 2011; Luo et al., 2012). However, the biological role of miRNA-101 in the drug resistance of NSCLC cells has not yet been fully elucidated. Therefore, in the present study, the effect of miRNA-101 on the sensitivity of NSCLC cells to ABT-737 was investigated. Our data demonstrated that ectopic expression of miRNA-101 was associated with suppression of Mcl-1 mRNA in tumor cells. We also found that an elevated level of miRNA-101 inhibited cell growth and enhanced the apoptotic effect of ABT-737, which suggests that miRNA-101 may play important roles in NSCLC resistance.
Cell culture
Human NSCLC cell line A549 was obtained from the Pasteur Institute (Tehran, Iran). The cells were maintained in RPMI-1640 medium (Sigma-Aldrich, St. Louis, MO, USA) containing 10% fetal bovine serum (FBS; Sigma-Aldrich) at 37°C in a 5% CO2 humidified atmosphere, with the medium changed every four days. Cells were sub-cultured at 90% confluence, seeded at 40% confluence and used in the logarithmic phase.
Cell transfection
The miRNA-101 mimic and negative control (NC) miRNA were purchased from Dharmacon (Lafayette, CO, USA) and transfected into cells at a final concentration of 50 nM. The sense strand sequences of the miRNA-101 mimic and NC miRNA were 5'-UACAGUACUGUGAUAACUGAA-3' and 5'-UUCUUCGAACGUGUCACGUTT-3', respectively. All cell transfections were performed with Lipofectamine™ 2000 reagent (Invitrogen, Carlsbad, CA, USA), according to the manufacturer's instructions. After 24 and 48 h of transfection, down-regulation of Mcl-1 was assessed by quantitative real-time PCR (RT-qPCR).
Cell growth assay
The effect of miRNA-101 on cell growth was determined by the trypan blue dye exclusion method. The cells were treated in 6-well plates as described for the experimental group for 24-120 h. Then, the cells were harvested, diluted with 0.4% trypan blue solution (Sigma-Aldrich) and counted under a microscope using a hemocytometer. The percentage of viable cells was calculated as follows: Percent viable cells = (number of viable cells in test group / number of viable cells in control group) × 100.
Cytotoxicity assay
The effect of miRNA-101 on the response of lung cancer cells to ABT-737 was determined using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. The experiment was subdivided into eight groups: ABT-737, NC miRNA, miRNA-101, NC miRNA and ABT-737, miRNA-101 and ABT-737, ABT-737 blank control, miRNA blank control and combination blank control. Treatment with only lipofectamine without miRNA was considered the miRNA blank control. Treatment with 1% DMSO served as the ABT-737 blank control. Treatment with a combination of lipofectamine and DMSO was considered the combination blank control. Briefly, cells were plated at a density of 4 × 10^3 cells per well in 96-well plates and transfected with 50 nM of either miRNA-101 or NC miRNA. After 6 h of incubation, ABT-737 was added to the wells at final concentrations of 0, 0.25, 0.5, 1, 2, 4, 8 and 16 µM, and incubation continued for another 24 and 48 h. Then, 10 µL of MTT solution (Sigma-Aldrich) (5 mg/mL) was added to each well and the plates were incubated for another 4 h at 37°C. Next, the supernatant was discarded and 150 µL of DMSO was added to each well, followed by reading the absorbance (A) at 490 nm in a microplate reader (Awareness Technology, Palm City, FL, USA). The survival rate (SR) was determined with the following formula: SR (%) = (A_Experiment / A_Control) × 100%. The half-maximal inhibitory concentration (IC50) was calculated using Prism 6.01 software (GraphPad Software Inc., San Diego, CA, USA).
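The SR formula and IC50 estimation can be sketched as follows. The paper used GraphPad Prism, so the four-parameter logistic fit below is an assumed stand-in for Prism's routine, and the example data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def survival_rate(a_experiment, a_control):
    """SR (%) = (A_experiment / A_control) * 100 from MTT absorbances."""
    return a_experiment / a_control * 100.0

def fit_ic50(doses, sr_percent):
    """Estimate IC50 by fitting a four-parameter logistic curve.
    Doses must be positive (exclude the zero-dose control)."""
    def logistic(x, top, bottom, ic50, hill):
        return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)
    p0 = [100.0, 0.0, float(np.median(doses)), 1.0]
    (top, bottom, ic50, hill), _ = curve_fit(logistic, doses, sr_percent,
                                             p0=p0, maxfev=10_000)
    return ic50

doses = np.array([0.25, 0.5, 1, 2, 4, 8, 16])           # µM, synthetic
sr = np.array([95, 88, 72, 55, 38, 22, 10], dtype=float)  # %, synthetic
print(fit_ic50(doses, sr))  # roughly 2-3 µM for this synthetic curve
```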
Statistical analysis
All data in this study were analyzed using GraphPad Prism software. Quantitative data are presented as mean ± standard deviation (SD). Statistical significance was evaluated by analysis of variance (ANOVA) followed by Bonferroni's test. A p value less than or equal to 0.05 was considered significant.
MiRNA-101 causes down-regulation of Mcl-1 mRNA
To analyze the effect of miRNA-101 on Mcl-1 gene expression, A549 lung tumor cells were transfected for 24 and 48 hours with 50 nM miRNA-101 or NC miRNA. Subsequently, RT-qPCR was performed to measure the expression of Mcl-1 mRNA. As shown in Figure 1, after treatment with 50 nM miRNA-101, the expression of Mcl-1 was suppressed markedly in a time-dependent manner (p < 0.05, relative to the NC miRNA and blank control groups). The relative expression of Mcl-1 mRNA was 79.32% and 66.14% after 24 and 48 h, respectively (p < 0.05). As expected, NC miRNA had no effect on the expression of Mcl-1 mRNA (p > 0.05).
MiRNA-101 showed a growth inhibitory effect in NSCLC cells
As over-expression of Mcl-1 is associated with the growth of NSCLC cells, we investigated whether miRNA-101 could inhibit the growth of A549 cells. The cells were treated with miRNA-101 or NC miRNA and cell viability was then determined by the trypan blue dye exclusion assay over a period of 5 days. The cell growth curve demonstrated that, compared with the blank control group, the growth of the miRNA-101-transfected cells was inhibited notably in a time-dependent manner (p < 0.05; Figure 2). At 24 h post-transfection, cell growth decreased to 86.40%, falling further to 65.39% at the end of the experiment (day 5). However, there was no obvious alteration in cell growth between NC miRNA-transfected cells and the blank control group (p > 0.05; Figure 2).
Synergy determination
To further evaluate the effect of combining the therapies, the combination index (CI) method of Chou and Talalay was performed (Chou and Talalay, 1984). The cell survival rates were converted to Fraction affected (Fa) and analyzed using CompuSyn version 1.0 software (ComboSyn Inc., Paramus, NJ, USA). CI values less than 1, equal to 1, or greater than 1 represent synergy, additivity or antagonism, respectively.
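For reference, a minimal sketch of the Chou-Talalay combination index for two agents; the median-effect parameters (Dm, m) below are illustrative placeholders, not values fitted to this study's data.

```python
def dose_for_effect(fa, dm, m):
    """Median-effect equation solved for dose: Dx = Dm * (fa/(1-fa))**(1/m)."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(fa, d1, d2, dm1, m1, dm2, m2):
    """Chou-Talalay CI for two agents at effect level fa:
    CI < 1 synergy, CI = 1 additivity, CI > 1 antagonism."""
    return (d1 / dose_for_effect(fa, dm1, m1)
            + d2 / dose_for_effect(fa, dm2, m2))

# Illustrative parameters only (not fitted to the study's data):
print(combination_index(fa=0.75, d1=0.05, d2=8.0,
                        dm1=0.12, m1=1.0, dm2=6.0, m2=1.2))  # ~0.67
```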
Apoptosis assay
The A549 lung cancer cells (1 × 10^5 cells/well) were placed in 12-well culture plates and then treated with miRNA-101, NC miRNA, the IC50 dose of ABT-737 and their combinations as described previously. Following 24 and 48 h of incubation, the cells were harvested and apoptosis was detected with the Cell Death Detection ELISA kit (Roche Diagnostics GmbH) according to the manufacturer's protocol. This assay measures the amount of mono- and oligonucleosomes in the cytoplasm of apoptotic cells. Briefly, the cells were lysed and the cell suspensions centrifuged at 200 g for 10 min. Then, 20 μL of the supernatants and 80 μL of a mixture containing anti-histone-biotin and anti-DNA-peroxidase were added to each well of a streptavidin-coated plate. After incubation for 2 h at 25°C, the wells were washed and 100 μL of 2,2'-azino-bis(3-ethylbenzthiazoline-6-sulfonic acid) solution was added to each well. The reactions were stopped and the absorbances were measured using an ELISA plate reader at 405 nm.
Enhanced sensitivity to ABT-737 by miRNA-101 in A549 cells
We addressed whether miRNA-101 would enhance sensitivity to ABT-737 in A549 cells. The results of the MTT assay showed that ABT-737 has cytotoxic effects in a dose-dependent manner (Figure 3A and 3C). Moreover, 24 and 48 h of exposure of the cells to miRNA-101 significantly decreased the survival of the cells (85.19% and 73.51%, respectively) relative to the blank control cells (p < 0.05). Surprisingly, the cells exposed to miRNA-101 in the presence of ABT-737 showed a significant decrease in survival (compared with miRNA-101- or ABT-737-treated cells), and the IC50 values of ABT-737 were markedly decreased relative to the cells treated with ABT-737 alone (p < 0.05, Table 1). Treatment with NC miRNA had no significant effect on cell survival or on the IC50 values of ABT-737.
The combination effect of miRNA-101 and ABT-737 on lung cancer cells was synergistic
To evaluate whether the combination of miRNA-101 and ABT-737 is synergistic, combination index analysis was performed according to the non-constant method of Chou-Talalay. The results indicated that the combination effects of miRNA-101 (50 nM) and ABT-737 were synergistic, with CI values of less than 1 at any given concentration of ABT-737 (Figure 3B and 3D). CI-Fa plots further revealed that the most pronounced synergistic effects at 24 h (CI = 0.74) and 48 h (CI = 0.79) of treatment were observed at 16 µM of ABT-737, with Fa levels of 0.97 and 1, respectively (Table 2).
MiRNA-101 sensitizes A549 cells to apoptosis induced by ABT-737
To confirm whether the observed sensitizing effect of miRNA-101 was associated with the enhancement of apoptosis, the effects of ABT-737 and miRNA-101, alone and in combination, on apoptosis were examined using the cell death ELISA assay. The results demonstrate that 24 h of exposure of the cells to miRNA-101 or ABT-737 enhanced apoptosis by 2.55- and 6.28-fold, respectively, compared to the blank control (p < 0.05; Figure 4). Moreover, the combination of miRNA-101 and ABT-737 further increased the degree of apoptosis, to 10.70-fold (p < 0.05, compared with either ABT-737 alone or miRNA-101 alone). Forty-eight hours of treatment of the cells with miRNA-101 or ABT-737 alone led to enhancement of apoptosis by 3.73- and 6.61-fold, respectively, compared to the blank control (p < 0.05). The combination of the two agents also significantly increased the extent of apoptosis relative to monotherapy in this period. However, NC miRNA, alone or in combination with ABT-737, had no distinct effect on apoptosis compared with the blank control or ABT-737-treated cells, respectively (p > 0.05; Figure 4). Therefore, these findings demonstrate that the sensitizing effect of miRNA-101 in A549 cells is linked to the augmentation of apoptosis.
Discussion
Non-small cell lung cancer (NSCLC), which accounts for approximately 80% of all lung cancer cases, is the leading cause of cancer deaths worldwide for both men and women (Zhang et al., 2014; MacDonagh et al., 2015). Despite an initial response of the tumor to therapy, response rates are often low due to the development of resistance (Yu and He, 2013). The precise mechanisms underlying NSCLC resistance remain unclear.
ABT-737 is one of the best-studied BH3 mimetics; it binds with strong affinity to the Bcl-2 family anti-apoptotic proteins Bcl-2, Bcl-xL and Bcl-w, but not to Mcl-1. ABT-737 has been effective in inducing cytotoxicity as a single agent in vitro and in vivo in several types of cancer, such as glioblastoma, leukemia and lymphoma, multiple myeloma and SCLC (Su et al., 2009; Quinn et al., 2011). However, a large percentage of cancers are resistant to ABT-737. There is clear evidence that over-expression of Mcl-1 confers resistance to ABT-737. Concordantly, adding drugs that down-regulate Mcl-1 sensitizes tumor cells to ABT-737 (Dai and Grant, 2007; Quinn et al., 2011). Additional studies suggest that decreased expression of miRNA-101 results in elevated expression of Mcl-1 in NSCLC, and consequently poorer prognosis (Su et al., 2009; Wang et al., 2010; Chen et al., 2011; Luo et al., 2012). However, the role of miRNA-101 in the drug resistance of lung cancer is not fully understood. Therefore, in the present study, we investigated the effect of miRNA-101 on cellular apoptosis and the sensitivity of A549 NSCLC cells to ABT-737.
MiRNAs are a class of non-coding RNAs that play pivotal roles in the regulation of cellular processes such as cell proliferation, differentiation and apoptosis (Yin et al., 2014). Experimental and clinical investigations have revealed that up-regulation and down-regulation of miRNAs are linked to the development of the malignant phenotype of cancers, including enhanced cell proliferation and invasion and abrogated apoptosis (Zhang et al., 2017; Wang et al., 2018; Chen et al., 2019). MiRNA-101 is a tumor-suppressor miRNA that inhibits cell proliferation and invasion, induces apoptosis and augments chemosensitivity in several types of cancers, including NSCLC (Yin et al., 2014). Here, we demonstrated that single therapy with miRNA-101 markedly reduced cell growth and survival and increased the extent of apoptosis in A549 cells. Previous studies have found that miRNA-101 inhibits cell proliferation and augments apoptosis in gastric cancer, colon cancer and NSCLC by targeting zinc finger E-box binding homeobox 1 (ZEB1), cyclooxygenase-2 (Cox-2), enhancer of zeste homologue 2 (EZH2) and Mcl-1 (Su et al., 2009; Wang et al., 2010; Luo et al., 2012; Han et al., 2018). Our findings are in agreement with these reports and further confirm the role of miRNA-101 in the progression of NSCLC as well as the tumor-suppressive effect of this miRNA.
[Figure 4 legend: Twenty-four and forty-eight hours after transfection, apoptosis was assessed by the cell death ELISA assay. The results are presented as mean ± SD (n = 3). *p < 0.05 relative to blank control; #p < 0.05 relative to miRNA-101 or ABT-737 alone.]
Mcl-1 is a member of the anti-apoptotic Bcl-2 family of proteins and is expressed in various tissues and tumor cells (Sieghart et al., 2006). Mcl-1 blocks cytochrome c release from mitochondria by sequestering the pro-apoptotic members of the Bcl-2 protein family, e.g. Bid, Bim and NOXA as well as Bak and Bax (Hussain et al., 2007). Under apoptotic conditions, specific proteins such as NOXA can displace Mcl-1 from pro-apoptotic proteins, leading to cytochrome c release from mitochondria and subsequent activation of apoptosis (Akgul, 2009; Guoan et al., 2010). Studies have shown that increased levels of Mcl-1 in tumor cells are correlated with high levels of cell survival and the development of resistance to diverse chemotherapeutic agents, including ABT-737 (Thallinger et al., 2003; Sieghart et al., 2006; Keuling et al., 2009; Quinn et al., 2011). Furthermore, knockdown of Mcl-1 has been demonstrated to decrease cell survival and reverse the drug resistance of tumor cells (Chen et al., 2007; Chen et al., 2010; Quinn et al., 2011; Lucas et al., 2012). Here, we used suppression of Mcl-1 by miRNA-101 to investigate the role of this miRNA in the sensitivity of A549 NSCLC cells to ABT-737. Our findings demonstrated that exposure of the A549 cells to ABT-737 decreased the cell survival rate and induced apoptosis. Transfection of miRNA-101 significantly suppressed the expression of Mcl-1 mRNA and synergistically enhanced the cytotoxicity of ABT-737. In addition, miRNA-101 in combination with ABT-737 further enhanced the extent of apoptosis compared to single therapy. In agreement with our results, Yin et al. (2014) demonstrated that miRNA-101 enhances cisplatin-induced apoptosis via the activation of caspase 3 and reduces colony formation in A549 cells. Su et al. (2009) reported that miRNA-101 represses the expression of Mcl-1 and sensitizes liver cancer cells to chemotherapeutic agents. Chen et al. (2011) also demonstrated that ectopic expression of miRNA-101 sensitized NSCLC cell lines to radiation. The results of our study are consistent with these reports and suggest that down-regulated miRNA-101 expression could be related to Mcl-1 over-expression and the ABT-737 resistance of NSCLC cells.
In conclusion, our results provide strong evidence that miRNA-101 can suppress cell growth and survival and trigger apoptosis in NSCLC by blocking Mcl-1. Furthermore, we showed that miRNA-101 enhanced the sensitivity of the lung cancer cells to ABT-737-mediated apoptosis. Therefore, our study suggests that miRNA-101 can be considered an effective target for reversing ABT-737 resistance in cancer cells. | 4,271.4 | 2020-03-01T00:00:00.000 | [
"Biology"
] |
From content to context: A qualitative case study of factors influencing audience perception of the trustworthiness of COVID-19 data visualisations in UK newspaper coverage
Drawing on 18 audience interviews, this article examines audience perception of the trustworthiness of COVID-19 data visualisations in UK newspaper coverage. The findings suggest that, overall, the participants viewed the selected COVID-19 data visualisations as largely trustworthy. Their perception was unaffected by the types of data visualisations. The trustworthiness of data visualisations had no clear connection with their likability and learnability. Instead, the participants' trust was influenced by the perceived problematic presentation of data visualisations, such as the inappropriate use of bars to represent data or the failure to present data in context. It was also affected by the participants' understanding of problems in data production and presentation, their assessment of the credibility of data sources and news outlets, and their personal lived experiences and information gained from other sources. All of these were related to the social context surrounding data and data visualisations, rather than merely the content of data visualisations. The findings reveal that the socially constructed nature of data and data visualisations creates a space for the participants to question data visualisations' trustworthiness. The close connection between trust in data visualisations and trust in data, a socially constructed product, suggests that the trustworthiness of data visualisations transcends the control of journalists and news media, extending to the context of data and its visualisations. This qualitative research reveals the importance of context to audience trust in data visualisations in the UK.
Introduction
Data visualisations have become increasingly prevalent in news coverage (Kennedy and Engebretsen, 2020), particularly since the onset of the COVID-19 pandemic. However, research on the reception of data visualisation is scarce (Engebretsen, 2020), leaving us with limited knowledge of how audiences perceive data visualisations in general and, in particular, their trustworthiness.
This study fills the gap in the literature. In this study, 18 interviews were conducted with UK audiences. Participants were asked to evaluate the trustworthiness, likability and learnability of selected COVID-19 data visualisations published in the news without the presence of the accompanying text. This study contributes to the literature on audience trust in news and data visualisations.
This article first discusses the literature on audience reception and trust in news. It then introduces the present study, including its process, method and data. A discussion of the findings will be presented in the next sections, followed by a reflection on the limitations of this study.
Theoretical framework: Audience reception, trust in news and data visualisations
Trust is crucial for the proper function of a society and "brings us all sorts of good things" (Uslaner, 2002: 1). For communication research, trust is considered a vital parameter informing us of how audiences perceive and assess news media and their content. In the literature, terms including "trust", "trustworthiness" and "credibility" are used interchangeably (Kohring and Matthes, 2007).
Trust in news (media) plays a crucial role in audience acceptance and consumption of news media, as well as in fostering civic participation. However, it has reportedly been declining in societies like the United Kingdom (UK) over the past decades, albeit with variations across countries (Park et al., 2020). Despite the various changes induced by the COVID-19 pandemic, the issue of low trust in news persists as an overarching concern. Trust in news initially declined during the early stages of the pandemic (Newman et al., 2020), followed by a subsequent recovery (Newman et al., 2021), but ultimately experienced another downturn after the COVID-19 bump ended (Newman et al., 2022).
The studies examining trust in news (media) started in the 1970s and have burgeoned since the 1990s (such as McCroskey and Jenson, 1975; O'Keefe, 1990; Kohut and Toth, 1998; Austin and Pinkleton, 1999). During the pre-digital era, including the 1990s and before, scholars (such as McCroskey and Jenson, 1975; Austin and Pinkleton, 1999) focused on examining traditional news media. Later, they (such as Cassidy, 2007; Tsfati, 2010) extended their interest to audience trust in online news (sites). Earlier studies focused on the correlation between media effects (or agenda setting) and audience trust in news (media) and its impact on the news media's democratic roles (such as Tsfati, 2002; Tsfati, 2003; Kiousis, 2001; Tsfati and Cappella, 2003; Tsfati and Cappella, 2005). Along with the emergence of the idea of active audiences, a large body of the literature has gradually shifted from examining media effects and trust to probing how audiences evaluate news trustworthiness and the factors that influence audience (perception of) trust in news (media) (such as Lee, 2010; Amazeen and Muddiman, 2018; Swart and Broersma, 2022). Measures such as reducing journalistic bias and opinion in the news are seen to boost trust in the news (Fisher et al., 2021; Henke et al., 2020).
The literature sees trust as complex and multidimensional (Horowitz et al., 2021; Kohring and Matthes, 2007). Trust in news (media) is defined as trust in journalistic selections of topics, facts and journalistic depictions and assessments (Kohring and Matthes, 2007). It refers to the news sources cited in the news, the message conveyed within the news, and the news media responsible for its publication. It can be summarised as perceived "source credibility", "message credibility" and "media or medium credibility". Source credibility is defined as "judgments made by a perceiver concerning the believability of a communicator", regarding a communicator's expertise and trustworthiness (Hovland et al., 1953; O'Keefe, 1990: p. 181). Message credibility has been examined in terms of "message structure", "message content", "language intensity" and "message delivery". Media/medium credibility research typically focuses on the types of news media through which the message is published, such as newspapers or television (Golan, 2010). The literature also considers trust in news a fundamental component of public trust in the government and the political establishment (Marcinkowski and Christopher, 2018). It can be influenced by audiences' partisanship, political ideology, and political and personal cynicism (Lee, 2010). This means that media trust can also be influenced by external factors such as political trust or the legitimacy of social institutions.
Since the COVID-19 pandemic emerged, news outlets around the world have embarked on an unprecedented utilisation of data and data visualisations within their news coverage. When it comes to news that uses data and data visualisations, the literature highlights the difference made by numbers (statistical data) and data visualisations to audience trust in news. Overall, two contrasting arguments exist regarding the association between the incorporation of numeracy and data visualisations in news stories and how the audience perceives their trustworthiness. One view (such as the articles in Nguyen, 2018; Porter, 1995; Van Dijk, 1988: 87) generally sees the use of numbers and data visualisations in news as generating trust (Beer, 2016; Lindsey and Yun, 2003; Koetsenruijter, 2011). The other view questions the effect of using numbers in news. A cross-country study found that news stories with numbers were not appealing to audiences in the United States, Zambia and Tanzania; audiences with lower levels of numeracy appeared to trust news stories with numbers more than those with a better understanding of numeracy (Gondwe et al., 2021). Prior knowledge and issue involvement influence audiences' ability and incentive to process information (Lee and Kim, 2016), with those who have prior knowledge finding it easier to process information. These discussions highlight the importance of audiences' backgrounds in shaping their perception of the trustworthiness of news that uses numbers and data visualisations.
In addition, Henke and collaborators found that the inclusion of statistical information and data visualisations in the news can enhance audience perception of news trustworthiness. However, having too much statistical data in the news may impose a cognitive burden and make it difficult for audiences to understand (Lee and Kim, 2016; Henke et al., 2020). Also, news stories using numbers often have, or are believed to have, errors and misleading information (Maier, 2002), leading to mistrust in the news media (Peters, 2020).
These discussions in the literature point to four important aspects of trust in news: (1) trust in news is multidimensional, shown as trust in news sources, news content and news media; (2) trust in news can be influenced by external factors; (3) the utilisation of numbers and data visualisations in news can earn the trust of audiences, but it also depends on the numeracy levels of the audience and the quality of the information provided and may result in other issues such as information overload; (4) prior knowledge and motivation to process information are important to audience perception of the trustworthiness of news using numbers and data visualisations.
When it comes to audience trust in data visualisations, current studies have mostly treated data as numbers and neglected to consider the influence of its socially constructed nature on audiences' perceived trust in news. Data extends beyond mere numbers; it encompasses any information or content that can be digitised. Data is inherently socially constructed and intertwined with the social dynamics and power relations of the context in which it is collected and produced (Jenkins, 2019). Audience perception of trust in data visualisations can be influenced by the audience's comprehension of data and the manner in which data is presented visually. Since our understanding in this area is limited, further research is necessary.
The existing studies have also largely examined data visualisations as an integral component of the news. This approach has its advantages, as in real-world scenarios data visualisations often accompany news articles, allowing audiences to consume them simultaneously. However, in this approach, the combination of visuals and texts, being two different systems, has the potential to amplify and influence their respective meanings (Bateman, 2014). The perception of data visualisations by the audience can be influenced by the accompanying textual content of news articles. Data visualisations are standalone cultural artefacts, containing data and visual representations of data (Rettberg, 2020). The visuality of data visualisations possesses a communicative and persuasive power that differs from, and often surpasses, that of text. It has the ability to evoke users' emotions by bypassing rational judgment (Kennedy and Hill, 2018; Fratczak, 2022). It can also engender an uncritical trust in numbers, thereby contributing to the restoration of public trust (Beer, 2016; Sleigh and Vayena, 2021; Naerland and Engebretsen, 2023). To understand audiences' perception of data visualisations based solely on their inherent qualities, independent of the textual form of the news, it is necessary to examine data visualisations outside the context of news articles. However, it is important to note that the absence of the actual reading context of data visualisations might hinder audiences' perception of them. This is because certain information that is obscure or missing in data visualisations may be included in the text of news, and the understanding gained from reading the text can enhance the audience's interpretation of data visualisations. The advantage of omitting the text of news lies in the ability to analyse data visualisations independently. This allows us to understand how the presentation of data visualisations influences audiences' trust and perception, without being influenced (or distracted) by the textual content of news articles. This approach also helps us explore whether certain types of data visualisations evoke stronger perceptions of trustworthiness, and to what extent. These aspects are, however, understudied in the current literature and related studies are scarce (Kennedy and Engebretsen, 2020; Engebretsen, 2020). Scholars (such as Henke et al., 2020; Lee and Kim, 2016) have called for more research on the perception of data visualisation.
The study
This research will address these aspects by interviewing 18 participants about their perception of the trustworthiness of COVID-19 data visualisations published in news coverage by UK news media. COVID-19 data visualisations were abundant in news coverage, both in the UK and beyond, serving as a significant tool for communicating pandemic-related information to the public. It is therefore important to understand how audiences perceive the trustworthiness of COVID-19 data visualisations, since this aspect has not yet been thoroughly researched. However, it is expected that audiences would have a higher level of familiarity with COVID-19 data visualisations and the associated data compared to those related to other topics. While this familiarity can facilitate participants' discussions during the interviews, it also has the potential to influence the findings of the present study. In addition, the pandemic has triggered a wide range of controversial issues such as social distancing, lockdowns, true death numbers, and vaccines. While reading data visualisations on these topics, the political and ideological stances, as well as the personal experiences, of the audience can potentially impact their perception of these data visualisations' trustworthiness.
The semi-structured interviews took place in 2021, at a time when the UK was on the verge of re-emerging from its third national lockdown and the safety issue surrounding AstraZeneca was in the spotlight. The participants (see Table 1 for their backgrounds) were recruited through snowball sampling and the internet, particularly Twitter and Facebook (now Meta). Table 1 suggests these participants were well educated, and most were in well-paid professions. 15 of the 18 participants were under 50. Interviews lasted between one and three hours. 11 COVID-19 data visualisations published by UK news media, including The Guardian, The Times, The Financial Times, The Daily Telegraph, The Independent, The Daily Mail and The Economist, were selected before the interviews according to their types and themes. 10 data visualisations were published by quality newspapers, with only one published by The Daily Mail, a tabloid. They cover key topics related to the COVID-19 pandemic, including social distancing, lockdowns, the economic impact, case and death numbers, as well as blood clot concerns and the AstraZeneca vaccine.
Each participant was asked to view the data visualisations individually and explain their thoughts. The data visualisations were presented independently, without the accompanying text of the news articles. The advantage of this research design is that data visualisations can be examined as standalone visual cultural artefacts, and audiences' perception of them is not influenced or distracted by the accompanying text. However, removing the immediate reading context of these data visualisations has a downside, as it may limit the participants' understanding of their meanings and contextual backgrounds. As previously discussed, data visualisations and the textual content of news articles can mutually influence their respective meanings. Their interdependence means that if audiences struggle to understand data visualisations, they may turn to the text of news articles for answers, because some information may be included in the text of the articles rather than in the data visualisations themselves, and the textual content can aid audiences in interpreting the meaning of the data visualisations. The analysis and conclusion of this research take this limitation into account. Of the 18 participants, one read and discussed seven data visualisations, another eight, and the remaining 16 discussed all 11. Three main aspects of the data visualisations were discussed: trustworthiness, likability and learnability. The participants were asked to rate the data visualisations on each aspect between 0 (not trustworthy) and 5 (trustworthy).
Interviews were recorded and transcribed verbatim after permission was received from the participants. The interview transcripts were uploaded to NVivo and analysed qualitatively. Thematic analysis (Braun and Clarke, 2006) was conducted on the transcripts.
Findings
Overall, the participants largely perceived data visualisations as trustworthy, with the majority rating them at three or above on the trustworthiness scale. Only the two data visualisations based on survey data with unknown data sources were rated below three by approximately one third of the participants. The types of data visualisations did not appear to affect the participants' perception. What mattered was the way the data was presented, the data sources, and the news media. Data visualisations with clear data sources published by broadsheets were rated higher for trustworthiness than those lacking data source information, those with data sources unknown to the participants, or the one produced by the tabloid. Likability and learnability did not correspond to trustworthiness. For example, the data visualisation by The Daily Telegraph about following government advice received low trustworthiness scores due to unclear survey data and a data source unfamiliar to participants, but it received high likability scores. By contrast, The Economist's data visualisation about flattening the curve was rated low for likability but received high trustworthiness scores. The following sections outline the main aspects of participants' perception of these data visualisations' trustworthiness and the related influencing factors.
The influence of data visualisations' presentation
The 11 data visualisations span six types: line graph, bar graph, map, scatter plot, interactive and infographic. However, participants did not mention the type of data visualisation as a factor when discussing trustworthiness. Their trustworthiness ratings also demonstrate the insignificance of type: no specific type received particularly low or high scores, except for a bar graph and an infographic that scored slightly lower than the others because they used survey data and unknown data sources. Instead, three aspects of data visualisation presentation influenced participants' perception of trustworthiness: problematic presentation, presenting an obvious intention, and presenting data out of context.
Firstly, problematic presentation, including issues with bar and line presentation, colour choices and wording, undermined trust. A participant from a medical background expressed confusion regarding the inappropriate use of bars to present multiple sets of data in a bar graph:

"I think it is trying to match the data in all these, but one (bar) is based on 6 per 10,000, the other is based on incidents of 20 per 10,000, but they are trying to put that information (together), trying to make it equal. … It is not the same comparison, but (what) they are trying to show is (to compare) all three of them … But, ... it is not actually compatible, ... it can be portrayed as giving the wrong message to the general public." (Jess, a medical professional) (content in parentheses was added by the author)

Likewise, a participant from an administration background voiced concerns about the problematic presentation of an infographic featuring multiple line graphs:

"I don't think the graphs are a very good visual representation of the numbers. How is that, that says 41.3, the same length (with that showing 14.1)? … Do you see what I mean? Each of those graphs looks the same but the numbers are quite different." (Monica, an administrator)

Problematic presentation also covered mistakes, as exemplified in the following quote:

"The spelling mistake (in the data visualisation) really makes me very uncomfortable about the quality of this graph" (Vince, a university student studying politics)

Secondly, the obvious viewpoints and intentions presented in data visualisations weakened the participants' trust. Examples include:

"Let's say, trustable, three. …. It looks, to me, like that's a graphic that's been used to demonstrate a point, which is probably, 'Look at America, it's so dreadful, they haven't sorted it out, we hate Trump.' or something like that. … I think they're using it just to support an angle." (Monica, an administrator)

"Well, I think the graph is portraying an intention, so it's not saying, 'This is what we've measured.' It's saying, 'This is what we are trying to do.'" (Rob, a software engineer)

Thirdly, participants commonly cited the presentation of data without proper contextualisation as a significant factor diminishing their trust. They wanted to see data visualised in its context, or brief contextual information included in the visual representation. For example, several participants reduced their ratings for The Guardian's line graph representing daily case numbers since 2020, because the graph did not acknowledge the different testing scales at different times in the UK, as exemplified in this quote:

"I don't think it can be trusted that much, especially, you know, in the first lockdown, we didn't really have the tests and stuff. So, … this may not actually represent the truthful information. So, I would put a note saying that that was the case, and I'm thinking, if someone wanted to look at this chart in 30 years' time, that wouldn't be very informative for them because they wouldn't know this" (Nancy, a data analyst).
This observation may be attributed to the absence of accompanying text, where this particular piece of information could have been found.
A participant with a marketing background also recognised the importance of context in relation to the representation of COVID-19 case numbers, where the size of population should have been taken into consideration: "And the thing with the number of cases, I suppose, is you don't necessarily get told how many people are living there. … For complete transparency, what you'd need to do is to find a way of showing how many people were affected. If it's 600,000, then what is that per head of population? "(Emma, a marketing professional)
The perceived transparency and trustworthiness of data sources
The perceived transparency and trustworthiness of data sources were identified as common factors influencing participants' perception of the trustworthiness of data visualisations. Most participants referred to the data source when explaining why they (dis)trusted the data visualisations. Data visualisations lacking this information usually received low scores for trustworthiness, as in the following example: "Trust? Well, I don't know the source. Where is this (data) from? … I would say 2.5 out of 5." (Sam, a Master's student studying architecture) Although most participants admitted they would not check and verify the data, seeing the link to the data source and information about the data source provided reassurance: "Although as a resident of the country who just wants to know how much trouble we are in with coronavirus, I don't download that data and look at it. But it's important that it is there." (Rob, a software engineer).
This means that including data sources and related links in data visualisations is symbolic rather than practical and may not effectively encourage audiences to use open data for societal benefits.
However, the extent to which including information about data sources can enhance the trustworthiness of the data visualisations depended on participants' familiarity with and trust in them. Data sources such as the "Centre for Disease Control and Prevention" and "coronavirus.data.gov.uk", were thought to be more trustworthy than data sources such as "JL Partners" and "SAVANTA". As the latter sources were unknown to participants, they were perceived as untrustworthy. For example, a participant said: "I don't know who JL partners is … it doesn't strike me as something which has reliability" (Vince, a university student studying politics) Even participants with data or IT backgrounds were unfamiliar with "JL Partners" and "SAVANTA" and perceived them as undermining the trustworthiness of the data visualisations.
While most participants regarded government sources such as data.gov.uk as trustworthy, a few expressed scepticism due to their lack of trust in the government. For instance, a participant with an IT background provided a detailed explanation of his scepticism: "Remember, this is the same government that when the numbers were not looking in their favour, they were doing this regular reporting, every day they were going out there in the press and they were getting hammered. The press, the media, people like us, the public, lost faith in them because they were getting hammered. What was their response? They stopped doing daily briefings then." … "When you see that sort of stuff happening, you then go, 'Okay, how confident should I be?' Again, it's one of those cases where you can literally turn round and go, 'Okay, the government is saying this.' I would start looking at the source and go, 'Okay, well where are they getting it from and what's sitting behind them.' Or I'd look at this information and go and get a more qualified opinion." (Jack, an IT project manager) The concerns of participants sceptical about government data sources were also derived from their scepticism about data production, particularly among those possessing substantial knowledge about data. This will be discussed in the next section.
Scepticism about data production
Participants, especially those whose work closely involved data and who thus possessed higher levels of data literacy, expressed scepticism about data production, which emerged as a main factor impeding their perceived trustworthiness of data visualisations. They believed the trustworthiness of data visualisations did not come solely from journalists' intention to handle the data carefully and represent it truthfully. Instead, they perceived it as contingent upon the trustworthiness of the underlying data, which could be greatly influenced by what happened during the data production process. As audiences, their lack of knowledge of what influenced the data production process prevented them from fully trusting that data visualisations accurately represented reality. They suspected that even the journalists who created the data visualisations were uncertain about the trustworthiness of the data. Interestingly, participants therefore made a distinction between the trustworthiness of data visualisations, which they believed originated from journalists' intention to convey the truth, and the trustworthiness of the underlying data, which they perceived as beyond journalists' control. A striking consensus among participants, particularly those with backgrounds in IT or data, was that the trustworthiness of data visualisations was closely linked to the factors influencing the collection and production of the underlying data, such as how COVID cases were identified and recorded and how cases or people were classified in a survey. For example, a participant commented on the survey data used in a data visualisation: "How do you get the information? How do you classify the people? What's the difference between the 'Mostly Sometimes' and 'Mostly Not'? Where did you get the data from? I don't trust this information. Because I know it's extremely difficult to get accurate information for this category" (Nancy, a data analyst).
A participant with an IT background made a comment: "… there's little consistency between the data sources (in different countries). And it's a question mark if the authority is cooperative to publish the data in a transparent way. … I can trust that the visualisation showing what is contained in the data source. … But whether the data source is telling the truth or not, that's a completely different story. …" (Henry, a data architect).
Another participant with an IT background echoed this perspective:
"… I think the data is less trustworthy because I know that all the countries are measuring things in different ways. … if you are asking how trustable it is, the question is only answerable by finding out where the data came from and how it was collected. We don't know. It's just a guess." (Rob, a software engineer).
Participants also showed a consensus about the criticality of data context in establishing the trustworthiness of both data and data visualisations. For example, a participant with an education background commented on a data visualisation that used survey data without including information about the polling company and the poll itself: "I think knowing who carries out a poll is important because different polling companies have different agendas. … it also doesn't say how big the sample was, which means that I can't really make a good judgement about it." (Rachel, an education professional) This point connects to the earlier discussion on data visualisation presentation: it would be beneficial for standalone data visualisations to include brief information about the background of the data. Certainly, this background information could have been included in the text of the news articles, which participants were not provided with.
(Un)trustworthy because of the news media that published the data visualisations
Participants' perception of COVID-19 data visualisations was also influenced by the perceived trustworthiness and reputation of the UK news media, which was weighed against their doubts about data sources. Some participants even viewed the trustworthiness of news media that published the data visualisation as more important than that of data sources.
Participants considered quality newspapers, particularly The Economist, The Guardian and The Financial Times, and outlets that shared their perspectives, to be trustworthy, while perceiving tabloids as less trustworthy. For example, one university student, who rated one data visualisation four out of five, placed greater emphasis on the news outlet than on the data source: "I can see at the bottom that it's from The Economist, and that's a fairly established news form, so I trust seeing it from there. If it was on, saying something like The Daily Mail or The Sun, I might question it a bit more, because they tend to be more biased. And they do post a lot of gossip kinds of things, so I don't always fully take what they say as the truth." (Phoebe, a university student studying psychology) Likewise, two participants with a marketing background said: "I mean if it was The Economist that's publishing it, then, for me, I would probably trust it. It's just because I would expect them to know what they're publishing there and what statement they're making there." (Rose, a marketing professional).
"Okay. I do tend to-I shouldn't really. I do tend to trust what I read in The Guardian and The Independent. … I don't deny they have a political agenda but it feels like maybe, I guess, their perspective is more aligned with mine." (Emma, a marketing professional) Most participants expressed distrust in the Daily Mail-the only tabloid included in this study, as shown in the following quote: "Because obviously I don't trust the Daily Mail. They have a history of showing lots of bias on different topics, on different things. They're very forceful and try to push people to their view and what they want the people to follow, go by what they want them to go by. I don't know how much I can trust this data because it's from the Daily Mail." (Jacob, a university study studying engineering) In cases where data sources were absent from the data visualisations, the trustworthiness of the news media played a role in enhancing participants' trust in the data visualisations. For example: "(There is no data source.) I suppose I am biased as well, because I think the FT wouldn't publish something that didn't have a proper source. It is naughty that they didn't put the source (in the data visualisation), but it may be in the article somewhere. It probably is." (Rachel, an education professional) Lived experiences, pre-gained information and pre-existing understanding Participants frequently relied on real-life lived experiences during the COVID-19 pandemic and information from other sources as reference points when assessing the trustworthiness of the data visualisations. Often, participants chose to place trust in a data visualisation when the information presented in the data visualisation was aligned with their existing understanding of the situation: "It makes sense to me and it kind of does fit in line with what I know from last year, so I would probably trust it." (Phoebe, a university student studying psychology).
"... because it supports, anecdotally, the information I'm seeing from other sources, I'm prepared to give it a 3." (Jack, an IT project manager) Medical professional participants frequently turned to their knowledge about the situation gained from their work to verify how truthful the data visualisations were. While evaluating the FT data visualisation about the official death toll only telling part of the story, Jess, a medical professional, made the following comments: "It is because I think I was aware this information was shared as well about these COVID-19 deaths and not reporting (all COVID-19 deaths), or this mismatch of information as well. That was discussed in some of our forums as well in the beginning. So, maybe that is why." Another very interesting point is that participants frequently attempted to compare the information they perceived from the data visualisations with what they observed in the news to form a judgement. Examples include: "Yes, yes, I trust it because I think it is corroborated by other things that I have seen. I mean I have heard this concept said by lots of scientists and epidemiologists, so I would agree with that, yes." (Rachel, an education professional) "Looking at it now, I would say five, because it does simply replicate what has been in the news." (Jill, a medical professional) On occasions where the data visualisations did not match participants' understanding gained from lived experience or information they already had, they tended to regard data visualisations as less trustworthy. For example, a participant explained why she did not entirely trust a data visualisation, which showed that some countries, such as Indonesia and Malaysia, had lower case numbers than others, such as the US: "A country like Indonesia or Malaysia, they reported a certain number of cases but actually I heard from the people who lived there, the actual situation is worse than what being reported because some cases in a bit rural area are not reported." (Charlotte, an education professional).
Discussion and conclusion
The above discussion of audience perception of the trustworthiness of COVID-19 data visualisations identifies five main influencing factors: (1) data visualisations' presentation (content/visuality), (2) perceived data source transparency and trustworthiness, (3) perceived data (production) trustworthiness, (4) perceived news media trustworthiness, and (5) personal experiences, information from other sources and understanding of the situation (see Figure 1).
These factors highlight the important role social context plays in shaping audience perception of the trustworthiness of data visualisations. Participants' perception was influenced not only by the content itself, including how data was presented in data visualisations (content/visuality), which is similar to the concept of "message credibility" (Hovland et al., 1953; O'Keefe, 1990). It was also, to a greater extent, influenced by the context in which they gained their pre-existing understanding of the data, the data sources, the news media and the overall situation. Their perception was therefore significantly affected by factors extending beyond the realm of data visualisations and into the social context.
The existing literature has recognised the importance of social context to audience perception of news trustworthiness. By social context, however, these studies mainly mean external factors, such as audiences' backgrounds, including numeracy levels (Gondwe, et al., 2021), their motivation and incentive to process information and prior knowledge (Lee and Kim, 2016), political stances, values and partisanship (Kohut and Toth, 1998; Horowitz, et al., 2021), and political and personal cynicism (Lee, 2010). While confirming the significance of participants' backgrounds, this study takes these discussions further by identifying the influence of a three-level social context. The three levels of social context indicate the critical role played by data, a socially constructed product, in shaping audiences' perception of the trustworthiness of data visualisations.
The first level is the social context in which data is collected, produced, curated and archived. Participants' trust (or mistrust) in data visualisations was influenced by their varied understandings of data as a socially constructed entity shaped by social factors throughout the production process.
The second level of social context relates to where data is retrieved: the data source. This is similar to what Hovland and his collaborators described as "source credibility" (Hovland, et al., 1953), although their term refers merely to the trustworthiness of the communicator. The trustworthiness of data sources concerns not only their own trustworthiness but also how the data is produced, which occurs at the first level of social context. The third level is the social context in which audiences live and gain information and understanding about the situation. In this study, participants' lived experiences and the information they had received from various sources shaped their understanding of the situation. These anecdotal experiences, cognition, and frames of reference gained from their living environment greatly influenced their perception of data visualisations' trustworthiness.
The role of social context thus highlights the importance of participants' backgrounds, particularly their jobs and personal experiences, to their perception of data visualisations' trustworthiness. These backgrounds enabled them to develop a specific understanding of data and COVID-related reality, on which they relied to make judgements about data visualisations' trustworthiness. The trustworthiness of data visualisations was not solely dependent on their content or presentation, although these aspects were important. Instead, it extended to the social context associated with data visualisations and their audiences. It depended on how participants perceived the trustworthiness of the data sources and the news media, and on their knowledge of data and data production. Whether the meanings of data visualisations matched participants' understanding of the situation, gained from lived experiences and information received from other sources, was also vital. This suggests that audiences' lived experiences and the information they receive from various sources become significant reference points when they assess the trustworthiness of data visualisations related to events such as the COVID-19 pandemic, which have implications for people's personal and, in some cases, professional lives. The socially constructed nature of data and data visualisations opens a space for audiences to question data visualisations' trustworthiness, which is thus beyond the control of journalists and news media. Hence, data visualisations should be viewed not merely as graphs but as cultural products of social context.
Traditionally, data visualisations have been considered a useful journalistic tool for telling news stories and engaging audiences (Flew et al., 2010; Ojo and Heravi, 2018; Stalph and Heravi, 2021). In this study, participants largely regarded data visualisations as trustworthy, which confirms their usefulness in boosting audience trust in news, particularly in the UK context where trust in the news is low. However, this study also suggests audiences may take into account aspects of data visualisations that differ from those considered by journalists. The importance of social context suggests that data visualisations, as cultural products of social context, may have only a precarious ability to enhance audiences' trust in news media, at least when they are considered in isolation from the accompanying text of news articles. This applies not only to news media but also to other knowledge institutions that use data visualisations to convey messages to the public. The strong connection between trust in data visualisations and trust in data, a socially constructed product, suggests that audiences with higher levels of data literacy are more likely to question the credibility of data visualisations based on their knowledge of data. For audiences with lower levels of data literacy and limited awareness of the uncertain nature of data, however, data visualisations can be deceptively credible, leading them to accept the reality represented in data visualisations unquestioningly as the true reality. Efforts to improve their data literacy are therefore necessary to enable a more appropriate appreciation of data visualisations.
This study contributes to our understanding of audience perception of data visualisations' trustworthiness. However, it has limitations due to the small sample size, the participants' high levels of education and data literacy, the absence of accompanying text for the data visualisations, and the fact that the majority of the data visualisations were taken from quality newspapers. The findings did not confirm the emotional reading found by other scholars (Kennedy and Hill, 2018; Sleigh and Vayena, 2021); instead, the interviews were full of rational discussion and judgement. This could be attributed to the participants being well educated and having professional backgrounds. Findings such as the significance of participants' lived experiences and pre-existing knowledge may have been influenced by the fact that the COVID-19 data visualisations concern topics the participants had personally experienced. In addition, examining the data visualisations in isolation from the text of news articles may be one reason why participants regarded them as lacking context.
For further research, it would be beneficial to involve participants from more diverse social backgrounds, and data visualisations published by tabloids or broadcasters, to gain a more comprehensive understanding of how participants' backgrounds and types of news media may influence audience perception of data visualisations' trustworthiness. It would also be helpful to explore whether audiences perceive data visualisations' trustworthiness in a similar manner in social contexts outside the UK, considering that COVID-19 management and the public-press relationship may have characteristics specific to each context. This research also implies the importance of the immediate reading context of data visualisations, which was removed in this study. Future research may investigate the influence of including the text of news articles on audiences' trust in data visualisations.
"Computer Science"
] |
Methods for geothermal resource assessment of hot dry rock: A case study in the Gonghe Basin, China
Hot dry rock (HDR) geothermal resources are a renewable energy source, and the findings of HDR resource evaluations have been used in energy planning and EGS design. However, a consistent classification scheme and evaluation methods for assessing HDR resources in different locations are still lacking. Considering geological credibility and economic feasibility, HDR resources are separated into three categories: vision, reserve, and exploitable. Vision and reserve resources are stationary resources that can be evaluated using the volume technique, while exploitable resources can be evaluated using the numerical simulation approach. The HDR vision resource of the Gonghe Basin is evaluated at 4.076 × 10²² J, and the reserve resource of the Qiabuqia HDR mass at 2.11 × 10²⁰ J. At the Qiabuqia HDR development site, a discrete fracture network (DFN) model based on the notion of local thermal nonequilibrium is applied for the numerical simulation. The K1 and K2 wells produce different amounts of heat owing to the heterogeneous features of the fractured medium model, primarily differences in fracture density, heat exchange area, and fluid migration pattern. The classification system and assessment technique can serve as a guide for future HDR resource evaluations.
Introduction
Hot dry rock (HDR) geothermal resources have the advantages of great quantity, global availability, minimal pollution, long-term stability, and renewability. HDR is regarded as a clean energy source with enormous potential for resolving energy problems and pollution. There has been a worldwide uptick in HDR exploration and development since the US Department of Energy launched the Frontier Observatory for Research in Geothermal Energy (FORGE) in April 2015. The GR1 well in the Gonghe Basin encountered a 236°C HDR mass at a depth of 3 705 m in August 2017, marking a key milestone in China's HDR exploration. Following that, the China Geological Survey launched the "Qinghai Gonghe HDR Test Mining Scientific and Technological Battle," officially putting China on the worldwide HDR exploration and development track.
HDR resources offer great promise. For instance, Xu Tianfu calculated that the energy stored in HDR at depths of 3-10 km in the Earth's crust is 30 times that of all oil, natural gas, and coal on the planet (Xu et al., 2012). The quantity of HDR resources in the United States' land area was assessed by Teste, and the findings revealed that the total HDR geothermal resource in the United States at depths of 3-10 km is 1.67 × 10²⁵ J, excluding the Yellowstone Park area (Teste et al., 2006). The FORGE plan, which builds on this, is ambitious: its application-level aim is to obtain over 100,000 megawatts of electricity in the United States to fulfil the green power needs of 100 million homes (Zhang et al., 2019a). Accordingly, Wang Jiyang calculated, based on terrestrial heat flow data, that the entire quantity of HDR resources in China's land area at depths of 3-10 km is 2.09 × 10²⁵ J, or approximately 714.9 trillion tons of standard coal (Wang et al., 2012). Lin Wenjing conducted comparable calculations for HDR resources in China's land area at depths of 3-10 km and found a total HDR resource of 2.52 × 10²⁵ J, or 860 trillion tons of standard coal; assuming that the extractable amount accounts for 2%, this is equivalent to 5,200 times China's total energy consumption in 2010 (Lin et al., 2012).
The volumetric technique and the numerical simulation method are the most widely used methods for evaluating HDR resources. The volume method is the basic approach for evaluating stationary HDR resources. In addition to the results produced by Xu Tianfu, Teste, Wang Jiyang, Lin Wenjing, and others, scholars such as Zhang Shengsheng, Yang Lizhong, Kang Zhiqiang, Tan Xianfeng, and Guo Pan have employed the volumetric technique to analyse the stationary HDR resources of various parts of China (Zhang et al., 2019b; Yang et al., 2016; Kang et al., 2020; Tan et al., 2020; Guo et al., 2020). Numerical simulation approaches are frequently utilised when determining the quantity of usable HDR resources. For example, Xiao Yong, Li Zhengwei, Guo Liangliang, Lei Hongwu, Sun Zhixue, and others have performed research on thermal-hydro-mechanical-chemical (THMC) coupled numerical modeling of enhanced geothermal systems (EGS) (Xiao, 2017; Li et al., 2015; Guo et al., 2016; Lei, 2014; Sun et al., 2016). Wang Yang, Zhai Haizhen, Qu Zhanqing, Azadeh Riahi, Xu Chaoshui, and others studied fracture simulation methods for EGS (Wang and Zhang, 2011; Zhai et al., 2020; Qu et al., 2017; Riahi et al., 2019; Xu et al., 2015). Pranay Asai, Zhang Chao, Gao Ping, Maleaha Y. Samin, Yue Gaofan, and others have studied the key parameters that affect the productivity of EGS (Asai et al., 2019; Zhang et al., 2018a; Samin et al., 2019; Gao, 2015; Yue et al., 2015). Based on the theory of local thermal nonequilibrium, Qu Zhanqing simulated the extraction of thermal energy from HDR with a fracture network (Qu et al., 2019). Liu Gang proposed an inversion algorithm for the resource evaluation of EGS (Liu et al., 2018).
Although many academics have worked on evaluating HDR resources, a uniform classification system and assessment approach that can give decision-makers more reliable data are still needed. This study presents an HDR resource classification system and assessment technique, using the Qinghai Gonghe Basin as an example for computing stationary and exploitable resources. It establishes the quantity of HDR resources in the Gonghe Basin and offers a reference for subsequent HDR resource evaluations.
Classification scheme of HDR resources
Different research goals have led to different HDR classifications. Based on an analysis of the genetic models of HDR resources found at home and abroad, Wang Guiling and Gan Haonan classified the occurrence types into high radioactive heat production, sedimentary basin, modern volcano, and intraplate active tectonic belt (Wang et al., 2016; Gan, 2015). This classification approach, however, is not appropriate for assessing HDR resources.
The goal of HDR resource evaluation is to determine the quantity of resources and offer fundamental data to decision makers, on the premise of understanding the geothermal geological background. It ensures optimum resource efficiency while decreasing the risk of HDR development. A classification method for HDR resources was proposed with this objective in mind. HDR resources are classified into three categories based on geological credibility and economic viability: vision, reserve, and exploitable (Figure 1).
Vision resources are the HDR resources located in rock masses with limited porosity and permeability within 10 km underground (the maximum drilling depth) and at temperatures greater than 180°C. They correspond to the inferred grade (D) of the research stage. HDR within a depth of 6 km constitutes the reserve resources, corresponding to the controlled grade (C) in prefeasibility exploration and the proven grade (B) in feasibility exploration. The exploitable resources correspond to the validated grade (A) of the development stage. Stationary resources of the vision and reserve categories may be evaluated using the volume technique, while exploitable resources can be evaluated using the numerical simulation approach.

Volume method for stationary resource evaluation

Mathematical method. The volume technique is the most common way to calculate HDR stationary resources, and it is also the most fundamental method for calculating heat in place. The stationary resources are determined by the HDR volume, temperature, and thermophysical characteristics; they represent the heat stored in a low-porosity, low-permeability rock medium (ignoring the heat stored in fluid within the rock). The calculation formula is as follows (Teste et al., 2006; Wang et al., 2012; Lin et al., 2012):

Q = ρ · C_p · V · (T − T_c)    (1)

In the equation, Q is the stationary resource, ρ is the rock density, C_p is the heat capacity, V is the volume, T is the temperature, and T_c is the specific reference temperature, usually the lower limit of the power generation temperature.
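For illustration, the following minimal sketch applies Equation (1) block by block over a discretised model, summing the heat in place of all blocks hotter than the reference temperature. The block temperatures here are invented placeholders, not the paper's dataset; the density and heat capacity defaults follow the granite values quoted later in the paper.

```python
# Minimal sketch of the volume method (Eq. 1): Q = rho * Cp * V * (T - Tc),
# summed over grid blocks hotter than the reference temperature Tc.
# Block temperatures below are illustrative, not the paper's data.

def stationary_resource(block_temps_c, block_volume_m3,
                        rho=2550.0, cp=750.0, t_ref_c=90.0):
    """Heat in place (J) for blocks above the reference temperature."""
    return sum(rho * cp * block_volume_m3 * (t - t_ref_c)
               for t in block_temps_c if t > t_ref_c)

# Example: three 1 km x 1 km x 0.1 km granite blocks at different depths.
v_block = 1000.0 * 1000.0 * 100.0        # m^3 per block
temps = [120.0, 185.0, 210.0]            # block centroid temperatures, deg C
print(f"Q = {stationary_resource(temps, v_block):.3e} J")
```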
Parameter settings. The parameter acquisition techniques for stationary resource appraisal differ according to geological credibility. The procedures for obtaining assessment parameters for inferred (D), controlled (C), and proven (B) grade HDR resources are given below.
Inferred grade (D)
The inferred HDR resource appraisal is primarily used to support the national energy development strategy by defining major HDR target areas. Owing to the vast scale of evaluation and the limitations of exploration, the density (ρ) and specific heat capacity (C_p), governed by stratum lithology, are typically treated as constants determined from laboratory experiments or empirical values. The deep temperature cannot be observed directly; it has to be estimated from steady-state heat conduction theory, temperature logging data, and rock thermophysical characteristics. The calculation formula is (Wang et al., 2012; Lin et al., 2012):

T(Z) = T_0 + (q_0 / K) · Z − (A_0 / (2K)) · Z²    (2)

For simple calculation, the following linear form with a heat-generation-corrected gradient is commonly used:

T(Z) = T_0 + ((q_0 − A′ · Z′ / 2) / K) · Z    (3)

where T_0 represents the surface temperature, q_0 the surface heat flow value, A_0 the surface heat generation rate, Z the depth, K the thermal conductivity, A′ the heat generation rate at the bottom of the formation, and Z′ the total thickness of the formation.
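A minimal sketch of Equation (2) follows, using the Gonghe Basin values quoted later in the paper (T_0 = 6.34°C, q_0 = 114.7 mW/m², average K = 2.87 W/(m·K), average A_0 = 2.28 μW/m³); treating K and A_0 as uniform over the whole column is a simplification for illustration only.

```python
# Sketch of the steady-state conductive geotherm (Eq. 2) used for the
# inferred (D) grade evaluation. Parameter defaults follow the Gonghe
# Basin averages quoted in the paper; uniform K and A0 are assumptions.

def temperature_at_depth(z_m, t0=6.34, q0=0.1147, k=2.87, a0=2.28e-6):
    """T(Z) in degC; q0 in W/m^2, k in W/(m*K), a0 in W/m^3, z in m."""
    return t0 + (q0 / k) * z_m - (a0 / (2.0 * k)) * z_m ** 2

for z in (1000, 3000, 5000):
    print(f"Z = {z} m -> T = {temperature_at_depth(z):.1f} degC")
```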
Controlled grade (C)
The controlled grade (C) HDR resource evaluation is primarily used to define the best target region. Thermophysical characteristics such as density (ρ) and specific heat capacity (C_p) can be acquired from drilling core tests in the assessment region. The temperature field is estimated using a combination of temperature monitoring, electrical methods, and seismic surveys: the temperatures controlled by boreholes are extrapolated according to the geological structure characteristics, and the temperature field is then constructed.
Proved grade (B)
The assessment of proven grade HDR resources is mostly for development purposes. Thorough research is required to enhance evaluation accuracy, and the selection of the various parameters should be better justified. The stratum and fault distribution features are represented in detail using sophisticated 3D geological modeling. The density (ρ) and specific heat capacity (C_p) can be determined from drilling core tests, accounting for the effects of formation temperature and pressure. The temperature field distribution in faulted regions should be adjusted according to the fault zone's water and heat conduction characteristics.
Numerical simulation method for exploitable resource evaluation
Factors affecting exploitable resources of HDR. The appraisal of exploitable resources focuses on the HDR development site, where the geothermal geological conditions have been established and reservoir construction has been completed successfully. The HDR exploitable resources may be estimated using a numerical simulation technique based on a defined well site architecture, mining plan, and other engineering designs. The underlying geological conditions and the engineering conditions are the two components that define the actual exploitable resources. The appraisal of HDR exploitable resources is based on geological parameters: after the reservoir has been hydraulically fractured, drilling, logging, geophysical prospecting, connectivity tests, and other methods are used to obtain information on the formation structure, rock properties, fracture distribution, and temperature field distribution within the HDR development site. These data may be used to create a geothermal geology model for numerical simulation. Engineering conditions are the other key component for evaluating HDR exploitable resources. They cover well-setting variables such as well location, well type, and injection-production interval, as well as mining process variables such as circulation flow rate and injection-production mode. Sanyal et al. discovered that when the fractured thermal storage volume is greater than 0.1 km³, the recovery rate of HDR resources remains constant at 40 ± 7%, regardless of well architecture, fracture spacing, or permeability (Sanyal and Butler, 2005). However, most engineering instances so far have failed owing to "no injection or extraction" or "short circuit, resulting in thermal breakthrough" issues.
The recoverable coefficient was sometimes arbitrarily set (at approximately 2%, 20%, or 40%) in earlier research to estimate the HDR exploitable resources, but the findings were not convincing. The recoverable coefficient estimated by numerical simulation is therefore used exclusively to assess EGS performance; it may also be used to assess the exploitable resources under similar geological conditions.

Numerical simulation process of HDR exploitable resources. HDR exploitable resource assessment is a numerical simulation process intended to approximate reality as closely as possible. Aside from the fundamental geological and engineering requirements, the construction of the geothermal geological model and of the mathematical model is critical. The procedure for evaluating HDR exploitable resources is as follows.
1. Creation of a geothermal geological model. Obtain fundamental geological information such as the HDR mass, lithology, structure, permeability, porosity, thermophysical characteristics, and temperature field through drilling, logging, geophysical prospecting, and connectivity tests, and build a 3D geothermal geological model from these data.
2. Establishment of boundary conditions. Establish the reservoir conceptual model based on cycle test and microseismic monitoring data, and set the external boundary conditions according to the EGS effect range. When conditions allow, a heterogeneous fracture conceptual model should be preferred.
3. Setting of engineering conditions. Well information and mining technique information, such as well position, well type, interval, and circulation flow, are set according to the engineering design.
4. Numerical simulation calculation. Fit measured data, such as the engineering cycle rate, wellhead pressure, and temperature, to the fracture distribution, width, roughness, and other characteristics. If on-site measured data are inadequate, the reservoir injection capacity and impedance of locations with similar geological characteristics can be used for fitting.
It should be noted that the evaluation of HDR exploitable resources is a dynamic process. Existing data may be utilised to forecast the resources that can be extracted in the future, but the numerical simulation model must be adjusted after adequate field observation data have been gathered in order to verify its validity.
Mathematical method. Thermal power, reservoir life, and cumulative exploitable resources are all influenced by the fracture conceptual model. Findings on production temperature and reservoir life from oversimplified conceptual models frequently differ from reality, making them unconvincing (Zhai et al., 2020; Huang et al., 2017; Li et al., 2018). We therefore prefer the DFN model in the simulation process. We treat the reservoir as a dual-medium model comprising the matrix rock mass and cracks, calculate using local thermal nonequilibrium theory, and apply the following assumptions to simulate the energy transfer between rock mass and fluid: (1) Darcy's law applies to the seepage in the matrix and cracks; (2) heat conduction follows Fourier's law, disregarding the effect of thermal radiation; (3) seepage is a single-phase liquid flow with no phase change; (4) no chemical reaction occurs between the fluid and the rock; (5) gravity and capillary forces are ignored.
Local thermal nonequilibrium multiphysics coupling can truly reflect the heat transfer process at the interface between the fluid and the fractured medium. This is achieved by coupling the heat equations in the solid and fluid subdomains through a transfer term proportional to the temperature difference between the fluid and the solid. The corresponding heat equation in the solid is:

θ_p · ρ_s · C_{p,s} · ∂T_s/∂t = ∇·(θ_p · k_s · ∇T_s) + q_sf · (T_f − T_s)    (4)

The corresponding heat equation in the fluid subdomain is:

(1 − θ_p) · ρ_f · C_{p,f} · ∂T_f/∂t + ρ_f · C_{p,f} · u · ∇T_f = ∇·((1 − θ_p) · k_f · ∇T_f) + q_sf · (T_s − T_f)    (5)

where the subscript 's' represents the solid and 'f' the fluid. In the equations, T is the temperature, ρ the density, θ_p the solid volume fraction, C_p the heat capacity at constant pressure, k the thermal conductivity, q_sf the interstitial convective heat transfer coefficient, and u the velocity vector.
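To make the coupling in Equations (4) and (5) concrete, the following is a deliberately simplified 1-D explicit finite-difference sketch: two temperature fields march in time and exchange heat only through the q_sf · (T_s − T_f) term. The grid, velocity, and exchange coefficient are invented for illustration; this is not the paper's 3-D DFN model.

```python
import numpy as np

# Minimal 1-D sketch of local thermal nonequilibrium (Eqs. 4-5): separate
# solid and fluid temperature fields exchange heat via q_sf * (Ts - Tf).
# Geometry, grid and coefficients are illustrative assumptions only.

n, dx, dt = 200, 5.0, 100.0          # cells, cell size (m), time step (s)
theta = 0.97                         # solid volume fraction
rho_s, cp_s, k_s = 2550.0, 750.0, 2.87
rho_f, cp_f = 1000.0, 4186.0
q_sf = 1.0                           # interstitial heat transfer, W/(m^3*K)
u = 1e-4                             # fluid velocity, m/s

Ts = np.full(n, 230.0)               # initial solid temperature, degC
Tf = np.full(n, 230.0)               # initial fluid temperature, degC
for _ in range(5000):
    Tf[0] = 60.0                                     # injection temperature
    lap_s = np.gradient(np.gradient(Ts, dx), dx)     # conduction in solid
    adv_f = u * np.gradient(Tf, dx)                  # advection in fluid
    xchg = q_sf * (Ts - Tf)                          # interphase exchange
    Ts += dt * (theta * k_s * lap_s - xchg) / (theta * rho_s * cp_s)
    Tf += dt * (-rho_f * cp_f * adv_f + xchg) / ((1 - theta) * rho_f * cp_f)

print(f"fluid outlet temperature after run: {Tf[-1]:.1f} degC")
```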
Geological background
The Gonghe Basin is Qinghai Province's third largest basin. With a total area of approximately 1.52 × 10⁴ km², it is roughly diamond-shaped in plan view, narrow in the west and wide in the east. The Yellow River divides the basin into two sections as it flows from southwest to northeast along its short axis. The Gonghe Basin is located in the West Qinling orogenic belt, straddling two tertiary structural units: the Zongwu Longshan-Xinghai Aola trough (a) and the Zeku back-arc foreland basin (b). The Gonghe Basin is composed of sedimentary caprock and a granite basement (Figure 2). The sedimentary caprock mainly comprises the Xining Formation (E-N₁x), Xianshuihe Formation (N₁x), Linxia Formation (N₂l) and Gonghe Formation (Q₁₋₂g); the Longwuhe Formation (T₁₋₂l) and Gulangdi Formation (T₂g) exist in some areas. The granite basement was formed in the Middle and Late Triassic, and its lithology includes granodiorite, monzonitic granite, and granite porphyry (Figure 3). Well GR1, with a depth of 3 705 m, is located on the platform southeast of Qiabuqia town. According to drilling data, well GR1 entered the granite section at a depth of 1 350 m. CHEGS used a distributed optical fiber to measure the temperature of GR1. The temperature of the granite reached 180°C at a depth of 3 300 m and climbed practically linearly with increasing depth. The geothermal gradient was 43.8°C/km, which is typical of a conduction-dominated geothermal system. According to Zhang Senqi, the heat source for the Gonghe Basin might be a partial melting layer in the crust, augmented by heat generated by radioactive materials. The thick overlying sedimentary caprock has strong thermal insulation properties, which allow HDR resources to accumulate heat (Zhang et al., 2018b; Zhang et al., 2020).
Inferred HDR resource (D level) evaluation of the Gonghe Basin
Establishment of the geological model. Because the Gonghe Basin has a single geological boundary, this assessment considers the whole Gonghe Basin when assessing inferred grade (D) HDR resources. According to the classification methodology, the inferred grade (D) HDR resource is the thermal energy stored in granite at depths of less than 10 km and temperatures greater than 180°C. For ease of computation, the Gonghe Basin's sedimentary caprock is treated as glutenite, while the basement below is assumed to be granite. According to geophysical methods such as magnetotellurics and ambient noise seismic imaging, and as confirmed by drilling, the burial depth of the granite top surface is shallow in the east and deep in the west.
The top and bottom elevations of the Gonghe Basin have been unified for ease of computation: the top elevation is 0 km, and the bottom elevation is −10 km. The granite burial depth and stratum lithology are adjusted correspondingly. The processed 3D geological model is divided into equal-size cuboid grid blocks, 1 km long, 1 km wide, and 0.1 km high; the total number of blocks is 1,514,300.

Parameter settings and calculation for evaluation. The evaluation of stationary resources rests on parameter setting; temperature, volume, density, and specific heat capacity are the most important factors. In the Gonghe Basin, 123 typical rock samples were collected, with lithologies ranging from granite to granodiorite to sandstone. The density was measured by a true density meter (3H-2000) with a resolution of up to 0.0001 g/mL. The density of the tested granite ranges from 2 546 to 2 620 kg/m³, and a granite density of 2 550 kg/m³ is used in this evaluation; the variation range is large owing to differences in lithology, and a caprock density of 2 500 kg/m³ is used. The specific heat capacity was measured by a DSC204F1 specific heat meter, with a measuring range of −180 to 700°C and a heating/cooling rate of 0-200 K/min. The specific heat capacity of granite is distributed between 709 and 800 J/(kg·K), and a value of 750 J/(kg·K) is used in this evaluation; the specific heat capacity of sandstone is distributed between 805 and 845 J/(kg·K), and a value of 825 J/(kg·K) is used. The rock heat generation rate is calculated using the following formula proposed by Rybach in 1976:
A = 0.01 · ρ · (9.52 · C_U + 2.56 · C_Th + 3.48 · C_K)

where A represents the radioactive heat generation rate (μW/m³), ρ the density (g/cm³), and C_U, C_Th, and C_K the uranium (μg/g), thorium (μg/g) and potassium (wt%) contents of the rock, respectively. The heat generation rates of the granite samples tested range from 0.778 to 4.10 μW/m³, with an average value of 2.28 μW/m³, which is used in this evaluation. The thickness of the concentrated layer of radioactive elements is taken as 10 km (Lin et al., 2012). The multiyear average temperature in the Gonghe Basin is 6.34°C, and the terrestrial heat flow is 114.7 mW/m² (Zhang et al., 2019b). Based on steady-state heat conduction theory (Equation 3), the temperature field in the Gonghe Basin is calculated; the temperature distribution at the centroids of granite blocks shallower than 10 km is shown in Figure 4. The measured temperatures of the GR1 well fit the inferred temperature curve well, which confirms the reliability of the parameters (Figure 5). Taking 90°C as the lower limit temperature for development, the cumulative calculation shows that the HDR resources within a depth of 10 km in the Gonghe Basin contain 4.076 × 10²² J of thermal energy, equivalent to 1.39 × 10¹² t of standard coal.
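A short sketch of the Rybach (1976) formula follows; the sample contents are hypothetical, chosen only to show that typical granite compositions land inside the 0.778-4.10 μW/m³ range reported above.

```python
# Sketch of Rybach's (1976) radiogenic heat production formula as used above:
# A [uW/m^3] = 0.01 * rho [g/cm^3] * (9.52*C_U + 2.56*C_Th + 3.48*C_K),
# with C_U, C_Th in ug/g and C_K in wt%. Sample contents below are assumed.

def rybach_heat_production(rho_g_cm3, c_u, c_th, c_k):
    """Radiogenic heat production rate in uW/m^3."""
    return 0.01 * rho_g_cm3 * (9.52 * c_u + 2.56 * c_th + 3.48 * c_k)

# Hypothetical granite sample: 4 ug/g U, 15 ug/g Th, 3.5 wt% K.
a = rybach_heat_production(2.55, 4.0, 15.0, 3.5)
print(f"A = {a:.2f} uW/m^3")   # ~2.3, close to the 2.28 uW/m^3 average used
```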
Controlled HDR resource (C level) evaluation of the Qiabuqia geothermal area
Establishment of the geological model. Magnetic anomalies are present in concealed HDR masses. Zhang Senqi used the V2D-depth technique to infer the HDR mass in Qiabuqia from high-precision aeromagnetic survey data, and the Qiabuqia HDR mass is additionally verified by the GR1, GR2, DR3, and DR4 HDR exploration holes (Figures 6 and 7). To estimate the HDR mass distribution range in Qiabuqia, the control range of GR2 was extended by half beyond the HDR mass range delineated by the high-precision aeromagnetic survey data. It stretches 21.2 km east to west and 14.3 km north to south, with a total area of 246.90 km² (Figure 6) (Zhang et al., 2018a).
The reserve resources are computed using the Qiabuqia HDR mass as the assessment object. According to the classification system, the HDR reserve resource is the thermal energy stored in granite at depths of less than 6 km and temperatures greater than 180°C, with survey accuracy of the controlled grade (C). The strata within the defined Qiabuqia HDR mass are divided into two layers, with the sedimentary caprock being glutenite and the basement being granite. Geophysical techniques such as magnetotellurics, high-power time-frequency electromagnetic methods, and 2D seismic surveys have been used to establish the granite burial depth, which has been confirmed by drilling.
The top and bottom elevations of the Qiabuqia HDR mass model have been unified for ease of computation: the top elevation is 0 km, and the bottom elevation is −6 km. The granite burial depth and stratum lithology are adjusted correspondingly. The processed 3D geological model is split into equal-size grid blocks, 100 m long, 100 m wide, and 10 m high; the total number of blocks is 450,300.
Parameter settings and calculation for evaluation. Borehole temperature measurements are crucial in determining the controlled grade (C) HDR reserves. Temperature logging was performed on the major holes in the Qiabuqia HDR mass. The temperature of the granite section varies practically linearly with depth (Figure 8), and the average geothermal gradient reaches 45.5°C/km (Table 1). The deep temperature is inferred from the average geothermal gradient of the granite section in each well and then interpolated using the inverse power ratio (inverse distance weighting) technique to generate the temperature field of the Qiabuqia HDR mass. The parameter values, such as density and specific heat capacity, remain the same as for the inferred grade (D) evaluation. Taking 90°C as the lower limit temperature for development, the cumulative calculation shows that the HDR resources within a depth of 6 km in the Qiabuqia HDR mass contain 2.11 × 10²⁰ J of thermal energy, equivalent to 7.2 × 10⁹ t of standard coal.
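A minimal sketch of the inverse-distance interpolation step is given below. The well coordinates and temperatures are hypothetical placeholders, not the Qiabuqia survey data; the power exponent of 2 is an assumption.

```python
import numpy as np

# Sketch of inverse-distance weighting, as used to interpolate the
# borehole-controlled temperatures into a temperature field. Well
# positions and temperatures below are hypothetical placeholders.

def idw(points, values, query, power=2.0):
    """Inverse-distance-weighted estimate at a query point."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < 1e-9):                  # query coincides with a well
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

wells = np.array([[0.0, 0.0], [5000.0, 2000.0], [3000.0, 6000.0]])  # m
temps_at_4km = np.array([185.0, 192.0, 178.0])                      # degC
print(f"T ~= {idw(wells, temps_at_4km, np.array([2500.0, 2500.0])):.1f} degC")
```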
Exploitable resources evaluation of Qiabuqia EGS site
Establishment of the discrete fracture network model. The majority of cracks in fresh rock masses are caused by geological stresses such as compression, torsion, and tension, and they follow strong statistical rules. According to drilling and geophysical data, the geological structural unit of the Cheji Sag, where the Qiabuqia HDR development site is located, has shallow burial in the east and deep burial in the west. On the east side, the Qiabuqia HDR granite mass is comparable to the Dangjiasi granite mass. We therefore surveyed the fractures of the Dangjiasi granite mass and used them to estimate the fractures in the granite reservoir at the Qiabuqia HDR development site.
We investigated eight fresh sections of the Dangjiasi granite mass, recording a total of 277 fractures. The length, width, mechanical properties and filling conditions of each fracture were recorded, and isodensity maps (Figure 9a) and rose diagrams (Figure 9b) were made. The Dangjiasi granite mass has three groups of dominant fracture strikes, at 30-40°, 90-100° and 160-170°, of which the 30-40° group accounts for the highest proportion. The dip angles of the fractures are concentrated in the low-angle range below 45°, and these spreading characteristics are conducive to communication between wells.
The HDR exploitable resources are heavily influenced by fracture density. Physical modeling tests demonstrate that the fracture pressure of intact granite is more than 90 MPa and that, during hydraulic fracturing, the granite preferentially initiates cracks along natural fissures. On this basis, we describe the fracture density of the granite reservoir at the Qiabuqia HDR development site using the fracture line density of the granite section in well GR1. According to GR1 logging, there are eight fracture zones in the 1 500-3 350 m granite section, with a total length of 72.7 m and a fracture line density of 4.32/km.
A 1 000 × 1 000 × 500 m fracture distribution model was created in the granite section. The model extends east-west, at a burial depth of 3 500-4 000 m. Referring to the fracture occurrence of the Dangjiasi granite mass and the fracture line density of GR1, the average radius of the cracks was set to 300 m, and 36 random cracks with a total area of 6.92 km² were created (Figure 10).
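The following sketch shows one way such a stochastic disc-fracture network could be generated from the surveyed statistics (dominant strikes near 30-40°, 90-100° and 160-170°, dips below 45°, mean radius 300 m). The distribution families, set weights, and scatter parameters are assumptions for illustration, not the paper's generation procedure.

```python
import numpy as np

# Sketch of stochastic disc-fracture generation consistent with the
# surveyed statistics. Distribution choices below are assumptions.

rng = np.random.default_rng(0)
n_frac = 36
strike_sets = np.array([35.0, 95.0, 165.0])    # dominant strike sets, deg
set_weights = np.array([0.5, 0.3, 0.2])        # 30-40 deg set dominates

centers = rng.uniform([0, 0, 3500], [1000, 1000, 4000], size=(n_frac, 3))
strikes = (rng.choice(strike_sets, size=n_frac, p=set_weights)
           + rng.normal(0.0, 5.0, n_frac))     # scatter within each set
dips = rng.uniform(5.0, 45.0, n_frac)          # concentrated below 45 deg
radii = rng.normal(300.0, 50.0, n_frac).clip(min=50.0)

total_area = np.sum(np.pi * radii ** 2)
print(f"{n_frac} fractures, total area = {total_area / 1e6:.2f} km^2")
```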
Initial conditions setting and simulation
The injection capacity and reservoir impedance are the two most important factors in determining reservoir connectivity following hydraulic fracturing. In this simulation, the reservoir performance is constrained using an analogue. The Soultz EGS project in France began commercial power generation in 2013 and is considered the most successful HDR power generation project in the world. The Soultz EGS project lies in the Upper Rhine Graben; the sedimentary layer there is approximately 1 400 m thick, and the reservoir lithology is monzonitic granite, so it shares geological similarities with the Qiabuqia HDR development site. The Soultz EGS drilling depth is 5 000 m, and the bottom-hole temperature is 165°C. After fracturing, the HDR reservoir has an impedance of 0.23 MPa/(kg/s) and an injection capacity of 2-4 L/s/MPa. The Soultz site is therefore used to describe the reservoir properties of the Qiabuqia HDR mass after fracturing.
This simulation covers the same domain as the DFN model, with Well GR1 at its center. The top and bottom surfaces, as well as the lateral boundaries, are set as water- and heat-insulating, ignoring the influence of ground heat flow and adjacent rock masses on heat transfer. The model is initialized at a homogeneous temperature of 230°C, with one injection well and two production wells as the development mode. Well GR1 is the injection well, with a 20 kg/s injection rate and a 60°C reinjection temperature; production wells K2 and K1 lie 300 m to the east and west of GR1, respectively. Ignoring fluid loss during production, the granite density is set to 2 550 kg/m³, the specific heat capacity to 750 J/(kg·K), the porosity to 0.03, and the permeability to 1 × 10⁻¹⁶. Thermal conductivity was measured with a German-made TCS (thermal conductivity scanning) automatic scanner, with a measurement range of 0.2−25 W/(m·K) and an accuracy of ±3%. The thermal conductivity of 22 granite samples from the Gonghe Basin varies from 2.173 to 3.273 W/(m·K); the average value of 2.87 W/(m·K) was adopted in this simulation.
We simulated local thermal nonequilibrium by generalizing the reservoir as a dual-medium model consisting of the matrix rock mass and the fractures; the governing formulas are given in Equations (4) and (5). The simulation was run for 30 years with a fracture aperture of 0.75 mm, a fracture specific surface area of 3, and a fracture convective heat transfer coefficient of 100 W/(m²·K). At steady state, the injection capacity of Well GR1 is 4.46 L/s/MPa and the reservoir resistance is 0.224 MPa/(kg/s), in good agreement with the reservoir parameters of the French Soultz EGS project, which confirms the model's reliability.
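The core of the two-temperature (local thermal nonequilibrium) idea can be sketched without reproducing the paper's Equations (4) and (5), which are not given here. The fragment below treats a single reservoir cell with only the h·a·(Ts − Tf) exchange term, ignoring advection and conduction; the water properties, the unit of the specific surface area (assumed 1/m), and the initial fluid temperature are assumptions.

```python
import numpy as np

h, a = 100.0, 3.0                  # W/(m^2 K); specific surface area (assumed 1/m)
phi = 0.03                         # porosity
Cf = phi * 1000.0 * 4200.0         # fluid volumetric heat capacity, J/(m^3 K) (water assumed)
Cs = (1 - phi) * 2550.0 * 750.0    # rock volumetric heat capacity, J/(m^3 K)

Ts, Tf = 230.0, 60.0               # initial rock / injected fluid temperature, degC
k = h * a * (1.0 / Cf + 1.0 / Cs)  # relaxation rate of the temperature gap, 1/s
t = 3600.0                         # one hour

T_eq = (Cf * Tf + Cs * Ts) / (Cf + Cs)   # common equilibrium temperature (energy conserved)
gap = (Ts - Tf) * np.exp(-k * t)         # exact exponential decay of Ts - Tf
Ts_new = T_eq + gap * Cf / (Cf + Cs)
Tf_new = T_eq - gap * Cs / (Cf + Cs)
print(f"time constant ~ {1 / k:.0f} s; after 1 h: Ts = {Ts_new:.1f}, Tf = {Tf_new:.1f} degC")
```

With these values the local rock-fluid temperature gap relaxes with a time constant of a few hundred seconds, which is why, in the full model, the sustained temperature difference is maintained by fluid advection rather than by slow local exchange.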
Results and discussions
Wells K1 and K2 display different flow, temperature, and thermal energy characteristics during production because of the heterogeneous nature of the DFN model (Figure 11). Under the same bottom-hole production pressure, the output flow rate of Well K1 is approximately 12.7 kg/s while that of K2 is approximately 7.3 kg/s, suggesting that GR1 is better connected to K1. Consistent with the flow difference, K1 produces more heat than K2: the initial thermal powers of Wells K1 and K2 were 9.08 × 10⁶ W and 5.26 × 10⁶ W, respectively. The temperature of the produced fluid drops as production continues, and after 30 years the thermal powers of K1 and K2 fall to 7.98 × 10⁶ W and 4.44 × 10⁶ W, respectively.
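These thermal powers follow directly from P = ṁ·cp·(T_prod − T_inj). The short check below assumes the specific heat of liquid water and a produced-fluid temperature near the initial reservoir temperature of 230°C; it is a sanity check of the reported figures, not the paper's calculation.

```python
cp_w = 4200.0            # J/(kg K), liquid water (assumed)
t_inj = 60.0             # reinjection temperature, degC
for well, m_dot in {"K1": 12.7, "K2": 7.3}.items():
    p = m_dot * cp_w * (230.0 - t_inj)
    print(f"{well}: {p:.2e} W")
# -> K1 ~ 9.07e6 W and K2 ~ 5.21e6 W, close to the reported initial
#    thermal powers of 9.08e6 W and 5.26e6 W.
```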
Although Well K1 has the higher flow rate, it did not show a substantial temperature reduction; on the contrary, its temperature declined more slowly than that of K2. The main explanation is that between K1 and GR1 there are more complicated fractures, a larger heat exchange area, and longer fluid flow pathways (Figure 12).
Taking a temperature drop of 10°C as the lower limit, Well K1 can be mined continuously for 17.7 years, producing 4.99 × 10¹⁵ J of thermal energy, while Well K2 can be mined continuously for 14.1 years, producing 2.29 × 10¹⁵ J. Under continuous mining for 30 years, the wellhead temperature of K1 drops by 20.94°C, that of K2 drops by 24.94°C, and the total thermal energy amounts to 1.28 × 10¹⁶ J. Taking 90°C as the lower limit temperature of HDR resources, the stationary resource is 1.34 × 10¹⁷ J; under the model engineering conditions, the thermal energy mined in 30 years accounts for approximately 9.55% of the stationary resource.
Conclusions
1. This paper proposes a classification method for evaluating HDR geothermal resources. Depending on purpose and technique, HDR geothermal resources may be separated into stationary resources and exploitable resources. Based on survey accuracy, HDR resources can be classified into inferred (D), controlled (C), proven (B), and verified (A) resources; according to geological confidence and economic feasibility, they can be split into vision, reserve, and exploitable resources.
2. A summary of HDR resource evaluation methodologies is provided. The volumetric technique is used to determine stationary resources, with parameter acquisition methods varying with the precision requirements of the different tiers of HDR resources. Numerical simulation methods are mainly used to calculate exploitable resources. The fracture conceptual model has a significant impact on the calculated thermal power, reservoir life, and cumulative exploitable resources; a heterogeneous fracture model should be established first in order to accurately reproduce the energy transfer between rock mass and fluid.
3. The volumetric method is used to evaluate the HDR vision resources of the Gonghe Basin and the reserve resources of the Qiabuqia HDR mass. Taking 90°C as the lower limit temperature for exploitation, the HDR vision resources within a depth of 10 km in the Gonghe Basin are 4.076 × 10²² J, equivalent to 1.39 × 10¹² t of standard coal; the reserve resources within a depth of 6 km in the Qiabuqia HDR mass are 2.11 × 10²⁰ J, equivalent to 7.2 × 10⁹ t of standard coal.
4. The exploitable resources of the Qiabuqia HDR development site were assessed using a numerical simulation approach. A DFN model of the site was created using the fracture line density of Well GR1 and the outcrop fracture statistics of the Dangjiasi granite mass, which is closely related to the Qiabuqia HDR mass. Local thermal nonequilibrium theory was used to determine the exploitable resources, taking the reservoir resistance and injection capacity of the Soultz EGS project as constraints. Under conditions of one injection well and two production wells, a circulating flow rate of 20 kg/s, a reinjection temperature of 60°C, and continuous mining for 30 years, the K1 wellhead temperature drops by 20.94°C, the K2 wellhead temperature drops by 24.94°C, and the total produced heat energy is 1.28 × 10¹⁶ J, accounting for approximately 9.55% of the HDR stationary resources.
5. Because of the heterogeneous nature of the fractured-media model, Wells K1 and K2 produce different amounts of heat during the mining operation. Despite its higher continuous flow rate, K1 did not experience a substantial temperature drop compared with Well K2; its temperature declined more slowly. The main explanation is that between K1 and GR1 there are more complicated fractures, a larger heat exchange area, and longer fluid flow pathways. Reservoir fracture distribution features and the generalization of the fracture model thus have a significant influence on the assessment of HDR exploitable resources. | 7,833 | 2022-03-14T00:00:00.000 | [
"Geology"
] |
MALINTO: A New MALDI Interpretation Tool for Enhanced Peak Assignment and Semiquantitative Studies of Complex Synthetic Polymers
The newly developed MALDI interpretation tool ("MALINTO") allows for the accelerated characterization of complex synthetic polymers via MALDI mass spectrometry. While existing software provides solutions for simple polymers like poly(ethylene glycol), polystyrene, etc., it is limited in its application to polycondensates synthesized from two different kinds of monomers (e.g., diacid and diol in polyesters). In addition to such A2 + B2 polycondensates, MALINTO covers branched and even multicyclic polymer systems. Since the MALINTO software works from input data of monomers/repeating units, end groups, and adducts, it can be applied to polymers whose components are known beforehand or have been elucidated. Using these input data, a list of theoretically possible polymer compositions and the resulting m/z values is calculated, which is then compared to experimental mass spectrometry data. For optional semiquantitative studies, peak areas are allocated according to their assigned polymer composition to evaluate both comonomer and terminating group ratios. Several tools are implemented to avoid mistakes, for example during peak assignment. In the present publication, the functions of MALINTO are described in detail, and its broad applicability to different linear polymers as well as branched and multicyclic polycondensates is demonstrated. Fellow researchers will benefit from the accelerated peak assignment using the freely available MALINTO software and might be encouraged to explore the potential of MALDI mass spectrometry for (semi)quantitative applications.
For branched polyesters, 2- and 3-step syntheses were performed to avoid extensive crosslinking and gelation of the resins. For bPES2, neopentyl glycol, isophthalic acid, and trimethylolpropane were stepwise heated up to 240 °C until the clearing point of the reaction mixture, which indicated nearly full conversion of isophthalic acid. After cooling down to 160 °C, trimellitic anhydride was added for the endcapping reaction. For bPES1, a precondensate of isophthalic acid (85% of total mass) and neopentyl glycol was synthesized, again using the clearing point as an indication of full conversion. In a second step, trimethylolpropane and the remaining 15% of isophthalic acid were reacted; trimellitic anhydride was then added in a third step. All steps were carried out under an inert atmosphere at ambient pressure.
For poly(lactic acid) (PLA) synthesis, 1 g of recrystallized lactide (6.9 mmol) was transferred to a two-neck round-bottom flask connected to a Schlenk line, which had previously been evacuated and purged with argon three times. 0.05 mmol of tin(II) 2-ethylhexanoate (Alfa Aesar, technical purity) was used as catalyst and added as a 0.2 mol L⁻¹ solution in absolute toluene. Methanol (VWR, 100%) was tested for initiation of the ring-opening polymerization and added to the reaction flask after dilution with toluene (1 wt% methanol/lactide). The medium was stirred, heated to 180 °C, and kept at this temperature for 1 h while the sample solidified. After slight cooling, 5 mL of chloroform was added and the polymer was dissolved under reflux. The polymer was then precipitated by dropwise addition of 15 mL of methanol. The precipitate was filtered, washed, and dried in a vacuum drying cabinet at 40 °C. Polystyrene synthesis. Polystyrene was synthesized via anionic polymerization using n-butyl lithium as initiator (Sigma-Aldrich, 1.6 M in hexanes). The procedure was carried out under inert conditions on a Schlenk line. 45 mL of absolute tetrahydrofuran was cooled to −78 °C, and 4 mL (34 mmol) of purified styrene and 2.2 mL (3.5 mmol) of the butyl lithium solution were added subsequently. After 1 h, 10 vol% of the reaction mixture was withdrawn and precipitated in methanol. This intermediate product (PS) was filtered and dried at 60 °C overnight. In the meantime, 1 mL of styrene was added to the reaction medium. After another hour, the reaction was terminated by adding 1.3 mL of ethylene oxide solution (Sigma-Aldrich, 2.5−3.3 M in tetrahydrofuran). The product (PS-OH) was again precipitated in methanol, isolated by centrifugation, and dried.
MALDI-ToF MS.
Samples were analyzed via MALDI mass spectrometry as described in the main text. Variations of solvents, matrices and salts are given in Table S2.
Size exclusion chromatography. Size exclusion chromatography (SEC) of PES1−6 and the branched polyesters was performed in tetrahydrofuran on a setup comprising an HPLC pump (PU-2086 Plus, Jasco), an autosampler (728, Bischoff), and three detectors (UV: UV-975, Jasco, wavelength 230 nm for aliphatic and 260 nm for aromatic samples; refractive index: 200, Perkin Elmer; light scattering: MiniDAWN, Wyatt Technology). Phenogel columns with pore sizes of 50, 500, and 10⁴ Å were used for separation at 40 °C. Samples were dissolved in THF at concentrations of 2−3 mg mL⁻¹. Calibration was performed with polystyrene standards. The molecular masses of the samples (number average Mn and mass average Mw) as well as the polydispersity index were calculated as polystyrene equivalents.
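The reported Mn, Mw, and polydispersity index follow the standard moment definitions. The sketch below uses an illustrative discrete distribution (not sample data) to show the arithmetic behind these averages.

```python
import numpy as np

def molar_mass_averages(n_i, m_i):
    """Number average Mn, mass average Mw, and dispersity Mw/Mn for a
    discrete distribution of n_i chains with molar mass m_i (g/mol)."""
    n_i = np.asarray(n_i, dtype=float)
    m_i = np.asarray(m_i, dtype=float)
    mn = (n_i * m_i).sum() / n_i.sum()
    mw = (n_i * m_i ** 2).sum() / (n_i * m_i).sum()
    return mn, mw, mw / mn

mn, mw, pdi = molar_mass_averages([10, 30, 10], [1000, 2000, 4000])
print(f"Mn = {mn:.0f} g/mol, Mw = {mw:.0f} g/mol, PDI = {pdi:.2f}")
```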
SI: RESULTS
Example 1: A2+B2 homo- and copolyesters. Figure S1 shows a detail of a ¹H NMR spectrum of a copolyester synthesized with 1,10-decanediol (DD), neopentyl glycol (NPG), and isophthalic acid (IPA). The degree of esterification for long aliphatic monomers such as decanediol was estimated from two separated signals for the D2 protons. Before esterification, these protons were found in the region of 1.65−1.50 ppm; after esterification, a peak around 1.85−1.65 ppm appeared. The latter was compared to the mono- and diester peaks of neopentyl glycol (diester N2II: 1.20−1.05 ppm; monoester NI: 1.05−0.95 ppm). Since the degree of esterification of a monoester is only 50%, half of the respective integral is used for the calculation of xDD, as shown in Equation S1. xDD was used for comparing MALDI with ¹H NMR results and for investigating the reactivities of 1,10-decanediol and neopentyl glycol. The MALDI mass spectra of the NPG/DD−IPA copolyester PES4, including the intermediate products after 2 h (03) and 3.7 h (06) of reaction time, are shown in Figure S2. While the molecular mass distributions shifted to higher m/z values during the course of the reaction, different kinds of polyester species were also found in the MALDI spectra. At the beginning, decanediol appears to be the predominant diol in the sample composition, although DD probably reduced the ionization efficiency of the analyte (Figure 5, main text), leading to an underestimation of xDD. Additionally, different ratios of terminating groups were observed. Towards the end of the reaction, doubly carboxyl-terminated chains dominated the spectra owing to the excess of acid used in the synthesis, while both mixed and doubly hydroxyl-terminated species were found at the beginning of the polycondensation process. Ring formation only occurred at advanced reaction times of 6 h or longer.
While the degree of esterification and thus comonomer ratios could be determined for many polyesters using ¹H NMR, peak overlap prevented the necessary deconvolution for several other systems. One example was copolyesters containing 1,4-cyclohexanedicarboxylic acid (CHDA), as described in the main text. In contrast to ¹H NMR, MALDI measurements revealed the polyester composition and thus the incorporation of CHDA compared to adipic acid (ADPA). By performing such investigations during the course of a polycondensation reaction, a higher reactivity of adipic acid was observed, while CHDA was only slowly incorporated into the polyester structure (Figure S3). Due to different ionization efficiencies, comonomer ratios obtained from MALDI MS might not represent absolute values; this is, however, not relevant in the present example, which illustrates a trend during the polyesterification of the same monomer mixture.
Example 2: Branched copolyesters and endcapping. Besides the crucial advantage of distinct peaks for branched and endcapped polyesters in MALDI mass spectrometry, a good resolution can only be obtained in a limited spectral width, which does not necessarily represent the overall polyester composition. Thus, comparison of branched polyesters is only recommended for similar systems, and samples should additionally be investigated via size exclusion chromatography. Size exclusion chromatograms of the discussed bPES are shown in Figure S4. While the peak patterns vary in the low molecular weight region, which is mostly covered by the MALDI mass spectra (600−4000 Da), the main peak shapes are similar, as are the calculated molecular weights. In addition to the findings presented in the main text, this confirms the suitability of the MALDI method.
Example 4: AB polyesters and chain-growth polymers. A prominent example of an AB polyester based on a hydroxycarboxylic acid is poly(lactic acid). As for other polycondensates, the mass of the repeating unit equals the monomer mass minus water. However, no second monomer with a different kind of functionality is required, because the monomer already carries both a carboxylic acid and a hydroxyl group. Therefore, this class of polycondensates can be treated similarly to chain-growth polymers like polystyrene. Since polystyrene forms via a chain-growth polymerization reaction, the repeating unit mass equals the monomer mass. The functionality fields are both filled with the number of double bonds present in the monomer (e.g., 1 for styrene, 2 for divinylbenzene). A screenshot of the input data is given in Figure S5.
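The bookkeeping that such a tool automates can be sketched for the PLA case. The Python fragment below is not MALINTO's actual code; it only illustrates the general principle of enumerating theoretical m/z values from a repeating unit, an initiator-derived end group, and an adduct. The monoisotopic masses and the assumption of [M+Na]⁺ detection are the author's illustrative choices.

```python
# Monoisotopic masses (Da); the PLA repeating unit is lactic acid minus
# water (C3H4O2), the end group is the initiating alcohol ROH added
# across the chain ends, and a sodium adduct is assumed for [M+Na]+.
REPEAT = 72.0211
END_GROUPS = {"methanol": 32.0262, "propanol": 60.0575}
NA_ADDUCT = 22.9898

def theoretical_mz(n_repeat, initiator):
    """m/z of a linear, sodiated PLA chain with n_repeat repeating units."""
    return n_repeat * REPEAT + END_GROUPS[initiator] + NA_ADDUCT

peaks = [(theoretical_mz(n, ini), n, ini)
         for n in range(20, 23) for ini in END_GROUPS]
for mz, n, ini in sorted(peaks):
    print(f"n = {n:2d}, {ini:8s}: m/z = {mz:.2f}")
```

Matching such a theoretical list against the experimental peak list, within a mass tolerance, is what yields the peak assignments and the end-group statistics discussed below.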
The MALDI mass spectra and end group statistics of a poly(lactic acid) and a polystyrene sample are given in Figures S6 and S7, respectively. Poly(lactic acid) was synthesized via ring-opening polymerization of lactide. Although methanol was used as the initiator, MALDI MS revealed a second series of PLA initiated with propanol. Residues of 2-propanol are expected to cause these byproducts, because this solvent was used for the recrystallization of the monomer. Although the ratio of the initiators does not necessarily represent absolute values, as demonstrated for the copolyesters, it is constant for chain lengths between 25 and 50 repeating units, which confirms simultaneous initiation. The lack of propanol-initiated PLA at lower chain lengths can be explained by the decreasing signal intensity, which affects the lower-concentrated species more severely until the peak area can no longer be determined.
Figure S4: Size exclusion chromatograms of branched polyesters bPES1 (a - before endcapping, b - after endcapping) and bPES2. The peak around 10.3 mL appears after endcapping and is caused by unreacted trimellitic anhydride/acid.
Figure S5: Input data for chain-growth polymers such as polystyrene.
A different behavior was observed for a polystyrene sample synthesized via anionic polymerization and subsequent termination with ethylene oxide. The ratio of successfully OH-terminated chains increased with the chain length. This can be attributed to the very sensitive anionic polymerization, which is terminated by any kind of impurity, for example moisture. If chains are terminated accidentally via protonation, the polystyrene can neither react with further monomers to increase the molecular weight nor react with the later introduced terminating agent. Thus, the trend suggests efficient termination, while non-functionalized polystyrene is present due to premature termination.
Figure S6: MALDI mass spectra of A) poly(lactic acid) with methanol and propanol initiation, and B) polystyrene with proton and ethylene oxide termination. Structures and statistics are given in Figure S7.
Figure S7: End group statistics for A) poly(lactic acid) synthesized via ring-opening polymerization of lactide using methanol as initiator and for B) polystyrene synthesized via anionic polymerization using n-butyl lithium as initiator and ethylene oxide as terminating agent.
"Materials Science"
] |
Biological effects of sub-lethal doses of glyphosate and AMPA on cardiac myoblasts
Introduction: Glyphosate is the active compound of different non-selective herbicides and the most widely used agricultural pesticide worldwide. Glyphosate and AMPA (one of its main metabolites) are common pollutants of water, soil, and food sources such as crops, and they can be detected in biological samples from both exposed workers and the general population. Although glyphosate acts as an inhibitor of the shikimate pathway, which is present only in plants and some microorganisms, its safety in mammals is still debated. Acute glyphosate intoxications are correlated with cardiovascular/neuronal damage, but little is known about the effects of chronic exposure. Methods: We evaluated the direct biological effects of different concentrations of pure glyphosate/AMPA on a rat-derived cell line of cardiomyoblasts (H9c2) in acute (1−2 h) or sub-chronic (24−48 h) settings. We analyzed cell viability/morphology, ROS production, and mitochondrial dynamics. Results: Acute exposure to high doses (10 mM and above) of glyphosate and AMPA triggers immediate cytotoxic effects: reduced cell viability, increased ROS production, and alterations of morphology and mitochondrial function. When exposed to lower glyphosate concentrations (1 μM–1 mM), H9c2 cells showed only a slight variation in cell viability and ROS production, while mitochondrial dynamics were unvaried; moreover, the phenotype was completely restored after 48 h of treatment. Surprisingly, sub-chronic (48 h) treatment with low concentrations (1 μM–1 mM) of AMPA led to a late cytotoxic response, reflected in a reduction in H9c2 viability. Conclusion: Understanding the extent of human exposure to these molecules remains pivotal for a critical view of the available data.
Introduction
Glyphosate [IUPAC name N-(phosphonomethyl)glycine] is a synthetic phosphonic amino derivative of glycine that disrupts the shikimate pathway by inhibiting the activity of 5-enolpyruvylshikimate-3-phosphate (EPSP) synthase. This metabolic pathway is used by plants and several microorganisms for the biosynthesis of folate and aromatic amino acids (Bai and Ogbourne, 2016). Glyphosate (Gly) is the active compound of a large share of non-selective herbicides (glyphosate-based herbicides, GBHs) and has been the most used worldwide since the mid-1970s (Torretta et al., 2018).
Gly is absorbed through leaves and stems and is transported from the roots to the edible parts (Tong et al., 2017). In agriculture, genetically modified Gly-resistant crops (such as soybean, cotton, and corn) are extensively used and, because of their resistance, they accumulate Gly at high concentrations (Xu et al., 2019). Once applied, Gly undergoes degradation mainly by a process known as mineralization, which yields different byproducts, with aminomethylphosphonic acid (AMPA) as the main metabolite. The kinetics of this process are highly dependent on soil pH and mineral concentrations. Other processes that determine the fate of Gly are immobilization and leaching: the former leads to soil adsorption/accumulation, while the latter results in water contamination (Bai and Ogbourne, 2016). Gly and AMPA are highly soluble in water, and their persistence varies with water conditions, with half-lives ranging from a few days to several weeks (Tomlin, 2009; Grandcoin et al., 2017; ATSDR, 2020; Goncalyes et al., 2020). In soil, Gly and AMPA accumulate with considerable persistence, with half-lives depending on factors such as pH, salinity, and microbial composition, spanning from a few days up to about a year (Bai and Ogbourne, 2016; Bento et al., 2016; Domínguez et al., 2016; Grandcoin et al., 2017; ATSDR, 2020).
Given the massive use of GBHs, Gly and AMPA are frequently detected in different water and food samples and are classified as pollutants (Bai and Ogbourne, 2016; Bonansea et al., 2017; Silva et al., 2018; Xu et al., 2019; Okada et al., 2020; Marques et al., 2021; Pelosi et al., 2022). Their constant presence represents not only an ecological burden but also a potential indirect threat to both animal and human health. Gly and AMPA have, in fact, been found in the urine of occupationally or para-occupationally exposed workers (from 0.26 to 73.5 μg/L) and of the general population (from 0.16 to 7.6 μg/L) (Krüger et al., 2014; Niemann et al., 2015; Gillezeau et al., 2019; Perry et al., 2019; Mesnage et al., 2022a). However, this type of report suffers from inconsistent technical approaches that do not allow a reliable comparison, mostly because the available studies are based on very different methodologies for Gly and AMPA quantification (gas chromatography, liquid chromatography, or ELISA) (Valle et al., 2019). Liquid chromatography is the analytical technique of choice for glyphosate determination because of its flexibility and availability in different types of laboratories. It can be coupled with different detector types (e.g., ultraviolet-visible, fluorescence, mass spectrometry), many of which are applicable to Gly quantification. Each technique requires a different degree of technical skill and a substantially different investment, and each can reach a different level of sensitivity (Moldovan et al., 2023). Hence, more accurate and standardized procedures are needed for reliable and repeatable measurements of Gly and AMPA concentrations in biological samples and, therefore, for an accurate evaluation of the extent of exposure.
Despite its selective mechanism of action, Gly has been proven to have either acute or chronic toxicity in different off-target non-mammalian animal species, such as amphibians, annelids, arthropods, fishes, and birds (Antón et al., 1994; Contardo-Jara et al., 2009; Roy et al., 2016; Gill et al., 2018; Jin et al., 2018). However, these effects were more severe when animals were exposed to Gly formulations rather than to the molecule alone, suggesting that the adjuvants (such as surfactants) act in synergy, amplifying the toxicity.
To date, the safety of Gly in mammals is still under debate. Acute intoxications due to GBH ingestion are reported to strongly affect the cardiovascular system (Bradberry et al., 2004; Gress et al., 2015; Brunetti et al., 2020; Hu et al., 2021), as well as to cause gastrointestinal and respiratory symptoms, hypotension, and altered consciousness (Lee et al., 2000; Bradberry et al., 2004); however, these effects are due to very high levels of Gly and adjuvants, are consistent with accidental intake, and do not reflect the low, although daily, exposure of the general population. The long-term effects of chronic exposure to Gly and AMPA are not clear. Some in vitro studies on different mammalian cell lines showed Gly (or its formulations) to be genotoxic (Benachour and Séralini, 2009; Martini et al., 2012; Mesnage et al., 2013; Townsend et al., 2017; Santovito et al., 2018; Mesnage et al., 2022b), cytotoxic (Townsend et al., 2017; Vanlaeys et al., 2018; Hao et al., 2020; Martínez et al., 2020), and reprotoxic (Gasnier et al., 2009; Clair et al., 2012; De Liz Oliviera Cavalli et al., 2013; Anifandis et al., 2017; Stur et al., 2019; Hao et al., 2020; Jarrell et al., 2020; Cao et al., 2021; Mohammadi et al., 2022). Gly toxicity is usually associated with oxidative stress and dysfunctional mitochondrial dynamics and bioenergetics. Sensitivity to Gly seems to be cell specific; only a few studies demonstrated Gly toxicity at concentrations below the human Acceptable Daily Intake (1.0 mg/kg) (Santovito et al., 2018) and unrelated to the adjuvants present in its formulations.
In the present work, we evaluated the direct biological effects of different concentrations of pure Gly or AMPA on a rat-derived immortalized cell line of cardiomyoblasts (H9c2), recognized as a valuable tool for investigating the in vitro effects of toxic factors on immortalized myocardial and skeletal muscle cells (Branco et al., 2015; Bouleftour et al., 2021; Onódi et al., 2022).
In the first part of the study, we simulated an acute exposure to high levels (10−20 mM) of Gly or AMPA. We then shifted to lower concentrations (1 μM−1 mM) in order to identify a sub-lethal range and mimic the biological effects of acute and sub-chronic treatments. We evaluated changes in cell viability, morphology, ROS production, and mitochondrial distribution and mass.
MTT
The MTT solution was freshly prepared on the day of the experiment by dissolving the powder at 5 mg/mL in sterile phosphate-buffered saline (PBS; Sigma-Aldrich).
Glyphosate, AMPA and NAC
Stock solutions were freshly prepared on the day of the experiments by dissolving the powders in serum-free cell culture medium. The stock solutions were then diluted in complete cell culture medium to reach the working concentrations.
DCF-DA
The stock solution was prepared by dissolving the powder in sterile dimethyl sulphoxide (DMSO; Sigma-Aldrich) and stored at −20°C in the dark. The stock solution was diluted in sterile PBS with Ca²⁺/Mg²⁺ to reach the working concentration.
Cell viability
H9c2 cells were seeded in 96-well plates at 5 × 10⁴ cells/well and kept in an incubator for 24 h. Cells were then starved overnight in DMEM with 2% FBS and treated with different Gly or AMPA concentrations for different times. Where indicated, cells were pretreated for 1 h with NAC (100 µM). After the treatments, the medium was replaced, 10 µL of MTT solution were added to each well, and the plates were incubated for 3 h at 37°C. The medium was then discarded and the purple formazan crystals were dissolved in 100 µL of DMSO. The optical density was measured in a microplate reader (Model 680; Bio-Rad) at 570 nm. The experiment was performed in technical and biological triplicate.
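The viability percentages reported in the Results are computed from these optical densities by normalization to untreated controls. The sketch below is a minimal, assumed version of that step (including an optional blank subtraction that the protocol does not explicitly mention); the numbers are illustrative, not measured data.

```python
import numpy as np

def viability_percent(od_treated, od_control, od_blank=0.0):
    """Percent viability from MTT absorbances at 570 nm, normalized to the
    mean of untreated controls after optional blank subtraction."""
    t = np.asarray(od_treated, dtype=float) - od_blank
    c = np.asarray(od_control, dtype=float) - od_blank
    return 100.0 * t.mean() / c.mean()

# Illustrative triplicate optical densities (not measured data):
print(f"{viability_percent([0.42, 0.40, 0.44], [0.61, 0.63, 0.59]):.1f} %")
```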
Morphology
Cells were plated in Petri dishes and kept in complete medium for 24 h to allow adhesion. After the desired confluence (70%−90%) was reached, samples were treated with 10 or 20 mM glyphosate or AMPA for 24 h (t24) or kept in culture medium. After the treatments, cells were washed with warm sterile PBS with Ca²⁺/Mg²⁺ and the medium was replaced with fresh medium. All samples were observed under an optical microscope (Axiovert 200; Zeiss) at t0 or t24 with a 63× lens. Images were acquired through the Infinity Analyze software (Lumenera Corporation). At least five fields per sample were analyzed. The experiment was performed in technical triplicate.
Transmission electron microscopy
H9c2 cells were plated in Petri dishes and kept in culture until reaching 80% confluence. Cells were then treated with 10 mM Gly for 1 h or kept in culture medium (control). Cells were gently washed with warm sterile PBS without Ca²⁺/Mg²⁺, detached with trypsin/EDTA 0.05%/0.02% (PAN Biotech), collected in tubes, and centrifuged for 5 min at 3,000 rpm. The supernatant was discarded and the pellet was fixed in 1% paraformaldehyde (Merck, Darmstadt, Germany), 1.25% glutaraldehyde (Fluka, St Louis, MO, United States), and 0.5% saccharose in 0.1 M Sörensen phosphate buffer (pH 7.2) for 2 h. For resin embedding, samples were post-fixed in 2% osmium tetroxide (SIC, Società Italiana Chimici) for 2 h and dehydrated in ethanol (Sigma-Aldrich) from 30% to 100% (5 min per passage). After two passages of 7 min in propylene oxide and one passage of 1 h in a 1:1 mixture of propylene oxide (Sigma-Aldrich) and Glauert's mixture of resins, samples were embedded in Glauert's mixture of resins (equal parts of Araldite M and Araldite hardener HY 964; Sigma-Aldrich). 0.5% of the plasticizer dibutyl phthalate (Sigma-Aldrich) was added to the resin mixture. In the final step, 2% of accelerator 964 was added to the resin to promote polymerization at 60°C. Ultra-thin serial sections (70 nm thick) were cut using an Ultracut UCT ultramicrotome (Leica Microsystems, Wetzlar, Germany), stained with a solution of 4% UAR-EMS uranyl acetate replacement in distilled water, and analyzed using a JEM-1010 transmission electron microscope (JEOL, Tokyo, Japan) equipped with a Mega-View-III digital camera and a Soft-Imaging-System (SIS, Münster, Germany) for computerized image acquisition.
For mitochondria quantification, 4 ultra-thin sections spaced 50 µm apart were considered for each experimental group at a magnification of 30,000×. A total of 50 cells per experimental group were analyzed, and the percentages of impaired and unimpaired mitochondria were estimated on the basis of morphological features such as mitochondrial shape, cristae morphology, and evidence of swelling.
ROS measurements
DCFH-DA is a non-fluorescent, cell-permeable molecule. It is hydrolyzed intracellularly to dichlorofluorescein (DCFH), which is retained in the cell because it can no longer cross cell membranes. In the presence of H₂O₂, DCFH is oxidized to the highly fluorescent DCF. 4 × 10³ cells/well were seeded in 96-well plates and kept in an incubator overnight to allow adhesion. Cells were treated with different concentrations of Gly or AMPA for 1 or 2 h. After the treatments, cells were gently washed twice with warm PBS with Ca²⁺/Mg²⁺. 100 µL/well of 10 µM DCF-DA was added, and the plates were incubated for 45 min at 37°C, covered with aluminum foil. Cells were then washed twice with warm PBS with Ca²⁺/Mg²⁺. Fluorescence intensity was measured at ex 485 nm / em 535 nm with a microplate reader (Infinite 200; Tecan). The experiment was performed in technical and biological triplicate. A control lane with only cells (no DCF) was always included to subtract cellular autofluorescence.
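The fold-change values reported below follow from these readings after subtracting the no-probe lane. The fragment below is a minimal sketch of that computation under the stated protocol; the fluorescence values are illustrative placeholders, not measured data.

```python
import numpy as np

def ros_fold_change(f_treated, f_control, f_no_dcf):
    """DCF fluorescence fold change over untreated controls, after
    subtracting the mean autofluorescence of the no-probe (no DCF) lane."""
    auto = np.mean(f_no_dcf)
    t = np.asarray(f_treated, dtype=float) - auto
    c = np.asarray(f_control, dtype=float) - auto
    return t.mean() / c.mean()

# Illustrative readings (arbitrary fluorescence units, not measured data):
print(f"{ros_fold_change([5200, 5400, 5100], [1300, 1250, 1350], [150, 160]):.2f}-fold")
```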
Mitochondrial staining
MitoTracker Green FM™ (MTG; Thermo Fisher) is a fluorescent probe that stains mitochondria independently of their metabolic activity. 5 × 10³ cells/well were plated in 24-well plates in complete DMEM and kept overnight in the incubator. Cells were washed with sterile warm PBS with Ca²⁺/Mg²⁺ and treated with different concentrations of Gly or AMPA for 2 or 24 h. After the treatments, cells were gently washed with sterile warm PBS with Ca²⁺/Mg²⁺. 100 nM MTG was added to each well and the plates were incubated for 30 min in the dark at 37°C. Samples were washed with warm sterile PBS with Ca²⁺/Mg²⁺ and observed under a fluorescence microscope (Axiovert 200; Zeiss) with a 40× magnification lens.
Images were acquired through the Infinity Analyze software (Lumenera Corporation) at a resolution of 480 × 360 pixels. At least five fields per sample were analyzed. The experiment was performed in technical triplicate.
Statistical and computer analysis
Statistical analysis was performed using GraphPad Prism® (version 9.00; GraphPad Software). Data are expressed as mean ± SD. Differences between groups were analyzed with one-way or two-way ANOVA, the Kruskal-Wallis test, or the Mann-Whitney test, as appropriate. Statistical significance was set at p < 0.05.
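For readers without GraphPad, the same one-way ANOVA can be reproduced in open tooling. The sketch below uses SciPy on illustrative triplicates; it is a stand-in for the Prism analysis, not the study's actual data.

```python
from scipy import stats

# One-way ANOVA on illustrative viability triplicates (percent of control):
control = [100.0, 98.0, 102.0]
gly_10mM = [72.0, 68.0, 70.0]
gly_20mM = [12.0, 9.0, 11.0]

f_stat, p_value = stats.f_oneway(control, gly_10mM, gly_20mM)
print(f"F = {f_stat:.1f}, p = {p_value:.3g}")   # significant if p < 0.05
```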
Effects of high doses of glyphosate and AMPA-Acute exposure
In order to evaluate whether an acute exposure to Gly or AMPA determines changes in cell viability in our cell model, we performed an MTT assay.
After 2 h, Gly treatment diminished H9c2 viability in a concentration-dependent manner, with the most dramatic effect at the highest dose. As shown in Figure 1, the 10 and 20 mM treatments caused 30% and 90% decreases in cell viability, respectively (Figure 1A). At equal doses, AMPA decreased cell viability by 20%−30% (Figure 1A).
In light of the observed cytotoxic effects, and considering that Gly is often associated with oxidative stress in the literature (Sardão et al., 2009; Kwiatkowska et al., 2014; Anifandis et al., 2017; Burchfield et al., 2019; Cao et al., 2021), we performed ROS measurements on H9c2 cells.
At 10 mM, there was a slight increase in ROS production compared to the control, without substantial differences between the Gly and AMPA groups (Figure 1B). Treatment with 20 mM Gly, instead, caused a 4-fold increase in ROS production (Figure 1B), which can explain the dramatic loss of cell viability (Figure 1A). This potent effect was not observed in the 20 mM AMPA-treated group, whose ROS levels were comparable to those of the 10 mM group (Figure 1B).
After 24 h (t24), signs of membrane blebbing and cell shrinkage were still present in the Gly-treated group (Figure 2, bottom panels); many rounded, floating cells were clearly visible in the plates at 20 mM, together with strong signs of cytoplasmic cavitation. The same morphological alterations were not observed in the AMPA-treated group (Figure 2, bottom panels).
After the analysis of phenotypical changes (Figures 1, 2), additional MTT and DCF-DA assays were performed over a shorter time range, focusing on the 10 mM Gly treatment, whose effects were not too deleterious for the selected cell model.
Interestingly, cell viability did not differ between the 1 h and 2 h treatment groups (Figure 3A), while ROS production was significantly higher after 1 h (Figure 3B), confirming an early response of H9c2 cells to these levels of Gly exposure.
As additional confirmation, both a decrease in cell viability (≈20%, Figure 4A) and an increase in ROS production (≈1.5-fold, Figure 4B) were already significant in H9c2 cells after 5 min of 10 mM Gly treatment. However, the largest effect was reached after 1 h (Figures 3, 4).
Given the significant and rapid production of ROS, an involvement of Gly-driven mitochondrial functional impairment was postulated. Therefore, H9c2 cells were treated with 10 mM Gly for 1 h and analyzed by transmission electron microscopy. This analysis revealed healthy mitochondria with an intact double-membrane structure and easily detectable cristae and cristae space in the control group (Figures 5A, C), whereas several swollen mitochondria without cristae were detected in the Gly-treated group (Figures 5B, D). Furthermore, the number of perinuclear mitochondria was quantified. We defined two populations: 1) "healthy mitochondria" (HM), showing normal morphology, cristae structure, and intact membranes; 2) "damaged mitochondria" (DM), showing swelling and loss of cristae. As shown in Figure 5E, the percentage of damaged mitochondria was significantly higher in the Gly-treated group than in the control, potentially explaining the observed cytotoxic effects.
Effects of medium-to-low doses of glyphosate and AMPA-Acute exposure
The acute exposure of H9c2 cells to medium (1 mM) and very low (1 µM) doses of Gly produced effects similar to those described above (Figure 1), although to a lesser extent. The treatments caused a 10%−15% decrease in cell viability (Figure 6A) and a 1.1- to 1.2-fold increase in ROS production (Figure 6B).
The antioxidant NAC, although effective in lowering ROS production (Figure 6B), was not able to totally restore cell viability (Figure 6A).
After observing the Gly- and AMPA-induced production of reactive oxygen species, we tested whether there were variations in mitochondrial mass and distribution. To do so, we probed mitochondria with the fluorescent molecule MitoTracker Green FM™ after 2 or 24 h of Gly or AMPA treatment.
As shown in Figures 7A, B, mitochondrial distribution appeared homogeneous and no variations in fluorescence intensity were detected, suggesting that both mitochondrial dynamics and mass were preserved. However, we cannot totally exclude that the inability to detect any relevant change is associated with a limitation of the technique used, which has limited resolution. Moreover, the probe stains all mitochondria independently of their activity, so it was not possible to distinguish healthy and damaged populations.
Figure 2: Morphology. The figure shows representative fields of H9c2 cells treated with 10 or 20 mM of glyphosate (GLY) or AMPA for 24 h. Images were acquired through a camera connected to an inverted microscope at the start (t0, top panels) and at the end (t24, bottom panels) of the treatments with a 63× lens (scale bar = 10 µm).
Effects of medium-to-low doses of glyphosate and AMPA-Sub-chronic exposure
Given the modest effects of Gly on ROS production (Figures 6A, B), and since there were no changes in mitochondrial distribution and mass after 24 h of Gly (Figure 7A) or AMPA (Figure 7B) exposure, we hypothesized that H9c2 cells were able to overcome the injury. To verify this hypothesis, we tested the viability of H9c2 cells after prolonged exposure (24 and 48 h) to low doses (1 μM−1 mM) of Gly or AMPA.
As expected, after 24 h of exposure to low doses of Gly, cell viability was totally rescued, except at the 1 mM dose (Figure 8A). After 48 h, the control phenotype was restored at all doses (Figure 8B).
As regards the AMPA-treated group, after 24 h cell viability was comparable to that of control cells (Figure 8A). Surprisingly, after 48 h of exposure, cell viability decreased by ≈40% at all doses (Figure 8B).
Discussion
Gly is considered an environmental pollutant as the active compound of many non-selective herbicides widely used worldwide over the last 50 years (Torretta et al., 2018). As a matter of fact, traces of Gly and AMPA (its main degradation product) are commonly detected in samples of water, soil, and food (Bai and Ogbourne, 2016; Bonansea et al., 2017; Silva et al., 2018; Xu et al., 2019). This diffuse contamination leads to constant exposure, representing both an ecological and a health concern for humans and animals. Despite its plant-specific mechanism of action, Gly has been proven to have either acute or chronic toxicity in different animal species, including mammals.
Glyphosate effects
At high doses, Gly treatment causes a great reduction in myoblast viability after 2 h (Figure 1A). The response appears very early, since the 10 mM treatment reduced cell viability already after 5 min (Figure 4A). Furthermore, cell shrinkage and membrane blebbing were already visible soon after the application of the treatments (Figure 2, top panels), and signs of cell damage were still present after 24 h (Figure 2, bottom panels). Coupled with the reduction in cell viability, these morphological alterations suggest an involvement of apoptotic pathways (Sardão et al., 2009; Gui et al., 2012; Zhang et al., 2018; Noritake et al., 2020). Benachour and Séralini (2009) showed that in vitro treatment with pure Gly caused apoptosis via caspase (cas)-3 and -7 activation, already after 6 h, in three different human cell lines. A Gly-dependent increase in cas-3, -8, and -9 activity was also recently confirmed in human peripheral blood mononuclear cells (hPBMCs) (Kwiatkowska et al., 2020). Moreover, in a neuroblastoma cell line (SH-SY5Y), 5 mM Gly treatment altered the expression of different apoptosis-related genes such as BAX, BCL2, CASP3, and CASP9 (Martínez et al., 2020).
The toxic effects we observed were related, at least in part, to ROS production and mitochondrial abnormalities. Mitochondria are, in fact, key players in maintaining cellular redox status and homeostasis. Upon a toxic stimulus, mitochondria may trigger an apoptotic response through cytochrome c release followed by activation of the cas-9-dependent pathway (Orrenius, 2004). A dose of 10 mM Gly caused a large production of ROS already after 5 min (Figure 4B), reaching a peak after 1 h (Figure 3B). In addition, 1 h of Gly treatment rapidly provoked mitochondrial disruption (Figures 5A, B). This is in line with findings in hPBMCs, in which 4 h of in vitro Gly treatment, from 0.05 mM, caused a significant reduction in mitochondrial membrane potential (ΔΨm) and a consistent ROS production; these effects were markedly increased at 5 mM (Kwiatkowska et al., 2020). H9c2 viability after 1 or 2 h of Gly exposure was comparable (Figure 3A), altogether suggesting that the damage could occur during the first hour. However, this remains speculative, since we did not check these data over a longer time window for this range of Gly concentrations. The same drastic effects were not detected at lower concentrations (1 μM−1 mM), at which there was only a slight (although significant) variation in cell viability (Figure 6A) and ROS production (Figure 6B) after acute treatment. Similar results were obtained by Kim et al. (2013): the researchers found that treatment with pure Gly up to 10 µM did not alter H9c2 features in terms of caspase activation, cell morphology, or ΔΨm. As further confirmation of the low toxicity, the sub-chronic exposure (24 or 48 h) of H9c2 cells to low doses of Gly showed a total rescue of the phenotype in terms of cell viability (Figures 8A, B) and no variations in mitochondrial dynamics (Figure 7A) or cell morphology (data not shown), suggesting that the cells were able to recover from the damage. A similar behavior has already been reported by Townsend et al. (2017), who demonstrated that Gly is lethal to Raji cells (a line of lymphoblast-like cells) at concentrations above 10 mM, while no cytotoxic effects were observable at concentrations at or below 100 μM. Furthermore, in their study, acute (30−60 min) Gly treatment at concentrations between 1 and 5 mM induced significant DNA damage, which was totally recovered after 2 h. Overall, our results are not in contrast with what has previously been reported in the literature. Gly appears toxic, on average, at or above 1 mM in different mammalian and non-mammalian cell types, while at low doses it is relatively safe. The toxicity mechanisms seem to be related to oxidative stress, induced by mitochondrial dysfunction or disruption of antioxidant systems (Contardo-Jara et al., 2009; Kwiatkowska et al., 2014; Lopes et al., 2014; Jin et al., 2018; Vanlaeys et al., 2018; Martínez et al., 2020; Nerozzi et al., 2020; Madani and Carpenter, 2022; Strilbyska et al., 2022).
It remains unclear whether Gly exerts its toxicity by acting in an intra- or extracellular manner. Unfortunately, it is not yet known whether glyphosate is transported into mammalian cells, nor how this may vary across different cell lines. A 2016 study performed on a human epithelial cell line suggests an active uptake mediated by the L-type amino acid transporter (LAT) (Xu et al., 2016). We evaluated whether our cells could use this carrier for Gly uptake. To do so, we co-treated the cells with different doses of glyphosate (5, 10, and 20 mM) and a specific LAT-1 inhibitor (2-aminobicyclo-(2,2,1)-heptane-2-carboxylic acid, BCH) in acute settings (1 and 2 h). We then assessed cell viability and ROS production through MTT and DCF-DA assays, respectively, which did not show any change in Gly-driven cytotoxicity (data not shown), suggesting that cardiac myoblasts use a different type of transport system and/or that Gly toxicity relies on receptor-mediated signalling.
AMPA effects
Exposure of cells to AMPA showed two types of responses. There was an acute cytotoxic response to high doses (10 or 20 mM), as demonstrated by a reduction in cell viability (Figure 1A) and an increase in ROS production (Figure 1B). Membrane blebbing, cell shrinkage, and cytoplasmic cavitation were observable at t0 (Figure 2, top panels), but not after 24 h of treatment (Figure 2, bottom panels). Overall, in this range of concentrations, AMPA treatment was less toxic than Gly. Kwiatkowska et al. (2020) observed an analogous behavior: in hPBMCs, AMPA treatment induced hydroxyl radical formation only at the highest concentration (5 mM), while Gly treatment was effective already at 0.05 mM. Similarly, a 2018 study observed an increase in ROS levels in hPBMCs exposed to 1 mM Gly, but not to the same concentration of AMPA (Woźniak et al., 2018). In SH-SY5Y cells, after 48 h of exposure to 10 mM AMPA there was a significant increase in ROS production, while Gly exerted the same effect at 5 mM (Martínez et al., 2020).
Conversely, when treated sub-chronically at low doses (from 1 μM to 1 mM), H9c2 cells showed a late cytotoxic response to AMPA: after 48 h, cell viability decreased by about 40% at all doses (Figure 8B). This was somewhat unexpected, given the scarce amount of data about AMPA effects (especially in mammals) (Grandcoin et al., 2017; Bailey et al., 2018; Stur et al., 2019), and represents a result that needs to be explored in more detail. A non-monotonic response to sub-lethal doses of AMPA was recently reported in amphibians: in that experimental model, chronic treatment with low (0.07 μg/L) and medium (0.32 μg/L) doses of AMPA caused a significant dysfunction of the antioxidant machinery, which the authors suggest is linked to hyper-stimulation of catalase activity, while high doses (3.57 μg/L) did not recapitulate the same effect (Cheron et al., 2022). We hypothesize that the early response could be due to direct extracellular damage (such as binding to a receptor), while the late one could be secondary to bioaccumulation. Accordingly, it was demonstrated that, in hPBMCs, AMPA treatment increased the activity of both cas-8 [generally associated with the death receptor-mediated apoptotic pathway (Orrenius, 2004)] and cas-9 [involved in the mitochondria-mediated apoptotic pathway (Orrenius, 2004)] (Kwiatkowska et al., 2020), supporting the hypothesis that the molecule is able to trigger both types of response. The activation of the cas-3 and cas-9 pathways following 48 h of AMPA treatment was also reported by Martínez et al. (2020) in SH-SY5Y cells.
The fact that Gly treatment did not produce the same effects may have two explanations: (I) Gly is not actively metabolized to AMPA either inside or outside our cells; (II) the kinetics of the Gly-to-AMPA biotransformation are very slow, so more time is needed before the effects appear (Bailey et al., 2018).
Conclusion
Overall, we confirmed in our model previous in vitro studies indicating that pure Gly is toxic when administered at high concentrations, causing alterations in cell viability, morphology, and mitochondrial health. At low doses, Gly causes only a slight cytotoxic response, and the phenotype is rescued within 24 h. AMPA recreates almost the same effects, but to a lesser extent. Moreover, we provide new evidence of a late cytotoxic response to low doses of sub-chronically administered AMPA. In each condition, mitochondria and the antioxidant machinery are likely to be key mediators, a finding largely supported by the literature. Unfortunately, comprehension of the mechanisms by which Gly may be imported into mammalian cells is very limited, nor is it clear whether it is actively metabolized within the cells. Unveiling these aspects would help clarify whether the damage is receptor mediated or occurs after internalization of the molecules. Furthermore, it is of pivotal importance to have a reliable measure of real human exposure to glyphosate and AMPA, in order to critically evaluate all the scientific data obtained to date. Since the main route of exposure of the general population to Gly is through the diet, quality control of the agro-food chain is essential, in particular for those foods more likely to contain Gly, such as fish/meat and derivatives, cereals and derivatives, honey, and beverages such as tea, beer, and wine. Some studies have already been conducted and are reported in a recent review by Soares et al. (2021). To this end, there is an urgent need to develop standardized quantification systems with good sensitivity (well below the established maximum residue limits) that are also affordable in terms of technical equipment and costs, a goal achievable with HPLC-related methodologies. Last, in order to shed light on the debate about Gly safety, it would be helpful to distinguish between the damage directly related to the pure molecules and their metabolites and the damage mediated (or amplified) by the adjuvants, i.e., the surfactants, present in the different GBH formulations.
Further research is needed to address additional scientific concerns: first, we did not include AMPA in all of the experiments, since we did not expect to observe any appreciable effect (especially at low doses); second, we did not examine in depth the effects of chronic exposure to the two substances. However, the evaluation of a chronic treatment in an in vitro setting is limited, and this study was intended as a pilot to identify a sub-toxic range, consistent with environmental exposure, for evaluating the chronic toxicity of Gly in vivo.
Author contributions
DM and EA contributed to the conception and design of the study. EA and SG performed the experiments. EA, LM, and SR performed the ultrastructural TEM analysis. EA organized the database. EA performed the statistical analysis. EA wrote the first draft of the manuscript. DM, EA, SG, LM, and SR wrote sections of the manuscript. All authors contributed to manuscript revision and read and approved the submitted version.
Funding
This study was supported by Fondo di Beneficenza Intesa San Paolo and RILO 2020/2021 granted to DM, and RILO 2021/2022 granted to SR.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. | 7,480.2 | 2023-04-24T00:00:00.000 | [
"Biology"
] |
A recognition of exosomes as regulators of epigenetic mechanisms in central nervous system diseases
Exosomes, vesicular structures originating from cells, participate in the conveyance of proteins and nucleic acids. Presently, the centrality of epigenetic modifications in neurological disorders is widely acknowledged. Exosomes exert influence over various epigenetic phenomena, thereby modulating post-transcriptional regulatory processes contingent upon their constituent makeup. Consequently, attention directed toward exosomes as instigators of epigenetic alterations has burgeoned in recent years. Notably, exosomes serve as vehicles for delivering methyltransferases to recipient cells. More significantly, non-coding RNAs, particularly microRNAs (miRNAs), represent pivotal contents within exosomes, wielding the capacity to influence the expression of diverse factors within the cerebral milieu. The transfer of these exosomal contents among brain cells, encompassing neuronal cells and microglia, plays a critical role in the genesis and progression of neurological disorders; this role is, moreover, not limited to neurological disorders and may extend to other human diseases, such as cancer and cardiovascular diseases. This review concentrates on elucidating the regulation of exosome-induced epigenetic events and their subsequent ramifications for neurological diseases. A more profound comprehension of the involvement of exosome-mediated epigenetic regulation in neurological disorders contributes to a heightened awareness of the etiology and advancement of cerebral afflictions.
(Zhou et al., 2021). Despite the advancements in contemporary medical science, the diagnosis and treatment of CNS disorders confront formidable challenges, with the presence of the blood-brain barrier (BBB) standing out as a primary impediment (Piguet et al., 2021). In physiological states, the BBB offers protective fortification to the brain; however, it concurrently impedes the transit of the majority of therapeutic agents into the cerebral domain (Piguet et al., 2021).
The emergence of exosomes has instilled optimism among scholars, as they hold promise for traversing the blood-brain barrier and gaining access to the brain. Exosomes are nanoscale membrane-bound vesicles that exhibit a natural ability to traverse the blood-brain barrier (BBB), which enables their application as drug delivery vehicles for the treatment of brain disease (Rehman et al., 2023). Immune exosomes loaded with self-assembled nanomicelles can traverse the blood-brain barrier for effective prevention of glioma recurrence (Cui et al., 2023). In recent years, artificially engineered exosomes have been developed as better alternatives to natural exosomes in terms of large-scale production, standardized isolation, drug encapsulation, stability, and quality assurance. Manufactured exosomes are considered potentially effective carriers for chemical and biological therapeutics, since circulation time and selectivity can be controlled (Abdelsalam et al., 2023).
Exosomes, measuring approximately 40−100 nm in size, constitute nanoscale extracellular lipid bilayer vesicles secreted by nearly all cell types under both physiological and pathological conditions (Kalluri and LeBleu, 2020). The story of the origins of exosome research arguably begins with Chargaff's studies of coagulation (Chargaff, 1945). Later, Peter Wolf described a "material in minute particulate form, sedimentable by high-speed centrifugation and originating from platelets, but distinguishable from intact platelets", which he called 'platelet dust' (Wolf, 1967). Subsequently, Nunez et al. described the presence of small (1−10 nm) extracellular vesicles in the bat thyroid gland during arousal from hibernation; we believe this paper was one of the first to describe the presence of multivesicular bodies (MVBs) close to the apical membrane (Nunez et al., 1974). It was not until 1983 that two seminal and complementary papers published by the Johnstone and Stahl laboratories made a watertight case for the release of intraluminal vesicles from the cell and defined them as exosomes (Harding et al., 1983; Pan and Johnstone, 1983). Initially perceived as a mechanism for the disposal of cellular debris, exosomes have since been shown to play a pivotal role in facilitating cell-to-cell communication and maintaining cellular homeostasis through the transfer of nucleic acids, specific proteins, and lipids between cells (Liu et al., 2021). This regulatory function is, however, a double-edged sword, as exosomes possess the capability to modulate cellular properties. Present ubiquitously in the body, exosomes derived from distinct cell types exhibit variations in size and content, actively participating in diverse physiological and pathological processes such as the immune response (Noonin and Thongboonkerd, 2021), cardiovascular and cerebrovascular diseases (Han et al., 2022), cancer initiation and progression (Mashouri et al., 2019), neuronal information transmission, and central nervous system diseases (Frühbeis et al., 2020). The inherent capacity of exosomes to regulate intricate intracellular pathways enhances their potential for therapeutic intervention across a spectrum of diseases, including neurodegenerative conditions and cancer.
Epigenetics, the study of regulatory mechanisms governing gene expression and gene interactions, addresses the fundamental issue of core regulation in the transmission of genetic information from the genome to the transcriptome (Jeffries, 2020).Epigenetic modification, a complex and reversible heritable process altering gene function, emerges as a central theme in basic neuroscience research, demonstrating its paramount role in the intricacies of nervous system structure and function.Epigenetic factors orchestrate various aspects of brain development, neurogenesis, synaptic plasticity, stress response, aging, and even the intergenerational inheritance of cognitive and behavioral phenotypes (Qureshi and Mehler, 2018).Translational research underscores the contribution of dysregulated epigenetic mechanisms to the onset of neurological diseases (Surguchov et al., 2017).Translational research underscores the contribution of dysregulated epigenetic mechanisms to the onset of neurological diseases.Targeting these processes in disease models holds the potential to substantially mitigate pathological changes and alleviate symptoms, encompassing the relief of neurodegeneration, promotion of nerve regeneration, and restoration of cognitive function (Gräff et al., 2012).The characterization of epigenetic and genome-wide epigenomic profiles, along with the regulation of epigenetic factors, introduces a novel and potent paradigm for identifying, monitoring, preventing, and potentially reversing neurological diseases.
Biogenesis and secretion of exosomes
The process of exosome generation (Figure 1) is a continuous and intricate sequence involving the double invagination of the plasma membrane and the formation of intracellular multivesicular bodies (MVBs) (Kalluri and LeBleu, 2020).The biogenesis initiates with the de novo formation of early sorting endosomes (ESEs).Initially, the plasma membrane undergoes invagination, forming a cup-shaped structure that encapsulates cell surface proteins and extracellular components, including soluble proteins, lipids, metabolites, small molecules, and ions (Mathieu et al., 2019).ESEs, post-formation, undergo fusion with the endoplasmic reticulum (ER), the trans-Golgi network (TGN), or pre-existing ESEs.Subsequently, ESE matures into late sorting endosomes (LSE) (Sahoo et al., 2021).The second invagination within LSE results in the formation of intraluminal vesicles (ILVs), facilitating the selective loading of future exosomes and permitting the entry of cytoplasmic components into the newly formed ILVs.LSE further matures into multivesicular bodies (MVBs), which can undergo degradation through fusion with lysosomes or autophagosomes, or release ILVs by fusing with the plasma membrane, transforming them into exosomes (Colombo et al., 2014).
Cells release exosomes in response to specific stimuli or as part of normal physiological processes (Kourembanas, 2015).These released exosomes function as carriers of molecular information, transferring cargo molecules from parent to recipient cells, thereby regulating cellto-cell communication involved in various physiological and pathological processes.The quantity and composition of exosomes reflect the state of the originating cell.Whether under pathological or physiological conditions, the contents of exosomes are meticulously regulated by parent cells, transmitting specific information to other cells for specialized functions (Kilchert et al., 2016).Conversely, the functional status of parent cells can be inferred by analyzing their exosome contents, transmitting signals and molecules through the intercellular vesicle communication pathway, thereby exerting local or systemic effects (Isaac et al., 2021).Exosomes, with diverse biological functions spanning multiple areas of biology, are the focus of this review specifically concerning diseases of the central nervous system.Within the context of the central nervous system, exosomes released by neurons, glial cells, and other cells form a complex network of interconnected information, influencing information transmission, physiology, and pathological effects within the system (Li et al., 2023).
Epigenetic regulation
The transmission of phenotype from one generation to the next is a fundamental aspect of life continuity in multicellular organisms, primarily achieved through mitosis, involving the replication and transmission of the organism's genomic DNA.However, alongside genomic inheritance, the transmission of cell fate from parent to offspring also relies on the inheritance of epigenetic information.Mitotic inheritance necessitates the survival of epigenetic information despite a two-fold dilution of cell contents with each cell division.In each generation of organisms, all distinct cell types in the human body are re-established, implying that the majority of cell state information appears to undergo erasure or reprogramming in the germline of multicellular organisms (Takahashi et al., 2023).
Exosomes, comprising various contents, house DNA methyltransferases (Shrivastava et al., 2021) and small RNAs (Li et al., 2022), both capable of influencing epigenetic modifications.However, the official identification of these crucial genetic materials in exosomes did not occur until 2007 (Valadi et al., 2007).Exosomes derived from diverse tissues and cells exhibit varying quantities and sizes of DNA methyltransferases and small RNAs, signifying their distinct roles in The process of exosome generation.Extracellular components can enter the cell through endocytosis and by invagination of the cell membrane.The vesicles formed during this process can fuse with selective early endosomes (SEEs), then transform into selective late endosomes (SLEs).The second invagination of the SLEs leads to the formation of intraluminal vesicles (ILVs).The ILVs then transform into multivesicular bodies (MVBs), which can fuse with lysosomes or autophagosomes for degradation, or fuse with the plasma membrane to release the ILVs as exosomes.A multivesicular body (MVB), generated from an endocytic cisterna by the accumulation of vesicles, exhibits small membrane curvatures corresponding to distinct microdomains.epigenetic regulation.Wu et al., found that miR-124-3p delivered by exosomes from heme oxygenase-1 modified bone marrow mesenchymal stem cells inhibits ferroptosis to attenuate ischemiareperfusion injury in steatotic grafts (Wu et al., 2022).In addition, exosomes released from Bone-Marrow Stem Cells ameliorate hippocampal neuronal injury through transferring miR-455-3p (Gan and Ouyang, 2022) and miR-31 from adipose stem cell-derived extracellular vesicles promotes recovery of neurological function after ischemic stroke by inhibiting TRAF6 and IRF5 (Lv et al., 2021).
Exosomes regulate epigenetic modifications
Exosomes play a pivotal role in the effective regulation of various epigenetic modification processes, including DNA methylation, acetylation, phosphorylation, and the expression of regulatory non-coding RNAs, thereby influencing the onset and progression of diseases (Zhao et al., 2023).For instance, hUC-mesenchymal stem cell-derived exosomes (hUC-MSC-EXO) have been found to enhance the expression of miR-4553p in response to IL-6 stimulation.Analysis through Western blot and QRT-PCR demonstrated a significant reduction in the expression of PIK3r1 at both mRNA and protein levels in macrophages in the presence of miR-4553p.PI3K, a key factor in inhibiting IL-6-related signaling pathways, is suggested to be suppressed by miR-4553p, leading to the inhibition of macrophage activation by downregulating the target gene PIK3r1.Importantly, hUC-MSC-EXO mitigates the release of IL6 and other inflammatory factors by macrophages through the promotion of miR-4553p expression.This targeted inhibition of PIK3r1 curtails the overactivation of immune cells such as macrophages/monocytes, attenuates inflammation, and preserves systemic homeostasis (Shao et al., 2020).
Moreover, an HIV-1 promoter targets zinc finger protein (ZFP-362) fused to the active domain of DNA methyltransferase 3A, inducing long-term stable epigenetic repression of HIV-1.Cells engineered to produce exosomes encoding the RNA package of this HIV-1 repressor protein exhibit epigenetic inhibition of viral replication, delaying or inhibiting disease progression (Shrivastava et al., 2021).Additionally, human milk exosomes (MEX) and their miRNAs can enter the systemic circulation, potentially affecting epigenetic processes in various organs, including the liver, thymus, brain, pancreatic islets, beige, brown and white adipose tissue, and bones (Melnik et al., 2021).Translational evidence indicates that MEX and its miRNAs control the expression of global cellular regulators, such as DNA methyltransferase 1, important for upregulating developmental genes, and receptor interaction protein 140, essential for the regulation of multiple nuclear receptors.MEX-derived miRNA-148a and miRNA-30b may stimulate the expression of uncoupling protein 1, a key thermogenic inducer converting white adipose tissue to beige/brown adipose tissue (Shore et al., 2010).
Furthermore, exosome-delivered long non-coding RNA (lncRNA) UFC1 has been implicated in promoting non-small cell lung cancer progression through EZH2-mediated epigenetic silencing of PTEN expression.Mechanistically, UFC1 binds to EZH2, facilitating its accumulation in the PTEN gene promoter region, resulting in trimethylation of H3K27 and inhibition of PTEN expression.Exosome-borne UFC1, derived from non-small cell lung cancer cells, promotes the proliferation, migration, and invasion of these cells through UFC1-mediated metastasis (Zang et al., 2020).In summary, exosomes exhibit the capacity to either promote or inhibit disease progression by upregulating or silencing associated genes through intricate epigenetic mechanisms.
2 Epigenetic contents of exosomes for central nervous system diseases 2.1 DNA methyltransferases in the central nervous system (CNS) DNA methylation involves the transfer of a methyl group to the fifth carbon of a DNA cytosine residue, resulting in the formation of a specific methylation structure (5-MC).This process predominantly occurs on the CPG island of the gene promoter region, leading to transcriptional silencing.The catalysis of this process is facilitated by the DNA methyltransferase family (DNMT), where different members play distinct roles in DNA methylation (Shrivastava et al., 2021).Aberrant DNA methylation is strongly implicated in various diseases; for instance, aberrant hypermethylation is a crucial epigenetic modification mechanism in atherosclerosis (Li et al., 2022).Overexpression of DNMT1 and DNMT3A contributes to abnormal DNA methylation of tumor suppressor genes (TSGs), consequently promoting pituitary adenoma invasion.Therefore, DNA methylation is closely linked to the onset and progression of aggressive pituitary adenomas (Kilchert et al., 2016).DNA methylation-based biomarkers and epigenetic therapies play pivotal roles in the early diagnosis and prognosis of various diseases.DNA methylation serves as a diagnostic marker for conditions such as vitamin deficiencies, neurodegenerative diseases (Martínez-Iglesias et al., 2023), meningioma (Choudhury et al., 2022), cerebrovascular diseases (Martínez-Iglesias et al., 2020), neuroinflammation (Dai et al., 2021), and psychiatric disorders (Feng and Fan, 2009).
Brain tumors
Currently, there is significant interest among researchers regarding the role of DNA methylation in exosomes in neurotumors.Epigenetic alterations have become a prominent feature of molecular pathology in the primary category of brain diseases (Qureshi and Mehler, 2013).Glioma, a complex and heterogeneous tumor, comprises not only tumor cells but also various non-tumor cell types, including astrocytes, Emerging evidence suggests that communication between tumor cells and components in the glioma microenvironment can directly influence various hallmark features of glioma (Godlewski et al., 2015).Elevated levels of extracellular vesicles (EVs) have been observed in the circulation of patients with glioblastoma and other cancer types, indicating that circulating tumor-derived EVs may serve as valuable biomarkers for monitoring treatment response and aiding in tumor diagnosis (Osti et al., 2019;Ricklefs et al., 2019).The presence of O-6-methylguanine-DNA methyltransferase (MGMT) significantly impacts temozolomide (TMZ) therapy (Oldrini et al., 2020).Epigenetic silencing, achieved through promoter methylation of the MGMT gene, hinders the synthesis of this enzyme and stands as the sole known biomarker for TMZ response (Jha et al., 2010).Genomic rearrangements of MGMT help alleviate TMZ resistance both in vitro and in vivo, and these rearrangements can be detected in tumor-derived exosomes (Oldrini et al., 2020).Simultaneously, tumor-derived exosomes exhibit the ability to carry TMZ and dihydrotanshinone (DHT), contributing to the reversal of drug resistance and enhancing lesion-targeted drug delivery (Wang et al., 2022) for the targeted therapy of glioma.
Neurodegenerative diseases
Neurodegenerative diseases, encompassing Parkinson's disease (PD), Alzheimer's disease (AD), amyotrophic lateral sclerosis (ALS), among others (Hou et al., 2019), pose a significant health challenge.Age is the most prominent risk factor for these diseases, and cognitive decline is inherently associated with aging.Consequently, chromatin alterations occurring during brain aging emerge as crucial targets for preventing cognitive deterioration, with DNA methylation playing a pivotal role in this context (Younesian et al., 2022).The differential CpG-2 methylation status of α-synuclein (SNCA) in leukocytes in the blood of PD patients may serve as a novel diagnostic indicator for PD (Tan et al., 2014).However, existing research on the role of DNA methylation in neurodegenerative diseases primarily focuses on diagnostic markers, necessitating additional efforts to delve into deeper mechanisms and potential treatments.
Other neurological disorders
While DNA methylation is prevalent in various neurological disorders, limited research has been conducted in this realm.Gulf War Illness (GWI), a chronic multisymptomatic disease with central nervous system damage as a commonly reported symptom, including memory dysfunction and depression, remains challenging to diagnose and lacks effective treatments for poorly understood reasons.In the context of GWI, alterations in DNA methylation and hydroxymethylation levels in exosomes have been observed (Pierce et al., 2016), indicating the involvement of epigenetic changes in exosomes in GWI and offering new perspectives for future diagnosis and treatment.Additionally, in stroke patients, exosomes released by curcumin-treated cells have been found to inhibit DNA methylation levels, thereby mitigating the harm caused by stroke (Liu et al., 2009).
3 Small RNA in the central nervous system (CNS) RNA modifications constitute a crucial aspect of epigenetic modifications (Melnik et al., 2021).Comparable to DNA modifications, cellular RNA undergoes various chemical modifications, such as N6-methyladenosine (m6A) for mRNA.Among these modifications, m6A stands out as the most abundant epigenetic modification of RNA, primarily catalyzed by a methyltransferase complex comprising METTL3, METTL4, and other protein subunits.Aberrant m6A modification can result in transcriptional abnormalities, leading to irregular translation procedures that foster tumorigenesis and progression.Studies indicate that m6A modification also plays a significant role in the onset and progression of neurological diseases by regulating immune cells and RNA.The YTH domain family 2 (YTHDF2), an m6A-binding protein, has been identified, and knockdown of YTHDF2 has been linked to increased inflammation.YTHDF2 expression strongly correlates with the development of inflammatory bowel disease (IBD) (Shore et al., 2010;Hock et al., 2017).The chemical modification of these RNAs is pivotal in RNA metabolism.Exosomes deliver molecules through a variety of pathways, manipulate the epigenetics of cells, and influence disease progression (Table 1).
Neurodegenerative diseases
The role and mechanisms of small RNAs in neurodegenerative diseases have garnered significant attention from researchers.Small RNAs serve not only as diagnostic markers for neurodegenerative diseases but also play a substantial role in disease occurrence and development.Various scientific evidence suggests the existence of global and gene-specific epigenetic changes at both peripheral and brain levels in patients with neurodegenerative diseases such as Alzheimer's disease (AD) (Coppedè, 2020), Parkinson's disease (PD) (Zhang et al., 2023), and amyotrophic lateral sclerosis (ALS) (Coppedè, 2020).
For instance, in patients with PD and AD, substantial amounts of miRNA are present in cerebrospinal fluid and blood exosomes, exhibiting differences from normal levels (Gui et al., 2015).In amyotrophic lateral sclerosis (ALS), RNA within extracellular vesicles released by muscles can disrupt motor neurons (MNs), denaturing and promoting disease progression (Le Gall et al., 2022).Additionally, the Drosophila brain can acquire circ_sxc by ingesting adipose tissue exosomes that traverse the blood-brain barrier.circ_sxc inhibits the expression of miR-87-3p in the brain, regulating the expression of neuroreceptor ligand proteins (5-HT1B, GABA-B-R1, Rdl, Rh7, qvr, NaCP60E), ensuring the normal function of synaptic signal transduction in brain neurons.However, with age, this regulatory mechanism is dysregulated due to the downregulation of fat exosomal circ_sxc, accelerating "aging" in the brain and potentially hastening the development of neurodegenerative diseases (Li et al., 2023).
Brain tumors
The presence of brain tumors, particularly gliomas, remains a significant challenge for neuroscientists due to their rapid growth, high recurrence rate, and malignant nature.In glioblastoma multiforme (GBM), the expression levels of one small non-coding RNA (RNU6-1) and two microRNAs (miR-320 and miR-574-3p) are significantly associated with GBM diagnosis, with RNU6-1 potentially serving as an independent predictor for GBM diagnosis (Manterola et al., 2014).Low serum levels of miR-485-3p predict reduced survival in glioblastoma patients (Wang et al., 2017).
In terms of treatment, blood exosomes (Exos) have been chosen as the delivery vehicle, combining cytoplasmic phospholipase A2 (cPLA2) knockdown and metformin for GBM treatment.This approach can effectively traverse the blood-brain barrier, reaching the brain and GBM tissues.It inhibits the mitochondrial energy metabolism of GBM, thereby suppressing tumor growth and extending survival (Zhan et al., 2022).Tumors can also influence the tumor microenvironment (TME) by secreting exosomal microRNAs (miRNAs).Exosomal miRNAs inhibiting tumors are absorbed by immune cells in TME and converted into cancer promoters, thus impeding tumor cell proliferation and delaying disease progression (Qi et al., 2022).Considering the effective penetration of exosomes through the blood-brain barrier and the significant advancements in tumor treatment at the genetic level, the prospects for glioma treatment through exosome drug delivery and the study of RNA in exosomes and their cargo hold promising potential for major breakthroughs.
Cerebrovascular diseases
Cerebrovascular diseases exert a substantial impact on global human health, posing a significant threat to physical well-being and contributing to a considerable economic burden worldwide.They stand as the primary cause of disability (Gutierrez and Esenwa, 2015), and research on exosomes for cerebrovascular diseases is gaining traction.Notably, exosomal let-7b-5p miRNAs may serve as a potential prognostic marker for poor outcomes after stroke.Exosomal miR-223-3p derived from mesenchymal stem cells can facilitate the transformation of M1 microglia into M2 microglia, mitigating cerebral ischemia/reperfusion injury (Zhao et al., 2020).Exosomes from M2 phenotypic microglia can target Notch1 to ameliorate neuronal death induced by ischemia-reperfusion injury (Zhang et al., 2021).M2 microglia-derived exosomes attenuate ischemic brain injury and promote neuronal survival through exosomal miR-124 (Song et al., 2019).In patients with cerebral infarction, miRNAmodified exosome therapy has demonstrated improvements in infarct volume and neurobehavior (Yu et al., 2022).Moreover, in the hypoxic state following a stroke, RNA within exosomes secreted by neuronal cells can reduce neuronal activity, inhibit axon and dendrite growth, and expedite stroke progression (Chiang et al., 2021).
Mental disorders
Mental disorders encompass a diverse range of conditions, with depression emerging as one of the most challenging.Depression is characterized by significant and persistent low mood, sluggish thinking, and reduced motivation, severely impeding psychosocial functioning and diminishing overall quality of life.In 2008, the World Health Organization ranked depression as the third leading contributor to the global burden of disease, projecting it to ascend to the foremost position by 2031 (Malhi and Mann, 2018).While comprehensive models or theories for depression research are currently lacking, substantial progress has been made recently in understanding the interplay between exosomes, their contents, and depression.
Conclusion and future prospects
Neurological diseases have afflicted humanity for numerous years, and though some efforts have been made to alleviate suffering, there is still a considerable journey ahead in their treatment.The presence of exosomes, well-established as cell-to-cell communication mediators (Fyfe et al., 2023), has been confirmed for many years.Exosomes can function as intercellular communication molecules.These vesicles often transport proteins and nucleic acids between cells, potentially contributing to the pathogenesis of various neurological diseases.This article provides evidence for exosome-mediated epigenetic mechanisms that regulate factor transfer, emphasizing the profound impact of exosome-transported methyltransferases on gene expression in recipient cells.While exosomes transport methyltransferases, they may also influence the expression of these enzymes in recipient cells indirectly.Moreover, exosomes transfer various non-coding RNAs in the brain microenvironment, including miRNAs, lncRNAs, and circRNAs, among others.Despite the ongoing emergence of relevant information, a complete chain of evidence is still lacking in many published reports, particularly regarding the detailed mechanisms of up-or downregulation of factors/non-coding RNAs that directly interfere with the action of development of central nervous system diseases.We only described two types of epigenetic mechanisms related to exosomes in this review, small RNA and DNA methyltransferases, this is limited.There are many types of RNA, and further research is needed to investigate whether RNA such as tRNA and siRNA play a deeper role in central nervous system diseases.
While conclusive knowledge is currently elusive, there is substantial evidence supporting the role of exosome-mediated regulation of epigenetic mechanisms in neurons and surrounding cells, contributing to the onset and progression of diseases.
Conducting focused studies on these mechanisms will be crucial for realizing the clinical potential of exosomes in neurological disease research and treatment.The identification or synthesis of novel chemicals capable of modulating exosome synthesis or function, along with the development of non-toxic approaches for utilizing exosomes as therapeutic agents, holds the potential to rapidly advance this field.Further research is imperative to comprehensively understand the role of exosomes in epigenetic resistance, enabling the identification and validation of valuable targets for diagnosis and treatment of central nervous system diseases.Despite that there are few studies, we believe that epigenetic regulation of exosomes has great potential in the treatment of sleep disorders, anxiety and other diseases.
In fact, there have been two main technical hindrances that restrict the basic and applied research of exosomes.The first is how to simplify the exosome extraction procedure and improve the yield of exosomes; the second is how to effectively distinguish exosomes from other extracellular vesicles, especially from functional microvesicles (Fais et al., 2016).Various exosome separation strategies and devices have been suggested to facilitate the investigation of exosomes and their related biological functions.Through studying the nature of particular samples and specific application settings, we believe careful selection of isolation techniques (or a combination of isolation techniques) will help investigators address many of the challenges faced in current exosome studies (Yang et al., 2020).There are many methods available on the market that are currently being used in labs worldwide to obtain exosomes.Until now, six classes of exosome separation strategies have been reported, including ultra-speed centrifugation, ultrafiltration, immunoaffinity capture, charge neutralization-based polymer precipitation, size-exclusion chromatograph, and microfluidic techniques, with unique sets of advantages and disadvantages for each technique from different manufacturers (Supplementary Table S1).It is evident that currently available technology (as well as that of the future) will allow for the improvement of isolation methods and pave the way for novel findings in the years to come.
TABLE 1
The biological functions of exosomal ncRNAs in CNS. | 5,798.8 | 2024-03-11T00:00:00.000 | [
"Medicine",
"Biology"
] |
Journal of Process Control
This paper introduces set-membership nonlinear regression (SMR), a new approach to nonlinear regression under uncertainty. The problem is to determine the subregion in parameter space enclosing all (global) solutions to a nonlinear regression problem in the presence of bounded uncertainty on the observed variables. Our focus is on nonlinear algebraic models. We investigate the connections of SMR with (i) the classical statistical inference methods, and (ii) the usual set-membership estimation approach where the model predictions are constrained within bounded measurement errors. We also develop a computational framework to describe tight enclosures of the SMR regions using semi-infinite programming and complete-search methods, in the form of likelihood contour and polyhedral enclosures. The case study of a parameter estimation problem in microbial growth is presented to illustrate various theoretical and computational aspects of the SMR approach.
Introduction
Mathematical models capable of accurate prediction of physical phenomena have proved to be invaluable tools for engineers and scientists. In the area of process systems engineering, they routinely support the design, control and optimization of production processes, as a means of improving their economical profitability and reducing their environmental footprint. A majority of these models are nonlinear and contain adjustable parameters that need estimating from available experimental data, or else from other, more fundamental, mathematical descriptions. In this context, parameter estimation turns out to be a key step in the verification, and subsequent use, of the mathematical models.
Most commonly, parameter estimation in nonlinear models is cast as a nonlinear regression exercise, where selected parameter values are adjusted so that the model predictions match the available observations as close as possible, for instance in the leastsquares or maximum-likelihood sense [1][2][3][4]. In order to avoid for the resulting parameter estimates to be biased, one can account for measurement errors in all of the variables, both independent and dependent variable observations, by following the so-called errorsin-variables approach [5,6]. This problem has been widely studied * Corresponding author. E-mail address<EMAIL_ADDRESS>(B. Chachuat). from a computational standpoint over the past decades, including the development of rigorous global optimization approaches for overcoming convergence to local optima [7,8].
Of course, there is more to model identification than just determining values for the unknown parameters. Systematic procedures have been devised to support the development and statistical verification of process models, which include testing structural identifiability, designing experiments for improved parameter precision, and inferring parameter confidence [9][10][11][12]. The focus in this paper is on the latter aspect, namely characterizing subregions in parameter space wherein the parameter values can be expected to lie. Other applications of such parameter confidence regions are in design under uncertainty [13,14], robust model predictive control [15][16][17], robust monitoring [18,19], and robust optimal design of experiments [20][21][22], to name but a few. For the scope of this paper, the emphasis is on models described by algebraic equations, but these ideas can be extended to dynamic or distributed models described by differential equations too.
Accounting for model mismatch and uncertain observations within the regression problem has spawned several schools of thought. Statistical approaches can be broadly classified as frequentist or Bayesian. The former seek to determine confidence regions around the regressed parameter values, typically a maximumlikelihood estimate, considered as the 'true' parameter values [1,2,4]. By construction, a 100(1 − ˛)% frequentist confidence region comprises 100(1 − ˛)% of the parameter values that would be obtained upon repetition of the parameter estimation using (hypothetical) new observations, considered as random variables. Approximate confidence regions, for instance based on the Wald test or the likelihood-ratio (LR) test, are known to converge to the exact confidence region in the limit of an infinite number of observations under certain conditions. Process modeling environments such as gPROMS and Aspen Custom Modeler have been relying on linear approximation and the Wald test to determine ellipsoidal confidence regions, a computationally efficient procedure for problems having several dozen unknown parameters, but one which may produce inaccurate results with large measurement errors and model mismatch or few measurement points. Confidence regions based on the LR test have been shown to yield superior approximations, but are computationally more involved since the corresponding parameter regions are complex sets in general (e.g., nonconvex, not simply connected) [23,24].
In practice, the term 100(1 − ˛)% confidence region is often misused to refer to the range of parameter values that include 100(1 − ˛)% of their probability distribution [25]. This description corresponds to so-called 100 (1 − ˛)% credible regions instead, which are defined in the Bayesian inference approach [26]. Bayesian estimation uses the available observations to construct a probability distribution of the parameters, called posterior distribution, based on a likelihood function and a prior probability distribution of the same parameters. In essence, this approach thus considers the unknown parameter values as random variables. Sampling-based techniques such as Markov-Chain Monte-Carlo (MCMC) [27,28] provide a means of constructing (approximate) credible regions, although the computational effort can become prohibitive for problems having upwards of 10 parameters [29]. A most probable estimate can be determined from the posterior distribution, which also corresponds to a maximum-likelihood estimate for a flat prior. Albeit classical frequentist and Bayesian inference regions can be reconciled in special cases, no equivalence can be drawn in general since Bayesian inference incorporates problem specific contextual information from the prior distribution, whereas frequentist inference is solely based on the data; see, e.g., [30,Chapter 5]. The debate on whether to use frequentist or Bayesian statistical inference continues to this day [25,31], but its intricacies are beyond the scope of this paper.
Regardless of whether a mathematical model's structure is correct or not, a frequentist confidence region will normally converge to the maximum-likelihood estimate as the number of observations increases. Likewise, a Bayesian posterior will normally converge to a point mass that corresponds to a most probable estimate, i.e., a point that maximizes the probability of the data given the (possibly wrong) model. An interesting alternative to these statistical approaches is set-membership estimation (SME). The traditional SME setting, also called guaranteed parameter estimation (GPE), seeks to determine the set of all possible parameter values for which a model's predictions are consistent with a set of observations subject to bounded errors [32][33][34]. The fact that this approach does not require a statistical description of the observation errors, solely bounds, is not only less demanding, but also more realistic in many practical applications, including biological systems where the measurements are often scarce and subject to large errors [21]. Beside parameter estimation, the distinctive yes-or-no answer provided by set-membership techniques can also be used for model inconsistency detection [35,36]. One caveat here is that the set of feasible parameter values may be empty in the presence of measurement outliers or due to an inadequate description of the measurement noise, thus calling for remedial strategies [37,38].
Another key challenge in nonlinear set-membership estimation is describing the feasible parameter set accurately, while remaining computationally tractable. This challenge is in fact similar to the one faced by aforementioned statistical inference methods for describing parameter confidence sets, and it may explain why setmembership estimation has not reached a wider diffusion to this day. Existing computational strategies are limited to problem with downwards of a dozen parameters. They range from approximation using sampling-based methods, including stochastic search [39], support vector machines (SVM) [40] and MCMC [41]; to rigorous complete-search methods based on interval analysis and other set arithmetics [42][43][44]; and to semidefinite relaxation techniques for semi-algebraic problems [45,46]. This paper introduces set-membership regression (SMR), a new approach to nonlinear regression. The SMR problem seeks to determine the subregion in parameter space enclosing all (global) solutions to a nonlinear regression problem in the presence of bounded uncertainty on the observed variables. By contrast with the traditional SME setting seeking for parameter values to satisfy certain feasibility constraints, the SMR approach method seeks for parameter values to satisfy an optimality condition. To the best knowledge of the authors, this problem has not been investigated in the general nonlinear setting so far. Milanese [47] studied optimality and convergence properties of least-squares estimates in the presence of unknown bounded disturbance, but their theoretical work is limited to linear problems. This paper sets out to investigate the connections of SMR with both statistical inference and set-membership estimation approaches for nonlinear algebraic models. Another principal contribution is a computational framework to describe tight enclosures of the SMR regions using complete-search methods.
The rest of the paper is organized as follows. Section 2 starts by reviewing classical results from both areas of statistical and setmembership estimation. Section 3 introduces the SMR approach and analyzes its properties, after which numerical solution strategies are developed in Section 4. A simple case study is used throughout Sections 2-4 to illustrate the main concepts and results. Section 5 presents a more challenging estimation problem in microbial growth to demonstrate the SMR approach. Finally, Section 6 concludes the paper and discusses future research opportunities.
Background
Our focus throughout this paper is on explicit models in the form where p ∈ R np is the vector of unknown parameters; and (u, y) ∈ R nu × R ny is the vector of observed variables, denoted collectively by x:=(u, y) ∈ R nx for convenience. Notice that u and y often correspond to (either controlled or uncontrolled) input and output variables, respectively, in a practical setup. It is also worth pointing out that many of the concepts and methods presented herein can be applied to models described by implicit equation systems, such as f(p, x) = 0, and models comprised of differential equations too.
Suppose that n m observations x m k :=(u m k , y m k ) of the input-output variables are available, and assume that all of these observation errors are independent and described by the probability density functions p(·| ) parameterized by . In the error-in-variables approach [6], the reconciled values u 1 , . . ., u nm for the observations are estimated alongside the unknown model parameters p. The joint probability of the prediction-observation mismatch in all data points for the parameter values Â:=(p, u 1 , . . ., u nm ) ∈ R n  is described by the following likelihood function: with ıu k :=u k − u m k and ıy k :=g(p, u k ) − y m k . The error-in-equation approach instead, considers the input measurements u m k to be error-free; that is, the parameter vector  reduces to p, and the likelihood function simplifies to Nonlinear regression in the maximum-likelihood sense seeks to determine values for  in order to maximize L or, equivalently, maximize log L. In the error-in-variables approach, this estimation entails the solution of an optimization problem in the form of If the parameters describing the error distribution are also unknown, one may either approximate their values using an ad hoc estimator, or consider them as additional variables in the problem (3) [1].
In the special case of Gaussian-distributed errors, with zero mean and variance v k,i , the maximum-likelihood problem (3) is equivalent to the following weighted least-squares problem While least-squares ( 2 ) regression is optimal amongst minimumvariance mean-unbiased estimators for normally distributed observation errors, outliers can greatly distort the least-squares estimates. As an alternative, least-absolute-values ( 1 ) fitting may be preferable in the presence of outliers or if little is known about the distribution of the errors [48,49]. The 1 regression problem readŝ where standard tricks can be used to reformulate or approximate the nonsmooth absolute value term in the objective function. The solutions to the 1 regression problem (5) can also be viewed as maximum-likelihood estimates if the observation errors follow the with zero mean and variance v k,i . An ∞ regression problem can be constructed in a similar way [49].
Statistical inference
Classical frequentist confidence inference proceeds in two steps: (i) solve a regression problem, e.g., to determine a mostlikely parameter estimate as described above; and (ii) construct confidence regions around this estimate.
Under the assumption that matches the (unique) 'true' value of the model parameters, both the likelihood subset ratio statistic −2 log[L( | x m )/L( | x m )], and the Wald subset statistic 1 , follow a chi-squared distribution with n  degrees of freedom with an increasing sample size n m → ∞ [4]. 1 The covariance matrix V ∈ S n  ×n  + for the parameters at can be approximated in various ways [50], which are asymptotically equivalent; for instance [1, § 7-5], . where Ve ∈ S nx nm ×nx nm + stands for the covariance matrix of the observation noise, is the Hessian matrix atÂ.
These asymptotic confidence results can be used to obtain (approximate) 100(1 − ˛)% confidence regions, with the usual frequentist interpretation that the probability for a random confidence region to cover the true value of  is, in large samples, equal to 1 − ˛ [24] : • 100(1 − ˛)% likelihood-based confidence region: • 100(1 − ˛)% normal-theory (Wald) confidence region: where 0 ⊆ R n  denotes the allowable (prior) parameter set; and 2 n  (1 − ˛) is the 1 − ˛ quantile of the chi-squared distribution with n  degrees of freedom. At this point, we note that confidence intervals can be inferred from any confidence region by bounding the range of values for each parameter  i . In the case of the Wald approximation, explicit confidence bounds are obtained as A classical result in statistical inference is that the confidence regions (7) and (8) are asymptotically equivalent [51,52], with a convergence rate ∝ n −1 m . However, unlike the likelihood-based confidence regions, the Wald confidence regions are not invariant to a model reparameterization because of the (approximate) covariance term VÂ. Conversely, computing a Wald confidence region is straightforward, whereas describing a likelihood-based confidence region for a nonlinear model is generally a hard task since this region may not be convex or not even simply connected.
Unlike the frequentist view, Bayesian estimation treats the parameters as random variables, whose (posterior) probability distribution, p (Â | x m ) can be inferred from Bayes' theorem, where p(Â) is the so-called prior density of the parameters. Any is called a 100(1 − ˛)% credible set. One particular kind of credible sets is the highest posterior density (HPD) set, given by where ˛i s the largest value for which (10) holds. When a sampling approach is applied to estimate the posterior, for instance a MCMC sampler, the value of ˛c an be estimated from a procedure that examines all available samples of p(Â | x m ) [28]. It is also worth mentioning that complete-search approaches to enclosing credible sets have been proposed as well [53,54]. The connections between Bayesian and non-Bayesian statistical inference have been studied since the 1960s, for instance with regards to matching credible and confidence intervals [55,56]; or, more recently, in order to reconcile Bayesian and frequentist higher-order asymptotic expansions for predictive probability densities [57]. In linear regression problems with normally distributed measurement errors, the Bayesian posterior takes the form of a multivariate Gaussian centered at the maximum-likelihood estimate and with covariance matrix VÂ for non-informative priors, so the HPD credible regions match their frequentist counterparts. More generally, such matching can be made in cases where the Bayesian prior is invariant to model reparameterization, which is the case for Jeffreys or reference priors [58]. For simplicity, our focus in this paper is limited to uniform prior distributions with compact supports. Although such priors fail to be invariant under reparameterization, the resulting HPD sets correspond to contour levels of the likelihood function, similar to the likelihood-based confidence regions.
Set-membership estimation
The usual GPE problem in set-membership estimation seeks to determine a parameter subregion such that the predicted input-output observations are consistent with their matching measurements within given error bounds [32,33], Here the error set E ⊂ R nxnm may be any compact set and does not need a statistical description of the uncertainty. In the usual scenario where independent error bounds ±e u 1 , ±e y 1 , . . ., ±e un m , ±e y nm are given for each of the measurements, the set-membership estimation problem reads If statistical information about the observation error is nonetheless available, for instance a uniform or q-Gaussian probability distribution with compact support, one may take E directly as this support set. Even when the distribution support is not compact, one could decide to exclude those scenarios having a probability lower than a given threshold and use the corresponding HPD credible region as the error set E; see, e.g., [59]. It is not difficult to imagine a situation whereby no parameter value in 0 can be found such that the model predictions are consistent with the observations for a given error set E, i.e., the guaranteed parameter region (12) is empty. This may happen in the presence of measurement outliers, or could be caused by a large model mismatch. The former situation is common with experimental data, e.g., due to a failing or drifting sensor. Methods have been developed for robustifying set-membership estimation against outliers [37,38], alongside classical approaches to detecting outliers [60]. Moreover, one can take advantage of the latter situation, for instance to invalidate candidate models that would present a systematic offset with a certain set of observations [35,36], typically after checking for outliers [38]. Another appeal of set-membership estimation lies in its ability to detect a lack of identifiability in parametric models, that is, when model responses corresponding to distinct parameter values are indistinguishable [9].
The vast majority of computational studies in set-membership estimation uses exhaustive-search techniques based on interval analysis or other set arithmetics to describe the parameter regions (12) [42][43][44]61] . A current bottleneck of these approaches is their applicability to problems having no more than 5-10 parameters. However, if one is ready to abandon guarantees, sampling-based techniques such as SVM or MCMC can be used to approximate the parameter regions, and these remain applicable for black-box models too [40,41].
Illustrative example. We use a simple estimation problem adapted from [3] to illustrate the main approaches described in this background section, and we use the same problem to illustrate the main properties of the SMR framework developed later on in Sections 3 and 4. The model describes the dynamic evolution of biological oxygen demand (BOD), c in a wastewater sample, with parameters ( 1 ,  2 ) ∈ [0, 50] × [0, 2], and time t ≥ 0. For this problem, data points (t m k , c m k ) have been generated by simulating the model (13) for the parameter values  1 = 20 and  2 = 0.5, and corrupting these values with a Gaussian white noise with variance 2 c = 1. These data are reported in Appendix B for the sake of reproducibility.
Both 90% confidence regions and 90% HPD credible regions are compared in Fig. 1, in the case of an 2 -regression problem. Various sets of measurements are considered, namely n m = 4 measurement points (every other day), 8 measurement points (every day), and 16 measurement points (twice a day). The asymptotic convergence of the Wald and likelihood-based confidence regions with an increasing number of measurements is clearly visible. The HPD credible sets shown on these plots are generated from a flat prior, and are consistently smaller than their confidence counterparts; HPD credible sets constructed from a non-informative Jeffreys prior (not shown on the plots) would be identical to the likelihood-based confidence regions.
A comparison between guaranteed parameter regions for the same three sets of measurements, but corresponding to different measurement error sets in (12), is shown in Fig. 2. The first measurement error set corresponds to the usual assumption of independent error bounds on each measurement, here for 90% confidence bounds, so that 2 1 (0.9) 2 c ≈ 2.706. Notice how the corresponding guaranteed parameter sets shrink when more measurements are added, as it becomes more challenging for the model predictions to match a larger measurement set in the presence of measurement noise. Such guaranteed parameter regions could even be empty, which happens for instance with e 2 c k ≤ 1 in (14), corresponding to 68% (1-sigma) confidence bounds. Also notice that the real parameter value (20, 0.5) lies outside the guaranteed regions due to the large measurement noise. The other measurement error set in Fig. 2 is chosen as the HPD set of a joint Gaussian distribution, again for a 90% confidence limit. Guaranteed parameter sets so constructed do not shrink significantly as more measurements are added into the estimation problem, and they are thus more resilient to measurement noise than their counterpart sets constructed with independent error bounds on each measurement. This higher resilience is essentially due to an enlarged, and hence more flexible, measurement error set E 2 compared to E 1 .
Set-membership nonlinear regression
The developed set-membership regression (SMR) approach seeks to describe the subregion R in parameter space enclosing all (global) solutions to a nonlinear regression problem under all possible measurement uncertainty scenarios. Given a bounded uncertainty set E ⊂ R nxnm on the observation errors, the SMR region R is mathematically defined as In the context of the 2 -regression problem (4), SMR specializes to ∃ e u 1 , e y 1 , . . ., e un m , e y nm ∈ E : and in the context of the 1 -regression problem (5), to ∃(e u 1 , e y 1 , . . ., e un m , e y nm ) ∈ E : Notice that the constraint feasibility condition in the traditional SME formulation (12) is replaced with an optimality condition in the SMR problem (16) , making the parameter regions in SMR expectedly more difficult to characterize. Numerical solution strategies for describing enclosures of an SMR region are presented later on in Section 4. The remainder of this section investigates connections between SMR and the well-established set-membership and statistical inference approaches, respectively in Sections 3.1 and 3.2.
Set-membership interpretation
By contrast with the usual approach to set-membership estimation (Section 2.2), SMR comes with a guarantee that the set R is always non-empty, no matter how large the model mismatch or the observation errors might be, since the regression problems in (16) are all feasible by construction. Therefore, the SMR formulation is inherently resilient to the presence of outlying observations, and it does not need for such outliers to be detected or removed from the observation set before computing the parameter regions [38]. In other words, the outlying observations can be dealt with directly into the SMR problem (16) via an appropriate likelihood function.
The following inclusion result holds between SMR and GPE under mild assumptions: Theorem 1. Suppose that the probability density functions p(·| ) participating in the likelihood function (1) are all maximal at 0. Then, for a given error set E, the SMR region (16) contains the GPE region for some e:=(e u 1 , e y 1 , . . ., e un m , e y nm ) ∈ E. Since the probability density functions p(·| ) in L are all maximal at 0 by assumption, the log-likelihood function log L( · | x m + e) is (globally) maximal at Â, and therefore  ∈ R . ᮀ Remark 1. The assumption on the likelihood function L in Theorem 1 is not very restrictive in practice. For instance, it is satisfied by both 2 -and 1 -regression problems in (17) and (18), so we have G ⊆ 2 R and G ⊆ 1 R . It is also satisfied when the probability density functions are uniform on a compact support, as is the case with ∞ -regression problems [49].
Illustrative example (continued). A comparison between GPE and SMR regions for both 1 -and 2 -regression is presented in Fig. 3, in the case of 8 measurements. The same measurement error sets E 1 and E 2 as introduced earlier in (14) and (15) are used in this comparison. For simplicity, we have applied a simple sampling procedure to inner-approximate the SMR regions: 20,000 error vectors e (i) c are generated within the multi-dimensional error sets E 1 and E 2 , here using Sobol quasi-random sampling; then, the following nonlinear regression problem is solved to global optimality to obtain a corresponding point We start by noting that the inclusion result in Theorem 1 is indeed satisfied for both measurement error sets and both regression types. Moreover, the SMR regions obtained for either measurement error sets are comparable in size. In the case of independent error bounds on the measurements (set E 1 , left plot), the SMR regions do not shrink much when more measurements are added, which is unlike the corresponding GPE regions; compare Fig. 2. This also illustrates the higher resilience of SMR to noisy or outlying measurements than GPE. For both measurement error sets, the SMR-2 regions are consistently smaller than their SMR-1 counterparts. Interestingly, this observation is consistent with the classical Gauss-Markov theorem stating that the least-squares estimator provides the estimator with lowest variance in linear regression.
Statistical interpretation
Whenever statistical information is available for the observation errors, for instance in the form of a joint probability distribution, one may choose the error set E as the corresponding HPD region for a given credibility level 1 − ˛. In the case of independent and Gaussian-distributed observation errors, such as those leading to the 2 -regression problem (4), the 100(1 − ˛)% HPD region is given by with the diagonal error covariance matrix V e :=diag(v u 1 , v y 1 , . . ., v un m , v y nm ). Likewise, for Laplacian distributed errors as in the 1 -regression problem (5), the 100(1 − ˛)% HPD region comes in the form where nxnm (1 − ˛) is the counterpart of the chi-squared value for a joint Laplacian distribution. Notice that with the error sets in (19) and (20), the SMR regions R may not converge to a singleton (or a finite set) as more observations are added into the regression problem, since the HPD limits 2 nxnm (1 − ˛) and nxnm (1 − ˛) are themselves increasing with n m for a given confidence level 1 − ˛. The SMR regions derived from such error sets are thus unrelated to their confidence and credible region counterparts in classical statistical inference (Section 2.1), which are both shrinking to a singleton as n m → ∞ (under certain regularity conditions). But while one would indeed expect convergence to some 'true' parameter value when a model's structure is correct, such an idea of 'true' parameter values becomes meaningless in the presence of structural model mismatch. By contrast, SMR does not make any assumption about the correctness of a model's structure, and a 100(1 − ˛)% SMR region is comprised of those parameter values which are equally credible under the observation error set E, in the sense of the regression problem at hand: a clear and unambiguous statistical interpretation.
To sum up, convergence of an SMR region R to a singleton is dependent on the choice of the measurement error set E, but is unrelated to whether or not the model's structure is correct. A follow-up question then is identifying scenarios under which SMR regions would be asymptotically equivalent to classical confidence regions. The following result establishes one simple connection with the Wald confidence regions (8) under certain regularity conditions.
Theorem 2.
Let the error set in the SMR problem (16) be given by for some confidence level 1 − ˛, and covariance matrix V e ∈ S nxnm×nxnm + . Assume that the likelihood function in (16) is twice continuously differentiable and the regression problems for e ∈ E all have a unique, strict global optimum. Then, the SMR region R is asymptotically equivalent to the 100(1 − ˛)% Wald confidence region W in (6) and (8), where d H is the Hausdorff metric.
Proof. Let e ∈ E, and denote by Â(e) ∈ R the corresponding solution to the regression problem max  log L( | x m + e), so that withĤ: . The image of the error set (21) under the affine transformation (23) is an ellipsoid with center and shape matrix V as in (6), so that  ∈ W . Conversely, let  be any point in W , and let e be any point in E satisfying (23). Clearly, the point Â(e) ∈ R is such that Â(e) −  ∈ O(diam(E) 2 ) by (22). ᮀ
Remark 2.
In the special case of a linear regression, the equivalence between the SMR and Wald confidence regions in Theorem 2 turns out to be exact, not merely asymptotic. For an 2 -regression and the model y = F Â, we have which matches the likelihood-ratio confidence region L (2), as well as the Bayesian's HPD credible region B (11) for a uniform/non-informative prior. Both the frequentist and Bayesian inference regions are thus implied by the SMR framework in linear regression problems.
Remark 3.
The key difference between the error set (21) in Theorem 2 and the 100(1 − ˛)%-HPD region (19), is that the HPD limit in the former, namely 2 n  (1 − ˛), is independent of the number of observations. This is also the reason why the error set (21) shrinks to the origin, and therefore R converges to the singleton set {Â} as n m → ∞ (under the assumptions of Theorem 2). Conversely, a 100(1 − ˛)% confidence region may be regarded as the asymptotic equivalent to an SMR region with the confidence level 100(1 − ˇ)% on the jointly Gaussian-distributed observation errors in (19) such that 2 nxnm (1 − ˇ) = 2 n  (1 − ˛). For instance, a 90%-confidence region in a two-parameter regression problem is asymptotically equivalent to an SMR region with 67%, 20% and 0.26% joint confidence for 4, 8 and 16 observations, respectively. Illustrative example (continued). A comparison between 90% likelihood-ratio confidence regions and two SMR-2 regions corresponding to different measurement error sets is shown in Fig. 4, in the case of 4 and 8 measurement points. The SMR regions are innerapproximated using the same sampling strategy as previously.
The first error sets correspond to 90% HPD regions in (19) for the jointly Gaussian-distributed measurement errors (or, equivalently, the set E_ℓ2 in (15)). These SMR regions are found to be significantly larger than their 90% likelihood-ratio (or Wald) confidence counterparts. Also recall that, by Theorem 1, these SMR regions always enclose the GPE regions shown in Fig. 3 for the same error sets E_ℓ2.
The second error sets are constructed per (21) in order to illustrate the asymptotic equivalence with classical confidence regions established through Theorem 2; they correspond to 67% and 20% HPD regions for jointly Gaussian-distributed measurement errors with 4 and 8 measurements, respectively, as discussed in Remark 3. The asymptotic convergence with an increasing number of measurements is clearly visible in Fig. 4, where the small discrepancy observed on the left plot for n_m = 4 can no longer be seen on the right plot for n_m = 8. The SMR framework is thus capable of providing confidence information equivalent to that of classical statistical inference, with the attendant advantage of being able to switch seamlessly between alternative error set descriptions or likelihood functions.
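The confidence-level matching of Remark 3 can be reproduced in a few lines. The sketch below, with α, n_θ and the observation counts taken from the Remark and SciPy assumed available, recovers the quoted 67%, 20% and 0.26% joint confidence levels:

```python
# Sketch verifying Remark 3: choose beta such that
# chi^2_{n_x n_m}(1 - beta) = chi^2_{n_theta}(1 - alpha).
from scipy.stats import chi2

alpha, n_theta = 0.10, 2               # 90% region, two parameters
limit = chi2.ppf(1 - alpha, n_theta)   # chi^2_2(0.9)
for n_obs in (4, 8, 16):               # n_x * n_m with n_x = 1
    joint = chi2.cdf(limit, n_obs)     # 1 - beta solving the matching condition
    print(n_obs, round(100 * joint, 2))
# -> 66.97, 20.12, 0.26 (%), matching the values quoted in Remark 3
```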
Numerical solution and approximation
Describing the SMR region R as defined in (16) is a difficult task in general. A simple approach to enclosing R by a set of algebraic constraints, which would then allow the application of the same set-inversion techniques as for GPE (Section 2.2; Appendix A), entails substituting the regression problems by their optimality conditions. Since every element θ in (the interior of) R should satisfy the first- and second-order optimality conditions (24) for some observation error e ∈ E, the corresponding inclusion follows. However, since the optimality conditions (24) hold for both local and global maxima of the likelihood function, as well as saddle points, this inclusion could end up being very conservative for nonlinear regression problems in general. Another important caveat with this approach is the computational penalty of applying a set-inversion algorithm in the (n_θ + n_x n_m)-dimensional domain Θ₀ × E, not merely in the original n_θ-dimensional domain Θ₀. The following subsections set out to develop more tractable, yet still conservative, bounding strategies to alleviate the computational burden of SMR, both in the form of confidence-like regions (Section 4.1) and polyhedral regions (Section 4.2).
Likelihood-contour enclosure
We consider the problem of enclosing the SMR region R within a confidence-like region R̄(γ) of the form (25), for some constant γ ≥ 0. Notice that the computational complexity of describing, or closely approximating, the relaxed region R̄(γ) is then comparable to describing either a likelihood-based confidence region (7) or a GPE region (12), for instance by applying a set-inversion algorithm in the original n_θ-dimensional domain Θ₀.
The following theorem provides a systematic means of computing a value γ* such that R̄(γ*) is a tight enclosure of R, upon specializing φ(θ) := log L(θ | x_m). This situation is depicted in Fig. 5.
Theorem 3. Given any continuous function φ : Θ₀ → ℝ, the SMR region R is enclosed within the level set R̄(γ*) of φ, where the threshold γ* is given by the semi-infinite program (SIP) in (27).
Moreover, the enclosure with γ* is tight in the sense that the two sets share one or more boundary points.
Solving this SIP problem is hard in general, since both the semi-infinite constraint and the objective function are generally nonconvex for a nonlinear regression problem. Existing solution approaches to SIP rely on one of two key ideas [62,63]. In local reduction methods, a semi-infinite constraint is represented locally by a finite number of instances of the constraint, upon invoking the implicit function theorem. Alternatively, discretization (and exchange) methods replace the uncertain parameter set with a finite discretization so as to create a relaxation of the SIP, and then iteratively refine this discretization until convergence. The focus in the remainder of this paper is on the second type of methods, for which global optimality certificates can be provided upon solving the nonlinear programming (NLP) subproblems to global optimality using complete search methods [64-66].
More specifically, we apply the cutting-plane SIP algorithm by Blankenship and Falk [67] in order to construct a sequence of decreasing upper bounds γ_k on the threshold γ* given by (27); that is, we construct an inclusion sequence R̄(γ_k) ⊇ R̄(γ*) ⊇ R. Within the SMR framework, this algorithm entails an iteration between: (i) the finite-dimensional nonlinear programming (NLP) subproblems (28), where Θ₀ᵏ := {θ₀, …, θ_k} is a finite subset of Θ₀; and (ii) the feasibility subproblems (29). The subset Θ₀ᵏ at iteration k = 1 may be initialized as the empty set or, better, as a singleton set with the maximum-likelihood estimate θ̂ (see Section 2).
Under the assumptions that the likelihood function L is jointly continuous in (θ, e) and that the parameter set Θ₀ and the error set E are both compact, any accumulation point of the sequence {γ_k} corresponds to the best possible bound γ* in (27) [67, Theorem 2.1]. In practice, the iterations may be interrupted when the termination criterion (30) is satisfied for a given tolerance ε > 0. Naturally, such a convergence property of the cutting-plane algorithm hinges on the ability to solve all of the nonconvex subproblems (28) and (29) to global optimality. Otherwise, the resulting threshold values γ* could be underestimated, leading to likelihood contours that exclude parts of the corresponding SMR regions. The practical applicability of this approach may thus be hindered by its computational complexity. One way to expedite convergence of the cutting-plane algorithm is via the addition of redundant constraints, namely constraints that do not alter the optimal solution set of the SIP (27) yet tighten the relaxations in (28); see, e.g., [68,69] for more details about KKT-based tightening in SIP. Provided that the likelihood function is sufficiently smooth, one can add the first- and second-order optimality cuts (24) as redundant constraints in the subproblem (28), giving the tightened subproblem (31) with iterates (θ_k, e_k). Given that most NLP solvers do not currently support constraints in the form of linear matrix inequalities (LMI), one can always substitute the LMI constraint in (24) with an equivalent algebraic formulation. In the case that none of the regression problems max_{θ∈Θ₀} log L(θ | x_m + e) have local (suboptimal) solutions for any e ∈ E, enforcing the semi-infinite constraint in (27) is of course equivalent to satisfying the optimality conditions (24), and so the cutting-plane algorithm will trivially terminate after a single iteration. Otherwise, the intermediate solution points θ_k to the NLP subproblems (31) might correspond to local optima of the regression problems for e_k ∈ E, and the algorithm thus keeps iterating by adding cutting planes until all of these local optima have been excluded. At this point, satisfying both the discretized semi-infinite and optimality constraints in (31) becomes equivalent to enforcing the original semi-infinite constraint in (27), and the algorithm will then terminate exactly (optimality gap equal to 0 in (30)) at the next iteration. This behavior will be illustrated for the case study problem in Section 5.1.
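For concreteness, here is a minimal sketch of the Blankenship-Falk iteration in the generic SIP form min_x f(x) s.t. g(x, y) ≤ 0 for all y ∈ Y, alternating the discretized NLP (cf. (28)) with a separation problem (cf. (29)) until the termination test (cf. (30)) passes. Local SciPy solvers stand in for the global solvers the text calls for, and the toy functions f, g and the sets below are illustrative placeholders, not the SMR subproblems themselves:

```python
# A minimal sketch of the Blankenship & Falk cutting-plane loop, assuming
# SciPy; local solvers replace the global ones (BARON/ANTIGONE) required
# in the text, so no optimality certificate is implied.
import numpy as np
from scipy.optimize import minimize

def blankenship_falk(f, g, x0, y0, y_bounds, tol=1e-6, max_iter=50):
    Y_disc = [np.asarray(y0, dtype=float)]      # initial discretization of Y
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # (i) relaxed NLP over the current finite discretization of Y
        cons = [{"type": "ineq", "fun": lambda x, y=y: -g(x, y)}
                for y in Y_disc]
        x = minimize(f, x, constraints=cons).x
        # (ii) separation problem: find the most violated scenario y
        sep = minimize(lambda y: -g(x, y), Y_disc[-1], bounds=y_bounds)
        if -sep.fun <= tol:                     # termination test
            return x, Y_disc
        Y_disc.append(sep.x)                    # add a cutting plane
    return x, Y_disc

# toy instance: min x s.t. x >= sin(y) for all y in [0, pi]
x_opt, cuts = blankenship_falk(
    f=lambda x: x[0],
    g=lambda x, y: np.sin(y[0]) - x[0],
    x0=[0.0], y0=[0.1], y_bounds=[(0.0, np.pi)])
print(x_opt)   # ~1.0
```

On this toy instance the separation step adds the single worst-case scenario y = π/2, after which the loop terminates at the next iteration, mirroring the exact-termination behavior described above.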
Polyhedral enclosure
Applying a set-inversion approach to describe (an enclosure of) the SMR region R can prove computationally expensive, if at all tractable, especially for the estimation problems encountered in real-life situations. A computationally less demanding task entails the computation of a simple (axis-aligned) box enclosure for an SMR region, for instance by solving the pair of optimization problems (32) for each parameter θ_i, i = 1…n_θ. Clearly, these bounds may be computed by applying a similar cutting-plane algorithm as in Section 4.1 above, whereby the discretization subproblem (28) is now replaced with its min/max counterpart for the iterates (θ_k, e_k), and possibly supplemented with the redundant optimality cuts (24) as in (31).
As an alternative to the direct solution of the SIP problems in (32), one can also use the likelihood-contour enclosure R̄(γ*) in (25), with the threshold γ* from (27), in order to construct an NLP relaxation of the SIP problem. A conservative box enclosure can be computed in this way by solving the auxiliary (potentially nonconvex) NLP problems (33). Of course, the presence of several disconnected subsets in an SMR region cannot be detected by a simple box enclosure, and information about correlations between the parameters θ_i in the actual SMR region is also lost. Part of this information could nonetheless be recovered by constructing a polyhedral enclosure of the SMR region, e.g., expressed in the form (34) for a set of vectors n_1, …, n_m ∈ ℝ^{n_θ} and scalars δ_1, …, δ_m and δ̄_1, …, δ̄_m ∈ ℝ. Specializing the function φ(θ) := n_kᵀ θ in Theorem 3 provides a means of constructing such non-axis-aligned polyhedral cuts. Herein, the directions n_k are chosen in such a way that the cuts correspond to a (face or interior) diagonal of the box enclosure [θ̲, θ̄], with κ ∈ {−1, 0, 1}^{n_θ} and |κ| = |κ₁| + … + |κ_{n_θ}| ≥ 2. Further, the limits δ_k, δ̄_k in (34) such that the polyhedral cuts are tight can be computed via the solution of the auxiliary SIP problems (36),
possibly supplemented with the redundant optimality cuts (24) once again. Similar to the box enclosure (33) earlier, conservative yet computationally less demanding polyhedral cuts could be derived from the likelihood-contour enclosure R̄(γ*) by solving the auxiliary NLP problems (37). Notice that the spans (δ̄_k − δ_k) are bounded in [0, 1] by construction. The case of a 2-dimensional face diagonal, where κ_i = κ_j = 1 are the only nonzero elements in (35), is shown in Fig. 5 for illustration. Enumerating all such pairs of parameters (θ_i, θ_j), 1 ≤ i < j ≤ n_θ, calls for the solution of 2 n_θ(n_θ − 1) auxiliary optimization problems. More generally, with |κ| ≥ 2 nonzero elements in the vector κ, the number of optimization problems is equal to 2^{|κ|} · C(n_θ, |κ|). To manage this high combinatorial complexity when the number of parameters n_θ is high, it is of course possible to include only those cuts involving combinations of |κ| = 2 or 3 parameters, at the price of a more conservative polyhedral enclosure.

A simple way of detecting correlations among any parameter pair (θ_i, θ_j), 1 ≤ i < j ≤ n_θ, is by calculating the shortest-to-longest ratio between the spans (δ̄_k − δ_k) obtained with κ_i = κ_j = 1 on the one hand, and κ_i = −κ_j = 1 on the other hand. A ratio close to 0 indicates an elongated set projection onto (θ_i, θ_j) in one of the diagonal directions, and therefore a large correlation between θ_i and θ_j; whereas a ratio close to 1 indicates a more spherical set projection onto (θ_i, θ_j). This approach is the counterpart of the shortest-to-longest axis ratio in an ellipsoidal (Wald) confidence region, which is also the basis for the so-called modified E-optimality criterion in experimental design [11]. More generally, shortest-to-longest-span ratios could be computed with |κ| > 2 in order to likewise unravel correlations among more than 2 parameters. Other classical criteria, such as the A-optimality and D-optimality criteria, also have counterparts in the SMR framework, given by the sum of all the parameter ranges θ̄_i − θ̲_i for i = 1…n_θ and the volume of the polytope (34), respectively. To conclude this subsection, it is worth mentioning that the construction of such polyhedral enclosures is also relevant to the approximation of classical inference regions, for instance the likelihood-ratio confidence regions (7).
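A small numerical sketch of the span-ratio diagnostic may help fix ideas. Assuming NumPy and synthetic, strongly correlated samples standing in for points of an enclosure, it normalizes the parameters to the unit box (so the diagonal spans stay within [0, 1]) and compares the spans along the two face diagonals:

```python
# Sketch of the shortest-to-longest-span diagnostic; the samples are
# synthetic stand-ins for points of an SMR enclosure.
import numpy as np

rng = np.random.default_rng(0)
base = rng.uniform(size=(500, 1))
theta = np.column_stack([base, 0.9 * base]) + 0.05 * rng.normal(size=(500, 2))

u = (theta - theta.min(axis=0)) / (theta.max(axis=0) - theta.min(axis=0))
spans = []
for kappa in (np.array([1.0, 1.0]), np.array([1.0, -1.0])):
    proj = u @ (kappa / np.abs(kappa).sum())   # L1-normalized direction
    spans.append(proj.max() - proj.min())      # span of the diagonal cut
print(min(spans) / max(spans))                 # close to 0 -> strong correlation
```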
Illustrative example (continued). Various enclosures of SMR-ℓ2 regions are presented in Fig. 6 for the BOD case study, here with either 4 or 8 measurement points. The measurement error set E is constructed based on (21) at the confidence level 1 − α = 0.9. The threshold values γ* in the likelihood-contour enclosures R̄(γ*) (25) are computed using the cutting-plane SIP algorithm described in Section 4.1, with first-order optimality cuts as in the discretized subproblem (31). When the subsets Θ₀ᵏ are initialized with the corresponding maximum-likelihood estimate θ̂, the cutting-plane algorithm finds the exact solutions γ* during the first iteration, irrespective of the number of measurement points. Even for such a simple estimation problem though, the solution of the discretized subproblem (31) to global optimality using GAMS-BARON proves computationally challenging as the number of measurements increases, here taking 404 CPU-sec for 4 measurement points, 3810 CPU-sec for 8 measurement points, and failing to close the gap within 7200 CPU-sec for 16 measurement points. The GAMS code is provided as part of the Supplementary Information (see Appendix C) for the sake of reproducibility. The likelihood-contour enclosures R̄(γ*) are found to provide a very close approximation of the SMR-ℓ2 regions in Fig. 6; these enclosures are computed using the set-inversion algorithm described in Appendix A. This is expected given the fast convergence between the SMR and likelihood-based confidence regions already observed in Fig. 4, and confirmed by the comparison in Table 1 between the thresholds defining these two confidence regions.

Fig. 6. Comparison of outer-approximation strategies to enclose the SMR-ℓ2 regions for the BOD example with 4 (left) and 8 (right) measurement points: enclosures based on likelihood-contour cuts (25) and polyhedral cuts (34). The error set E is constructed based on (21) at the confidence level 1 − α = 0.9. The triangles represent the real parameter values.

Table 1. Comparison between the thresholds γ* and log L(θ̂ | x_m) − ½ χ²₂(0.9) corresponding to the SMR-ℓ2 region (16) and the likelihood-based confidence region (7), respectively, for the BOD example with 4, 8 and 16 measurement points.
For simplicity, the polyhedral cuts in Fig. 6 are constructed from the likelihood-contour enclosures R̄(γ*) rather than from the actual SMR-ℓ2 regions R. The numerical solution of the auxiliary NLP subproblems (33) and (37) to global optimality using GAMS-BARON is fast in comparison with the SIP problems, taking <1 CPU-sec.
Finally, the shortest-to-longest-span ratios in the polyhedral enclosures of the SMR-ℓ2 regions for 4, 8 and 16 measurement points are 0.284/0.965 ≈ 0.294, 0.249/0.970 ≈ 0.257 and 0.256/0.967 ≈ 0.265, respectively. These small ratios (compared to 1) indicate that the SMR regions are 3- to 4-times flatter in one direction than in the other, which unravels the presence of a strong correlation between θ₁ and θ₂ in (13), in agreement with the visual impression in Fig. 6.
Case study in temperature-dependent microbial growth
We now apply the SMR framework to a more challenging estimation problem in microbial growth, emphasizing its properties and drawing comparisons with other set-membership and statistical inference methods. Two models describing the effect of culture temperature T on the growth rate μ of a microbial population, each comprising four parameters, are considered:

(i) The Ratkowsky model [71]:

√μ = b (T − T_min) [1 − exp(c (T − T_max))] ,

where T_min and T_max (K) represent the minimal and maximal temperatures, respectively, while b (K⁻¹ h^0.5) and c (K⁻¹) are extra parameters adding flexibility to the shape of the growth model.

(ii) The cardinal temperature model [72]:

μ = μ_opt (T − T_max)(T − T_min)² / { (T_opt − T_min) [ (T_opt − T_min)(T − T_opt) − (T_opt − T_max)(T_opt + T_min − 2T) ] } ,

where T_min and T_max (K) also represent the minimal and maximal temperatures, respectively; T_opt (K) corresponds to the optimal growth temperature; and μ_opt (h⁻¹) is the maximal growth rate attained at T_opt.

Experimental data used in the regression are from [71] for the bacterium E. coli. This data set comprises 15 measurement pairs (T_k, μ_k) within the temperature range 294-320 K, and it is reproduced in Appendix B for completeness. The standard deviation of the growth rate measurements is taken as σ = 0.1 h⁻¹ throughout. Results of a maximum-likelihood estimation with constant-variance and Gaussian-distributed errors (or, equivalently, a standard least-squares regression) are presented in Fig. 7. Both model predictions are found to be in good agreement with the experimental data, yet with a higher likelihood for the cardinal temperature model. Note also that errors are only taken into account for the growth rate measurements (outputs) herein, i.e. the temperature measurements (inputs) are considered to be exact.
Computational procedure and performance
For both candidate models we use the cutting-plane SIP algorithm of Section 4.1 to compute the threshold values γ* (27), and we describe tight likelihood-contour enclosures R̄(γ*) (25) of the SMR regions using a set-inversion algorithm (see Appendix A) in turn. We apply a similar cutting-plane SIP algorithm to determine the box and polyhedral enclosures based on (32) and (36) with |κ| = 2, as described in Section 4.2. First-order optimality cuts are added in the discretized NLP subproblems for all of the SIP problems, as in (31), in order to expedite the convergence of the cutting-plane algorithm, and the sets Θ₀ᵏ are initialized with the maximum-likelihood estimates at iteration k = 1. All of the NLP subproblems in the SIP algorithm are solved with the global solver GAMS-BARON; these GAMS codes are provided as part of the Supplementary Information (see Appendix C) for reproducibility. Lastly, the set-inversion computations are carried out using our in-house library CRONOS [44], which is available from https://github.com/omega-icl/cronos.
In the case of the cardinal temperature model, a single iteration is needed to solve all of the SIP problems exactly (optimality gap equal to 0 in (30)). This behavior indicates that none of the regression problems for this model exhibit local, suboptimal solutions for the measurement error sets of interest in Sections 5.2 and 5.3 below. In the case of the Ratkowsky model, the SIP problems are also solved exactly, but the cutting-plane algorithm terminates after several iterations due to the presence of local optima; for instance, computing the solution value γ* of (27) for the SMR problem in Section 5.2 takes 10 iterations to terminate, as shown in Fig. 8.
Even though the cutting-plane SIP algorithms terminate exactly (after a single iteration or several iterations), certifying global optimality for most discretized NLP subproblems is currently intractable with the state-of-the-art global solvers BARON [73] and ANTIGONE [66]. As already discussed in Section 4.1, this lack of guarantees could result in the likelihood contours or polyhedral cuts excluding parts of the actual SMR regions. The odds of missing a global optimum in a discretized NLP subproblem are nonetheless mitigated by letting BARON or ANTIGONE run up to a time limit of 7200 CPU-sec here.
SMR with jointly Gaussian-distributed errors
We consider SMR-ℓ2 regions, where the error set E corresponds to the HPD region of a joint Gaussian distribution, as in (21). In order to draw on the asymptotic equivalence with a 95% confidence region in classical frequentist inference (Theorem 2), we select a 15% HPD region for the joint Gaussian distribution of the measurement errors here (see Remark 3). The likelihood-contour and polyhedral enclosures of these SMR regions are compared in Figs. 9 and 10 for the Ratkowsky and cardinal temperature models, respectively. The results from a random sampling, which lie inside the actual SMR regions, are also shown on these plots.
Since the polyhedral cuts are tight by construction (Theorem 3), the seemingly large discrepancy between these cuts and the sampled SMR regions in Figs. 9 and 10 is mainly attributable to the sampling not being sufficiently exhaustive. Moreover, the comparisons between the polyhedral and likelihood-contour enclosures on these figures show that the conservatism introduced by the latter remains small for both models in the present case of jointly Gaussian-distributed measurement errors.
Reported above each plot in Figs. 9 and 10 are the shortest-to-longest-span ratios in the polyhedral enclosure for the various parameter pairs (see Section 4.2). With the Ratkowsky model, all of these ratios happen to be smaller than 0.4, and even lower than 0.25 for the parameter pair (T_min, b), thereby suggesting strong correlations in the parameter set (T_min, T_max, b, c). With the cardinal temperature model, by contrast, most of the ratios are close to or above 0.5, suggesting much weaker correlations amongst the parameters (T_min, T_max, T_opt, μ_opt). Moreover, the SMR intervals for the parameters T_min and T_max, which participate and share the same interpretation in both models, are much larger for the Ratkowsky model than for the cardinal temperature model. On the basis of these results, a modeler would normally retain the cardinal temperature model over the Ratkowsky model.
Although the Wald confidence ellipsoids in Figs. 9 and 10 differ significantly from the SMR region enclosures, similar conclusions can nonetheless be drawn with regard to parameter precision and correlation for both the Ratkowsky and cardinal temperature models, based on the main axes of the projected Wald ellipsoids. One can also compare the threshold of a likelihood-contour enclosure with its likelihood-based confidence region counterpart (7): for the Ratkowsky model, we find γ* ≈ 5.9 and a comparable value for the likelihood threshold log L(θ̂ | x_m) − ½ χ²(0.95). The two thresholds being quite close to each other for both models provides yet another illustration of the asymptotic equivalence between classical statistical inference approaches and SMR for such choices of the error set (viz. Section 3.2). Also notice the higher likelihood threshold of the cardinal temperature model compared with the Ratkowsky model, which provides yet another indication of a much more confident estimation.
SMR with independently-distributed errors
We consider alternative SMR-ℓ2 regions, where the error set E now comprises independent, 1-sigma error bounds on the measurements,

E := { e ∈ ℝ¹⁵ : |e_{μ,k}| ≤ σ, ∀k = 1…15 } .    (38)
Similar to the jointly Gaussian-distributed case in Section 5.2 above, we compare various approximations of such an SMR region for the cardinal temperature model in Fig. 11; namely, the tight likelihood-contour and polyhedral enclosures, and an inner-approximation using a random sampling. Despite the error set E in (38) now being significantly different from a Gaussian HPD region, the polyhedral enclosures turn out to be comparable in shape and size to those in Fig. 10, and the shortest-to-longest-span ratios for the various parameter pairs are similar too. The likelihood-contour enclosure in Fig. 11 also describes a rather close approximation of the SMR region, albeit proving more conservative than in the jointly Gaussian-distributed case of Fig. 10. A similar behavior is obtained for the Ratkowsky model (results not shown).
In addition to the SMR region approximations, Fig. 11 displays the guaranteed parameter region as given by (12), for the same error set (38). One can check that the inclusion property established in Theorem 1 holds. The guaranteed parameter region turns out to be much smaller than the SMR region here, due both to the model mismatch and to an underestimation of the measurement noise. For the Ratkowsky model, the guaranteed parameter region even happens to be empty for these data and error sets. Therefore, unlike SMR regions, guaranteed parameter regions do not provide a reliable means of detecting parameter correlations in the present case.
Conclusions and future research directions
This paper has introduced set-membership regression (SMR), a new approach to parameter estimation which seeks to determine the subregion in parameter space enclosing all (global) solutions to a nonlinear regression problem subject to uncertain observations. An SMR region is thus understood as comprising those parameter values that are equally credible under the selected observation error set, in the sense of that regression problem. In particular, this interpretation is not conditional upon the model's structure being correct. Another distinctive feature of SMR is its ability to consider likelihood functions and error sets other than those corresponding to jointly Gaussian-distributed errors, including least-absolute-error (ℓ1) regression, and independent error distributions or simple error bounds when the underlying statistics are unknown.
In a bounded-error context, SMR provides a means of robustifying existing guaranteed parameter estimation methods. By drawing on the principles of maximum likelihood estimation, an SMR region encloses the corresponding guaranteed parameter set and, unlike the latter, it cannot become empty in the presence of large model mismatch or measurement errors and outliers. From a statistical inference viewpoint, SMR has been shown to be asymptotically equivalent to the Wald confidence regions for specific choices of the measurement error set. It will be important to keep developing the underlying SMR theory as part of future work, so as to better grasp the links with both frequentist and Bayesian statistical inference analysis.
Another important contribution of this paper is a computational framework for describing tight enclosures of the SMR regions, in the form of likelihood-contour and polyhedral enclosures. These enclosures can be described via the solution of auxiliary optimization problems, which are typically nonconvex and embed semi-infinite constraints. While tractable in principle using global optimization techniques based on complete search, our experience with such optimization problems is that they challenge state-of-the-art global optimization solvers such as BARON or ANTIGONE, even for small-scale estimation problems, as exemplified with the BOD and microbial growth case studies. Tackling larger-scale problems, including error-in-variables formulations, is a clear call for improved global search techniques, e.g., by exploiting problem structures or creating redundancy to strengthen the relaxations, or by combining with effective heuristics to increase the likelihood of finding a solution early on during the search [65].
One straightforward extension of the SMR methodology includes parameter estimation problems with other sources of uncertainty than just measurement errors. In principle, any set of nuisance parameters could be accounted for in the regression framework based on a description of the corresponding uncertainty set, similar to the measurement error set E in (16).
Lastly, it is worth reiterating that the SMR framework can be extended to parameter estimation in dynamic systems too. The main bottleneck in doing so is of computational rather than conceptual nature, since limited work has been published to date on SIP with differential equations embedded [74]. For instance, applying the cutting-plane SIP algorithm of Section 4 to the dynamic case should rely on efficient complete-search methods for global optimization and constraint satisfaction in dynamic optimization problems [44,75,76].
Data statement
No new data was collected in the course of this research.
Missing dust signature in the cosmic microwave background
I examine a possible spectral distortion of the Cosmic Microwave Background (CMB) due to its absorption by galactic and intergalactic dust. I show that even a subtle intergalactic opacity of $1 \times 10^{-7}\, \mathrm{mag}\, h\, \mathrm{Gpc}^{-1}$ at the CMB wavelengths in the local Universe causes non-negligible CMB absorption and a decline of the CMB intensity, because the opacity steeply increases with redshift. The CMB should be distorted even during the epoch of the Universe defined by redshifts $z<10$. For this epoch, the maximum spectral distortion of the CMB is at least $20 \times 10^{-22} \,\mathrm{Wm}^{-2}\, \mathrm{Hz}^{-1}\, \mathrm{sr}^{-1}$ at 300 GHz, which is well above the sensitivity of the COBE/FIRAS, WMAP or Planck flux measurements. If the dust mass is considered to be redshift dependent, with noticeable dust abundance at redshifts 2-4, the predicted CMB distortion is even higher. The CMB would be distorted also in a perfectly transparent universe due to dust in galaxies, but this effect is lower by one order than that due to intergalactic opacity. The fact that the distortion of the CMB by dust is not observed is intriguing and questions either the opacity and extinction law measurements or the validity of the current model of the Universe.
INTRODUCTION
Observations of the cosmic microwave background (CMB) based on the rocket measurements of Gush, Halpern & Wishnow (1990) and FIRAS on the COBE satellite (Mather et al. 1990; Fixsen et al. 1996) proved that the CMB has an almost perfect thermal blackbody spectrum with an average temperature of T = 2.728 ± 0.004 K (Fixsen et al. 1996). The accuracy was improved using the WMAP data, which yielded an average temperature of T = 2.72548 ± 0.00057 K (Fixsen 2009). The observed tiny large-scale variations of the CMB temperature of ±0.00335 K are attributed to the motion (including rotation) of the Milky Way relative to the Universe (Kogut et al. 1993). The small-scale variations of ±300 μK traced, for example, by the WMAP (Bennett et al. 2003; Hinshaw et al. 2009; Bennett et al. 2013), ACBAR (Reichardt et al. 2009) and BOOMERanG (MacTavish et al. 2006) instruments using angular multipole moments are attributed to basic properties of the Universe, such as its curvature or the dark-matter density (Spergel et al. 2007; Komatsu et al. 2011).
Since the CMB as a relic radiation of the big bang experienced different epochs of the Universe, it interacted with matter of varying physical and chemical properties. Distortions of the CMB due to this interaction comprise the μ-type (at z ≳ 10⁵) and y-type (at z ≳ 10⁴) distortions related to photon-electron interactions, distortions produced by the reionized IGM, and the presence of galactic and extragalactic foregrounds (Wright 1981; Chluba & Sunyaev 2012; De Zotti et al. 2016). The foreground contamination of the CMB due to diffuse emission of intergalactic dust thermalized by the absorption of starlight was estimated, for example, by Imara & Loeb (2016b). They found that the predicted contamination is under the detection limit of the COBE/FIRAS experiments (Mather et al. 1994; Fixsen et al. 1996), but it should be recognized in observations of the Primordial Inflation Explorer (Kogut et al. 2014) and the Polarized Radiation Imaging and Spectroscopy Mission (André et al. 2014), which would exceed the spectral sensitivity limits of COBE/FIRAS by three to four orders of magnitude.
Another possible origin of distortion of the CMB related to galactic and intergalactic dust is absorption of the CMB by dust. Absorbing properties of dust grains have been discussed by Wright (1987), Wright (1991), Henning, Michel & Stognienko (1995), Stognienko, Henning & Ossenkopf (1995) and others, who pointed out that the long-wavelength absorption of needle-shaped conducting grains or complex fractal or fluffy dust aggregates might provide a sufficient opacity for the CMB. Hence, it is worthwhile to model the CMB attenuation by dust and to check whether it is detectable or not. In this paper, I study the spectral and total distortions of the CMB due to absorption by dust. I find that the imprint of cosmic dust in the CMB predicted by theory is not negligible; however, it is missing in observations even though it is above their current detection level.
Optical depth
Effective optical depth τ(z) for light emitted at redshift z is expressed as (Peebles 1993, his equation 13.42)

τ(z) = (c/H₀) ∫₀ᶻ n_D σ (1 + z')² / E(z') dz' ,    (1)

where n_D is the comoving dust number density, σ is the attenuation cross-section, and E(z) is the dimensionless Hubble parameter,

E(z) = [Ω_m (1 + z)³ + Ω_Λ]^(1/2) ;    (2)

c is the speed of light, H₀ is the Hubble constant, Ω_m is the total matter density and Ω_Λ is the dimensionless cosmological constant. Equation (1) can be rewritten using the galactic and intergalactic attenuation coefficients ε_G and ε_IG as

τ(z) = (c/H₀) ∫₀ᶻ (ε_G + ε_IG) (1 + z')² / E(z') dz' ,    (3)

with

ε_G = κ/γ ,    (4)

where κ is the mean galactic opacity and γ is the mean free path of a light ray between galaxies in the comoving space,

γ = 1/(π a² n) ,    (5)

where a is the mean galaxy radius and n is the galaxy number density in the comoving space. Equation (3) is valid for frequency-independent attenuation. Considering the 'λ^(−β) extinction law', where λ is the wavelength of light (Mathis 1990; Calzetti, Kinney & Storchi-Bergmann 1994; Charlot & Fall 2000; Draine 2003), we can express the galactic and intergalactic attenuations at frequency ν using the reference quantities related to the observed frequency ν₀,

ε_ν = ε_{ν₀} (ν/ν₀)^β .    (6)

Equation (3) is then modified to

τ_ν(z) = (c/H₀) ∫₀ᶻ (ε_G + ε_IG) (1 + z')^(2+β) / E(z') dz' ,    (7)

expressing the fact that light is more attenuated at high z because of its shift to high frequencies.
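A numerical sketch of the optical-depth evaluation may be useful. The quadrature below assumes a flat ΛCDM E(z) with Ω_m = 0.3 and Ω_Λ = 0.7, a value of H₀/c appropriate for h ≈ 0.68, and the CMB attenuation coefficients of Table 1; the (1 + z)^(2+β) factor implements the frequency shift discussed above:

```python
# Sketch of the optical-depth integral (7) by numerical quadrature;
# H0/c and the cosmological densities are assumed illustrative values.
import numpy as np
from scipy.integrate import quad

H0_over_c = 0.23                 # H0/c in Gpc^-1 for h ~ 0.68 (assumed)
Om, OL = 0.3, 0.7                # assumed flat LCDM densities
eps_cmb = 1.4e-8 + 9.2e-8        # galactic + intergalactic, Gpc^-1 (Table 1)
beta = 2.0                       # extinction-law slope (Table 1)

E = lambda z: np.sqrt(Om * (1.0 + z) ** 3 + OL)
integrand = lambda z: (1.0 + z) ** (2.0 + beta) / E(z)

integral, _ = quad(integrand, 0.0, 10.0)
tau = eps_cmb / H0_over_c * integral
print(tau)   # effective optical depth at the CMB wavelengths up to z = 10
```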
Extinction of the CMB
Assuming the CMB to be a perfect blackbody radiation, its spectral intensity (i.e. the energy flux received per unit area from a unit solid angle in the frequency interval ν to ν + dν, in W m⁻² Hz⁻¹ sr⁻¹) is described by Planck's law,

I_ν = (2hν³/c²) · 1/[exp(hν/(k_B T_CMB)) − 1] ,    (8)

where ν is the frequency, T_CMB is the CMB temperature, h is the Planck constant, c is the speed of light and k_B is the Boltzmann constant. Since the CMB is attenuated by the galactic and intergalactic opacities, we can evaluate the distortion of the spectral CMB intensity at frequency ν along a light ray coming from redshift z as

ΔI_ν(z) = I_ν [1 − exp(−τ_ν(z))] ,    (9)

where τ_ν and I_ν are defined in equations (7) and (8). Consequently, the reduction of the total CMB intensity (in W m⁻² sr⁻¹) is

ΔI(z) = ∫₀^∞ ΔI_ν(z) dν .    (10)

Evaluating equations (9) and (10) for different redshifts z, we can predict the distortion of the CMB intensity by the opacity of the Universe when going back in cosmic time up to redshift z. Such an approach is advantageous because it suppresses uncertainties in the observed parameters needed in the calculations. We start at the present time, when the galactic and intergalactic opacities are best constrained by observations, and gradually extrapolate the prediction to higher redshifts.
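The evaluation of equations (8) and (9) reduces to a few lines of code. In the sketch below, the optical depth is an illustrative stand-in value rather than a result of the model:

```python
# Sketch of equations (8)-(9): Planck spectral intensity of the CMB and
# its absorption distortion Delta I_nu = I_nu * (1 - exp(-tau_nu)).
import numpy as np

h, c, kB, T = 6.62607015e-34, 2.99792458e8, 1.380649e-23, 2.725

def planck(nu):
    """Spectral CMB intensity I_nu of equation (8), in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu ** 3 / c ** 2 / np.expm1(h * nu / (kB * T))

nu = 300e9      # 300 GHz, near the predicted maximum of the distortion
tau = 1e-3      # illustrative optical depth (assumed, not a model output)
print(planck(nu), planck(nu) * -np.expm1(-tau))   # I_nu and Delta I_nu of (9)
```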
OPACITY OBSERVATIONS
In order to evaluate the CMB distortion due to absorption by dust, we need estimates of the dust mass in the Universe and its history. The most straightforward way is to use observations of the galactic and intergalactic opacities at visual wavelengths mapping the distribution of dust in galaxies and intergalactic space and relate the visual and CMB opacities using the extinction law describing the dependence of attenuation of light on wavelength.
Galactic and intergalactic opacities
The opacity of galaxies basically depends on their type and age (for a review, see Calzetti 2001). The most transparent galaxies are ellipticals, with an effective extinction A_V of 0.04-0.08 mag. The light extinction by dust in spiral and irregular galaxies is higher (González et al. 1998; Holwerda et al. 2005a). Typical values for the inclination-averaged extinction are as follows: 0.5-0.75 mag for Sa-Sab galaxies, 0.65-0.95 mag for Sb-Scd galaxies and 0.3-0.4 mag for irregular galaxies at the B band (Calzetti 2001). Considering the relative frequency of galaxy types in the Universe, we can average the visual extinctions of the individual galaxy types and calculate the mean visual extinction and the mean visual galactic opacity. According to Vavryčuk (2017), the average value of the visual opacity κ_V is about 0.22 ± 0.08 at z = 0.
The intergalactic opacity is lower by several orders than the galactic opacity, being observed particularly in galaxy haloes and in cluster centres (Ménard et al. 2010a). The opacity in galaxy clusters has been measured by the reddening of background objects behind the clusters (Chelouche, Koester & Bowen 2007; Bovy, Hogg & Moustakas 2008; Muller et al. 2008). The intergalactic opacity can also be measured by correlations between the positions of low-redshift galaxies and high-redshift quasi-stellar objects. Ménard et al. (2010a) correlated the brightness of ∼85 000 quasars at z > 1 with the positions of 24 million galaxies at z ∼ 0.3 derived from the Sloan Digital Sky Survey (SDSS). The estimated value of A_V is about 0.03 mag at z = 0.5 and about 0.05-0.09 mag at z = 1. A consistent opacity is reported by Xie et al. (2015), who investigated the redshifts and luminosity of the quasar continuum of ∼90 000 objects. The authors estimated the visual opacity to be ∼0.02 h Gpc⁻¹ at z < 1.5. As mentioned by Ménard, Kilbinger & Scranton (2010b), such opacity is not negligible and can lead to bias in determining cosmological parameters if ignored.
Evolution of opacity with redshift
The galactic and intergalactic opacities depend on redshift. First, they increase with redshift due to the expansion of the Universe. This geometrical effect has already been taken into account in equation (1) by considering an increasing dust density with redshift because the Universe occupied a smaller volume in its early epoch. Second, a redshift-dependent formation and evolution of global dust mass in galaxies and in intergalactic space must be taken into account.
Observations indicate that interstellar dust mass M_d is strongly linked to the star formation rate (SFR) of galaxies. da Cunha et al. (2010) analysed 3258 low-redshift SDSS galaxies with z < 0.2 and reported the relation M_d ∼ SFR^1.1. Calura et al. (2017) extended the data set with high-redshift galaxies from Santini et al. (2010) and found a similar relation with a slightly lower slope of ∼0.9. The same slope is also reported by Hjorth, Gall & Michałowski (2014). Adopting the M_d-SFR relation, we deduce from the SFR history (see Fig. 1) that the global dust mass steeply increases for z < 2-2.5, culminates at z = 3-4 and then starts to decline (Madau et al. 1996; Hopkins & Beacom 2006; Madau & Dickinson 2014; Popping, Somerville & Galametz 2016). The decline is not, however, substantially steep, because dust is reported even in star-forming galaxies at redshifts of z > 5 (Casey, Narayanan & Cooray 2014). Based on observations of the Atacama Large Millimeter Array, Watson et al. (2015) investigated a galaxy at z > 7 that is highly evolved, with a large stellar mass, and heavily enriched in dust. Similarly, Laporte et al. (2017) analysed a galaxy at a photometric redshift of z ∼ 8 with a stellar mass of ∼2 × 10⁹ M_⊙, a SFR of ∼20 M_⊙ yr⁻¹ and a dust mass of ∼6 × 10⁶ M_⊙.
Extinction law
The light extinction due to absorption by dust is frequency dependent (see Fig. 2). In general, it decreases with increasing wavelength but displays irregularities. The extinction curve for dust in the Milky Way can be approximated at infrared wavelengths between ∼0.9 and ∼5 μm by a power law A_λ ∼ λ^(−β) with β ranging between 1.61 and 1.81 (Draine 2003, 2011). At wavelengths of 9.7 and 18 μm, the absorption displays two distinct maxima attributed to silicates (Mathis 1990; Li & Draine 2001; Draine 2003). At longer wavelengths, the extinction curve is smooth, obeying a power law with β = 2. This decay is also predicted by the Mie theory, modelling graphite or silicate dust grains as small spheres or spheroids with sizes up to 1 μm (Draine & Lee 1984). However, Wright (1982), Henning et al. (1995), Stognienko et al. (1995) and others point out that the long-wavelength absorption also depends on the shape of the dust grains and that needle-shaped conducting grains or complex fractal or fluffy dust aggregates can provide a higher long-wavelength opacity, with the power law described by 0.6 < β < 1.4 (Wright 1987).

Figure 2. Normalized frequency-dependent attenuation (Draine 2003, tables 4-6). The black and red dashed lines show the long-wavelength asymptotic behaviour predicted by the power law with β = 2 and β = 1.5.
PREDICTED CMB DISTORTION
I consider an intergalactic opacity at visual wavelengths of 0.01 mag h Gpc⁻¹, which is two times lower than that reported by Xie et al. (2015). The ratio of the CMB and visual attenuations ε_CMB/ε_V of 1 × 10⁻⁵ is taken from Mathis (1990) and Draine (2003). This ratio is very low, being obtained for a steep decrease of attenuation at long wavelengths (β = 2). Realistic values for dust particles with complex shapes might be higher by one order (Wright 1987, β = 1.5). I intentionally use the low value of ε_CMB in order to be sure that the predicted level of the CMB distortion is the lower threshold of the expected values.
The CMB distortion is calculated for two models. Model A is based on the assumption that the comoving dust density is independent of redshift. Model B adopts interstellar and intergalactic dust densities evolving with redshift in accordance with the SFR (see Fig. 1). The spectral and total CMB distortions are calculated using equations (9) and (10) with the parameters summarized in Table 1. In the calculations, either both the galactic and intergalactic opacities (G+IG) or the galactic opacity only (G) is considered. Fig. 3 shows the spectral CMB intensity and its corresponding distortion produced by dust in the epoch of 0 < z < z_max, with z_max of 6 and 10. As expected, the distortion increases with increasing z_max, but the effect of dust absorption is visible even for z_max of 6. The distortion is more pronounced for Model B than for Model A. This is caused by the abundance of dust at z ∼ 2-4 considered in Model B but neglected in Model A. The maximum distortion is observed at a frequency of 300 GHz and reaches a value of 5.1 × 10⁻²² Wm⁻² Hz⁻¹ sr⁻¹ for Model A and 51.0 × 10⁻²² Wm⁻² Hz⁻¹ sr⁻¹ for Model B. These values exceed the detection level of the COBE/FIRAS (absolute sensitivity of ∼1-2 × 10⁻²² Wm⁻² Hz⁻¹ sr⁻¹; Fixsen et al. 1996) or WMAP and Planck flux measurements (absolute sensitivity of ∼7 × 10⁻²³ Wm⁻² Hz⁻¹ sr⁻¹; Hinshaw et al. 2009; Planck Collaboration VIII 2014). The total CMB distortion is about 0.2 and 1.7 nWm⁻² sr⁻¹ for z_max = 6 for Models A and B, respectively (Fig. 4). Model B predicts a faster increase of the total CMB distortion with z_max than Model A. The maximum distortion increases up to z_max ∼ 7. At higher z, the CMB is not distorted because the model is effectively free of dust. Note that the reported values are the lower thresholds; the realistic distortions should be higher.

Table 1. Model parameters: a = 10 kpc, n = 0.02 Mpc⁻³, γ = 160 Gpc, κ_V = 0.22, β = 2.0, ε_G_V = 1.4 × 10⁻³ Gpc⁻¹, ε_IG_V = 9.2 × 10⁻³ Gpc⁻¹, ε_CMB/ε_V = 1.0 × 10⁻⁵, ε_G_CMB = 1.4 × 10⁻⁸ Gpc⁻¹, ε_IG_CMB = 9.2 × 10⁻⁸ Gpc⁻¹. Note. Quantity a is the mean effective radius of galaxies, n is the comoving number density of galaxies, γ is the mean free path between galaxies, κ_V is the mean visual opacity of galaxies, β is the slope in the extinction law, ε_G_V is the visual galactic attenuation coefficient defined in equation (4), ε_IG_V is the visual intergalactic attenuation coefficient, and ε_G_CMB and ε_IG_CMB are the galactic and intergalactic attenuation coefficients at the CMB wavelengths, respectively.

Fig. 3. The full black line shows the spectral CMB intensity. Full blue/red lines: z_max = 10; dashed blue/red lines: z_max = 6. Blue lines: distortions due to galactic and intergalactic dust (G+IG); red lines: distortions due to galactic dust (G). The grey area marks intensities that are under the sensitivity of the COBE/FIRAS measurements at 300 GHz (Fixsen et al. 1996).
DISCUSSION
It is commonly considered that the CMB is distorted by the foreground diffuse far-infrared and submillimetre emission of dust in the Milky Way, other galaxies and intergalactic space (Draine & Fraisse 2009; Imara & Loeb 2016b). However, the CMB can also be distorted due to absorption by dust, producing a decline of the CMB intensity at all frequencies. This distortion should be high enough to be observable in the CMB measurements. The maximum spectral distortion of the CMB light coming from z = 10 is predicted at 300 GHz, and it is at least 20 times higher than the detection level of the COBE/FIRAS measurements (Fixsen et al. 1996) and at least 35 times higher than the detection level of the WMAP or Planck measurements (Hinshaw et al. 2009; Planck Collaboration VIII 2014). The CMB should also be distorted in a perfectly transparent universe just due to absorption by dust in galaxies. This effect is about one order lower than that for the intergalactic opacity, but still above the detection level of the current CMB measurements.

Finally, let us briefly discuss why the imprint of dust is missing in the CMB. First, we can speculate that the parameters used in the modelling are seriously biased. However, this contradicts observations of the intergalactic opacity (Ménard et al. 2010a; Xie et al. 2015; Imara & Loeb 2016a), the opacity of galaxies (González et al. 1998; Calzetti 2001; Holwerda et al. 2005a) and the extinction law data in the Milky Way (Draine & Lee 1984; Mathis 1990; Li & Draine 2001; Draine 2003). Secondly, we can question the big bang as the origin of the CMB and revive the theory of the CMB as the thermal radiation of dust itself, produced at much later times than the big bang (Layzer & Hively 1973; Wright 1982, 1987, 1991; Aguirre 2000). In such a theory, the CMB should not be distorted, because it would concurrently be absorbed and reradiated by dust. In any case, it is clear that the missing dust imprint on the CMB is an intriguing puzzle that should be further studied and confronted with current measurements and models of the Universe.
"Physics"
] |
ON THE STABILITY OF THE SOLUTIONS FOR DELAY DIFFERENTIAL EQUATIONS WITH DISCONTINUITY
A numerical method of solving delay differential equations with fixed time delay and variable discontinuities of the solutions is considered. Runge-Kutta methods of higher order are used. The effectiveness of the method is shown by an example with proper initial data. The existence of a stable solution is discussed. AMS Subject Classification: 34A37, 34K45
Discontinuous systems have many applications, and we refer the reader to [17], where many practical examples are constructed.
Note that applications include mathematical models that take into account not only the given moments of time but also the prehistory. The general theory of FDEs, as well as many concrete examples, is comprehensively studied in [14]. Solving such problems numerically, it turns out that although the solutions exist, their derivatives are discontinuous at certain points, which requires a special form of numerical solution. Numerical solutions to these equations via Runge-Kutta (RK) methods are comprehensively studied in [6]. Numerical methods for ODEs, including RK, can be extended to DDEs, as discussed in [7].
Problems including nonlinearity and FDEs are more complicated if the solutions contain jumps, i.e. under impulsive discontinuity. Note that impulsive systems are applicable mainly in population dynamics, optimal control and economics. For applications of these equations we refer the reader to [17]. Note also that effective numerical solving of FDEs can be accomplished with the aid of software such as Maple, Mathematica, MatLab and so on, whereas systems of FDEs under impulsive effect cannot be solved directly by such software.
Further in this paper we study numerical approximations via Runge-Kutta (RK) methods of impulsive systems with fixed time delay. Numerical methods concerning impulsive systems with non-fixed jumps are studied mainly in [2,3,9,10]. We also refer the reader to [1], where the numerical treatment of nonsmooth systems is studied.
In Section 2, "Preliminaries", we consider some definitions and known facts that we use when studying FDEs.
In Section 3, entitled "Runge-Kutta approximation of the solution", we study a discrete approximation of a system with impulsive effect.
In Section 4, entitled "Numerical examples", we construct an example in order to demonstrate the existence of a stable solution of a DDE under a certain initial condition.
In Section 5 we give some concluding remarks.
Preliminaries
In this section we recall the main definitions and notations used in this paper. Assume that there exists a continuous extension of the solution, obtained with a precision achievable by RK methods. Note that in this approach we use Hermite polynomials.
(ii) It possesses points of discontinuity (jumps) at times t such that τ_i(x) = t, t ∈ I, with values satisfying the jump conditions.
Standing hypotheses (SH). Suppose that f(·,·) is sufficiently smooth w.r.t. its arguments (hence locally Lipschitz) with a growth condition; that is, there exists a continuous function v such that the maximal solution of ṙ = v(t, r) exists on I for any initial condition r(0) ≥ 0.
A1. τ i (•) and S i : R n → R n are Lipschitz with constants M and µ, respectively.
Denote by ∇τ_i(·) the gradient of τ_i(·) and by ∇_C(τ_j(x)) the Clarke subdifferential (see, e.g., [16]), having in ℝⁿ the form ∇_C(τ_j(x)) = co { lim_{k→∞} ∇τ_j(x_k) : x_k → x }, where co A is the closed convex hull of A.
We assume further that either A3 or A4 holds.
The multiple hitting of one switching surface is called the beating phenomenon. The following statements demonstrate that if the hypotheses (SH) are satisfied, then the beating phenomenon is impossible.
The proof can be seen in [3].
Theorem 2.2. Under (SH), the system (1a)-(1c) admits a unique solution defined on [0, T], and there exists a constant K such that the exact solution, along with the approximate solutions, is K-Lipschitzean on the intervals of continuity. Moreover, there exists a constant λ > 0 separating the successive jump times τ_{i+1}(x(t)). Existence and uniqueness for the solution of (1) is proved in [5], and the existence of λ is proved in [3].
Runge-Kutta Approximation of the Solution
In this section we study a discrete approximation of the discontinuous system (1) with the RK scheme stated below. We refer the reader to [8] for the general theory of ODEs, and to [6] for RK methods applied to the numerical solution of DDEs.
Let 0 = t₀ < t₁ < … < t_N < t_{N+1} = T be a subdivision of [0, T] for some natural number N. Note that an s-stage RK method computes iteratively the solution of the system (1) without jumps using the following relations:

k_ν = f( t_j + c_ν h, η_j + h Σ_{l=1}^{s} a_{ν,l} k_l, η_h(t_j + c_ν h − σ) ), ν = 1, …, s,    (2)

η_{j+1} = η_j + h Σ_{ν=1}^{s} b_ν k_ν .    (3)

The RK method is accurate up to order p if it provides the exact approximation of a polynomial solution x(·) up to degree p. It is known that the grid function η_h(·) of the RK approximation satisfies the estimate

max_{j=0,…,N} |η_h(t_j) − x(t_j)| = O(h^p)

under appropriate smoothness conditions on the right-hand side of the FDE with delay and with a suitable choice of the coefficients b_ν, c_ν, a_{ν,l} in (2), (3).
Here x(·) is the solution of (1) without jumps. Notice, however, that due to the delay terms there are discontinuities of the derivatives of the solution x(·) (see, e.g., [6]). The discontinuity points must be included in the set of grid points. We first include in the grid the points σ, 2σ, …, kσ, where either k = p or kσ > T. If we find a jump point τ, then we include in the grid τ, τ + σ, τ + 2σ, etc.
To get an approximation of the term x(t − σ), the continuous extension of the solution must be available. We must know the value of the solution not only at the grid points, but for every t. There are different methods to solve delay differential equations, and we refer the reader to [6] for the theory.
Next, we extend the solutions with their Hermite polynomials. In general, if p denotes the order of the RK method used here, then the interpolation order of the Hermite polynomials must be greater than or equal to p. If l_p denotes the number of support points for Hermite interpolation, then 2 l_p > p. Here, in particular, we use Hermite approximation polynomials of degree 3 when the RK method is of order 2, and of degree 5 when the RK method is of order 4. Further, we will not distinguish between the approximate solution and its Hermite polynomial extension [2], [15].
The third-order Hermite polynomial H_k is defined for every coordinate x_k of the approximate solution x on (t_j, t_{j+1}) w.r.t. the values H_k(t_j) = x_k(t_j), H_k(t_{j+1}) = x_k(t_{j+1}), Ḣ_k(t_j) = f_k(t_j, x(t_j), x(t_j − σ)) and Ḣ_k(t_{j+1}) = f_k(t_{j+1}, x(t_{j+1}), x(t_{j+1} − σ)).
Here x(t_j − σ) is the value of the Hermite extension of the approximate solution x(·).
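A minimal sketch of this cubic Hermite extension, with the standard Hermite basis written out explicitly, is:

```python
# Sketch of the cubic Hermite extension: on (t_j, t_{j+1}) interpolate
# each coordinate from its endpoint values and derivatives.
def hermite3(t, t0, t1, x0, x1, dx0, dx1):
    """Cubic Hermite value at t from data at t0 and t1."""
    h = t1 - t0
    s = (t - t0) / h                      # normalized coordinate in [0, 1]
    h00 = (1 + 2 * s) * (1 - s) ** 2      # standard cubic Hermite basis
    h10 = s * (1 - s) ** 2
    h01 = s ** 2 * (3 - 2 * s)
    h11 = s ** 2 * (s - 1)
    return h00 * x0 + h10 * h * dx0 + h01 * x1 + h11 * h * dx1

# e.g. recover x(t - sigma) between grid points, with the derivatives
# supplied by dx_i = f(t_i, x_i, x(t_i - sigma)) as above
print(hermite3(0.5, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0))  # reproduces x = t exactly
```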
We will now apply the RK method to discontinuous systems and set φ_{i,h}(t) := τ_i(η_h(t)) − t. For this purpose, we calculate approximations by the RK method to the differential system in (1) at the subsequent grid points t_j, j = 1, …, N. On each interval [t_j, t_{j+1}] we check whether one of the functions φ_{i,h}(·) changes its sign.
If it does (for some i), then the discrete trajectory η_h(·) needs to jump within the interval (t_j, t_{j+1}), close to the i-th jump of the exact solution x(·).
Afterwards we use some strategies to determine the jump points. Using the Hermite extension H(t) of η_h(·), we solve the equation τ_k(H(t)) − t = 0. The solution τ̃_k is then the first approximation of the jump point τ_k(x). We then find the RK solution at the point τ̃_i and calculate the value of φ_{i,h}(η_h(τ̃_i)) − τ̃_i. If this value is less than h^{p+1}, then we set t_i = τ̃_i. Otherwise we continue by the same method, i.e. we identify the subinterval where φ_i(·) changes its sign and use the Hermite interpolation of the solution on that subinterval. Again we solve τ_i(H(t)) − t = 0. The second approximation is usually sufficient. The approximate τ̃_i is then included among the grid points, and we continue using the standard delayed RK methods.
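The jump-location step can be sketched as follows. The switching surface below is an assumption, chosen so that, with the extension z(t) = 1.06 e^t − 1 used later in the examples, the located root reproduces the first reported jump time τ₁ ≈ 0.4308; SciPy's scalar root finder plays the role of the refinement step:

```python
# Sketch of jump detection: bracket the sign change of
# phi(t) = tau_1(H(t)) - t on a grid interval, then refine with brentq.
# tau_1 is an assumed illustrative surface, not taken from the text.
import numpy as np
from scipy.optimize import brentq

H = lambda t: 1.06 * np.exp(t) - 1.0      # continuous extension of z(t)
tau_1 = lambda z: z - 0.2                 # assumed switching surface
phi = lambda t: tau_1(H(t)) - t

t_grid = np.arange(0.0, 1.0001, 0.1)      # RK grid with step h = 0.1
for a, b in zip(t_grid[:-1], t_grid[1:]):
    if phi(a) * phi(b) < 0:               # sign change -> jump inside (a, b)
        print(brentq(phi, a, b))          # ~0.4308
        break
```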
Although the RK method provides the values of the approximate solution only at the grid points, we will consider the approximate solution as defined on the whole interval [0, T].
Here one may conclude that the precision of the method follows from Theorem 3.1 stated above. Let Q_T be a sphere, and let z(t) be a solution of problem (1a) with z(t) ∈ Q_T, t₀ ≤ t < ∞.
We recall that Lyapunov stability means that an arbitrarily narrow ε-neighborhood of the solution z(t) contains all those solutions of problem (1a) that are sufficiently close to z_{t₀} at the initial moment t₀. Here x_{t₀} and z_{t₀} are solutions sufficiently close at the initial moment t₀.
Numerical Examples
In this section we construct an example in order to demonstrate the existence of a stable solution of a DDE under a certain initial condition.
The accuracy of the method we use is shown in the next theorem. In the numerical calculations, the implicit RK method of order 4 with step 0.1 is used for the approximate solution (the calculations are accomplished in MatLab 7.14.0.739).
Example 1. Consider the system (5), where the equations of the switching surfaces and the impulsive effect are prescribed. To solve the system (5) we set z(t) = x(t) + y(t), and then get ż(t) = z(t) + 1 with initial condition z(0) = 0.10. The solution of the differential equation ż(t) = z(t) + 1 is z(t) = c e^t − 1, where c = 1.10, and hence z(t) = 1.10 e^t − 1. Here we use the implicit RK method of order 4 with step 0.1 for the approximate solution of the system (5) and a Hermite polynomial of degree 5. The unique solution of the first transcendental equation is the time of the first jump τ₁, and the unique solution of the transcendental equation 1.1719011834 e^{τ₂} − τ₂ − 2 = 0 is the time of the second jump τ₂; after the first jump, the problem is solved with a new initial condition. For the same system with different surfaces and impulsive effect, setting z(t) = x(t) + y(t) again gives ż(t) = z(t) + 1, now with initial condition z(0) = 0.06; its solution is z(t) = c e^t − 1 with c = 1.06, and thus z(t) = 1.06 e^t − 1.
We use the implicit Runge-Kutta method of order 4 with step 0.1 for the approximate solution of the system (5), together with a Hermite polynomial of degree 5.
The unique solution of the first transcendental equation is the time of the first jump τ₁, and the unique solution of the transcendental equation 1.1249985898 e^{τ₂} − τ₂ − 2 = 0 is the time of the second jump τ₂. The exact impulsive times are τ₁ = 0.4308046117 and τ₂ = 0.9711760782. The approximate jump points are τ_{1,ap} = 0.430373807 and τ_{2,ap} = 0.970204902. In the table we list the exact values of z(t) and the approximate values of x_ap(t) and y_ap(t). The results for the exact and approximate solutions are given in the table below. Γ1 is the graph of the solution of equation (5) with initial condition x(t) ≡ y(t) ≡ 0.03, corresponding to the results of Table 1.
Γ2 is the graph of the solution of equation (5) with initial condition x(t) ≡ y(t) ≡ 0.05, corresponding to the results of Table 2.
Here we discuss example (5) with two initial conditions, x(t) ≡ y(t) ≡ 0.03 and x(t) ≡ y(t) ≡ 0.05. To solve the example with initial condition x(t) ≡ y(t) ≡ 0.03, we first set z(t) = x(t) + y(t); then the system becomes ż(t) = z(t) + 1 with x(t) ≡ y(t) ≡ 0.03. We use the implicit RK method of order 4 with step 0.1 for the approximate solution. Denote the components of the approximate solution by x_ap and y_ap. The exact jump times are τ₁ = 0.4308046117 and τ₂ = 0.9711760782. After passing the first jump we solve the initial problem with a new initial condition, and likewise after the second jump.
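The reported second-jump times can be checked directly from the transcendental equations quoted above, both of the form c e^t − t − 2 = 0; a bracketing root finder recovers the tabulated values:

```python
# Sketch checking the reported second-jump times: solve
# c * exp(t) - t - 2 = 0 for the two coefficients quoted in the text.
from math import exp
from scipy.optimize import brentq

for c in (1.1249985898, 1.1719011834):
    root = brentq(lambda t, c=c: c * exp(t) - t - 2.0, 0.0, 2.0)
    print(c, root)
# -> ~0.9711760782 and ~0.9092773408, matching the reported values
```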
Conclusion
Note that by the same method one could show stability for impulsive FDEs; the method is also applicable to problems with inclusions [2,3] as well as to fuzzy FDEs [12]. Similar methods can be used for FDEs with maxima and delay in the cases considered in [4]. Other applications of the methods used in the present paper are the cases of evolutionary DEs (for instance, parabolic PDEs) with maxima and/or delay. The problem of stability and asymptotic stability can also be resolved with the aid of similar estimates. In the parabolic case one may reduce the problem to an FDE, and the above-stated estimates can be applied as well (see, e.g., [11]).
Theorem 4.1.
([13]) Under assumption SH, the measure of distance between the exact solution y(•) and the approximate solution η_h(•) is O(h^p) for sufficiently large N.
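A quick empirical check of this O(h^p) statement on the smooth piece of the flow (again assuming the two-stage Gauss scheme, p = 4; a sketch, not the authors' code):

```python
# A minimal sketch checking the O(h^p) estimate empirically on the smooth
# piece z' = z + 1, z(0) = 0.10: halving h should cut the end-point error
# by roughly 2^4 = 16 for an order-4 scheme.
import numpy as np

s3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - s3 / 6.0], [0.25 + s3 / 6.0, 0.25]])
b = np.array([0.5, 0.5])

def integrate(h, n, z=0.10):
    for _ in range(n):
        k = np.linalg.solve(np.eye(2) - h * A, (z + 1.0) * np.ones(2))
        z += h * b @ k
    return z

exact = 1.10 * np.exp(0.3) - 1.0
e1 = abs(integrate(0.10, 3) - exact)
e2 = abs(integrate(0.05, 6) - exact)
print(np.log2(e1 / e2))   # observed order, should be close to 4
```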
Table 1:
The unique solution of the transcendental equation 1.1719011834e^τ2 − τ2 − 2 = 0 is the time of the second jump τ2. Note that the impulsive times are τ1 = 0.3298774626 and τ2 = 0.9092773408. The approximate jump points are τ1ap = 0.329547585 and τ2ap = 0.908368063. In the table we give the exact values of z(t) and the approximate values x_ap(t) and y_ap(t). The error is r(t) = |x_ap(t) + y_ap(t) − z(t)|. The results for the exact and approximate solutions are given in the table below. Exact values and the RK4 values of Example 1. Example 2. Consider the same system (5):
Table 2:
Exact values and the RK4 values of Example 2
Table 3:
τ2 − τ2 − 2 = 0 is the time of the second jump τ2. The exact impulsive times are τ1 = 0.3482589793 and τ2 = 0.9117508014. The approximate jump points are τ1ap = 0.34791072 and τ2ap = 0.910839051. In the table we give the exact values of z(t) and the approximate values of x(•) and y(•). The error is r(t) = |x_ap(t) + y_ap(t) − z(t)|. The results for the exact and approximate solutions are given in the table below. Example 4. Consider the impulsive differential equation, constructed in such a way that at the moment t the solution decays. Here we shall use the RK methods. The example is constructed so that the solution x(t) "decays", i.e., we have the beating phenomenon of the solution.
Exact values and the RK4 values of Example 3.
The first jump is t1 = 1/5, the second is t2 = 1/5 + 1/5^2, . . ., and the n-th jump is t_n = 1/5 + 1/5^2 + · · · + 1/5^n. So it is easy to see that t_n < t_(n+1). On the other hand, we have lim_(n→∞) t_n = 1/4, so the jump times accumulate at a finite moment.
"Mathematics"
] |
Genesis of Color Zonation and Chemical Composition of Penglai Sapphire in Hainan Province, China
: The Penglai sapphires are mainly hosted in alkaline basalts and derived in alluvial sediments. Previous studies have investigated the formation of the Penglai sapphires; however, the genesis of color zoning remains ambiguous. In this paper, we report spectral and chemical composition data of sapphires using ultraviolet–visible spectroscopy (UV–Vis), Fourier transform infrared spectroscopy (FTIR), and laser-ablation–inductively coupled plasma–mass spectrometry (LA–ICP–MS). The results show that the Penglai sapphire has a magmatic origin, mostly showing various shapes of incomplete girdles, barrels, and flakes. The content of Ti in rims is higher than in cores of color-banded sapphire, which results from ubiquitous Ti-bearing inclusions within growth bands. The main chromophore of the deep-blue core is Fe 2+ -Ti 4+ , which pairs with Fe 3+ -Fe 3+ , Cr 3+ , and V 3+ in the core, likely producing purple-hued blue in an oxidizing environment. The yellowish-brown rim is due to Fe 3+ and Cr 3+ in a reduced environment. Compared with the basaltic sapphires worldwide, the Fe content is moderately higher than those of most Asian sapphires but obviously lower than those of Changle sapphires in Shandong, China, and overlaps with those of African sapphires.
Introduction
Gem-grade corundum is also known as sapphire and ruby. As a variant of natural α-Al 2 O 3 , corundum with a perfect lattice is generally pure, colorless, and transparent. However, corundum always exhibits different colors due to the replacement of the Al element with Fe, Ti, Cr, V, and other transition metal elements [1,2]. The existence of trace elements in corundum is closely related to the geological background of their formation, so the study of their genesis provides important ways to interpret its formation history and identify specific origins [3,4]. Since metamorphic and basaltic sapphires have significant differences in trace elements, previous studies have proposed a variety of classification methods based on experimental data [5,6]. They found that trace elements such as Mg, Fe, Ti, Cr, Ga, and V and their ratios can well distinguish the origin and genesis of corundum. Ratios of Cr 2 O 3 /Ga 2 O 3 and Fe 2 O 3 /TiO 2 , as well as trace element contents of Zn, Sn, Ba, Ta, and Pb of 35 sapphires from different origins, were analyzed via LA-ICP-MS, and the resulting data provide useful criteria to interpret and identify the geographical origins of sapphires [7]. Moreover, the presence of Sn and Ta elements suggests that corundum probably formed close to syenites or granites. Having various concentrations of Zn, Nb, Sn, Ba, and Pb and no Ta indicates that sapphires likely occurred in nepheline-corundum-bearing syenites or syenitic gneisses [7]. Studies on the genesis of the rare elements above provide unique ways to determine the geographic origin of corundum [8,9].
The Penglai deposit in Hainan Province is one of the most important sapphire mines in China [10,11]. It lies within a basalt-hosted corundum belt along the western Pacific continental margin (Figure 1a). The homogenization temperature and REE distribution patterns of Hainan sapphire and its inclusions are significantly different from those of the host basalt. Assimilation and metasomatism between the sapphire and mafic magma roughly occurred during its formation. Corundum is generally produced via high-pressure metamorphism near the interface between the crust and the subcontinental lithospheric mantle and is delivered to the surface by alkaline basalt magma [12,13]. The zonation in Hainan sapphire was interpreted as a periodic change of Fe 3+ content, implying that a periodic change in redox conditions also occurred in its wall-rock magma [14]. The coloring mechanism of Hainan sapphire was investigated using photoluminescence spectroscopy, which showed that interactions of different ions occur in the 500-700 nm broad absorption band, where spectral peaks appear [15].
Previous studies have mainly focused on basalt-hosted sapphire deposits in Hainan and insufficiently on alluvial-type ones. The mechanism of color zonation and its compositional characteristics, in particular, remain ambiguous. Therefore, in this paper, we present the physical and chemical characteristics of Penglai sapphires and analyze the relationship between color mechanism and geological origin via microscopic observation, UV-Vis, FTIR, and LA-ICP-MS. These new data and our interpretation provide robust insights into the color genesis and compositional characteristics of the Penglai sapphire.
Geological Settings
The sapphire deposit hosted in basaltic rocks is located near Penglai town in the northern part of Hainan Island. Hainan Island is separated from mainland China by the Qiongzhou Strait. The formation of these sapphire deposits is associated with extensive basaltic magmatism in eastern China during the Cenozoic [16]. These volcanic rocks occur in northern Hainan Island and the adjacent Leizhou Peninsula (Figure 1b). The roughly west-east striking regional Wangwu-Wenjiao fault forms a boundary between Cenozoic volcanic rocks and pre-Cenozoic rocks [13] (Figure 1c). The Cenozoic rocks exposed in northern Hainan Island have been subdivided into five eruptive episodes, namely the Shimengou/Shimacun Formation (Pliocene-Miocene), Duowen Formation (middle Pleistocene), Dongying Formation (middle Pleistocene), Daotang Formation (late Pleistocene), and Shishan Formation (Holocene). The sapphire-bearing basalts in Penglai town are hosted in the Shimengou/Shimacun Formation, whose activity may have lasted from 3.0 to 6.0 Ma, with incipient volcanism at about 13 Ma [16].
The mining area is about 35 km 2 . The Penglai sapphires in Hainan Province mainly occur in mafic volcanic rocks, including limburgite, olivine basalt, and other alkaline volcanic rocks [13,17]. Corundum crystals ranging from several centimeters to millimeters in size are mainly distributed in the porphyritic olivine basalt [13,18]. The olivine basalt is gray-black, with porphyritic, vesicular, and dense massive structures. The rock phenocrysts are mainly composed of olivine, clinopyroxene, and plagioclase. Olivine basalt and limburgite are the two main rocks of the early eruptive episodes. The former has phenocrysts of olivine and a small amount of clinopyroxene, and the matrices have an intergranular texture. The latter exhibits a typically porphyritic texture, the phenocrysts are olivine, pyroxene, and plagioclase, and the matrix has a tholeiitic texture. Compared with the early erupted basalt, its vesicular structure is more developed. The phenocrysts are dominantly plagioclase rimmed by corrosion and reaction edges, and the brecciated structure is obvious. Corundum giant crystals are often associated with other minerals such as zircon, olivine, pyroxene, ilmenite, magnetite, niobium, feldspar, and quartz, as well as mantle peridotite and olivine pyroxenite xenoliths [13].
Materials
Six sapphire samples (Sap-1, Sap-2, Sap-3, Sap-4, Sap-5, and Sap-6) were investigated from the alluvial deposit in Hainan, China (Figure 2). Grains ranged mainly in size from 4 × 4 × 3 mm to 6 × 8 × 5 mm, and their colors vary in depth and hue (Figure 2). In order to attenuate the effects of rough and uneven surfaces, they were carefully cut and polished into slices with two large parallel faces. Sap-1 was polished on both sides perpendicular to the optical axis (⊥c) and parallel to the optical axis (∥c), whereas Sap-2 was double-polished on both planes over a visible ribbon, and the other samples were polished on both sides of their large facets. The alluvial sapphire deposits commonly form lenses and layers in the Quaternary loose sediments [17]. The samples in this paper were collected from the Penglai alluvial deposit.
Sap-1 shows a trapezoid-shaped dark-blue core rimmed by lightly yellowish-brown diffuse-zoned bands (Figure 3a), Sap-2 has distinct colored parallel bands of various widths (Figure 3b), Sap-3 has both yellow and blue colors (Figures 2c and 3c), Sap-4 has an evenly dark-blue patch in the core rimmed by light-blue edges (Figure 3d), Sap-5 has obvious streaks on the surface (Figure 2e), and Sap-6 has a hexagonal pyramid-shaped core (Figure 2f).
Methods
UV-Vis absorption spectra were obtained at the Gem Research Laboratory of China University of Geosciences (Beijing). The testing instrument was a UV-2000 UV-Vis spectrophotometer produced by Lab Tech in Beijing, China. The UV-Vis wavelength range was 300-800 nm. The record width was 2.0. The slit width was 2 nm. The room temperature was 24 °C. The measurement method was T%. In this study, UV was applied to only sample Sap-1.
The FTIR spectra were acquired using a LUMOS infrared spectrometer with transmission and reflection methods under 32 scans and a resolution of 4 cm −1 . The test wavelength range was 3000-3600 cm −1 . The test temperature was 14 °C, and the relative humidity was 50%. The beam was non-polarized. To further explore the variation of the FTIR spectra within individual sapphire crystals, sample Sap-1 was investigated linearly from core to rim at 6 points (Figure 4).
Spectroscopic measurements of trace chemical composition were performed in the same area using a Thermo Element XR mass spectrometer at the National Center for Geological Analysis and Research (CAGS) equipped with a New Wave UP193 model laser. The laser parameters were set as follows: laser wavelength 193 nm, beam spot size 35 µm, pulse frequency 10 Hz, output energy 100%, and pulse energy 6.0 mJ. Plasma mass spectrometry parameters were as follows: cooling gas flow (Ar) 16.21 L/min, auxiliary gas flow (Ar) 0.86 L/min, carrier gas flow (He) 0.743 L/min, sample gas flow (Ar) 0.910 L/min, and radio-frequency generator power 1300 W. Other parameters included the following: resolution mode, low resolution; scan mode, E-scan; rest time, 1 ms; sampling time, 3 ms; the number of points per peak, 100; detector dead time, 14 ns; background acquisition time, 20 s; ablation time, 40 s; signal intensity (Th), 160,000 cps; and oxide yield (ThO/Th), 0.24%. Data processing was normalized using Al as the internal standard, the standard samples NIST612/NIST610, and multiple external standard matrices. To ensure accuracy, as a quality control measure, after every 30 runs one of the standard samples was run again as an unknown. LA-ICP-MS was applied punctually to only one sample (Sap-1). In order to further explore the variation of trace elements within a single sapphire crystal, sample Sap-1 was sampled linearly, with points evenly distributed from A to B and from C to D (Figure 4).
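As a rough illustration of the Al internal-standard normalization mentioned above, here is a minimal sketch; all count rates below are hypothetical, and the sample Al content assumes stoichiometric corundum (Al2O3, ~52.9 wt% Al), which is an assumption rather than a measured value.

```python
# A minimal sketch of internal-standard quantification (Al as internal
# standard); count rates are hypothetical placeholders.
C_AL_CORUNDUM = 529_000.0  # ppm Al in pure Al2O3 (assumed matrix)

def conc_ppm(I_el, I_al, I_el_std, I_al_std, C_el_std, C_al_std,
             C_al_sample=C_AL_CORUNDUM):
    # Relative sensitivity of the element against Al, from the standard.
    rel_sens = (I_el_std / C_el_std) / (I_al_std / C_al_std)
    return (I_el / I_al) * C_al_sample / rel_sens

# Hypothetical signals; reference values of order Fe ~458 ppm and
# Al ~10,800 ppm in NIST610 are used for illustration only.
print(conc_ppm(I_el=9.0e4, I_al=2.0e7, I_el_std=5.0e4, I_al_std=1.2e6,
               C_el_std=458.0, C_al_std=10_800.0))
```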
Physical Characteristics of the Penglai Sapphires
The Penglai sapphires have hexagonal columnar, hexagonal bipyramid, rhombohedron, and other aggregated shapes, but single crystals with euhedral shapes are rare, mostly in the shape of incomplete waist drum, barrel, cone, and various irregular shapes.
The surface of most samples shows ablation rounding and is in a ground-glass (opaque) state. The overall color is dark, the color distribution is uneven, and color bands, color nuclei, a two-color phenomenon, and a strong glass luster can be observed. Ne = 1.761-1.762, No = 1.771-1.772, DR = 0.010, and the average specific gravity is 3.96. Many micro-internal flaws and inclusions occur in the typical characteristic growth bands of the Penglai sapphire, which appear milky white in reflected light and yellowish-brown in transmitted light (Figure 5).
UV-Vis Spectral Characteristics
The UV-Vis absorption spectra of the studied sample Sap-1 are shown in Figure 6. The main features in the deep-blue core of the blue sapphire were sharp narrow peaks at 380 nm and 453 nm and a broad absorption band at 576 nm; small peaks at 658 nm and 533 nm were also observed. The yellowish-brown rim had weak absorption peaks at 376 nm, 448 nm, 533 nm, and 658 nm (Figure 6).
FTIR Features
The FTIR features of Sap-1 were characterized by a strong absorption peak at 3309 cm −1 , and weak absorption peaks near 3232 cm −1 and 3271 cm −1 . The absorption intensity of the three peaks generally decreased from rim to core (Figure 7).
Figure 6. Ultraviolet and visible absorption spectra of the dark-blue core and yellowish-brown rim of Penglai sapphire (sample Sap-1).
Chemical Composition of the Penglai Sapphires
The trace elements of sample Sap-1 from two profiles analyzed from core to rim are shown in Table S1. The main contents of Fe, Ti, Cr, V, Ga, and Mg and their ratios are shown in Table 1 (Table S1). The contents of Fe, Ga, Mg, and V in the core were obviously higher than those in the rim (Figure 8a,b). We combined the two sets of data and found that Fe content in the core ranged from 5091.60 ppm to 8697.46 ppm, with an average of 7117.45 ppm (n = 12). In contrast, lower Fe content was observed in the rim, ranging from 2906.85 ppm to 5488.46 ppm, with an average of 4377.55 ppm (n = 23) (Figure 8b). The mean values of Ga, Mg, and V in the core were 374.45 ppm, 15.79 ppm, and 2.33 ppm, respectively, and the corresponding values in the rim were 255.54 ppm, 6.27 ppm, and 1.30 ppm, respectively. There was no significant difference in Cr content between the core and the rim. However, the content of Ti in the rim (mean 334.85 ppm) was obviously higher than that in the core (mean 127.53 ppm) (Figure 8a).
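The core/rim summary statistics quoted above boil down to per-zone means and ranges; a minimal sketch follows (the spot values here are placeholders, the real data are in Table S1).

```python
# A minimal sketch of the core/rim summary statistics; mock Fe values only.
import numpy as np

spots = {
    "core": np.array([5091.60, 6850.0, 7500.0, 8697.46]),   # Fe, ppm (mock)
    "rim":  np.array([2906.85, 4100.0, 4800.0, 5488.46]),
}
for zone, fe in spots.items():
    print(f"Fe {zone}: {fe.min():.2f}-{fe.max():.2f} ppm, "
          f"mean {fe.mean():.2f} ppm (n = {fe.size})")
```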
Color Genesis of Growth Bands
The UV-Vis absorption peaks near 377 and 450 nm are due to the replacement of the adjacent 2Al 3+ cations with Fe 3+ -Fe 3+ ion pair [1,20,21]. The absorption peaks of sapphire at or near 575-580 nm are related to Fe 2+ , Ti 4+ , V 3+ and Cr 3+ [1,22,23]. The absorption peak at 658 nm is the result of Cr 3+ [1], and it is worth noting that it has been observed that Co 2+ in sapphire can also produce absorption peaks around 450 nm and in the range of 600-700 nm [24,25].
The distribution patterns of trace elements from core to rim of the Penglai sapphire ( Figure 8) and the correlation of their trace elements (Figure 9) showed that (1) Fe and Ti contents relatively changed with respect to the crystallization position and crystal color. From deep-blue core to yellowish-brown rim, Fe content decreased, while Ti content increased (Figure 9a). (2) The variations in Fe and Ti concentrations were negatively correlated in the core but positively correlated in the rim (Figure 9b). (3) The contents of V and Cr had slight differences in the core and the rim, but there was almost no change relative to the position. (4) There was a positive correlation between Fe and Ga. The radius of Ga 3+ is similar to that of Al 3+ and Fe 3+ , with positive three valence electrons distributed on the outermost electron layer. Its chemical features reveal a strong oxophilic affinity. Gallium's geochemical properties under oxidizing conditions are similar to those of Fe and especially of Al. The content of Ga decreased from core to rim, indicating that the core sapphires crystalized in a much higher oxidation environment than those in the rim. The positive correlation between the content of Ga and Fe showed that the content of Fe 3+ also decreased from core to rim. These compositional variations well indicated that the deep-blue core was mainly caused by Fe 2+ -Ti 4+ and Fe 3+ -Fe 3+ , Cr 3+ , and V 3+ in an oxidizing environment, whereas the yellowish-brown rim dominantly resulted from Fe 3+ -Fe 3+ in a reduced environment.
The obvious peaks at 380 nm and 453 nm for the deep-blue core and two distinct peaks at 376 nm and 448 nm for the yellowish-brown rim in the UV-Vis absorption spectra of the studied sample were attributed to Fe 3+ -Fe 3+ ion pairs, as was the weak peak at 533 nm. The weak peak at 576 nm was caused by Fe 2+ -Ti 4+ ion pairs, which appeared at the core but not the rim (Figure 6). Generally, Fe 2+ -Ti 4+ charge transfer contributes to blue color when Ti content is high in sapphire crystals [1,26]. However, there was no characteristic absorption peak of Fe 2+ -Ti 4+ in the UV-Vis absorption spectra of the yellowish-brown rim, compared with the peak in the core, indicating that Ti in the rim was not involved in the formation of Fe 2+ -Ti 4+ even though the content of Ti in the rim was obviously higher than that in the core. The high concentration of Ti resulted from the great number of microscopic inclusions over growth bands that were clearly observed under the microscope (Figure 5). These inclusions are difficult to investigate using infrared and Raman spectroscopy [27]. Having different colors within core and rim but only slight differences in Cr 3+ and V 3+ contents (Figure 8) indicates that they were not the chromophores producing the deep-blue color in the core. The absorption peaks at 658 nm at the core and rim were mainly caused by Cr 3+ (Figure 6). The hole-Cr 3+ preferentially pairs with trapped holes, compared with Fe 3+ , and the hole-Cr 3+ pairing produces an orange color. High concentrations of Fe 3+ produce a yellow color. We propose that the mixing of Fe 3+ -Fe 3+ and hole-Cr 3+ pairs produces the marginal yellowish-brown color, and the Fe 2+ -Ti 4+ ion pair is the main factor producing the deep-blue color of the core of the Penglai sapphire. The impact of Co 2+ on the coloration of sapphire cannot be ruled out, and further research is needed. In addition, based upon the compilation of UV-Vis spectra of basaltic sapphires from different origins (Table 2), we suggest that the deep-blue coloration in the core is closely related to Fe 2+ -Ti 4+ , and it pairs with Fe 3+ -Fe 3+ and Cr 3+ in the core, likely producing purple-hued blue. The chromogenic factors of the yellowish-brown rim are Fe 3+ -Fe 3+ and Cr 3+ .
Phlayrahan et al. [31] argued that the peak at 3309 cm −1 in the FTIR diagram is caused by the stretching vibration of Ti-OH, which is stronger in the c-axis direction, and the peaks near 3232 and 3271 cm −1 are considered to be the absorption of the stretching vibration of Ti-OH in different crystallographic directions. According to the Beer-Lambert law A = εcd, the principle of quantitative analysis in infrared spectroscopy, the absorption peak near 3309 cm −1 corresponding to -OH in the sapphire ribbon can be analyzed by c = A/εd for quantitative comparison. The studied samples were double-cut and parallel-polished; therefore, one sample showed the same d value, so the magnitude of c was proportional to the A value (c is the concentration of -OH in the sapphire; A is the absorbance). According to Figure 7, the content of -OH increased sequentially from core to rim. The absorption peak near 3310 cm −1 is indicative of sapphire grown in a strongly reducing environment [32]. Phlayrahan et al. [33] proposed the heating-induced binding between Ti, Fe, and -OH in the blue sapphire structure, and that the intensity of the 3309 cm −1 peak series gradually decreases with increasing heating temperature in any given condition. However, the trend from core to rim in our sample was reversed (Figure 7), suggesting that the deep-blue core rimmed by yellowish-brown bands in the investigated sample was not caused by heating but by a different redox condition during the formation.
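The relative-concentration argument above reduces to ratios of absorbances once ε and d cancel; a minimal sketch (the peak heights below are hypothetical):

```python
# A minimal sketch of the Beer-Lambert comparison: with slice thickness d and
# absorptivity eps fixed, c = A/(eps*d) is proportional to the absorbance A.
A_3309 = {"core": 0.12, "mid": 0.21, "rim": 0.35}   # assumed absorbances
ref = A_3309["core"]
for pos, A in A_3309.items():
    print(f"{pos}: relative -OH ~ {A / ref:.2f}")
```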
Compositional Characteristics of the Penglai Sapphire
Trace elements of minerals are often considered favorable tools to explain minerals' origin and also help us to understand the geological processes related to the mineral formation [3,4,[34][35][36]. As for sapphire, previous studies mainly focused on the distribution of Fe, Ti, Mg, Ga, Cr, and V in sapphire with respect to identifying different types of deposits ( Figure 10). Generally, the content of Ga in basaltic corundum is higher than 100 ppm, while in metamorphic corundum, it is lower than 100 ppm [37]. The Ga/Mg ratio is an effective indicator for distinguishing metamorphic from magmatic sapphire.
The ratio of the magmatic sapphire is >10, while that of the metamorphic is <10 [38]. The Ga/Mg ratio greater than 6 would indicate a magmatic origin, while the Ga/Mg ratio less than 3 indicates a metamorphic one [39]. As shown in Table 1, the Ga content in all spots of sample Sap-1 was higher than 100 ppm, with Ga/Mg ratio > 10. All data showed Ga/Mg > 6. The Fe vs. Ga/Mg plot showed that all spots fell into the magmatic field (Figure 10b).
The Fe/Mg ratio of the magmatic sapphire is commonly greater than 100, while that of the metamorphic and metasomatic sapphire is less than 100. Moreover, the Cr/Ga ratio of the magmatic sapphire is less than 0.1, while the metamorphic is greater than 1 [40]. The Fe/Mg ratio of Penglai sapphire had a large range, from 221 to 6268, and the Cr/Ga ratio ranged from 0.02 to 0.09; all these values were similar to those of magmatic sapphires. In the Fe-Mg-Ti plot and the Cr-Fe-Ga ternary diagram, all data fell within the magmatic sapphire field (Figure 10a,c). The Cr/Ga and Fe/Ti diagram displayed that almost all plots fell within the magmatic sapphire area, and only several plots were scattered near the boundary line between the magmatic and metamorphic fields (Figure 10d). Therefore, all values of the studied sapphires indicated a magmatic origin. In addition, the presence of Sn and Ta (Table S1) suggested that the sapphires likely formed in relation to magmatic rocks, e.g., syenite or granite [41,42].
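The published thresholds cited above ([38-40]) can be encoded as a simple screening rule; the sketch below is an illustration only, not a substitute for the full discriminant plots.

```python
# A minimal sketch encoding the discriminant thresholds quoted in the text:
# Ga > 100 ppm, Ga/Mg > 10, Fe/Mg > 100, Cr/Ga < 0.1 all point to a
# magmatic origin.
def likely_origin(ga_ppm, mg_ppm, fe_ppm, cr_ppm):
    votes_magmatic = sum([
        ga_ppm > 100,
        ga_ppm / mg_ppm > 10,
        fe_ppm / mg_ppm > 100,
        cr_ppm / ga_ppm < 0.1,
    ])
    return "magmatic" if votes_magmatic >= 3 else "metamorphic/uncertain"

# Core means of Sap-1 from the text: Ga 374.45, Mg 15.79, Fe 7117.45, Cr ~2.33
print(likely_origin(ga_ppm=374.45, mg_ppm=15.79, fe_ppm=7117.45, cr_ppm=2.33))
```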
Trace element contents of the Penglai sapphire in Hainan, China, and other magmatic sapphires worldwide are compared in Table 3 and Figure 11. The Penglai sapphires are characterized by high Fe content and are as variable in Fe content as sapphires from Changle, Shandong [43], Chantaburi, Thailand, Houai Sai, Laos, and the Lake Turkana area, Kenya [38] (the variation is seen in the large σ value in the table). The Ti content also varies greatly, which is related to the ubiquitous titanium-containing inclusions in sapphire. Such inclusions exist in most basalt-hosted sapphires, including those from Pailin in Cambodia, Chantaburi in Thailand, Australia, and Ethiopia [44]. The data for the Shandong Changle sapphire samples show that the concentrations of Fe, Ti, Cr, and Mg are higher than those of other sapphires [43] and vary greatly, with the Fe content much larger than that of the Hainan sapphire.
Figure 11. Ga/Mg versus Fe diagram of the magmatic blue sapphires worldwide, cited from [38]: (a) main Asian field of magmatic blue sapphires hosted in alkaline basalt; (b) main African field of magmatic blue sapphires in alkaline basalt.
Figure 11 presents the Fe content versus Ga/Mg ratio of the studied sapphires in Hainan, China, and those of famous deposits in Asia and Africa. The basaltic sapphire deposits in Asia include Cambodia, Thailand, and Laos in Southeast Asia, and Shandong, Fujian, and Hainan in China. Compositions of the basaltic sapphires from these places are concentrated (Figure 11a). In the rectangular box [38] in the figure, the Ga/Mg ratio of sapphires in these mines is generally higher than that of metamorphic ones. Figure 11a shows that the Penglai sapphire in Hainan, China, falls within the magmatic sapphire field. Compared with sapphires from Cambodia, Bo Phlo in Thailand, and the Houai Sai mines in Laos, the Fe content of sapphires from the Penglai mine is relatively high. In Figure 11b, the Ga/Mg ratio and Fe content of the Penglai sapphire in China and the magmatic sapphires from African deposits partly overlap. Combined with Figure 11a, the data indicate that the magmatic sapphires in Africa are roughly enriched in Fe, compared with the Penglai sapphires.
Conclusions
There are abundant melting erosion and growth marks on the surface of rough sapphires from the Penglai alluvial deposits in Hainan, China. They exhibit a strong glass luster after polishing. Inclusion-bearing growth bands that are milky white in reflected light and yellowish-brown in transmitted light can be observed, which is significant for its origin identification.
The color distribution is uneven, and visible color nuclei, color bands, and multiple colors can be observed on one crystal. The chromatic factors of the deep-blue core are Fe 2+ -Ti 4+ in an oxidizing environment; it pairs with Fe 3+ -Fe 3+ , Cr 3+ , and V 3+ in the core, likely producing purple-hued blue. The Fe in the core mainly exists in the form of Fe 3+ , which decreased uniformly from core to rim. The chromogenic factors of the yellowish-brown rim are Fe 3+ -Fe 3+ and Cr 3+ in a reduced environment. Ti was not involved in forming Fe 2+ -Ti 4+ ion pairs during the formation of the sapphire rim.
The chemical composition showed that the Penglai sapphires have a magmatic origin. The oxidation state of the formation environment decreased sequentially from the deep-blue core to the yellowish-brown rim. Compared with basaltic sapphires worldwide, the Fe content was moderately higher than those of most Asian sapphires but obviously lower than those of Changle sapphires in Shandong, China, and overlapped with those of African sapphires. The content of Ti in the rim was higher than that in the core, which resulted from ubiquitous Ti-bearing inclusions in the sapphire.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/min12070832/s1; Table S1: The main trace element contents (ppm) of sample Sap-1 analyzed via LA-ICP-MS over two profiles (AB and CD) from core to rim.
"Geology",
"Environmental Science"
] |
Second Order Periodic Boundary Value Problems Involving the Distributional Henstock-Kurzweil Integral *
We apply the distributional derivative to study the existence of solutions of second order periodic boundary value problems involving the distributional Henstock-Kurzweil integral. The distributional Henstock-Kurzweil integral is a general integral, which contains the Lebesgue and Henstock-Kurzweil integrals, and the distributional derivative includes ordinary derivatives and approximate derivatives. By using the method of upper and lower solutions and a fixed point theorem, we obtain some results which generalize some previous results in the literature.
Introduction
This paper is devoted to the study of the existence of solutions of the second order periodic boundary value problem (PBVP for brevity) where and are the first and second order distributional derivatives of and f is a distribution (generalized function).
If the distributional derivative in the system (1.1) is replaced by the ordinary derivative, then (1.1) converts into (1.2); here g : [0, T] × … , and x′ and x″ denote the first and second ordinary derivatives of x. The existence of solutions of (1.2) has been extensively studied by many authors [1,2]. It is well known that the notion of a distributional derivative is a general concept, including ordinary derivatives and approximate derivatives. As far as we know, few papers have applied distributional derivatives to study the PBVP. In this paper, we put forward a new approach: instead of the ordinary derivative, we use the distributional derivative to study the PBVP and obtain some results on the existence of solutions.
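Since the displayed equations did not survive extraction, the following is a hedged reconstruction of the classical form that PBVP (1.2) usually takes in the upper-and-lower-solutions literature; it is an assumption, not a quotation from the source:

```latex
% Assumed classical form of PBVP (1.2); the exact right-hand side g in the
% source did not survive extraction.
\begin{equation*}
  \begin{cases}
    x''(t) = g\bigl(t, x(t), x'(t)\bigr), & t \in [0, T],\\[2pt]
    x(0) = x(T), \quad x'(0) = x'(T).
  \end{cases}
\end{equation*}
```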
This paper is organized as follows. In Section 2, we introduce fundamental concepts and basic results of the distributional Henstock-Kurzweil integral, or briefly the D_HK-integral. From the definition of the D_HK-integral, it includes the Riemann integral, Lebesgue integral, HK-integral and wide Denjoy integral (for details, see [3-5]). Furthermore, the space of D_HK-integrable distributions is a Banach space and has many good properties, see [6-8].
In Section 3, with the D_HK-integral and the distributional derivative, we generalize the PBVP (1.2) to (1.1). By using the method of upper and lower solutions and a fixed point theorem, we obtain some interesting results which generalize some corresponding results in the references.
The Distributional Henstock-Kurzweil Integral
In this section, we present the definition and some basic properties of the distributional Henstock-Kurzweil integral. Distributions are defined as continuous linear functionals on the space of test functions. The space of distributions is denoted by D′, the dual space of the space of test functions. That is, if f ∈ D′ then f is a continuous linear functional on the test functions, and we write ⟨f, φ⟩ for its value at a test function φ. For all f ∈ D′, we define the distributional derivative f′ of f by ⟨f′, φ⟩ = −⟨f, φ′⟩, where φ is a test function.
Let (a, b) be an open interval in ℝ; we define … and … are … and … respectively if … .
Note that … is a Banach space with the uniform norm. With the definition above, we know that the concept of the D_HK-integral leads to its good properties. We first mention the relation between … For f ∈ D_HK, we define the Alexiewicz norm by … The following result has been proved.
Lemma 2.2. ([3, Theorem 2]). With the Alexiewicz norm, D_HK is a Banach space.
We now impose a partial ordering on D_HK.
(see details in [9]). By this definition, if …, then … We also have other usual relations between the D_HK-integral and the ordering, for instance, the following result.
, and if … and … are D_HK-integrable, then g is also D_HK-integrable. We say a sequence … It is also shown that the following two convergence theorems hold.
Lemma 2.4. ([9, Corollary 4], Monotone convergence theorem for the D_HK-integral). Let … be a sequence in … We now give another result about the distributional derivative.
Lemma 2.6. Let f, g be the distributional derivatives of F, G, where … where the supremum is taken over every sequence … where … and … denote the first and second order distributional derivatives of … The distributional derivative subsumes the ordinary derivative, and if the first ordinary derivative of … exists, the first ordinary derivative and the first order distributional derivative coincide. Recall we say … if and only if … for all … We impose the following hypotheses on the functions … and … on [0, T]. We say that x is a solution of PBVP (1) if … and x satisfies (1). Before giving our main results in this paper, we first apply Lemma 2.1 to convert the PBVP (1) into an integral equation.
Lemma 3.1. Let f : [0, T] → … be a distribution and …
where … is relatively compact; then … has a maximal fixed point x* and a minimal fixed point x∗.
then the Equations (1)-(3) define a nondecreasing mapping … For any …, we have …
A distribution f is D_HK-integrable on [a, b] if there is a continuous function F on [a, b] whose distributional derivative equals f. Analogously for the HK-integral and the Lebesgue integral. The space of D_HK-integrable distributions is defined by … It follows from the definition of the distributional derivative and (3.1) that …
3. Periodic Boundary Value Problems
Consider the second order periodic boundary value problem (1.1).
(3.6) We obtain y in the same way. Thus x and … It follows from (3.1) and (3.3) that for each …
…the assertion. □ With the preparation above, we will prove our main result on the existence of the extremal solutions of the periodic boundary value problem (1.1).
Theorem 3.1. Assume that conditions (D0)-(D2) are satisfied. Then the PBVP (1.1) has solutions x∗ and x* in [v, u] such that x∗ ≤ x ≤ x* and Dx∗ ≤ Dx ≤ Dx* for each solution x of (1.1) in [v, u]. In view of Lemma 3.4, the Equations (3.1)-(3.3) define a nondecreasing mapping G on [v, u].
on [0, T] for all … of (1), it follows from Lemma 3.1 that … is a fixed point of G. It follows from the extremality of x∗ and x* that … and Dv ≤ Dx ≤ Du, i.e., … As a consequence of Theorem 3.1 we have … If …, then g is called a function of bounded variation. The set of functions of bounded variation is denoted BV. It is known that the dual space of D_HK is BV. Lemma 2.7. ([3, Definition 6], Integration by parts). …
"Mathematics"
] |
Impact Analysis of the Influence of Institutional Capacity and the Use of Performance-Based Budgeting on Original Regional Revenue Receipts in the Tangerang Regency Area
This study aims to analyze the impact of institutional capacity and the use of performance-based budgeting on local revenue (PAD) in the Tangerang Regency area. This research uses a quantitative descriptive analysis method using secondary data obtained from various trusted sources. This research identifies institutional capacity, which includes aspects of human resources, institutional systems, and managerial capacity, and analyzes the implementation of performance-based budgeting to increase PAD revenues. The results of the analysis show that strong institutional capacity significantly contributes to increasing PAD revenues. Apart from that, the use of performance-based budgeting has also been proven to have a positive impact on PAD revenues. The implication of this research is the importance of increasing institutional capacity and implementing performance-based budgeting effectively as a strategy to increase PAD revenues in Tangerang Regency. This study makes an important contribution in the context of regional financial development and public financial management, as well as providing a basis for more effective policymaking in increasing regional income.
INTRODUCTION
One of the requirements of government organization, which also largely determines how well regional governments fulfill their functions, is the need for regional funding (Nuradhawati, 2019). At a minimum, its implementation covers the functions of development, community empowerment, and public service carried out by the government concerned. The point of carrying out these three government functions is, of course, to realize welfare in the region (Yaqub & Suradinata, 2019).
The need for regional funding can be seen from the expenditure side, which is set out in the Regional Revenue and Expenditure Budget (APBD) (Wulansari, 2015). The regional funding component is obtained from central government fiscal transfers to the regions and from regional original revenue and other legitimate regional revenues. The amount of regional funding is largely determined by the level of regional financing needs. Thus, the more regional government performance improves, the more the need for regional funding will naturally increase. In terms of regional funding needs, the greater the region's capacity to meet its financing needs, the less dependent the region will be on central government fiscal policies (Khusaini, 2018).
Conversely, the smaller the regional capacity to meet its funding needs, the greater the regional dependence on central government fiscal policies (Komara, 2013). The success of implementing decentralization and regional autonomy policies, as regulated in Law Number 32 of 2004 concerning regional government, can be assessed in terms of the region's capacity to meet regional funding needs.
How to realize transparent, effective, efficient, and accountable regional financial and asset management is one of the real challenges for regional governments in implementing decentralization and regional autonomy policies (Mardiasmo, 2021).
The division of power (political sharing) and financial sharing between the central government and regional governments is the core of Law No. 32 of 2004 (Basri, 2013). In exercising this power, regional governments receive funds from the central government (specifically the General Allocation Fund (DAU) and the Special Allocation Fund (DAK)), as well as sources of Regional Original Income (PAD) and other legitimate regional income (Ardhani & Ardiyanto, 2011). The implication for regencies and cities is not only to concentrate on fiscal-balance funds but also to investigate and develop regional economic potential in order to optimize development funding derived from regional original income and make it a contributor to future regional development funds (Azzumar & Handayani, 2011).
To achieve the objectives of regional development, sufficient funding sources are required. To this end, the central government issued policies in the field of regional revenue, which were designed to increase the capacity of regions to finance their own affairs and focused on mobilizing funds from regional sources (Anggraini PK, 2011) (Nasir, 2019).
Regional Original Income (PAD) is a major source of revenue for routine operations and development in an autonomous region. The amount of revenue from the regional tax and regional levy components is strongly influenced by the number of types of regional taxes and regional levies that are determined and adjusted to the relevant regulations governing the receipt of these two components (Purwanto, n.d.). Regional governments must be able to work toward increasing revenue derived from the region itself, because PAD is expected to be the primary support for financing development activities in the region. This will expand the availability of regional finance, which can be used for various independent development activities. However, PAD still makes a small contribution to regional income and expenditure in several areas. Up to this point, the dominance of central government contributions to these regions is still large, so to reduce dependence on the central government, regional governments need to strive to increase PAD, one way being to explore regional potential (Christina, 2013).
Tangerang Regency, as part of Banten Province, is one of the regions that has produced an expansion area, also called a New Autonomous Region (DOB), and it continues to pursue regional development. As an autonomous region, Tangerang Regency requires a reliable source of revenue to carry out regional development. The development funds were managed entirely by the regional government and came from revenues of the Tangerang Regency regional government. The source of financing for government needs, Regional Original Income (PAD), comes from the processing of resources owned by the Tangerang Regency area, in addition to revenues from the Banten provincial government, the central government, and other regional revenues. In line with this authority, the Regional Government of Tangerang Regency is expected to be better able to explore financial resources, especially to meet government and development financing needs in the region through Regional Original Income. With the establishment of a new autonomous area in the Tangerang Regency area, like it or not, the income of Tangerang Regency that was previously generated in the South Tangerang City area had to be handed over to the new South Tangerang City government to be managed by the new government. This will certainly have an impact on reducing the income earned by Tangerang Regency. The income of a region, including Tangerang Regency, is summarized in its regional original income (PAD). One of the components of regional original income (PAD) is regional taxes.
This regional tax can be optimized by the regional government to increase its original regional income. The types of regency and city taxes according to Law Number 28 of 2009 concerning regional taxes and regional levies are: 1. hotel tax; 2. restaurant tax; 3. entertainment tax; 4. advertisement tax; 5. street lighting tax; 6. parking tax; 7. tax on non-metal minerals and rocks; 8. groundwater tax; 9. swallow's nest tax; 10. rural and urban land and building tax (PBB); 11. fees for the acquisition of land and building rights.
Government science is a scientific discipline that is closely connected with the administration of government systems. According to (Suradinata, 1998), etymologically, the meaning of government science can be separated into two parts, namely the meaning of science and the meaning of government. Science means knowledge that is obtained methodically and applies universally. Meanwhile, government is the course of government activities, the activities of public institutions in their function to achieve state objectives. (Ndraha, 2000) said that government science can be characterized as a science that studies how the government (public work unit) attempts to satisfy and protect the demands (expectations, needs) of the governed for public services and civil services in government relations. According to (Syafiie, 2011), government science is the science that studies how to properly carry out administration (executive), regulation (legislative), leadership, and coordination of government (both central and regional) relations with the people and their management in various governmental events and phenomena.
On decentralization, (Robi, 2023) says that the principle of decentralization is a principle that denotes the handover of various government affairs from the central government, or from higher-level regional governments, to lower-level regional governments so that they become internal affairs of that region. Referring to public service, (Setiawan, 2014) said, "The core public service tasks of collective memory, policy-focused advice, intellectual capital, and independence are fundamental for successful public administration." In this way, public services become a benchmark for the success of public administration. The public administration in question is government management carried out by the bureaucracy. In such a situation, a basic framework for implementing decentralization policies is required, and an analytical approach is needed to understand the essential structure of decentralization. In this regard, (Cohen, 2003) stated, "Six approaches to identifying types of decentralization can be distinguished in the literature."
RESEARCH METHODS
This research uses quantitative research methods (Sugiyono, 2013). According to (Sugiyono, 2016), the population is a generalized area consisting of objects or subjects with certain quantities and characteristics determined by researchers to be studied, from which conclusions are then drawn.
Because the research population is quite large and therefore difficult to reach as a whole, several respondents were selected from a portion of the population as the sample. (Umar, 2001) cites Gay's view that the minimum acceptable sample size depends on the research design used: for descriptive methods, at least 10% of the population, and for relatively small populations at least 20% of the population; for the correlational descriptive method, at least 30 subjects; for the ex post facto method, at least 15 subjects per group; and for the experimental method, at least 15 subjects per group.
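Gay's guideline above can be captured in a small lookup sketch (an illustration of the quoted rules, not part of the study's instruments):

```python
# A minimal sketch of Gay's minimum-sample guidelines as quoted above.
def min_sample(design, population=None, small_population=False):
    if design == "descriptive":
        frac = 0.20 if small_population else 0.10   # 10%, or 20% if small
        return int(round(frac * population))
    if design == "correlational":
        return 30                                   # at least 30 subjects
    if design in ("ex post facto", "experimental"):
        return 15                                   # per group
    raise ValueError("unknown design")

print(min_sample("descriptive", population=500))    # -> 50
```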
Regarding the census approach, Rosady (Rosady, 2008) said: the justification for conducting a census is that researchers should consider examining all elements of the population if the population elements are relatively few and the variability of each element is high (heterogeneous). A census is more feasible if the research is intended to explain the characteristics of every element of a population.
RESULTS AND DISCUSSION
Original Regional Revenue (PAD) is one of the sources of revenue for Tangerang Regency; it not only shows indicators of performance in managing sources of original regional revenue but also indicates the Tangerang Regency Government's ability to implement decentralization and regional autonomy policies. With these indicators, Tangerang Regency, which lies on the Sumatra-Java crossing and close to DKI Jakarta Province, certainly has the potential to receive original regional income such as regional tax revenue, regional levies, proceeds from regionally owned companies, management of separated regional assets, and other legitimate income.
The performance of managing the potential for regional original revenue certainly depends on the institutional capacity of the Regional Revenue Agency and the use of performance-based budgeting at that agency. The analysis shows that the influence of the use of performance-based budgeting on Tangerang Regency's PAD revenue is positive. This means that a causal relationship between the use of performance-based budgeting and Tangerang Regency's PAD revenue can be demonstrated. Testing the influence of these two variables shows a probability value of 0.000 (P < 0.05). The second hypothesis proposed is that the magnitude of the influence of the use of performance-based budgeting on Tangerang Regency's PAD revenue is determined by the efficiency and the effectiveness of public services in the field of managing PAD revenue sources.
Analysis of the Impact of Institutional Capacity on Tangerang Regency's Original Regional Income Revenue
Based on the results of measuring the influence of institutional capacity on Tangerang Regency's PAD revenue, it is known that the magnitude of this influence reaches 0.64. This effect is considered strong and significant because the t-statistic of 13.51 exceeds the critical value of 1.96. The magnitude of the influence of institutional capacity on Tangerang Regency PAD revenue is theoretically determined by (1) vision and mission, (2) leadership, (3) resources, (4) partnerships, and (5) services and products, and practically by 21 manifest variables of institutional capacity. The existence of this influence shows that a meaningful cause-and-effect relationship is formed between institutional capacity and Tangerang Regency's PAD revenues: if institutional capacity increases, this increase is followed by an increase in Tangerang Regency's PAD revenues. Therefore, in practice, Tangerang Regency's PAD revenue can be increased by improving the 21 manifest variables (indicators) of institutional capacity. This means that the problem of non-optimal PAD revenue in Tangerang Regency can be addressed by improving the 21 manifest variables of the institutional capacity of the Tangerang Regency Regional Revenue Agency.
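The decision rule used above can be stated in a few lines; this is a sketch of the generic t-test criterion with the reported values plugged in, not the study's actual estimation code.

```python
# A minimal sketch: a path coefficient is judged significant when its
# t-statistic exceeds the two-tailed 5% critical value (1.96, large sample).
T_CRITICAL = 1.96

def significant(t_stat, crit=T_CRITICAL):
    return abs(t_stat) > crit

print(significant(13.51))   # True: institutional capacity -> PAD (0.64)
```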
"Economics",
"Business"
] |
Time-resolved infrared spectroscopic techniques as applied to channelrhodopsin
Among optogenetic tools, channelrhodopsins, the light-gated ion channels of the plasma membrane from green algae, play the most important role. Properties like channel selectivity, timing parameters or color can be influenced by the exchange of selected amino acids. Although widely used, in the field of neuroscience for example, little is still known about their photocycles and the mechanisms of ion channel gating and conductance. One of the preferred methods for these studies is infrared spectroscopy, since it allows observation of proteins and their function at a molecular level and in a near-native environment. The absorption of a photon in channelrhodopsin leads to retinal isomerization within femtoseconds, the conductive states are reached on the microsecond time scale, and the return into the fully dark-adapted state may take more than minutes. To cover all these time regimes, a range of different spectroscopic approaches is necessary. This mini-review focuses on time-resolved applications of the infrared technique to study channelrhodopsins and other light-triggered proteins. We discuss the approaches with respect to their suitability for the investigation of channelrhodopsin and related proteins.
Introduction
Marked by the first description of channelrhodopsin as a light-gated ion channel in 2002 (Nagel et al., 2002), the new field of optogenetics emerged and has since gone through rapid development. It utilizes light-sensitive proteins like channelrhodopsins, bacteriorhodopsin, rhodopsin, blue-light receptors (BLUF) (Kennis and Mathes, 2013), phytochromes (Yang et al., 2013), or engineered proteins (Möglich and Moffat, 2010) as tools to control defined events in living cells by light (Zhang et al., 2006).
The most commonly used channelrhodopsin is composed of the 7-helical apoprotein opsin and a retinal chromophore, covalently attached via a protonated Schiff base. Light causes retinal isomerization, which in turn triggers conformational changes of opsin that then form the ion-conductive pore. First information on the channelrhodopsin photocycle came from electrical measurements and from time-resolved UV-visible spectroscopy. Further structural information was revealed by the X-ray structure (Kato et al., 2012).
However, little is known to date about its exact gating mechanisms and photocycles. Providing information at a molecular level, infrared (IR) spectroscopy has become an important tool for the investigation of structure/function relationships in proteins. An overview of its applications in biophysics is given in (Siebert and Hildebrandt, 2008). The most commonly used spectral region is between ∼800 and ∼2500 cm−1 (4-12.5 µm) (Barth, 2007), and a resolution better than 8 cm−1 is usually desired. One advantage over other commonly used methods (EPR, NMR, or X-ray crystallography, for instance) is that IR investigates systems in their native environment. However, a drawback of this technique is that the extinction coefficients of most functional groups are low (see Barth, 2007). To compare, in the UV-visible region the protonated Schiff base absorbs near 500 nm with an extinction coefficient of ∼40,000 M−1 cm−1 (Bridges, 1971), whereas in the IR-spectrum the Schiff base protonation can be indirectly assigned by the strong protonated carboxylate C=O stretching mode of the corresponding counter ions. For a glutamate or aspartate, the extinction coefficient (∼200-300 M−1 cm−1) is over 100 times lower. This mini-review focuses on current IR-spectroscopic techniques and their applications to the study of proteins like channelrhodopsin.
IR-spectroscopy of Channelrhodopsin
IR-spectroscopy was among the first techniques used to obtain structural information on the channelrhodopsin photocycle (Radu et al., 2009). Since 2008, several bands have been assigned by biophysical methods such as site-directed mutagenesis, H2O/2H2O exchange or isotopic labeling. Figure 1 (light gray) shows the IR-spectrum of Channelrhodopsin-2 with some important bands marked. For example, its overall helical structure is typically discerned from the amide I and II bands (∼1660 and ∼1550 cm−1) (Bandekar and Krimm, 1979; Byler and Susi, 1986; Goormaghtigh, 1990). However, when only the modes that undergo a change during conformational alterations of the protein are of interest, the difference spectrum, calculated by subtracting the spectrum of the resting (dark) state from the spectrum of the functional (illuminated) state, is used (Figure 1, black line). Hereby structural changes connected with the preformation, opening or closing of the pore become visible. The band at 1661 cm−1 indicates conformational changes of the protein, the band pattern between 1100 and 1300 cm−1 reflects the all-trans/13-cis chromophore isomerization (Bruun et al., 2011), whereas changes in hydrogen bonding and proton transfers of functional aspartates and glutamates are seen between 1700 and 1800 cm−1. The protonation states and hydrogen bonding of the Schiff base counter ions E123 and D253 (1760 cm−1) and the proton donor D156 (1737 and 1760 cm−1) can be directly observed, as well as the protonation state of E90, which, as a part of the central gate, plays a role in channel selectivity. For further band assignments see, for instance, (Kuhne et al., 2015; Lórenz-Fonfría et al., 2015) and citations therein.
FIGURE 1 | Infrared spectroscopy of Channelrhodopsin. The absorption spectrum (gray) of retinal proteins like Channelrhodopsin-2 reconstituted in lipid vesicles shows bands associated with the lipid environment and protonated carboxyl groups (∼1700-1800 cm−1), water (1644 cm−1) and the overall helical structure of the protein (amide I ∼1650 cm−1; amide II ∼1550 cm−1). Note that the lipid vesicles allow a very dense packing of the protein in the cuvette, thus reducing the water content. Light-induced alterations are represented by the difference spectrum (black), where negative bands (blue) occur due to the dark state while positive bands (red) are due to the illuminated state, achieved by illumination with blue (480 nm) light. The spectrum was recorded at cryogenic conditions, where a mixture of species, including the Schiff base deprotonated state and the conducting state, is observed. Note that, while total absorbance is on the order of 0.9 OD (left scale, gray), the largest changes in the difference spectrum are within 0.004 OD (right scale, black). In the picture, some bands assigned so far to their structural counterparts are marked. For details of the band assignments, see (Eisenhauer et al., 2012; Lórenz-Fonfría et al., 2013; Kuhne et al., 2015).

The conductive state of channelrhodopsin arises within ∼200 µs and decays within ∼20 ms (Ernst et al., 2008). In contrast, the retinal isomerization occurs within femtoseconds (Neumann-Verhoefen et al., 2013), de- and re-protonation of the Schiff base is faster than 1 ms (Ernst et al., 2008), and the recovery of the fully dark-adapted state, thereby closing the photocycle, is accomplished within minutes. In addition, multiple photocycles with different reaction kinetics exist in parallel (Hegemann et al., 2005), and, depending on the illumination conditions, additional side pathways can be populated (Ritter et al., 2013). Therefore, time-resolved methods covering time regimes from femtoseconds to minutes are necessary to understand the structure-function relationships. In the following chapters, we review IR-spectroscopic methods with a focus on temporal resolution and on sample and technical requirements, as applied to the study of proteins like channelrhodopsin.
Rapid-scan Spectroscopy
In Fourier-transform infrared (FTIR) spectrometers, the light from a broadband IR source passes through an interferometer, where the incident beam is split by a beam splitter. The partial beams are back-reflected to the beam splitter by two mirrors, one of which is a sliding mirror introducing a position-dependent phase shift. The beam splitter allows the partial transmission of the reflected beams to the detector, where an interference signal is recorded as a function of the optical path difference (Griffiths and De Haseth, 2007) (Figure 2A). This so-called interferogram is converted into a spectrum by a Fourier transformation (Herres and Gronholz, 1984). FTIR spectrometers benefit from the high-throughput (Jacquinot), multiplex (Fellgett), and high registration precision (Connes) advantages (Perkins, 1987).

FIGURE 2 | (B) Dispersive set-up (Schade et al., 2014) with synchrotron light source, dispersive prism and focal-plane array detector. (C) Laser-based pump-probe setup. A first pulse from the pump laser starts the photoreaction. A subsequent short pulse from the probe laser probes the system. The probe pulse can be dispersed to obtain spectra; however, spectral bandwidth is determined by the duration of the probe pulse.

The temporal resolution is limited only by the speed, sliding path length (corresponding to the resolution of the spectrum) and reversal time of the movable mirror. For a spectrum of 4 cm−1 resolution, 40 ms time-resolution can be achieved (Smith and Palmer, 2002). Due to the symmetry of the interferogram around the position of equal optical path length (Δs = 0), one movement of the mirror yields two spectra by splitting the interferogram at Δs = 0. Utilizing both the forward and backward movement of the mirror for data acquisition, a time-resolution of 10 ms is achieved. Further improvement to 5 ms (8 cm−1 resolution) was reported with the rapid-sweep method (Braiman et al., 1987). However, using sliding mechanisms means that after data acquisition the mirror has to be stopped and its direction reversed. This time-consuming process becomes significant when fast processes are investigated and the mirror is moved at high speed over a short distance. To avoid this, different types of interferometers have been utilized. For instance, a continuous rotary motion of a tilted mirror was used to measure an interferogram in less than 1 ms (4 cm−1 resolution) (Griffiths et al., 1999). However, difficulties in maintaining the alignment made an optical tilt-compensation necessary (Manning, 2002). Due to the limited time-resolution, rapid-scan FTIR is suited only to investigate the late stages of the channelrhodopsin photocycle. The conducting state can be observed by this technique only in exceptional cases, for example by cryotrapping or when slow-cycling mutants (i.e., ChR2-C128T, Berndt et al., 2009) are used.
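To make the interferogram-to-spectrum step concrete, here is a minimal numerical sketch in Python/NumPy. It is illustrative only: the two band positions and all sampling parameters are invented for the example, and real instruments additionally apply apodization and phase correction, which are omitted here.

    import numpy as np

    # Toy interferogram for two absorption bands (positions chosen to fall
    # exactly on FFT bins; all numbers are invented for illustration)
    n_points = 4000
    dx = 1.0e-4                          # path-difference step in cm
    delta = np.arange(n_points) * dx     # optical path difference axis
    bands = {1550.0: 0.6, 1660.0: 1.0}   # wavenumber (cm-1): amplitude

    interferogram = sum(a * np.cos(2 * np.pi * k * delta) for k, a in bands.items())

    # The Fourier transformation recovers the spectrum; rfftfreq with d=dx
    # gives the frequency axis directly in cm-1
    spectrum = np.abs(np.fft.rfft(interferogram))
    wavenumber = np.fft.rfftfreq(n_points, d=dx)

    top2 = np.sort(wavenumber[np.argsort(spectrum)[-2:]])
    print(top2)                          # [1550. 1660.]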
Step-scan Spectroscopy
Here, time-courses at the particular interferogram data points corresponding to distinct mirror positions are recorded separately (Murphy et al., 1975). This is achieved by stopping the movable mirror, initiating the reaction to be followed and recording the time-trace while the mirror is at rest. The mirror is then moved to the next position ("step"). This process is repeated for each sampling point of the interferogram. Finally, the interferograms corresponding to given times after the light flash are reconstructed using the intensities from the time-traces. This means that the experiment has to be repeated at least as often as the number of digitized points of the interferogram, which usually means more than 1000 repetitions. The time-resolution is then limited only by the detector and the analog-digital converter of the acquisition system. Additional noise sources that potentially influence the experiments, for example instrument vibrations or slow source drifts, are described by Andrews and Boxer (2001). To ensure sharp triggering (required for high time-resolution) and to minimize multi-photon processes, the reaction is triggered by a laser flash usually shorter than the desired time-resolution. Several techniques avoid the complicated process of stopping the mirror by utilizing the time delay between the subsequent digitized interferogram points. In these synchronized continuous-scan measurements, the experiment also has to be triggered for each interferogram data point (see Fleischmann et al., 2003 and citations therein).
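A schematic sketch of this reconstruction step is shown below, assuming the per-position time-traces have already been recorded. The function and array names are ours, and real processing additionally involves phase correction and the calculation of difference spectra.

    import numpy as np

    def stepscan_spectra(time_traces, dx):
        # time_traces: shape (n_mirror_positions, n_times); each row is the
        # detector time-trace recorded while the mirror rested at one
        # interferogram sampling position, triggered by the laser flash.
        # dx: optical path difference step between mirror positions (cm).
        n_pos, n_times = time_traces.shape
        # Column t across all rows is the interferogram at delay t after the flash
        interferograms = time_traces.T                 # shape (n_times, n_pos)
        spectra = np.abs(np.fft.rfft(interferograms, axis=1))
        wavenumbers = np.fft.rfftfreq(n_pos, d=dx)     # axis in cm-1
        return wavenumbers, spectra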
Siebert and coworkers described the set-up of a step-scan device based on a commercial interferometer designed to study the photoreaction of bacteriorhodopsin with µs time-resolution (Uhmann et al., 1991). With current set-ups, fast detectors and electronics, time-resolutions down to nanoseconds have been achieved (Garczarek and Gerwert, 2006). The step-scan technique is ideal for the investigation of fast-cycling, non-degrading systems like bacteriorhodopsin; however, its application to many other light-sensitive proteins can be difficult. For instance, the long recovery kinetics of most channelrhodopsins requires a prolonged delay between two subsequent light flashes. The recording time of a spectral data set with a resolution of 4-8 cm−1, a spectral width of ∼1000 cm−1 and an appropriate signal-to-noise ratio (∼1000 experiment repetitions) can be on the order of days. For example, first results on channelrhodopsin activation with 6 µs time-resolution took 5 days of accumulation time (Lórenz-Fonfría et al., 2013). Later the time-resolution was improved to the nanosecond range (Kuhne et al., 2015; Lórenz-Fonfría et al., 2015); however, long measuring times are still an issue.
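The order of magnitude of such accumulation times can be sanity-checked with a back-of-envelope calculation. The numbers below are our own assumptions, not parameters taken from the cited study:

    points_per_interferogram = 1000   # digitized mirror positions (assumed)
    averages_per_point = 10           # repetitions per position for S/N (assumed)
    recovery_delay_s = 40             # wait per flash for dark-state recovery (assumed)

    total_days = points_per_interferogram * averages_per_point * recovery_delay_s / 86400
    print(round(total_days, 1))       # ~4.6 days, the order of magnitude reported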
Non-cyclic systems can only be investigated using this technique when each point of the interferogram is recorded from a fresh sample. For liquid samples a flow-through cell is advantageous (Kaun et al., 2006); however, for non-liquid ones, the sample has to be replaced once the time-course of a single data point of the interferogram has been measured. Set-ups utilizing rotating discs (Rödig and Siebert, 1999) or translational stages (Rammelsberg et al., 1999) have been developed for such cases. However, homogeneity of the samples is important here. For a more detailed review of the step-scan and other FTIR techniques, see (Kötting and Gerwert, 2005; Radu et al., 2011).
Synchrotron Based Dispersive Techniques
Dispersive spectrometer approaches have long been considered outdated since they typically suffer from low light intensity due to losses at the entrance slit and the dispersive grating and also from low data acquisition speed limited by the grating movement. Modern focal-plane-array (FPA) detectors allow simultaneous measurements of all data points. The light from the entrance slit, after passing through the grating, is imaged to the FPA where each detector element is used to record its own spectral interval.
To achieve sufficient spectral resolution, echelle gratings with higher diffraction orders are commonly used, particularly in the astronomical sciences (Lacy et al., 1989). The low light intensity, and consequently the low signal-to-noise ratio, makes them rather unsuitable for time-resolved IR-studies of proteins. Another drawback of gratings in combination with planar arrays is the significant curvature of the recorded spectral image (Pelletier et al., 2005), a problem which has to be addressed to avoid artifacts. Furthermore, array detectors require precise imaging of the entrance aperture onto the detector elements, and thus a highly brilliant light source, such as that provided by synchrotron radiation, is particularly attractive. A conceptual design of a combined dispersive IR/X-ray spectroscopy set-up for simultaneous time-resolved measurements using synchrotron light was proposed by Marcelli et al. (2010). The high brilliance of the synchrotron IR-light allows optimal utilization of the spectrometer entrance aperture. Marcelli et al. calculated a signal-to-noise ratio of >1000 for integration times >0.3 µs using a time-resolved grating spectrometer in combination with a focal-plane array and cooling all optical elements to 77 K.
A prism-based infrared spectrometer with synchrotron source, designed for single-shot measurements of photosensitive proteins like channelrhodopsin and enzyme rhodopsins, is currently being developed (Schade et al., 2014). Design goals are microsecond time-resolution and a spectral resolution of 4-8 cm−1 in the 2000-950 cm−1 range while maintaining a signal-to-noise ratio of 1000 in single-shot mode. The concept is based on a Féry spectrograph (Féry, 1911), where a prism consisting of two spherical surfaces is used. A spherical mirror behind the prism facilitates a second pass of the light (Figure 2B), and all spherical surfaces follow aplanatic conditions (Warren, 1997). This arrangement guarantees a coma- and aberration-free, non-tilted flat image of the entrance aperture in the image plane, and a high spectral resolution (Wilson, 1969). The use of a prism rather than a grating has the advantage of higher optical transmission and the absence of interferences caused by order effects or stray light. The ray aberrations of this set-up were calculated to be less than 15 µm and therefore much smaller than the corresponding Airy disk, demonstrating diffraction-limited operation over the whole spectral range. The expected signal-to-noise ratio calculation was based on parameters suitable for the IRIS Beamline at BESSY II (Peatman and Schade, 2001). For 1 µs accumulation time, a signal-to-noise ratio of ∼600 was calculated for an operation temperature of 300 K, which improves to ∼1000 when a cold-stop (77 K, f/1.5) in front of the detector array is introduced. This, however, requires a re-imaging system to map the image to the linear FPA through the cold-shield of the detector housing.
A direct comparison of the signal-to-noise ratio to other time-resolved methods like FTIR is rather complicated, since either the time-resolution is not achieved (rapid-scan methods are only applicable down to milliseconds), or the method is conceptually based on thousands of repetitions of the same experiment (step-scan). Using the data of Schade et al. (2014) and neglecting other sources of noise in the setup, a signal-to-noise ratio of 10,000 is theoretically achievable by accumulating 100 measurements, corresponding to 100 µs accumulation time. This is comparable to the signal-to-noise ratio of rapid-scan FTIR experiments in the millisecond time range (for example, spectra of single-shot experiments; see Elgeti et al., 2008). The combination of synchrotron light with FPA detectors largely compensates for the loss of the FTIR advantages. This setup will allow for the direct observation of the formation and decay of the channelrhodopsin conductive state as well as crucial proton transfer reactions.
For example, de- and re-protonation of the Schiff base under native environmental conditions can be observed in single-shot mode, thus avoiding possible sample degradation due to the long recovery period necessary for repetition-based methods like step-scan FTIR.
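The square-root scaling behind the accumulation estimate above can be made explicit; a minimal sketch, assuming uncorrelated (white) noise, with the single-shot value of ∼1000 taken from the text:

    import math

    def snr_after_averaging(snr_single_shot, n_averages):
        # White-noise assumption: SNR grows with the square root of the
        # number of averaged single-shot measurements
        return snr_single_shot * math.sqrt(n_averages)

    print(snr_after_averaging(1000, 100))   # 10000.0, as estimated in the text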
Spectroscopy with Lasers
Time-resolved infrared spectroscopy takes advantage of laser light sources. For example, a PbS diode laser has been used to record conformational changes of the Ras protein in the nanosecond time regime with a flash-photolysis set-up (Lin et al., 2014). The intensity of the laser beam was measured, after passing through the sample, by an infrared detector. In this case, the photoreaction was initiated by photolysis of caged compounds through a UV laser flash. Such setups, however, only allow the acquisition of signals at a fixed wavelength. Quantum cascade lasers (QCLs) emitting in the mid- and far-infrared range are currently under intensive development. Their tunability and high output intensity, while maintaining a narrow bandwidth, make them ideal light sources for infrared spectroscopy. Intrinsic temperature fluctuations, however, introduce noise that has to be considered (Borri et al., 2011; Liu and Wang, 2011). Current developments in laser absorption spectroscopy based on QCLs are reviewed elsewhere (Zhang et al., 2014). They are becoming more frequently used in spectrochemical imaging (Clemens et al., 2014) and nanospectroscopy (Amenabar et al., 2013). A QCL-based spectrometer has been applied to study the first steps of the channelrhodopsin activation process (Lórenz-Fonfría et al., 2015). The authors used a tunable QCL in a flash-photolysis setup, where the laser is tuned to the desired wavelength, the photoreaction is then initiated by a VIS flash, and the time-dependent signal change is recorded by an infrared detector. This procedure has to be repeated for each desired wavelength. A time-dependent dataset of channelrhodopsin in the ranges 1610-1680 and 1700-1780 cm−1, at resolutions of 1 and 0.5 cm−1 respectively, could thus be acquired at a repetition rate of 0.33 Hz by using a fast-cycling channelrhodopsin mutant (ChR2-E123T, Gunaydin et al., 2010).
For time-resolutions of nanoseconds or better, pump-probe technologies can be used. The photoreaction of a protein is started by a first laser pulse, usually in the fs time regime. A second pulse with a certain time delay probes the protein's response. For each pump-probe cycle, a difference spectrum can be obtained when the probe pulse, after passing the sample, is fed through a dispersive element and measured at an infrared detector array (Hamm and Zinth, 1995) (Figure 2C). An overview of how this is applied to the dynamics of light-triggered proteins is given in Groot et al. (2007). This technique has been used to investigate the ultrafast dynamics of bacteriorhodopsin, photoactive yellow protein (see for example Van Wilderen et al., 2006), and LOV domains (Alexandre et al., 2009). Channelrhodopsin-2 was also studied by Vis-pump/IR-probe spectroscopy (Neumann et al., 2008) on the fs timescale. The experiments showed amide-I vibrational modes occurring within ∼500 fs, thus demonstrating a very strong protein-chromophore coupling (Neumann-Verhoefen et al., 2013).
An alternative method to measure mid-infrared pulses is to optically convert them into the UV-visible range, where a broad variety of array detectors is available. Zhu et al. (2012) used chirped-pulse upconversion facilitated by a non-linear optical crystal. The authors investigated the photoreaction of BLUF photoreceptors on a picosecond time scale and demonstrated that the method is suited for the investigation of signal changes down to the mOD range.
Summary/Outlook
While the time regime of milliseconds and slower can be accessed by the rapid-scan FTIR technique for most biological samples, faster systems require special considerations. Ultrafast alterations can be observed by pump-probe spectroscopy.
Step-scan FTIR provides a good signal-to-noise ratio and a time-resolution down to nanoseconds but requires perfectly cyclic systems under investigation. For non-cyclic or slow-cycling systems, fast time-resolved investigations are challenging. However, developments addressing this problem with QCL-based setups or dispersive spectroscopy in combination with highly brilliant light sources are in progress.
"Biology",
"Physics"
] |
Data integration uncovers the metabolic bases of phenotypic variation in yeast
The relationship between different levels of integration is a key feature for understanding the genotype-phenotype map. Here, we describe a novel method of integrated data analysis that incorporates protein abundance data into constraint-based modeling to elucidate the biological mechanisms underlying phenotypic variation. Specifically, we studied yeast genetic diversity at three levels of phenotypic complexity in a population of yeast obtained by pairwise crosses of eleven strains belonging to two species, Saccharomyces cerevisiae and S. uvarum. The data included protein abundances, integrated traits (life-history/fermentation) and computational estimates of metabolic fluxes. Results highlighted that the negative correlation between production traits such as population carrying capacity (K) and traits associated with growth and fermentation rates (Jmax) is explained by a differential usage of energy production pathways: a high K was associated with high TCA fluxes, while a high Jmax was associated with high glycolytic fluxes. Enrichment analysis of protein sets confirmed our results. This powerful approach allowed us to identify the molecular and metabolic bases of integrated trait variation, and therefore has a broad applicability domain.
Suggestions for manuscript improvement
Paragraphs at lines 41-58: It is not clear what the authors consider as high-throughput and what technique as low-throughput. Also here, I am confident that many researchers would disagree with the claim that no current metabolomic approaches can be considered high-throughput. E.g., direct-injection FTICR-MS provides tens of thousands of masses and their intensities in less than a minute per sample. Thus, when discussing metabolomics in the context here, the authors should refrain from using the terms low/high-throughput and instead clearly describe the potential and shortcomings of metabolomics techniques for understanding phenotypic variation at the metabolic flux level.
Lines 55-56: "Technical developments in mass spectrometry have boosted metabolomics by enabling the characterization of the metabolome, i.e. the complete set of metabolites in a cell.". Please rephrase and clarify this sentence. First, why mention only mass spectrometry and not mass spectroscopy? Especially since NMR was mentioned just two sentences before in the same paragraph. Second, there is to date no technique that is able to characterize all metabolites in a cell; each technique can measure only a specific range of metabolites (e.g. with respect to a specific mass range, polarity, hydrophobicity, etc.).
Lines 52-59. We thank the reviewer for this remark. Indeed, there is no technique able to characterize all metabolites in a cell. We removed the last part of the sentence. We mentioned mass spectrometry and not mass spectroscopy because it is indeed mass spectrometry that is used for metabolomics ("Essentially, mass spectroscopy is the study of radiated energy and matter to determine their interaction, and it does not create results on its own. Spectrometry is the application of spectroscopy so that there are quantifiable results that can then be assessed" [https://verichek.net/spectroscopy-vs-spectrometry.html]).
Lines 68-69. Why is this a specific population genetics view? Isn't it more a cell physiology/evolutionary view?

Line 70. Right. We deleted this irrelevant phrase.
Subsection 2.1 is very short. It begins by stating that the two algorithms (HT and EP) are compared, but it does not report any results from the comparison and only states that EP "gave a good approximation", without providing any quantitative results from the comparison. There is more detail in the appendix, but the reader would appreciate more details in the main text in order to understand the authors' steps in this work.
Lines 138-148. We agree that this subsection 2.1 was too short and that the quantitative results were missing. So we added the required additional information.
A central notion in the manuscript is the distinction between "observable" versus "non observable" traits. Yet, the manuscript does not provide a clear definition for this distinction. For instance, are enzyme abundances "observable"; what about metabolite concentrations or reaction fluxes? Does non-observable mean that these traits are just difficult to measure?
We agree that the term "observable" is improper. We thus replaced "observable" traits with "integrated" traits or "high-level phenotypes", terms that seem more suitable to us.

The use of the term "secondary metabolites" is somewhat different from that in most publications. I am aware that in the scientific community "secondary metabolites" is loosely defined, but pyruvate, succinate and acetate are usually considered metabolites of the central metabolism and not secondary. Thus, I would use a different term than secondary metabolites (e.g. lines 217, 234, 362) to prevent misunderstandings. Perhaps, in the context of the present work, a term like "minor fermentation products" would be more fitting?
We fully agree that pyruvate, succinate, acetate, etc. are not secondary metabolites. In the context of the present work, we should rather distinguish between fermentation products and downstream metabolites, those that are produced in the downstream steps of the Krebs cycle. We modified the text accordingly.

Fig. 5: Word clouds are not a scientifically sound way of presenting quantitative data, since visual differences might be misleading. Since the font size corresponds to the correlation of the respective fluxes in those groups with the LD1-axis, a better way to present this information would be a simple bar plot with the correlation value as bar height.
Fig 5. We thank the reviewer for this suggestion. Now we represent the functional enrichment results in a bar plot. Of note, we would like to underline that the font size in the previous representation did not correspond to the correlation of the respective fluxes but to the proportion of proteins positively/negatively correlated to the LD1 axis belonging to a functional category, divided by the proportion of proteins from the same category found in the MIPS database. We added this information in the caption for clarity.

Table S1. Because adding the names directly on the figure would make it too loaded, we added a supplementary table with the full metabolite names.

Figure 3D: Why is the x-axis scaled in a way that it shows ranges without data? If the scale is adjusted, the difference between groups might be more obvious from the visualization.
Reviewer #2:
The work by Petrizzelli et al. uses a constraint-based metabolic core model of S. cerevisiae together with quantitative proteome data to predict metabolic flux distributions. These flux distributions parallel observations at the trait level and thus provide a rational and mechanistic interpretation.
In general, the work is interesting as it provides a data science approach to bridging disparate data sets. The presented work is sound; however, its main weak point is the lack of experimental validation. The authors aim to predict flux distributions in diverse yeast strains and confirm their validity indirectly by looking at phenotypic variation, but lack validation at the flux level (at least for some strains). Given the many simplifications applied, I think it is necessary to provide a direct experimental validation at least for certain fluxes in selected strains to establish the feasibility of the suggested approach.
The goal of the work is to provide an original approach to bridge the gap between proteomic variation and high-level phenotypic variation, not to give estimates of real flux values. The strategy consisted in computing fluxes from the integration of data from different scales that reflect the dependency structure between observations. We applied this approach to a large experimental dataset as a proof of concept, and the results fully confirmed its validity. Besides, we here used a previously published dataset (from the HeterosYeast project) that did not include flux measurements other than the CO2 flux. Obtaining biologically sound results with limited information about flux values was among the objectives of the work, and this objective has been achieved.
Major
* The authors simulate growth on a minimal glucose-limited medium and compare it to experimental data on a chemically complex medium. Please justify that assumption. In particular, why do the authors not expect any impact from amino acid metabolism or extracellular TCA supplements?
We thank the reviewer for this important point that we forgot to mention. Indeed the experimental data were obtained on yeasts grown on a complex medium close to enological conditions (Sauvignon blanc grape juice), while we simulated growth on minimal glucose medium. Despite this, we were able to obtain consistent results that show that the negative relationship between growth/fermentation traits and production traits is accounted for by a differential usage of the energy production pathway. This indicates the robustness of these processes with regard to the carbon source and gives more generality to the results.
In addition, this model has been previously used to study yeast growth on grape must. We added the following text and the corresponding reference in the Results section (lines 116-119): "In the DynamoYeast model, the only entry is glucose, and the model does not take into account the complexity of metabolism like the recycling of amino-acids or extracellular TCA supplements. However, it was shown to accurately predict growth on complex medium like grape must [26]." We also added the following in the Discussion (lines 357-363): "Despite the fact that the DynamoYeast metabolic model is an oversimplified model of central carbon metabolism with glucose as the only external carbon source, we show that protein abundance variations were sufficient to capture quantitative changes in the orientation of central carbon metabolism that occurred between strains and between growing temperatures in our dataset. Even though our flux predictions may not be very accurate, we are confident that we captured the main patterns of flux variation. Predicting unobserved fluxes from observed protein abundances overall adds information about the functioning of the actual metabolic network."

* The authors limit themselves to a core model of central carbon metabolism although, for instance, with yeast8 a highly curated metabolic model would be available too. It is even more surprising as the authors can therefore only use 33 protein abundance data of a much richer data set. This raises the concern that the observed correlation between the proteome and fluxome is a consequence of the very restricted degrees of freedom in the model. The authors should at least indicate the number of independent fluxes and the overlap with their proteome. In addition, the authors should enlarge their model and verify that the observed correlation remains similar.
We added a subsection (lines 639-652) in the Material and Methods section that provides information about the size of the null space (Ker(S)=16) and its structuring into metabolic modules showing that the number of degrees of freedom is not too small regarding the size of the model. We also added sentences to indicate (section 2.2): -that the correlations become more stable and less sensitive to the sampling of the reactions whenever the number of pseudo-observations exceeds the number of degrees of freedom (lines 167-169); -that the simulations, along with the observation of the distributions of the observation between the metabolic modules, can be used to check the quality of the metabolic model coverage (lines 177-179).
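The null-space dimension referred to here can be computed directly from the stoichiometric matrix. The sketch below uses an invented toy matrix (the actual DynamoYeast matrix is not reproduced) only to show the mechanics:

    import numpy as np
    from scipy.linalg import null_space

    # Toy stoichiometric matrix: rows are internal metabolites, columns are
    # reactions (invented example, not the DynamoYeast matrix)
    S = np.array([
        [ 1.0, -1.0,  0.0,  0.0],   # metabolite A: produced by r1, consumed by r2
        [ 0.0,  1.0, -1.0, -1.0],   # metabolite B: produced by r2, consumed by r3 and r4
    ])

    K = null_space(S)     # orthonormal basis of Ker(S) = {v : S v = 0}
    print(K.shape[1])     # 2 degrees of freedom at steady state for this toy network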
We also added the following sentence in the Discussion (lines 382-384): "The structure of the stoichiometry matrix allows defining metabolic modules that correspond to the main metabolic pathways [28]. Our simulations showed the importance of covering most metabolic modules with observations of protein abundances."

* The authors strictly use the GPR mapping; in particular, they use min(P1,P2) for an AND association. In their data, how often do the authors see that P1 is upregulated while P2 is downregulated? This could be a hint at post-translational regulation at those points and should be at least mentioned. What if you exclude such data?
We added this information in section 4.1.4 of the Material and Methods (lines 542-548). All the pairwise correlations concerned were either positive or null. We thank the reviewer for this observation, which shows that this point was not clear in the manuscript.
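To make the GPR mapping concrete, here is a minimal Python sketch. The min-for-AND rule is the one discussed above; treating OR as a sum over isoenzymes is a common convention that we assume here for completeness, and the function name and example proteins are invented for illustration:

    def gpr_abundance(rule, abundance):
        # rule: either a protein name, or a tuple (op, arg1, arg2, ...) with
        # op in {"and", "or"}; AND (enzyme complex) -> min over subunits,
        # OR (isoenzymes, assumed convention) -> sum over alternatives
        if isinstance(rule, str):
            return abundance[rule]
        op, *args = rule
        values = [gpr_abundance(arg, abundance) for arg in args]
        return min(values) if op == "and" else sum(values)

    # (P1 AND P2) OR P3: a two-subunit complex with P3 as an isoenzyme
    rule = ("or", ("and", "P1", "P2"), "P3")
    print(gpr_abundance(rule, {"P1": 4.0, "P2": 1.5, "P3": 2.0}))   # min(4, 1.5) + 2 = 3.5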
Lines 94-95. We stated more clearly in the introduction that previous studies failed to find a clear link between proteomic data and integrated phenotypes.
Lines 665-680. We revised the Statistical analyses Method section (4.3) to better explain the approach, and we performed the same approach as in Fig 5 by discarding the flux level and directly studying the protein-trait relationship.
We revised the Results section 2.6 and added an additional Supplementary Figure (Fig S5). We show that neglecting the flux levels leads to a poorer discrimination between groups of traits. Besides, while this approach would allow us to find some proteins related with some trait groups, it would not allow us to connect this variation with changes in central carbon metabolism.
The conclusion drawn at the end of the Introduction (lines 105-107) could not have been reached without integrating the flux level. This was also specified in the Discussion (lines 405-406).

* L.148 "algorithm was efficient for" How was efficiency determined or measured? Please define or reformulate.

We wrote that the algorithm was actually efficient since the simulated fluxes were highly correlated to their initial values (see values in Figure 2). We reformulated this in the text (lines 170-179).

* L.294 "we were able to show that the metabolic flux level retains information" Please reformulate, as you don't know whether your predicted flux levels are correct.
Actually, we do not claim that our predicted flux levels are correct; we just write that they "retain information". This statement seems valid to us, since introducing the predicted flux levels in our modeling allowed us to bridge the gap between proteomic data and integrated traits and to show that the negative relationship between growth/fermentation traits and production traits was accounted for by a differential usage of the energy production pathway.

* L.329 "Therefore, it is important that protein abundance observations cover the main features of the architecture …" This is a key point that I hinted at above. However, the authors do not highlight how that can be achieved or what principle should govern that choice.
We thank the reviewer for this remark. We had omitted this point in section 2.2. As explained above, we rewrote section 2.2 and added a new subsection in the Material and Methods section (lines 639-652) to better explain how we checked that the enzymatic proteins associated with the CBM were good predictors of metabolic fluxes by means of null space analysis and numerical simulations.
Minor
L.62, "The idea that a given set of environmental conditions will drive a cell to a steady state …" I think that is only true in the artificial setting of a chemostat but not true in any more realistic setting. Please reformulate.
Lines 63-64. Done. We rephrased this into "The idea is to explore the system's properties at a steady state, during which internal metabolites stay at a constant concentration while exchange fluxes are constant and correspond to a constant import/export rate."

L.65 "the number of metabolites is much higher than the number of reactions." It's the other way round in a (genome-scale) metabolic model.

Indeed, we corrected it (line 66).
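For reference, the steady-state assumption in the rephrased sentence is conventionally written as a linear constraint on the flux vector (standard constraint-based-modeling notation, not text from the manuscript):

    \frac{d\mathbf{c}}{dt} = S\,\mathbf{v} = \mathbf{0}, \qquad S \in \mathbb{R}^{m \times n},

where c is the vector of internal metabolite concentrations, S the stoichiometric matrix (rows: metabolites, columns: reactions) and v the flux vector. The reviewer's correction above amounts to noting that in genome-scale models the number of reactions n exceeds the number of metabolites m.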
L.72 Please expand your argument for why [12] seems a more promising method than others. You say that fluxes should covary with enzyme abundance, essentially ignoring any post-translational regulation. Why should that be a realistic assumption?
We explained our choice of the method of [12] more thoroughly in the Introduction (lines 76-82).

Out of interest, since your model is small, could you have done an elementary flux mode/vector analysis and characterised the totality of the solution space explicitly rather than doing sampling?
Yes, we could have done it. However, the objective of the paper was to provide a proof of concept that could also be applied to a more realistic, higher-dimensional metabolic model.
"Biology"
] |
Suggestions for a Parametric Typology of Dance
Dance and language are produced and performed by the body and governed by cognitive faculties. Yet regrettably little scholarship applies the tools of formal analysis from one field to the other. This article aims to enrich the dialogue between the two fields. The authors introduce an approach to dance typology informed by an analogy with the parametric theory of language analysis, which is useful in typologizing languages. This initial exploration paves the way for a physiological typology of dance that does not reference culture.
Typology of Dance

Donna Jo Napoli and Lisa Kraus
This work is the collaboration of a professional dancer and dance educator with a linguist who was a student of dance in her youth and returned to dancing after a twenty-year lapse. We noted how dance students apply prior knowledge to a new task, affecting how they carry it out. Something similar happens with language: A second-language learner applies knowledge of a first language and hence makes predictable errors. Our continued discussion led us to a systematic way of connecting observations of dancers' habits to physical properties of the (moving) body: a typology of dance that uses an approach to language typology as a model.
We do not claim that dance is a type of language nor vice versa. It would be silly to demand of a movement event that it have an agreed-upon "meaning" in order for it to qualify as dance [1]. We do not even claim that language and dance "build on a shared cognitive architecture" [2]. We simply recognize that both dance and language have a biological foundation (see supplemental Appendix A; appendixes provided with online version of this article) and the body is their expressive tool; thus applying linguistic methods grounded in biology to the study of dance might reveal insights.
Others have incorporated culture when comparing language to dance (Appendix B). Certainly, the appreciation of a dance tradition demands attention to culture; aesthetic judgments (partially cultural) are made by choreographers and dancers who interpret that choreography, just as, in language, the poet, novelist, storyteller and preacher all make culture-linked choices. Thus the task of analyzing dance is enormous; dance is artistic, philosophical, political and emotional. For this reason, it may be helpful to separate dance into components. One part is physical-the tool is the human body (although it can be augmented by props, lighting, etc.). Dance has a biological foundation. Biological evolution is enormously slower than cultural evolution, and biology transcends culture in that physical considerations limit or allow movement. Therefore, attention first to the physical allows later attention to cultural impositions without confusion of the two. A physical approach may clarify connections among dance traditions around the globe that belong to disparate cultures.
Precisely this has happened with language: Comparisons between distant languages (genetically and culturally) reveal patterns that are free of culture and, in this sense, more basic (Appendixes A and C).
In the hope that a parametric approach to dance might offer something similar, this work is a narrow study of the physical aspects of dance. We propose that dance types are characterized by a small set of parameters whose settings account for the wide range of movement differences.
LINGUISTIC PARAMETERS
Appendix C offers examples of linguistic parameters. While no area of theoretical linguistics is without controversy, parameters are a primary way of making comparative generalizations across the components of the grammar (Appendix A). While a parametric typology distinguishes between grammars rather than particular utterances, the parameters are responsible for aspects of the organization of the sounds and words in utterances. Thus parameter settings are realized in utterances. This is important to bear in mind as we approach the analysis of dance-where parameters belong, arguably, to dance technique but are realized in dance movements.
EXISTENCE OF DANCE PARAMETERS
Seeking dance parameters is a new endeavor in dance typology (see Appendix D for approaches in other disciplines). Establishing a parameter-based approach to dance typology requires the same level of scholarship that linguists devote to the task of analyzing language. It would take years of scholars studying dancers and nondancers learning many types of dance, interviews with teachers and audience perception studies. This article does not do that; rather, it is a thought piece intended to initiate discussion. Still, we do present three arguments for the existence of dance parameters.
First, choices in dance lead to movements coherent with those choices and therefore characterize dance traditions. When we make these choices, we set the parameters of the dance traditions. Consider the fact that it is difficult to override existing movement memories [3]. In a modern dance technique class (MDTC), it is easy to tell who has had prior ballet training, an observation that spurred our study. Those with ballet training will often have trouble allowing movement to follow weight shift rather than vice versa. They will maintain alignment of body parts in the face of their teacher's demonstrations that break such alignments. Their movement quality will be balletic. Their inclination will be to turn their feet out. In sum, they carry ballet parameter settings into MDTCs; they have a dance "accent." The dance student's work here is similar to a second-language learner's, where influence from the parameter setting of a first language causes errors in a second (Appendix A).

Next, consider the task of circumscribing dance traditions. Some scholars object to making distinctions between dance forms at all. Take, for example, Twyla Tharp's opinion on modern and ballet: "The division was very artificial. It was a war zone that we didn't need. I think that movement is movement" [4]. Further, many dance types are in flux; contemporary proponents present new directions. Here, we work with gross classifications to demonstrate the potential of our approach. But even while the lines between dance forms are not clean, (these very) dancers and choreographers recognize different forms as they talk about "crossover" dances and dancers, as witnessed by Tharp's dances "Deuce Coupe" and "Push Comes to Shove."

Our third type of evidence for dance parameters is a "negative" one: When you learn a dance, some movements are easily adjusted for. We asked a class of linguistics students from seven countries to pinch their earlobes as they learned a new dance. No one had trouble. Further, they didn't find themselves pinching their earlobes when they were asked to learn another dance where we made no mention of hand-ear contact. The ease of learning and the absence of interference from or into other dances indicate that no previous dance experience had set a parameter for hand-ear contact (which does not preclude the existence of such a parameter; it indicates only that these students had not set the putative parameter). This contrasts with the work involved in learning to point toes in ballet and the interference of ballet point when learning other dance types.
LIKELY DANCE PARAMETERS
Scholars discuss many physical factors relevant to dance traditions, working from observations of dances (Appendix E). In contrast, we start not from a set of dances, seeking parameters within, but from the physical realities of a body at rest and in motion, realities that are unavoidable regardless of dance tradition.
We begin with these physical facts:

• The human body is symmetrical, left to right.
• It has a most prominent facing.
• When standing at rest, paired joints are aligned vertically and horizontally.
• Gravity affects a dancer.
• Motion goes through space and time.
• In a natural resting posture, the head is highest.
• Motion initiates somewhere.
• Motion is composed of parts, thus sequencing arises.
• Motion has quality in terms of flow.
• Motion varies with muscle tension.
These observations allow us to propose the potential parameters below, where we often mention how an MDTC might teach the settings. This is not because MDTCs are a standard but because we, personally, can draw from our experiences with them.
Our discussion is sometimes of participatory dance, sometimes of performance dance, although the latter witnesses almost constant introductions of new techniques. For example, David Parsons's dance "Caught" subverts expectations about gravity: the lights strobe, and, when they are on, the dancer is mid-leap. Here, however, we discuss unavoidable physical realities. Talking about dance seen only under a strobe is fascinating but adds little to the understanding developed here.
Prominence
Human bodies are symmetrical across the sagittal plane; maintenance of balance across that plane in ordinary activity helps avoid pain and deformity. MDTC teachers show consideration of this balance by having students do a movement figure "on the right" and then "on the left." Evidence that students learn this balance comes from the fact that while injuries are frequent, common causes are intensity and frequency of movement [5], not fatigue on one side. Studies of injuries rarely mention which limb gets injured, presumably because the side on which an injury occurs is not significant [6].
Individual dances, and even dance traditions, on the other hand, can be imbalanced with respect to sides. Studies of injuries in elite preprofessional ballet dancers do record a side difference (in contrast to general studies of dance injuries), where the right foot and ankle are typical injury locales [7]. Perhaps ballet is right-side prominent (Fig. 1). We therefore suggest a Side Parameter.
Second, locomotion involves a path with direction. The human body can move upright in any direction. Forward and backward movements are primary, and in these movements inter-limb (arm/leg) coordination aids, but it does not in sideways locomotion. Forward is more primary than backward, probably because of facing (discussed below) and how the central networks function in the control of locomotion [8]. However, dancers do not usually move forward exclusively in a dance, and if they do, we do not expect them to maintain a straight path and/or to turn their backs to the audience. We therefore propose a second prominence parameter: the Direction Parameter. Again, it is common in MDTCs to do an exercise first "to the front," then "to the back," and often "to the side" as well; so teachers explicitly teach the various settings of the Direction Parameter.
This naturally leads to another prominence parameter: the Facing Parameter. The motor orientation of our bodies, forward, correlates with our sensory reception being largely on the body's front facing. Additionally, nonlinguistic expressions of emotion occur mainly through the manipulation of facial features, postures and respiration (panting, heaving) [9], which are easily detected from the front, less so from the side and least from the rear.
Evidence for prominence parameters comes from noticing student errors-another observation that spurred our study. Generally, students find one side easier-typically the right (dominant for most). If a dance sequence is long, complex and fast, we often find the following situation: A student will perform the sequence properly on the right. Then the student will attempt the left. She might perform a few phrases properly, mess up the next by performing it on the right and from that point on the rest will be performed on the right; the student will switch parameters from left to right, midway.
Prominence matters in dance traditions. In weapon dances, the hand holding a sword can be predetermined, e.g. the right hand in English Morris dancing (Fig. 2). The fact that one hand holds weight with a visual extension into space while the other doesn't affects the (quality of) various arm movements. Likewise, many dance traditions maintain continual awareness of facing front, with the dancer largely facing the audience or in partial profile (as in ballet), whereas others have a 360-degree orientation (as in folk and western performance dances). There are even performance dances in which the back can be prominent (such as Trisha Brown's "If you couldn't see me," a deliberate departure), subverting our expectation of front facing conveying the most information.
Alignment
In MDTCs, the cautionary reminders to "keep your shoulders over your hips" or, when in a lunge, to "keep your knee over your ankle" are calls for body-part alignments. If each of the pairs of shoulders, hips, knees and ankles are aligned horizontally and vertically, we have no locomotion with respect to the lower limbs. Unless we move exclusively on our hands, we must throw off this alignment to achieve locomotion.
Alignment has physiological effects, which might offer motivations for the parameter. Proper alignment avoids health problems; misalignment exacerbates them.
All dance traditions break some alignments, but in at least one, the shoulders and hips maintain vertical and horizontal alignment: Irish step dance (Fig. 3). Thus there is a Vertical/Horizontal (V/H) Alignment Parameter.
A second alignment parameter involves orientation, which follows from the fact that some joints both flex and rotate. Men tend to have natural foot rotation inward [10], while women have it outward [11]. This tendency might motivate the Orientation Parameter. In MDTCs and ballet classes, one contrasts "turnout" with "parallel." The student is taught to point toes in the direction in which knees point, to protect the health of the joints. Early ballet set this parameter to turnout as the default position (Fig. 4).
Throwing off the alignment of any joint pair leads to weight shift and possibilities for locomotion. Dancers can vary dynamics so that their movement follows weight distribution (a hurled body part) or their weight distribution follows movement (a controlled placement).
Dance traditions and individual dancers can resolve a change in alignment in a physically natural way (with flow) or in a jarring (but perhaps pleasing) way. Thus the misalignment and weight shift caused by a dancer lifting her right leg forward, for example, may be resolved by stepping forward on her right foot but it could as well be resolved by leaning her torso and head backward or perhaps even falling backward.
Gravity
In MDTCs, teachers help students master reactions to gravity, including submission (dropping), resistance (lifting limbs) and defiance (jumping), in preparation for the ways dance traditions treat body weight; that is, preparing them to respond properly to the particular dance tradition's setting of the Gravity Parameter. Some dance forms re-embrace gravity, working with deeply folded joints and an earth-hugging weightedness. Some do the opposite, aiming for flight. Often, changes in weight position affect other parameters; the ballerina doing a pirouette arabesque must increase abdominal tenseness (a parameter below) to maintain vertical alignment [12].
Aerial dance allows innovations in vertical movement and changes the effects of weight shift. Ballet creates the illusion of freedom from gravity. In a leap, as a dancer's legs rise, the center of gravity rises, affecting the path of the dancer's head. What would have been a simple arc becomes a curve upward, then a straight line, then a curve downward; the dancer seems to sail. Partnering in lifts further allows the appearance of weightlessness. Providing support as a dancer turns extends the number of turns possible, seemingly defying the laws of momentum and inertia [13].
Inversion
The human body favors the upright position, as evidenced by the weight and shape of our bones and joints, as well as by the range of motion of our legs compared to our arms. This very naturalness makes inversion noticeable, and noticeable variations-particularly those requiring skill, strength or flexibility-are a fecund source of parameters, hence we propose an Inversion Parameter. Unsurprisingly, in MDTCs we often invert (e.g. cartwheels, moon rolls, etc.).
Some dance traditions are built on inversions. Capoeira and break dance place weight on the arms, head and upper back, freeing the legs to kick, twist and flutter (Fig. 5). Aerial dance removes weight and balance constraints on inversion, allowing even a finger to be the lowest part of the body. Contemporary dancers have playfully transferred dance phrases created with arms to legs and vice versa, or they have taken the arms from one choreographed dance and joined them to the legs from another (consider the works of Sara Rudner or Trisha Brown, as in Brown's "Present Tense").
Space
Dance can involve movement in the area immediately surrounding the body, as far as one reaches with legs and arms (the kinesphere), as well as movement across the floor and in the air and aquatic space. Thus we propose a Space Parameter. Western dance traditions stretch to the edges of one's kinesphere [14]. There may be leaps and jumps, with arms and legs thrust to fully straightened length. In contrast, Eastern dance traditions tend to cover less space, and joints are more often softly bent [15]. Each dancer rests within his kinesphere.
Sequencing
Movement can be analyzed in parts; therefore dancers must learn to sequence linearly and produce movements simultaneously. MDTCs often practice isolating movement, perhaps to help dancers understand Sequencing Parameters. A dancer might make a circle with the top of her head, whole head, shoulders, ribcage, pelvis, knees or ankles. Often, teachers will have students do "the leg part" without the arms, then "the arm part" without the legs, before "putting it all together." Sometimes linear sequencing is not a choice for physiological reasons; a plié must precede and follow a jump, for example. Other times, sequences of movement may be constrained by momentum and flow: physics in a casing of esthetics [16].
Choreographers might choose to initiate two actions simultaneously or to begin one action before the last one is completed. Simultaneous actions can have different speeds, directions and energy qualities, making danced action less predictable.
Rhythm
Dance moves through time, so a Rhythm Parameter is evident, where the absence of rhythm is a possible setting. Teachers in many traditions count aloud (or clap, etc.) as students dance. If there is a musician accompanist, the teacher may request a certain timing. That we recognize timing distinctions in musical (or other) accompaniments independently of other auditory input [17] supports the idea that rhythm is an independent parameter.
In MDTCs, teachers might repeat exercises with different timing. Even when teachers have students walk "naturally," or tell them to do a sequence with whatever timing "works" for them, the accompanist might still be playing at her own choice of tempo. Teachers use music to guide the rhythm of dancers' movement, which is no surprise, given that dance and music go hand in hand in many dance traditions.
Rhythm is so deeply ingrained in dance that when untrained people dance they adjust their movements to match a rhythm change in the musical accompaniment, especially with bass drums [18]. Likewise, small children are inculcated with their culture's dance traditions at events that usually feature music. When we see a two-year-old responding to a musical beat, sometimes the child's movements are typical of their developmental stage, but other times the movements are particular to their culture's dance traditions.
Initiator
Movement always starts somewhere; hence we propose an Initiator Parameter. However, we must differentiate between which part of the body seems to be pulling or pushing the other parts along versus where an actual movement starts.
In MDTCs, teachers might ask students to "initiate movement" with a given body part in the former sense. With this notion of initiator, any visually apparent body part could serve. These initiators are "external," since the movement could initiate with the head, perception-wise (but also production-wise), but it then carries the whole body through space.
We can also consider "internal, " or intrabody, initiators of movement. Where within the body does movement start and where does it go? MDTC teachers talk about letting the movement start from "the core" and radiate (or ripple, or move in jagged ways) to the extremities or vice versa.
Quality of Movement
Movement has something almost textural about it, which is difficult to describe but that we call here "quality", and that leads us to propose a Quality of Movement Parameter. You can prance or march. A hand can jab the air or glide through it.
The quality of movement characterizes dance traditions. In Cambodian Khmer dance, for example (Fig. 6), some dancers are assigned roles based on their body type and facial structure. Within each role there is a specific gestural language, recognizable by the movement's strength or smoothness or jaggedness, and a further distinction in its manner, with gradations from most to least refined [19]. In Phnom Penh, Soeur Thavarak demonstrated classical Cambodian versions of actions as done by "refined, " "ordinary" and "wild" monkeys [20]. Each has its version of a scratch, a walk on four limbs or three limbs, etc. Differences in movement quality concern placement, size and tone: The wildest monkey has looser, more "flung" movement. The most refined monkey is slower, with smaller actions and less (seemingly involuntary) repetition.
Sophiline Cheam Shapiro, director of the Khmer Arts Ensemble in Phnom Penh and a former court dancer, trains dancers to become "stars" [21]. Shapiro implies that what imbues dancers with their unique, ineffable quality is a spiritual connection to deceased teachers and former dancers who literally possess the dancer. What are the differences one senses that make one dancer's qualities sublime compared to another dancer who is competent but lacks that certain something? It lies in the subtleties of their movement manner.
Tenseness
Muscles are involved in voluntary movement. In MDTCs, teachers often have students lie on the floor with their arms and legs extended in an X. The students then roll to one side, curl into the fetal position and extend fully before taking the fetal position again and then returning to the X. This is typical of exercises meant to teach dancers to contract, relax and extend their muscles-to master the settings of the Tenseness Parameter.
Many types of dance can be partially characterized by extreme contractions in the torso or in toe pointing or, alternatively, by the lack of these.
An Approach to Dance Typology
We have suggested several possible parameters. While the motivation for these parameters was physical, not cultural, many are common to cultural approaches (Appendix E). This considerable overlap encourages us in thinking that parameters might usefully typologize dance purely by articulatory factors.
In Table 1 we explore these parameters by comparing six dance forms: Cambodian, Modern/Contemporary (M/C), West African, Ballet, Release and Hip-Hop. These were selected to offer some distribution across the world and across traditional and newer western forms.
Evaluating the usefulness of this approach of distinguishing between dance traditions requires us both to take a closer look at the details and to step back for an overview.
Let's consider just one detail: use of the torso. This is largely the interaction of several parameters: V/H Alignment, Initiator, Quality, Rhythm, Sequencing and Tenseness. In Cambodian, the shoulders and hips align, while the torso front and elbows can both lead and move in different directions. In Ballet, the spine is fully extended and used in a refined way, with backward arches and head bowing. In the other four dance types, the spine is fluid, with motion initiating from within it. In M/C, we find contractions in the pelvis and other areas and freedom of alignment. In West African, we find rhythmic undulations, with isolations of subparts and an active pelvis. In Release and Hip-Hop, there are refined articulations of the spine within small sections, with Hip-Hop showing percussive rhythm.
When taking a step back for overview, we see that Cambodian is the outlier in the table, although it has similarities with Ballet. M/C shares characteristics with all but Cambodian. Ballet differs dramatically from all the others, with the strong exception of M/C. West African is similar to M/C but also shares characteristics with Release and Hip-Hop.
Additional work is needed to refine these parameters and add new ones. As they presently stand, the parameters raise sticky issues regarding discreteness. V/H Alignment affects the ways in which a dancer can respond to gravity, and gravity (and physiology) affects the alignments a dancer can achieve. The use of space is affected by sequencing, which in turn affects the extent to which gravity becomes relevant, and so on. Further exploration should winnow away unenlightening distinctions. With respect to adding or replacing parameters, the parameters here should not limit the discussion. Our parameters thus far do not mention partnering, for example. All but Release dance make use of choral action in unison. Cambodian partnering is mostly without contact, while M/C, Release and Hip-Hop make extensive use of contact. In M/C and Release, women support men and vice versa, whereas in Ballet, partnering is mostly handholding or support for lifts and balances. In West African, individual dancers create personal variations, and duets show mock battles and courtships. In Hip-Hop, dancers take turns being soloists, often having interactive gymnastic-like routines. Partnering may be a major factor in distinguishing dance traditions. Likewise, our parameters do not give attention to upper limbs or the head. Study of the use of hands could offer contrasts between some traditions and new connections between others.
Conclusion
Dance parameters of a purely physical nature exist and offer new ways of grouping dance traditions. While we have suggested a handful of parameters, precisely which ones exist and how they interact should be established by further rigorous research. Still, we hope to have shown that one of the more insightful ways of typologizing languages may fruitfully be applied to typologizing dance forms.
A WISPR of the Venus Surface: Analysis of the Venus Nightside Thermal Emission at Optical Wavelengths
Parker Solar Probe (PSP) conducted several flybys of Venus while using Venus’ gravity for orbital adjustments to enable its daring passes of the Sun. During these flybys, PSP turned to image the nightside of Venus using the Wide-field Imager for Solar PRobe (WISPR) optical telescopes, which unexpectedly observed Venus’ surface through its thick and cloudy atmosphere in a theorized, but until-then unobserved near-visible spectral window below 0.8 μm. We use observations taken during PSP’s fourth Venus gravity assist flyby to examine the origin of the Venus nightside flux and confirm the presence of this new atmospheric window through which to observe the surface geology of Venus. The WISPR images are well explained by emission from the hot Venus surface escaping through a new atmospheric window in the optical with an overlying emission component from the atmosphere at the limb that is consistent with O2 nightglow. The surface thermal emission correlates strongly with surface elevation (via temperature) and emission angle. Tessera and plains units have distinct WISPR brightness values. Controlling for elevation, Ovda Regio tessera is brighter than Thetis Regio; likewise, the volcanic plains of Sogolon Planitia are brighter than the surrounding regional plains units. WISPR brightness at 0.8 μm is predicted to be positively correlated to FeO content in minerals; thus, the brighter units may have a different starting composition, be less weathered, or have larger particle sizes.
Introduction
The Wide-field Imager for Solar PRobe (WISPR; Vourlidas et al. 2016) is the sole imager on board the Parker Solar Probe (PSP; Fox et al. 2016; Raouafi et al. 2023) mission, which was designed to study the solar corona. PSP resides in a heliocentric orbit with an aphelion around the orbit of Venus and a perihelion designed to gradually decrease from 35 R⊙ to 9.86 R⊙ over the course of seven years via seven Venus gravity assists (VGAs).
As with previous interplanetary missions, VGAs provide an opportunity to conduct brief, but impactful science while at Venus. During VGA3 on 2020 July 11 and VGA4 on 2021 February 20, PSP/WISPR was used to observe the nightside of Venus, and the resulting broadband optical images revealed a surprising sensitivity to the surface. Wood et al. (2022) provided initial and novel evidence that the WISPR flyby observations of the Venus nightside are sensitive to the thermal emission from the hot Venus surface at wavelengths shortward of 0.8 μm. While sensitivity to the Venus surface at red-optical wavelengths had been previously predicted (Moroz 2002; Knicely & Herrick 2020), the PSP/WISPR observations demonstrated it clearly and represent the shortest-wavelength Venus nightside observations to date (Wood et al. 2022).
Venus' thick and cloudy atmosphere poses a formidable challenge to remote sensing observations of the subcloud region. However, thermal emission from the hot surface is able to emerge from the Venus atmosphere through discrete opacity windows between CO₂ and H₂O absorption bands, where it can be observed on the planet's nightside. As a result, there is a rich scientific literature of using these nightside opacity windows to conduct studies of the Venus lower atmosphere (e.g., Allen & Crawford 1984; Crisp et al. 1989; Pollack et al. 1993; Meadows & Crisp 1996; Arney et al. 2014; Peralta et al. 2017; Iwagami et al. 2018) and surface (e.g., Hashimoto & Sugita 2003; Hashimoto et al. 2008; Mueller et al. 2008; Smrekar et al. 2010; Basilevsky et al. 2012; Gilmore et al. 2015; Shalygin et al. 2015). Over time, opacity windows have been discovered at ever shorter wavelengths, where the thermal flux from Venus is at lower contrast and the signal-to-noise requirements are higher. The nightside thermal brightness of Venus rapidly decreases toward shorter wavelengths in the near-IR (NIR) and optical, thus requiring higher-precision instruments and the spatial resolution afforded by flybys to detect. Notably, observations made during VGA flybys have led to critical insights into Venus nightside remote sensing, and include flybys by Galileo in 1990 (Carlson et al. 1991; Drossart et al. 1993; Hashimoto et al. 2008), Cassini in 1999 (Baines et al. 2000), and now PSP in 2020, 2021, and forthcoming in 2024.
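To make that falloff concrete, the sketch below evaluates the Planck spectral radiance of a blackbody at roughly the mean Venus surface temperature (∼735 K) at a few window wavelengths; the script structure and the chosen wavelengths are illustrative, not drawn from the paper's pipeline.

```python
# Minimal sketch: Planck spectral radiance of a ~735 K blackbody (roughly the
# mean Venus surface temperature) at several window wavelengths, illustrating
# how steeply the available thermal flux falls toward the optical.
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temperature_k):
    """Spectral radiance B_lambda in W m^-2 sr^-1 m^-1."""
    x = H * C / (wavelength_m * KB * temperature_k)
    return (2.0 * H * C**2 / wavelength_m**5) / np.expm1(x)

for wl_um in (0.8, 1.0, 2.3):
    b = planck_radiance(wl_um * 1e-6, 735.0)
    print(f"{wl_um:.1f} um: {b:.3e} W m^-2 sr^-1 m^-1")
# The 0.8 um radiance lies orders of magnitude below the 2.3 um value, which
# is why the shorter-wavelength windows demand much higher sensitivity.
```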
In this paper, we build on the work of Wood et al. (2022) and examine the potential for Venus surface science using the WISPR flyby observations. We correlate WISPR emission to mapped geologic units to assess the sensitivity of WISPR observations to potential surface compositional differences; simulate thermal emission from the surface through the thick and cloudy Venus atmosphere across a broad range of surface temperatures and emission angles to provide robust evidence that surface thermal emission dominates the observations; and discuss potential contributions to observed spatial variability. In Section 2, we describe our methods, including descriptions of the WISPR instrument and data, how we use World Coordinate System (WCS; Thompson & Wei 2010) and Spacecraft Planet Instrument Camera-matrix Events (SPICE; Acton 1996; Acton et al. 2018) information to project various Venus data sets to the WISPR perspective, and our atmospheric radiative transfer model. In Section 3, we present our results, including the validation of the in-band photometry against any possible light leak and how the WISPR images correlate with known surface properties for specific geologic units, and provide a comprehensive model explanation and reproduction of the WISPR images. In Section 4, we discuss the implications of our findings for current and future explorations of Venus and discuss limitations. We conclude in Section 5.
In this paper, we focus exclusively on the fourth PSP flyby of Venus, in 2021 February. Figure 1 shows a top-down view of the PSP trajectory during VGA4 (in the J2000 SPICE reference frame). Based on the relative fields of view (FOVs) of each WISPR telescope, WISPR-O began observing Venus prior to WISPR-I, and WISPR-I continued observing Venus after WISPR-O. For this work, we study only the WISPR-O images of Venus during VGA4. Although we were able to generally reproduce the major findings in this paper using WISPR-I data, errors in the projection of reference data onto the image plane (see Section 2.3) limit the accuracy of our latitudinal and longitudinal knowledge on Venus in the WISPR-I images, such that it is difficult to precisely identify correlations between WISPR brightness and known surface properties (e.g., elevation). An analysis of the WISPR-I data will therefore be the subject of future work. Figure 2 shows the sequence of WISPR-O images observed during the 2021 Venus flyby (VGA4). A figure set showing analogous flyby images for WISPR-I is available in the online journal.
WISPR Reduction
As with Wood et al. (2022), we use the WISPR level 2 images of the flyby. These images are calibrated in units of mean solar brightness (MSB) according to the procedure detailed in Hess et al. (2021). To convert between MSB and DN s⁻¹, a calibration factor of C_f = 9.24 × 10⁻¹⁴ is used for WISPR-O based on the in-flight star calibration (Hess et al. 2021), but modified to account for the gain setting used for the Venus observations.
Beginning with the level 2 calibrated data products, we remove residual striping in the images along the readout direction, likely due to light smearing during the readout, since the WISPR cameras lack a shutter. We first mask the pixels that contain Venus' disk, as well as 20 pixels on each side around the edges of the frame that are impacted by reflection off the protective barrier at the edge of the detector (edge effects were treated similarly for WISPR-I images in Stenborg et al. 2021). The median of the remaining pixels in each row is then calculated and subtracted from that row, significantly reducing the apparent striping, which is constant across an entire row. This step also effectively removed the sky background from each image, which appeared due to excess scattered sunlight that was particularly apparent in the first two WISPR-O frames, but decreased as PSP entered the Venus penumbra.
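This destriping step might look like the following minimal sketch, assuming the frame and a boolean Venus-disk mask are available as NumPy arrays; the names and helper signature are illustrative, not part of the WISPR pipeline.

```python
# Minimal sketch of the row-median destriping: mask the planet and frame
# edges, estimate the stripe level per readout row, and subtract it.
import numpy as np

def destripe_rows(image, venus_mask, edge=20):
    work = image.astype(float).copy()
    work[venus_mask] = np.nan                  # exclude Venus' disk
    work[:edge, :] = np.nan                    # exclude edge-reflection rows
    work[-edge:, :] = np.nan
    work[:, :edge] = np.nan                    # ...and columns
    work[:, -edge:] = np.nan
    row_median = np.nanmedian(work, axis=1)    # per-row stripe estimate
    # Subtracting the per-row median removes the stripes and, as a side
    # effect, most of the scattered-light sky background.
    return image - row_median[:, np.newaxis]
```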
Coregistered Reference Data and Coordinate Transformations
We use a series of reference radar data sets from the Magellan mission (frequency = 2.4 GHz, λ = 12 cm; Ford & Pettengill 1992; Ford et al. 1993) to explore correlations with known surface conditions. These include the Magellan Global Topographic Data Records and Magellan Global Emissivity Data Records; each of these mosaics is resampled to a spatial resolution of 4.6 km pixel⁻¹, which is the standard for Magellan data products as described in the data reference (Ford & Pettengill 1992; Ford et al. 1993). We also use the Magellan Synthetic Aperture Radar (SAR) FMAP Left Look Global Mosaic with a spatial resolution of 75 m pixel⁻¹. Surface geological units were identified in the Magellan data on the basis of morphological characteristics in the SAR data (e.g., Brossier & Gilmore 2020). These were then mapped to exclude pixels at the edges of the units that might contain multiple types of surface materials, given the spatial scale of the WISPR-O observations and motion blur. No presupposition was made of any correlation between geology and WISPR-O; instead, we aim to test for evidence of correlations between WISPR-O signatures and radar-derived quantities for surface units previously identified to be geologically distinct. Throughout this work, elevation values are quoted relative to 6051.8 km, the mean planetary radius.
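A minimal way to realize this edge exclusion is a morphological erosion of each unit's boolean mask; the sketch below assumes a unit map at Magellan resolution, and the erosion width is an illustrative placeholder rather than a value from the paper.

```python
# Minimal sketch: trim the edges of a geologic unit mask so that pixels near
# unit boundaries (which may mix surface materials at WISPR resolution) are
# excluded. `unit_mask` is a boolean array; `width_pix` is illustrative.
import numpy as np
from scipy.ndimage import binary_erosion

def interior_mask(unit_mask, width_pix=3):
    return binary_erosion(np.asarray(unit_mask, dtype=bool),
                          iterations=width_pix)
```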
The Venus reference data products are projected onto the WISPR FOV to facilitate scientific investigations. The projection follows exactly what was done by Wood et al. (2022), but is now expanded to include more data products described using physical units. Since PSP is a solar orbiter, the FITS files use the WCS Helioprojective-Cartesian (HPC; Thompson & Wei 2010) system. The date and time for each exposure are used with SPICE (Acton 1996; Acton et al. 2018) kernels to extract the HPC coordinates of each pixel. We use the Python port SpiceyPy (Annex et al. 2020). The SPICE kernels allow us to track the spacecraft's location and attitude and Venus' location, shape, and size, as well as conduct coordinate and reference frame transformations. From HPC, we convert the coordinates to Venus mean equator (VME) of date coordinates (which requires a transformation from HPC to heliocentric Aries ecliptic, and then to VME). The result is that each pixel has a resulting latitude and longitude on Venus from the computed line of sight-surface intersection point. Sky values are treated as NaNs. The maps, e.g., Venus Magellan Global Topography 4641m v2, are projected in simple cylindrical coordinates, and thus, a projected image is built up by filling in the requisite data at each latitude and longitude pixel coordinate. We use the IAU rotation period for Venus of 243.0185 days as is given in the SPICE kernels. Mueller et al. (2012) suggest that longitude offsets are on the order of −0.3° to 0.08°, which is a smaller effect than the motion blur in the WISPR-O images.
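A minimal sketch of the per-pixel intercept computation follows, using standard SpiceyPy routines (furnsh, str2et, sincpt, reclat); the meta-kernel path, the observer name "SPP", and the WISPR frame name are placeholders, and the error handling is simplified.

```python
# Hypothetical sketch of the line of sight-surface intercept step using
# SpiceyPy. Kernel path, observer name, and frame name are placeholders.
import numpy as np
import spiceypy as spice

spice.furnsh("psp_flyby_metakernel.tm")          # hypothetical meta-kernel
et = spice.str2et("2021-02-20T20:03:48")         # one exposure's mid-time

def pixel_latlon(look_vec, frame="SPP_WISPR_OUTER"):
    """Latitude/longitude (deg) where a pixel's look vector hits Venus,
    or None for sky pixels (treated as NaN downstream)."""
    try:
        point, _, _ = spice.sincpt(
            "Ellipsoid", "VENUS", et, "IAU_VENUS", "LT+S",
            "SPP", frame, np.asarray(look_vec, dtype=float))
    except Exception:
        return None        # SpiceyPy raises when the ray misses the planet
    _, lon, lat = spice.reclat(point)            # radians -> degrees below
    return np.degrees(lat), np.degrees(lon)
```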
The flyby begins with an excess of scattered sunlight in both WISPR-O and WISPR-I as PSP enters the Venus shadow. The spacecraft orientation is such that WISPR-I is essentially looking nadir at Venus' barycenter. The flyby occurs over a timescale of roughly 500 seconds, and the trajectory is shown in Figure 1. Exposures vary between roughly 3 and 7 s for the WISPR-I camera and are roughly 4 s for the WISPR-O camera during the flyby. The spacecraft orientation remains fixed during the flyby and does not remain targeted at Venus' barycenter, causing Venus to progressively slip out of the FOV. Thus, throughout each image, a certain amount of blurring is visible, which we had originally attributed solely to atmospheric scattering, but which is also caused by the motion of the spacecraft over the duration of the exposure. We therefore project each map at 9 different times within a single exposure and average them together to reproduce this motion blur. For the maps of individual surface units, we combine all the maps into one single map where each value represents a different geologic unit. Instead of averaging the 9 subtime projections, we only count pixels as belonging to a geologic unit if they fall consistently within the unit in all subtime frames. This helps to ensure that when we evaluate by geologic unit we are only considering pixels that are not contaminated with background sky or other geologic units.
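A sketch of this subtime treatment follows, with `project_map` standing in for the SPICE-based projection described above; the helper names are illustrative, while the 9-subframe count comes from the text.

```python
# Minimal sketch of the motion-blur treatment: project a reference map at 9
# evenly spaced times within one exposure, then combine. `project_map` is a
# placeholder for the SPICE-based projection step described above.
import numpy as np

def blur_projection(map_data, et_start, exposure_s, project_map, n_sub=9):
    times = et_start + np.linspace(0.0, exposure_s, n_sub)
    frames = np.stack([project_map(map_data, t) for t in times])
    return np.nanmean(frames, axis=0)        # continuous maps: average

def consistent_unit_pixels(unit_map, et_start, exposure_s, project_map,
                           unit_id, n_sub=9):
    times = et_start + np.linspace(0.0, exposure_s, n_sub)
    masks = np.stack([project_map(unit_map, t) == unit_id for t in times])
    # Unit maps are not averaged: a pixel counts only if it falls inside the
    # unit in every subtime frame, avoiding sky or cross-unit contamination.
    return masks.all(axis=0)
```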
Figure 3 shows one of the WISPR-O flyby images along with the Magellan elevation and radar emissivity data projected into the frame. By visual inspection, it is clear that the WISPR-O intensity correlates (inversely) with the elevation data. The subsequent analysis of correlations between the WISPR and Magellan data sets is presented in Section 3.2.
Radiative Transfer Modeling
We use the Spectral Mapping Atmospheric Radiative Transfer code (SMART; Meadows & Crisp 1996; Crisp 1997) to simulate the top-of-atmosphere Venus radiances for comparison with the WISPR measurements given contemporary knowledge of the Venus atmosphere and surface. SMART solves the radiative transfer equation for one-dimensional plane-parallel atmospheres using line-by-line, multi-stream, multi-scattering calculations. All radiative transfer calculations in this work were made at 1 cm⁻¹ wavenumber resolution and used 8 streams (four upward and four downward) at Gaussian quadrature computational points, except where specified for finer sampling of observer emission angles. We used spatially averaged Venus International Reference Atmosphere (VIRA) thermal and molecular profiles from Moroz & Zasova (1997) with the lower atmosphere updates from Arney et al. (2014). Line-by-line rovibrational molecular absorption coefficients for CO₂, H₂O, CO, H₂S, HF, and HCl were calculated using the LBLABC code (Meadows & Crisp 1996; Crisp 1997) with the HITRAN2016 line list (Gordon et al. 2017). Sulfuric acid clouds were simulated using the nominal model from Crisp (1986) containing Mode 1, Mode 2, Mode 2', and Mode 3 particles, which are all assumed to be 75% H₂SO₄ by weight (Arney et al. 2014). Following the approach of Crisp (1986), we include the unknown UV absorber as a modified version of the submicron Mode 1 particles that matches dayside observations of the Venus spherical albedo in the optical. All of the vertically resolved atmospheric profiles (cloud optical depths, gas volume mixing ratios, and the thermal profile) were held static at the nominal global profiles throughout this work. We explore the effect of changes in cloud opacity in the Appendix, but find that large changes in cloud optical depths lead to relatively small changes in WISPR brightness. This is because the cloud model is roughly 35% more transparent in the optical than in the 1 μm window. We further discuss the implications and caveats of these atmospheric assumptions in Section 4.2.
Given the nominal radiative transfer modeling setup described above, we run a series of models to simulate the expected sensitivity of the WISPR observations to known and/or predictable variations in the surface conditions. We produce thermal radiances across a three-dimensional grid in (1) the elevation (temperature) of the surface, (2) the emissivity of the surface at relevant WISPR wavelengths, and (3) observer zenith angles. For the surface elevation and temperature, we use the VIRA thermal profile to define the relationship between altitude and temperature, and then we truncate the atmosphere accordingly across a grid in elevation from −2 to 20 km at 1 km intervals. This procedure ensures that the surface temperature scales physically with systematic changes in surface elevation following the atmospheric thermal structure. While the majority of the Venus surface has elevations at or above 0 km, some of the lowest elevation regions lie below this level and therefore have negative elevation values relative to the zeropoint. For the surface emissivity, we assume wavelength-independent values that range from 0.5 to 1.0 at intervals of 0.05. For the observer zenith angles, we ran simulations at the finite angles of 86.0°, 70.7°, 59.3°, 21.5°, and 0.0°.
We followed the same procedure as Wood et al. (2022) to convert spectrally resolved top-of-atmosphere radiances to photometric counts in units of digital number per second (DN s⁻¹) within the WISPR bands. This conversion is given by the following convolution integral over wavelength:

$$C\,[\mathrm{DN\ s^{-1}}] = \frac{\Omega}{g}\int A_{\mathrm{eff}}(\lambda)\,\frac{R(\lambda)}{E_{\mathrm{phot}}(\lambda)}\,\mathrm{d}\lambda,$$

where A_eff is the WISPR effective area sensitivity curve for WISPR-O (or WISPR-I), R is the Venus thermal emission radiance spectrum, E_phot is the photon energy of each spectral interval, Ω is the angular extent of each pixel, and g is the detector gain.
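Numerically, this integral reduces to a quadrature over the instrument's wavelength grid; the sketch below is a minimal version assuming the radiance, effective area, and wavelength arrays share a common grid (array names are illustrative).

```python
# Minimal numeric sketch of the band-convolution integral above, assuming
# `wavelength` (m), `radiance` (W m^-2 sr^-1 m^-1), and `a_eff` (m^2) are
# arrays on a shared wavelength grid; `omega` (sr) and `gain` are scalars.
import numpy as np

H = 6.62607015e-34  # Planck constant, J s
C = 2.99792458e8    # speed of light, m/s

def radiance_to_dn_per_s(wavelength, radiance, a_eff, omega, gain):
    e_phot = H * C / wavelength              # photon energy, J
    integrand = a_eff * radiance / e_phot    # photon rate per sr per meter
    return (omega / gain) * np.trapz(integrand, wavelength)
```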
Validation of Light Sources
The stark sensitivity to the Venus surface in the WISPR images initially raised concerns that it could have been explained by emission from well-known opacity windows, for example, at 1 μm, being picked up by excess sensitivity beyond the nominal WISPR bandpass. Wood et al. (2022) reported on lab measurements from the spare WISPR optics that showed no signs of a red light leak. The consistency of the observed counts with thermal emission models provides a second line of evidence supporting the in-band nature of the WISPR Venus flux (Wood et al. 2022). We now present a third line of evidence to evaluate whether any light leaks might be present by looking at the background stars in each WISPR image.
Using SPICE, we query which stars of sufficient brightness from the Hipparcos catalog (Perryman et al. 1997) should have fallen within the FOV and determine their pixel coordinates. Looking up the stars in Simbad (Wenger et al. 2000) allows us to select stars with known effective temperatures and gravities. We then compare the flux detected by WISPR with that predicted by the appropriate PHOENIX model (Husser et al. 2013) convolved through the WISPR bandpass. We show an example of a WISPR-O image and stellar spectra in Figure 4. If the WISPR bandpass had a previously uncharacterized light leak, additional flux would be measured for all stars. If the light leak were bluer than the known bandpass, only hotter, bluer stars would show additional flux, because the flux from redder stars is proportionally less at these wavelengths. If the light leak were redder than the known bandpass, redder stars would be proportionally more affected than bluer stars. We conduct aperture photometry on the stars in the WISPR image and show the measured versus PHOENIX-model-predicted values in Figure 4. The lack of a clear trend suggests that there is no light leak. Adding a faux light leak to the WISPR transmission function used to predict the PHOENIX model fluxes produces a clear trend with stellar temperature that cannot be confused with random noise. Thus, we can be confident that the observations are only sensitive to in-band photons, and a window through Venus' atmosphere to its surface is in fact present at these wavelengths.
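The aperture photometry step could be realized as below with photutils, assuming predicted (x, y) star positions from the SPICE query; the aperture and annulus radii are illustrative placeholders.

```python
# Minimal sketch of the star photometry, assuming `image` is a reduced WISPR
# frame and `star_xy` holds predicted (x, y) pixel positions of Hipparcos
# stars. Aperture and annulus radii are illustrative.
import numpy as np
from photutils.aperture import (CircularAperture, CircularAnnulus,
                                aperture_photometry)

def star_fluxes(image, star_xy, r=3.0, r_in=6.0, r_out=9.0):
    apertures = CircularAperture(star_xy, r=r)
    annuli = CircularAnnulus(star_xy, r_in=r_in, r_out=r_out)
    src = aperture_photometry(image, apertures)["aperture_sum"]
    sky = aperture_photometry(image, annuli)["aperture_sum"]
    # Subtract the local sky estimate, scaled by aperture area.
    return np.asarray(src - sky * apertures.area / annuli.area)
```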
WISPR Correlations with Known Surface Characteristics
Figure 5 shows the relationships between WISPR-O counts, elevation, and radar emissivity for a variety of surface geological units. Only the pixels with emission zenith angles <60° are used in this and the subsequent surface analyses, unless otherwise stated, to reduce the confounding effects of limb brightening, as discussed later. Five units are mapped, including the tessera terrains Ovda Regio, Thetis Regio, and Haasttse-baad tessera, and two plains units: "Lava," which has a high WISPR-O brightness signature, and "Plains," which has WISPR-O values typical of regional plains. "Lava" corresponds roughly with the undivided smooth flow and shield terrains of Sogolon Planitia in the Niobe Planitia quadrangle of Venus (Hansen 2009), whereas the "Plains" to the northeast are Niobe Planitia proper. The image and analysis shown are for the first frame in the 2021 flyby. Multiple interesting correlations are evident. Together, the mapped geological units span a large range in altitude, from about 0 to 5 km, and exhibit a strong negative correlation with WISPR brightness. This is a well-known feature of thermal emission from the hot Venus surface (e.g., Mueller et al. 2008) and is a result of the Venus temperature profile and its characteristic decrease with altitude (Seiff 1987; Lorenz et al. 2018). Ovda and Thetis are the highest elevation regions in view and correspondingly have the lowest WISPR brightness, whereas the region labeled "Lava" is the lowest elevation region in view and has the highest WISPR brightness. Although also tessera terrain, Haasttse-baad has a much lower maximum elevation than either Ovda or Thetis, and correspondingly higher WISPR-O values. The WISPR-O elevation trends of Haasttse-baad and Thetis appear colinear, and divergent from the Ovda trend.
Radar emissivity appears positively correlated with WISPR brightness; however, the relationship between elevation and radar emissivity (e.g., Figure 5, lower left) exhibits a characteristic negative correlation known for some regions of the Venus surface (e.g., Klose et al. 1992) that can propagate the surface elevation (temperature) trend into radar emissivity space, complicating the assessment of trends with radar emissivity. The negative trend of radar emissivity with elevation is ascribed to the volume and type of high dielectric minerals in surface rocks (Klose et al. 1992), while the VNIR emissivity is a function of FeO content (Dyar et al. 2020); it has not yet been shown that there is a systematic relationship between these two characteristics.
Similar to Figure 5, Figure 6 shows the relationships between WISPR-O counts, elevation, and radar emissivity for the same geological units, but now using 11 frames from the 2021 Venus flyby. Many of the same features of Figure 5 are seen in Figure 6, although now supported by a significantly larger quantity of measurements. In Figure 6, the relationships between the tessera units are even more distinct, including the Haasttse-baad-Thetis colinearity trend in WISPR-O versus elevation, and the divergence of Ovda from that trend. The distinction between the Lava unit and the Plains is also emphasized in this expanded data set. The relationship between WISPR-O counts and the emission zenith angle is also shown in the upper right of Figure 6. In general, WISPR brightness for a given terrain remains relatively flat at emission angles up to 60°-70° and then rises sharply toward the limb at 90°. This limb brightening trend motivates our choice to restrict the surface analyses to emission zenith angles of 20°-40°.
Figure 7 is a restricted subset of Figure 6, highlighting very narrow regions of the parameter space to facilitate comparison between geological units. We compare units at the same elevation to remove the temperature effect on emissivity and limit WISPR-O emission angles to 20°-40° to consider surface emission viewed with a similar path geometry through the atmosphere. When Ovda and Thetis are compared this way, Ovda has a brighter WISPR-O signature than Thetis by ∼20% at 1.6σ (or 3.6 DN s⁻¹ on average) where the two tesserae have both similar elevation and radar emissivity. The Lava unit is brighter than the Plains unit by ∼35% at 4σ (or 9.6 DN s⁻¹ on average) at similar elevations and with only small differences in radar emissivity (∼0.05).
Comparison with Surface Thermal Emission Models
Figure 8 shows the WISPR-O brightness as a function of surface elevation compared to our baseline nadir thermal emission models (orange lines) over the same range of elevations. WISPR-O data are used from all VGA4 flyby images, but only those with emission zenith angles <40°. The WISPR-O counts are visibly negatively correlated with elevation and yield a correlation coefficient of −0.81 with a p-value of zero. Although the models exhibit the same trend with elevation, they are clearly offset relative to the measurements by an apparent constant offset, indicating either an additional source of light or an overestimation of the atmospheric opacity in the models. While thinner clouds than those from the nominal model used in our radiative transfer calculations could cause the thermal flux from the surface to be enhanced, as shown in the Appendix, the effect is minimal for modest changes in cloud opacity. We note that this could also be a product of imperfect calibration factors used to convert between physical flux units and observed counts. However, since the WISPR images in Figure 2 clearly show a bright limb, and limb brightening is apparent at high emission angles in Figure 6, likely due to O₂ and O I nightglow (Wood et al. 2022), we entertain the hypothesis that there is additional nightglow across the entire Venus disk.
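The correlation statistic quoted above could be computed as in the minimal sketch below, assuming matched 1D arrays of counts and projected elevations for the selected pixels (names are illustrative).

```python
# Minimal sketch of the brightness-elevation correlation test, assuming
# matched 1D arrays of WISPR-O counts and projected elevations for pixels
# with emission zenith angles < 40 deg.
import numpy as np
from scipy.stats import pearsonr

def elevation_correlation(counts, elevation_km):
    valid = np.isfinite(counts) & np.isfinite(elevation_km)
    return pearsonr(counts[valid], elevation_km[valid])  # (r, p-value)
```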
The pixels with an expected low brightness from the thermal emission models provide a means to measure the average nightglow brightness at relatively low emission angles. Figure 9 shows the WISPR-O counts for all high elevation (>5 km), low emission angle (<40°) points from all VGA4 flyby images as a function of elevation, radar emissivity, and emission angle, and compressed into a one-dimensional histogram. This subset of points with expected low-intensity surface thermal emission has a WISPR-O count rate of 16.1 ± 1.4 DN s⁻¹. Since our thermal models still predict a measurable surface emission for regions at and above 5 km, we subtract the 5 km predicted flux (6.6 DN s⁻¹) from the mean brightness excess derived in Figure 9 to obtain an offset of 9.5 ± 1.4 DN s⁻¹. We take this value to be our empirical estimate of the excess brightness near the Venus disk center, which may ultimately be due to a combination of nightglow emission, thinner clouds than our nominal model, scattered sunlight from the dayside, or a flux calibration offset.
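A minimal sketch of this offset estimate follows, assuming per-pixel arrays of counts, elevation, and emission angle; the 6.6 DN s⁻¹ model value at 5 km comes from the text, while the array names are illustrative.

```python
# Minimal sketch of the empirical offset estimate: select high-elevation,
# low-emission-angle pixels and subtract the thermal-model prediction at
# 5 km (6.6 DN/s, from the text).
import numpy as np

def empirical_offset(counts, elevation_km, emission_deg, model_5km=6.6):
    sel = (elevation_km > 5.0) & (emission_deg < 40.0) & np.isfinite(counts)
    mean = counts[sel].mean()
    err = counts[sel].std() / np.sqrt(sel.sum())  # standard error of the mean
    return mean - model_5km, err   # text: 16.1 - 6.6 -> ~9.5 +/- 1.4 DN/s
```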
Using the empirical offset to correct for missing radiative processes in our thermal emission spectral models can help to explain the brightness levels observed with WISPR. Figure 8 shows nadir thermal emission models convolved with the WISPR-O bandpass with the empirical brightness correction added (blue and teal lines). The overlapping lines indicate a range caused by thermal emissivity variations between the models, whereas the faint blue models show the ±3σ bounds around the median empirically corrected models. The fit to the elevation trend is significantly improved using the empirical offset. Therefore, the empirical brightness correction derived using only high elevation pixels helps to explain both the offset and the breadth in the scatter of the measured trend with elevation relative to our thermal calculations. Furthermore, the consistency between the thermal models and the WISPR observations robustly validates the nature of the observations as thermal emission from the surface.
Modeling the WISPR-O Images
Extending beyond the insights gleaned from our analysis of empirical trends in the WISPR-O data, we use our precomputed grid of radiative transfer simulations of the Venus thermal emission to construct a full model of the WISPR-O images. We linearly interpolate the thermal radiance models onto the WISPR image projections over two dimensions. The first is the WISPR-projected Magellan altitude data, to capture the temperature dependence of the radiance, and the second is the map of subspacecraft emission zenith angles, to capture the angle dependence of atmospheric path lengths for rays emitted from the surface. We hold the thermal emissivity of the surface fixed at 0.9, consistent with a surface albedo of 0.1 measured by Venera 9 and Venera 10 (Ekonomov et al. 1980).
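This interpolation could look like the sketch below, assuming the precomputed radiances have been collapsed to an (elevation, emission angle) grid at the fixed emissivity of 0.9; grid axes and array names are illustrative.

```python
# Minimal sketch of mapping the precomputed radiance grid onto the image,
# assuming `radiance_grid` has shape (n_elev, n_angle) at the fixed surface
# emissivity of 0.9, with axis vectors `elev_km` and `angle_deg`.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def thermal_image(radiance_grid, elev_km, angle_deg, elev_map, angle_map):
    interp = RegularGridInterpolator((elev_km, angle_deg), radiance_grid,
                                     bounds_error=False, fill_value=np.nan)
    pts = np.column_stack([elev_map.ravel(), angle_map.ravel()])
    return interp(pts).reshape(elev_map.shape)  # NaN sky pixels stay NaN
```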
Figure 10 shows the third WISPR-O image from the 2021 flyby in the leftmost panel, compared to our thermal-emission-only WISPR-O image model in the second panel. The thermal model captures much of the spatial variability in brightness contrast, which is consistent with the previously determined strong correlation between the altitude (temperature) of the surface and the WISPR brightness. However, the thermal model exhibits limb darkening rather than the stark limb brightening seen in the true image. These characteristics are consistent with the lack of nightglow emission in the spectral models.
A small linear correction was applied to our simulated surface thermal emission image to optimally account for the unknowns associated with cloud opacity and a baseline flux offset. We determine a scale factor (slope) and baseline offset (intercept) that, when applied to our nominal thermal emission image, best match the observed WISPR-O brightness. To account for the missing nightglow component at the limb, we fit a simple model to the residuals between the WISPR-O image and the thermal model. We take the residuals as a function of emission angle and smooth them using a rolling median with a window of size 150 points. The smoothed residuals exhibit a slight linear increase with emission angle from 0° to 65°, before sharply increasing above 65°. We use a piecewise analytic model composed of a linear portion and a polynomial portion to fit for the average nightglow contribution. The functional form of the nightglow model is

$$f(\theta) = \begin{cases} m\,\theta + b, & \theta \leq \theta_0, \\ \displaystyle\sum_{k=0}^{N} a_k\,\mu^{k}, & \theta > \theta_0, \end{cases}$$

where θ is the emission angle, μ is the cosine of the emission angle, θ₀ is the angle of the break from linear to polynomial, and m (the linear slope) and the set of polynomial coefficients a_k are all fitting parameters. The y-intercept b is determined by the value of the polynomial function at θ₀. We determined that θ₀ = 67° and N = 7 best capture the average residual trend without overfitting. The relatively high polynomial order is required to fit the sharp rise in brightness. Table 1 lists the best-fitting model parameters, determined using a nonlinear least squares fit to the smoothed residuals with the Python routine scipy.optimize.curve_fit.
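A sketch of this fit is given below; θ₀ = 67° and N = 7 follow the text, while the exact parameterization and the seed values are our reconstruction, not the authors' code.

```python
# Minimal sketch of the piecewise nightglow fit using scipy.optimize.curve_fit
# (named in the text). The functional form follows the reconstruction above;
# parameter seeds and variable names are illustrative.
import numpy as np
from scipy.optimize import curve_fit

THETA0 = 67.0   # deg, break between the linear and polynomial portions
N = 7           # polynomial order in mu = cos(theta)

def nightglow_model(theta_deg, m, *a):
    mu = np.cos(np.radians(theta_deg))
    mu0 = np.cos(np.radians(THETA0))
    coeffs = np.asarray(a)[::-1]                 # np.polyval wants a_N..a_0
    b = np.polyval(coeffs, mu0) - m * THETA0     # continuity at theta0
    return np.where(theta_deg <= THETA0,
                    m * theta_deg + b,
                    np.polyval(coeffs, mu))

# With `theta_res` and `smoothed_res` as the emission angles and rolling-
# median residuals, the fit would run as (p0 sets 1 slope + N+1 coefficients):
# popt, pcov = curve_fit(nightglow_model, theta_res, smoothed_res,
#                        p0=[0.1] + [1.0] * (N + 1))
```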
The third panel in Figure 10 shows the residual fit to the median excess emission projected back onto the image plane. The fitted nightglow emission model is able to capture the baseline flux contributions near the center of the Venus disk at low emission angles (as discussed in Section 3.3) and, critically, the stark brightening seen at the Venus limb.
The fourth panel in Figure 10 displays our final WISPR-O image model: the thermal-only model from the second panel added to the fitted model of the mean excess brightness from nightglow. This model demonstrates that the WISPR-O image is well explained by emission from the hot Venus surface escaping through the new atmospheric window in the optical, with an overlying emission component from the atmosphere that is dominant at high emission angles, but present across the entire disk.
The fifth and final panel of Figure 10 shows the residuals, calculated as the WISPR-O image from the first panel minus the final image model in the fourth panel. A diverging colormap is used to highlight regions of Venus where the WISPR data are brighter than the model (red) and darker than the model (blue). Overall, however, the residuals are Gaussian distributed around zero with a standard deviation of around 3.5 DN s⁻¹ at 1σ (−0.3 ± 3.5 DN s⁻¹). This indicates that the fit is quite good at capturing the average behavior of the images.
The regions of Venus with >2σ deviations from the model are interesting to note and may point to a variety of different factors. First, the brightest surface region in the WISPR-O image, a low elevation lava plain, is also the brightest region of the surface thermal model, but the model is unable to reach the brightness levels seen in the data. This could be due to surface compositional information, cloud optical depth variations, or nightglow spatial structure, which we discuss in Section 4.2. Second, there are ridges in the lower third of Venus that are not well fit, and this could result from atmospheric or motion blurring, or small errors in the model used to project the Venus reference data sets onto the WISPR-O images. For particularly small surface features, small shifts in the relative position of the WISPR images and reference data can cause excess residuals. Third, the limb of Venus exhibits some of the largest residuals. This could be explained by a combination of the aforementioned image-model offsets, but the limb residuals also show flux excesses (red) and flux deficits (blue), indicating that brightness variations along the limb that cannot be captured by our average limb brightening model also contribute to the residuals. Such brightness variations along the limb may indicate a spatial structure in the nightglow emission, cloud opacity, scattered sunlight, or a complex combination of these factors. Finally, the rightmost part of the image, particularly the upper right, shows a brightness deficit compared to the models that could be caused by vignetting imperfections not fully removed by the data calibration procedure.
Implications for Surface Geology
WISPR-O measurements of geologic units at similar elevations show distinct differences in brightness that can be attributed to factors that are independent of temperature and emission angle (Figure 7). The Lava unit has the highest brightness values of all mapped units, exceeding the Plains unit (30% at 4σ) and Haasttse-baad tessera (25% at 2.9σ). Although less statistically significant, Ovda tessera is notably brighter than Thetis tessera, by 20% at 1.6σ, where both units cover the same elevations and emission angles. These observations demonstrate that the WISPR-O data may be sensitive to compositional or grain size variations on the Venus surface.
Laboratory work shows a positive correlation between 1 μm emissivity and FeO (ferrous iron) content in rocks at Venus temperatures (440°C; Dyar et al. 2020; Helbert et al. 2021). These measurements are made at longer wavelengths (0.86-1.18 μm) than the WISPR-O broadband spectral range (0.48-0.80 μm; Wood et al. 2022). However, Helbert et al. (2021) demonstrate that the high temperature laboratory emissivity measurements of basalts correlate with emissivity derived from photometer data of the Venus surface collected by the Venera 9 and 10 landers at 5 channels over the range 0.5-1.1 μm (Ekonomov et al. 1980). This implies that higher values of WISPR-O brightness should correspond to greater FeO content in the observed rocks. The weathering processes in the deep atmosphere of Venus are modeled to convert ferrous iron to ferric iron on geological timescales of days to tens of thousands of years (e.g., Smrekar et al. 2010; Berger et al. 2019; Filiberto et al. 2020; Radoman-Shaw et al. 2022; Santos et al. 2023). If so, the high WISPR-O brightness of the Lava unit shows that it has the greatest FeO values of the studied units. The Lava unit having greater FeO values than the Plains unit, which is also interpreted to consist of basaltic lava flows (Hansen 2009), indicates that the Lava unit has experienced less weathering and is thus younger than the Plains flows, or that the Lava flows have an intrinsically higher FeO content. Geomorphologically, the Sogolon Planitia region that corresponds to the Lava unit has smooth, homogeneous volcanic flow materials with a lower density of small volcanic shield features than the adjacent shield terrain to the north. Niobe Planitia to the northeast corresponds to the Plains unit and is consistent with a different mode and/or timing of emplacement. The high WISPR-O brightness of the smooth flow region may suggest that the Lava unit has a higher FeO content than the surrounding shield plains, as well as the Haasttse-baad tessera (see Figure 7). Northern Sogolon Planitia lies partially beneath the parabolic ejecta of the impact crater Merit Ptah. Parabolic ejecta deposits are ephemeral, with a mean crater retention age of ca. 10s of Ma (Phillips et al. 1991; Izenberg et al. 1994; Campbell et al. 2007). It is possible that the Merit Ptah impact ejecta is geologically young and thus has suffered less weathering than the surrounding plains units, resulting in a slightly higher FeO content and corresponding WISPR-O brightness.
Thetis and Ovda tesserae have distinct WISPR-O brightness values, indicating that these regions, of unknown rock type, have different compositions. The lower brightness values for Thetis indicate a lower FeO content than that of Ovda Regio. This may be due to intrinsic differences in the rocks themselves, where Thetis has a more silica-rich composition than Ovda; it may suggest that ferrous iron in Thetis has been preferentially consumed due to differences in the style, rate, or duration of surface-atmosphere weathering of these rocks as compared to Ovda (e.g., via the production of ferric hematite, as has been suggested to explain spectral measurements collected by the Venera 9 and 10 landers; Pieters 1983); or it may reflect an enhanced distribution of basaltic crater ejecta on the surface of Ovda as compared to Thetis (Whitten & Campbell 2016). The colinearity of the Thetis and Haasttse-baad WISPR-O versus elevation trends may imply that the two tessera regions are more similar in composition to each other than either is to Ovda. Variations in the radiophysical properties of tesserae also show that tessera composition is not uniform across the planet (Whitten & Campbell 2016; Brossier & Gilmore 2020); the WISPR-O data provide an independent method to assess this variability and its causes.
Particle size has a demonstrated effect on NIR reflectance, where finer grain sizes at the 10s-100s μm scale will increase reflectance and therefore lower emissivity (e.g., Pieters 1983). If grain size is the dominant cause of the WISPR-O brightness variations, this would suggest that the materials of the surface of the Lava unit have a larger average grain size in the uppermost 10s of μm than both the Plains and Haasttse-baad units, and that Ovda Regio has a larger average grain size than Thetis Regio. Grain sizes are reduced by chemical and physical weathering and/or the addition of sediment, such as from impact ejecta. If due to weathering, this would imply that the Plains and Haasttse-baad and/or Thetis have undergone more extensive weathering, due to friability, age, and/or topography, than the Lava and/or Ovda units. However, we note that the limited measurements of the 1 μm emissivity of powders and slabs at Venus temperatures to date show no systematic dependence on particle size, concluding that FeO content is the dominant contributor to emissivity (Helbert et al. 2021).
Caveats and Remaining Uncertainties
A few factors limit the alignment precision between our reference Venus maps and the observed WISPR images, including blurring from nonnegligible spacecraft motion and atmospheric scattering, and uncertainty in the pointing and/or image projections. Although we introduced a blur in the direction of spacecraft motion based on the duration of the exposures and we were conservative in selecting the interiors of known geological units, residual errors may remain and are difficult to quantify. While we did account for motion blur, we did not correct for the atmospheric scattering footprint, and as a result, our image model appears sharper than the real images. Additionally, trends with surface quantities are likely broadened by this additional and unavoidable scattering uncertainty. Since shorter wavelengths experience greater levels of Rayleigh scattering (Knicely & Herrick 2020), we would expect these optical measurements to have a scattering footprint around or above the nominal 50-100 km footprint in the NIR measured from orbit (Moroz 2002). However, our finding that the cloud opacity in the WISPR band is distinctly lower than the cloud opacity in the 1 μm window could lead to slight improvements in the spatial resolution, although Rayleigh scattering will still be a limiting factor.
We also identified a characteristic of the image projections wherein the Venus reference data projections onto the WISPR images become progressively more misaligned as the flyby advances. This could be caused by minor pointing errors, SPICE kernel errors, or errors in the FOV projection. No corrections were applied to the observed images or in our analyses to account for alignment artifacts, because this effect was insignificant in the WISPR-O images that had Venus centered in the frame and primarily affected the frames late in the flyby with only the Venus limb visible. This issue also affected the WISPR-I images, and to a larger degree than WISPR-O. Although we were able to reproduce many of our results with the WISPR-I images, the projection alignment issue was severe enough in WISPR-I to warrant further investigation that is beyond the scope of this paper.
There remain degeneracies between whether nightglow or clouds are responsible for the excess flux required across the Venus disk relative to our thermal emission models. For example, we attributed the excess flux seen at low emission angles and high elevations in Figure 9 to nightglow at disk center that rises sharply toward the limb (e.g., in Figure 10). However, the bright limb may be a red herring. The excess flux near disk center could also be, in part, due to suboptimal cloud opacity in the thermal emission models. Specifically, if the optical depth of the H₂SO₄ clouds is substantially decreased relative to our nominal cloud model, then the net result would be stronger emission from the surface seen in the WISPR band (see the Appendix). However, our radiative transfer results are relatively insensitive to small changes in cloud opacity. If the flux offset were entirely attributed to optically thinner clouds, the entire cloud model would need to have roughly half the nominal optical depth to provide a sufficient brightness enhancement to match the WISPR measurements. Moreover, these relative opacity changes linearly impact the thermal emission from the surface, which provides a poor fit to the observed WISPR-O versus surface elevation trend, whereas a constant flux offset that is more indicative of an emission source provides an excellent fit to the measurements (Figure 8). Therefore, the additional flux component near the Venus disk center (∼10 DN s⁻¹) is unlikely to be entirely attributable to thinner-than-expected clouds.
Nightglow is clearly needed to capture the limb brightening and is a plausible explanation for the excess disk brightness. Wood et al. (2022) predicted that the WISPR brightness of O₂ nightglow at 0.76 μm should be around 1.6 DN s⁻¹, based on the typical limb-to-disk brightness ratio of the 1.27 μm feature (Gérard et al. 2008). Our derived brightness excess of ∼10 DN s⁻¹ well exceeds this estimate. This could indicate a few different effects, such as (1) higher intensity disk emission relative to the limb for the 0.76 μm line, (2) a combination of multiple in-band nightglow lines, for example, from both O₂ and the atomic O I 5577 Å green line, or (3) unidentified systematic effects. Observations of oxygen nightglow show substantial spatial and temporal variability (Allen et al. 1992; Crisp et al. 1996; Hueso et al. 2008). This variability could help to explain the spatial residuals seen in Figure 10, but the residuals remain relatively small, about 35% intensity variations on a baseline effect of order 10 DN s⁻¹ (at 1σ), compared to previously observed nightglow heterogeneity with up to 10× contrasts across the nightside (Gérard et al. 2008). A perfect explanation remains elusive. Ultimately, the reliance on only a single photometric band in this work provides limited evidence with which to break the degeneracies with nightglow. Additional observations, particularly concurrent spectroscopy or narrowband imaging, as well as further analyses would be helpful in the future.
Future Flybys and Observation Opportunities
PSP will perform its final flyby of the Venus nightside on 2024 November 6 (VGA7). During this encounter, PSP will pass within ∼340 km of the Venus surface at the closest approach, and the event will last approximately 8.5 minutes. VGA7 is expected to probe the general vicinity of Phoebe Regio, a completely different area of the Venusian surface compared to the VGA4 images studied in this paper. While there may be challenges in directly comparing VGA4 (this work) with VGA7 due to the differing planetary and spacecraft environments, relative differences may still be quite informative. For example, the different surface coverage under VGA7 and the flyby's lower closest approach may allow a comparison of new territory, some quite uniform, over a range of emission angles, which, when compared to Magellan radar data, may aid in deconvolving variable atmospheric effects from the WISPR images. These upcoming WISPR observations will provide one final opportunity to access the Venus surface through the unique WISPR window and to further test and validate the findings presented here in advance of dedicated forthcoming Venus missions.
Additionally, upcoming Venus missions will have an incredible opportunity to advance the state of knowledge of the Venus surface.The regions imaged by WISPR-O were not included in the NIR survey of the southern hemisphere provided by Venus Express; thus, the variations in the NIR properties of geologic units imaged by WISPR-O indicate that the global multiband mapping to be provided by the DAVINCI, VERITAS, and EnVision missions will critically advance our understanding of the diversity, origin and relative age of geologic units on Venus.
Conclusions
The WISPR brightness images are well explained by emission from the hot Venus surface escaping through a new atmospheric window in the optical, with minimal spatial variations due to cloud heterogeneity, and an overlying component of emission from the atmosphere that is most consistent with O₂ nightglow. The surface thermal emission correlates strongly with surface elevation (temperature) and emission angle, and weakly with the thermal emissivity of the surface. While strongest at the limb, the nightglow may persist across the entire nightside disk.
WISPR observations of the nightside of Venus present a new tool for the study of Venus' surface, potentially linked to compositional distinctions between geologic units. The thermal emissivity correlations may be the key to identifying distinct surface materials, both in terms of age and signatures of weathering, and of initial composition (e.g., FeO content). The WISPR-O images of the 2021 VGA flyby indicate that tessera terrains in Ovda Regio and Thetis Regio may be compositionally distinct, with Ovda having a higher iron content than Thetis. Haasttse-baad Tessera appears more compositionally similar to Thetis than to Ovda. In the lower elevations, the smooth Lava unit of Sogolon Planitia has a higher FeO content and thus is potentially less weathered and younger than the surrounding shield terrains of Niobe Planitia. These data confirm that the WISPR observations shortward of ∼0.8 μm are sensitive to surface characteristics. They also presage the potential compositional diversity of the terrains of Venus that will be revealed by global NIR observations collected by the three upcoming missions to the planet.

Although secondary reflections between the surface and the clouds are included in our radiative transfer model (Meadows & Crisp 1996; Crisp 1997), surface compositional variability is not accounted for in these secondary reflections off the clouds (i.e., the ambient illumination environment reflects a singular surface composition). However, the resultant net reduction in spatial contrast in nightside images discussed in Hashimoto & Sugita (2003) may be a less significant factor at the shorter wavelengths observed by WISPR, due to the diminished cloud opacity presented and discussed here. This may be contributing to WISPR's stark sensitivity to the Venus surface and the observed spatial contrasts in the images.
Figure 1. PSP's trajectory during VGA4 (2021 February 20) with the location of the spacecraft during WISPR-O and WISPR-I observations denoted with purple squares and magenta circles, respectively. Tick marks are displayed every 2 minutes along the trajectory, and a slight deflection is seen at Venus due to the change in PSP's orbit from the gravity assist. The Sun is located in the −y-direction.
Figure 2. Sequence of fully reduced WISPR-O images observed during the 2021 February 20 flyby. A figure set showing analogous flyby images for WISPR-I is available in the online journal. (The complete figure set of 2 images is available.)
Figure 3. WISPR-O image (left panel) compared to Magellan elevation (middle panel) and emissivity data (right panel) projected into the WISPR-O FOV. All panels use a different relative gray-scale color scheme to emphasize visual similarities, wherein the transition from black to white scales from low to high WISPR-O brightness, high to low surface elevation, and low to high surface radar emissivity. While the WISPR-O image contains numerous visual similarities with surface features seen in both reference data sets, the images have a lower spatial resolution due to intense atmospheric scattering (e.g., Moroz 2002). The date and time corresponding to this WISPR-O image are 2021 February 20 20:03:48.
Figure 4. Left: WISPR-O image (200714) with pixel locations of stars that should have fallen within the field of view. Stars that are behind Venus are ultimately not considered. Right: their Phoenix model spectra as compared to the WISPR bandpass for those stars that are on the main sequence. Bottom: measured fluxes from stars in the WISPR images (denoted by color and symbol) as compared to the expected fluxes from Phoenix stellar models as a function of stellar effective temperature. Tick marks label the main-sequence stars from the upper right panel for reference. No clear pattern exists that would be indicative of a light leak in either the blue or red direction (dashed lines).
Figure 5. Two-dimensional contours showing correlations between WISPR-O brightness, surface elevation, and radar emissivity for a variety of different surface geological units (colors) using image data from the first frame in the 2021 flyby (upper right). Adjacent axes share the same quantities and axis limits to facilitate comparisons between panels. The lower left elevation-emissivity panel shows only reference data. Contours show the density of measurements at the 1σ, 2σ, and 3σ levels.
Figure 6. Two-dimensional contours showing correlations between WISPR-O brightness, surface elevation, and radar emissivity for a variety of different surface geological units (colors) using image data from all frames in the 2021 flyby. The upper right panel shows WISPR-O brightness as a function of emission zenith angle over a larger range in WISPR counts than shown in the other panels. The black vertical dashed line shows the maximum emission angle cut used in the other panels to reduce the confounding limb-brightening effects from the assessment of surface information. Contours show the density of measurements at the 1σ, 2σ, and 3σ levels.
Figure 7. Similar to Figure 6 except focusing on data with specific emission angles, elevations, and emissivities to facilitate comparison and reduce confounding factors. The two high-elevation geological units, Ovda and Thetis, are shown only for data in the series of elevation-emissivity boxes where the two units have significant overlap, and for emission angles between 20° and 30°, where the two overlap. The low-elevation units, Haasttse-baad, Lava, and Plains, are shown in a narrow range of elevations between 0.5 and 1 km and for emission angles between 20° and 40°, where they overlap. As before, contours show the density of measurements at the 1σ, 2σ, and 3σ levels.
Figure 8. WISPR-O count rates as a function of surface elevation (gray points and contours). Our baseline nadir thermal emission models for a variety of thermal emissivity cases are shown in orange. Thermal models corrected by an empirical brightness offset (9.5 DN s−1) are shown for the median value (blue) and the ±3σ range (semitransparent blue). Contours show the density of measurements at the 1σ, 2σ, and 3σ levels. The baseline thermal models underpredict the WISPR-O brightness, and the empirical correction places them in excellent agreement.
Figure 9. WISPR-O counts for pixels that are expected to be low brightness based on high elevation (low surface temperature) and low emission angles (no limb brightening). Counts are shown as a function of elevation (first panel), radar emissivity (second panel), and emission angle (third panel). Contours show the density of points at the 1σ, 2σ, and 3σ levels. The rightmost panel shows a histogram of WISPR-O counts for the set of points, which shows an average brightness of 16.1 ± 1.4 DN s−1, nearly 9 DN s−1 in excess of the expected brightness from model predictions.
Figure 10. WISPR-O image model of Venus. The first panel shows the third WISPR image in the flyby sequence. The second panel shows our thermal emission models projected onto the WISPR image and contains the characteristic surface features seen in the images. The third panel shows a polynomial fit in emission angle to the residuals of the first two panels and contains the limb-brightening component. The fourth panel shows the sum of the second and third panels and represents our model of the full WISPR-O image. The fifth panel shows the residuals between the WISPR-O image (first panel) and our image model (fourth panel), with red (blue) indicating an observed brightness excess (deficit) relative to the model. The residuals are Gaussian distributed as −0.3 ± 3.5 DN s−1.
Table 1
Nightglow Model Parameters
"Environmental Science",
"Physics"
] |
Vocabulary, text coverage, word frequency and the lexical threshold in elementary school reading comprehension
Vocabulary knowledge is one of the most important elements of reading comprehension. Text coverage is the proportion of known words in a given text. We hypothesize that text comprehension increases exponentially with text coverage due to network effects and activation of prior knowledge. In addition, the lexical threshold hypothesis states that text comprehension increases faster above a certain amount of text coverage. The exponential relationship between text coverage and text comprehension, as well as the lexical threshold, are at the heart of text comprehension theory and are of great interest for optimizing language instruction. In this study, we first used vocabulary knowledge to estimate text coverage based on test scores from N = 924 German fourth graders. Second, we compared linear with non-linear models of text coverage and vocabulary knowledge to explain text comprehension. Third, we used a broken-line regression to estimate a lexical threshold. The results showed an exponential relationship between text coverage and text comprehension. Moreover, text coverage explained text comprehension better than vocabulary knowledge, and text comprehension increased more quickly above 56% text coverage. From an instructional perspective, the results suggest that reading activities with text coverage below 56% are too difficult for readers and likely inappropriate for instructional purposes. Further applications of the results, such as for standard setting and readability analyses, are discussed.
Introduction
Reading comprehension is a prerequisite for lifelong learning and one of the key goals of elementary education (e.g., Artlet et al., 2003). It is a multi-faceted construct that involves multiple components (e.g., Graesser et al., 2004). Vocabulary knowledge is one of the most influential determinants of reading comprehension during elementary school (e.g., McElvany et al., 2009; Quinn et al., 2015). According to the Simple View of Reading (Gough & Tunmer, 1986), reading comprehension involves two components: word recognition and language comprehension. Vocabulary knowledge is related to both language comprehension and word recognition (Duke & Cartwright, 2021). Vocabulary knowledge provides a link between phonology, orthography, and word meanings (e.g., Ehri, 2014).
Based on the hierarchical relations among reading sub-components (e.g., Kim, 2020), problems with lower-order reading components, such as word recognition and vocabulary, result in problems with higher-level components, such as inference-making. Thus, Wang et al. (2019) found a minimum level of word recognition fluency that is necessary for higher-level reading processes. Based on the Model of Lexical Quality (Perfetti, 2007), they suggested that efficient word recognition clears the way for higher-level reading processes, and therefore, problems in word recognition eventually lead to problems in higher-level processes (also Karageorgos et al., 2020). Similarly, O'Reilly et al. (2019) argued regarding vocabulary knowledge that the activation of prior knowledge only spreads properly if a critical number of known content words are present in a text. Thus, text comprehension increases above a certain level of known words in the text.
In this article, we first discuss the relationship between vocabulary knowledge (i.e., the overall number of words a person knows) and text coverage (i.e., the number of known words in a specific text). Second, we examine the linear and non-linear relationship between text coverage and text comprehension. Third, we identify thresholds that can help improve instruction and assessment of reading comprehension.
Vocabulary knowledge and reading comprehension
Vocabulary knowledge is a multi-faceted construct (e.g., Perfetti & Hart, 2002) that is highly associated with the ability to read fluently and comprehend texts (Perfetti, 2007). Two important sub-dimensions of vocabulary knowledge are vocabulary breadth, i.e., the number of words known, and vocabulary depth, i.e., how much knowledge about semantic, orthographic, and phonological aspects of a word is available (Li & Kirby, 2015). Previous research has shown that vocabulary breadth is more strongly associated with reading comprehension than vocabulary depth (e.g., Li & Kirby, 2015; Ouellette, 2006). Additionally, semantic knowledge has a stronger association with reading comprehension than orthographic or phonological knowledge (Richter et al., 2013). Studies generally report strong associations between vocabulary knowledge and reading comprehension (e.g., English: Quinn et al., 2015; German: Richter et al., 2013). Thus, it seems that knowing the meaning of many words increases the probability of correctly recognizing and comprehending the words in a particular text (Perfetti, 2007).
Vocabulary knowledge and text coverage
Text coverage is usually defined as the proportion of words in a text that are known by a particular reader (Hsueh-Chao & Nation, 2000). More specifically, text coverage can be understood as the intersection between the words in a given text and a reader's vocabulary knowledge. It takes relatively few unique words to reach a relatively high text coverage in most texts (Hsueh-Chao & Nation, 2000). According to Zipf's theorem, when the words in a text are ordered according to their frequency, their probability of occurrence is inversely proportional to their place on the frequency list (Piantadosi, 2014). Thus, a small number of words occur very often and many words occur very rarely in authentic texts. Corpus analyses with large samples of texts show that knowledge of only the 2000 most frequent words is sufficient to achieve an average text coverage of 90.6% for narrative texts and an average text coverage of 78.4% for academic texts (Nation & Waring, 1997). Text coverage for academic texts is lower because such texts include more rare words. Additionally, the relationship between text coverage and the length of the frequency-ranked word list (FRWL) is logarithmic; for instance, the first 1000 most frequent words provide 72% text coverage, and the next 1000 only add 7.7 percentage points to text coverage (Nation & Waring, 1997).
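This diminishing-returns relationship is easy to see computationally. The sketch below, written in R (the environment used for the analyses later in this article), assumes a hypothetical vector of corpus token counts per word type; we simulate a Zipf-shaped distribution purely for illustration, so the exact coverage values are not the Nation and Waring (1997) figures, only the logarithmic flattening is.

```r
# Cumulative text coverage of the k most frequent word types.
# `freq` is assumed to hold token counts per type from some corpus;
# here a Zipf-like distribution is simulated for illustration only.
freq <- 1e6 / seq_len(50000)                  # hypothetical Zipf-shaped counts
cum_coverage <- cumsum(sort(freq, decreasing = TRUE)) / sum(freq)
round(cum_coverage[c(1000, 2000, 5000)], 3)   # coverage grows ever more slowly
```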
To our knowledge, no previous study has examined the relationship between text coverage and readers' actual vocabulary knowledge for a representative sample of texts and/or participants. In a FRWL, the frequency of a word determines whether the word is included in the list or not. For vocabulary knowledge, this relationship is not deterministic but probabilistic, as frequent words are more likely to be known than rare words (e.g., for a review: Brysbaert et al., 2018). Overall, the correlation between the probability of knowing a word and its frequency is high (German third and fourth graders, r = 0.74: Trautwein & Schroeder, 2018). Therefore, the text coverage of a given FRWL and actual vocabulary may be very similar.
Figure 1, panel a, illustrates the probabilistic relationship between vocabulary knowledge and word frequency. Students with larger vocabularies are more likely to know more rare words compared to students with smaller vocabularies (Brysbaert et al., 2018).
Figure 1, panel b, illustrates the logarithmic relationship between vocabulary knowledge and text coverage. The relationship between vocabulary knowledge and text coverage should have a logarithmic shape, similar to the relationship between a FRWL and text coverage. Additionally, given the same vocabulary knowledge, text coverage should be lower for a text with a lower compared to a higher average word frequency.
Text coverage and reading comprehension
Text comprehension substantially depends on text coverage. According to the construction-integration model (Kintsch, 1988), readers' mental representation of a text is an associative network of concepts and propositions. In this network, concepts represent nodes and associations represent links. The more words are known, the more concepts and the more prior knowledge can be activated. The number of possible associations between concepts grows exponentially with the number of activated concepts. It is much easier for readers to disambiguate the meaning of a text if the words in the text immediately activate the correct concepts. Disambiguating text meaning is highly important for integrating text information with prior knowledge (Richter & Schnotz, 2018). On the one hand, readers are usually able to comprehend texts even when they contain some unknown words. This is because readers can make inferences based on contextual information to infer the meaning of unknown words if the network of associations between the known concepts is strong enough (Share & Stanovich, 1995). On the other hand, contextual inferences require additional cognitive resources or can lead to false interpretations, which makes text comprehension more challenging when text coverage is low (Cain et al., 2004). Indeed, drawing inferences from the context and building up an understanding of the text is only possible once text coverage reaches a certain level. According to the lexical threshold hypothesis (Hsueh-Chao & Nation, 2000), text comprehension is assumed to be significantly impaired below a certain amount of text coverage.
Lexical threshold hypothesis
The lexical threshold hypothesis states that text comprehension increases faster above a certain amount of text coverage (Hsueh-Chao & Nation, 2000). Relatively few and heterogeneous findings exist about the lexical threshold hypothesis. For instance, Hsueh-Chao and Nation (2000) found that individuals need to know the meaning of 98% of the words in a fictional text for comprehension in a reading-for-pleasure situation, where unknown words were assessed by self-report. In another study, Laufer (1989) reported that reading comprehension increased more rapidly if individuals knew at least 95% of the words in a text. In this study, individuals were required to translate a vocabulary list in order to determine their text coverage. Laufer and Ravenhorst-Kalovski (2010) suggested two thresholds, an optimum at 98% and a minimum at 95%. Their analysis was based on participants with high prior knowledge as well as a standardized test of vocabulary size and a reading comprehension test. More recently, O'Reilly et al. (2019) found that the reading comprehension of ninth- to twelfth-graders increased rapidly when they knew more than 59% of the critical content words in a text. In this study, knowledge of critical content words was assessed with a multiple-choice test. By contrast, Schmitt et al. (2011) were not able to determine a clear lexical threshold in a carefully designed study with a word-nonword recognition test and a standardized reading comprehension test.
Summary of the theoretical background
Vocabulary knowledge, text coverage, word frequency, and text comprehension are theoretically related: vocabulary increases text coverage logarithmically, and this relationship depends on the word frequencies in the text (word frequency effect: Brysbaert et al., 2011; Zipf's theorem: Piantadosi, 2014). Text comprehension theory (i.e., the construction-integration hypothesis) assumes exponential growth in connectivity and activation of prior knowledge, which means that increasing text coverage should exponentially improve text comprehension (Share & Stanovich, 1995). The lexical threshold hypothesis states that text comprehension increases faster above a certain threshold of text coverage.
Figure 2 summarizes the described relationships. The larger dashed circles represent a person's vocabulary knowledge and the smaller solid circles represent texts. The intersection between the two circles (i.e., the area with diagonal lines) is the text coverage. Texts with many rare words are more likely to be covered when persons have a larger vocabulary knowledge. The discontinuous color scale from white (upper left corner) down to almost black represents the degree of text comprehension. Text comprehension increases with more text coverage. It takes a certain amount of text coverage before comprehension increases more rapidly.
Research question
Although the relationships between (1) vocabulary knowledge and reading comprehension, (2) vocabulary knowledge and text coverage, and (3) text coverage and reading comprehension have been investigated in separate contexts, they have rarely been researched using an integrative approach.
In this study, we used data from a vocabulary knowledge test and a text comprehension test administered to a large number of fourth graders participating in a reading support program. We analyzed word frequencies from the vocabulary test items and the reading comprehension texts to estimate text coverage for each participant for each text. We compared linear and non-linear models of text coverage explaining text comprehension. In addition, we investigated whether vocabulary knowledge or text coverage was better able to predict children's text comprehension. Finally, we determined potentially relevant amounts of text coverage in order to define various thresholds.
The study's three central research questions (RQ) can be summarized as follows:
RQ1: What is the shape of the relationship between text coverage and text comprehension?
We hypothesize that text comprehension increases exponentially rather than linearly, due to the effect of network connectivity on the propositional network and activation of prior knowledge.
RQ2: Does text coverage explain text comprehension better than vocabulary knowledge?
We hypothesize that text coverage explains text comprehension better than vocabulary knowledge does because it more accurately describes the words known in a given text.
RQ3: Is there an amount of text coverage that can be defined as a lexical threshold?
We hypothesize that text comprehension increases faster above a certain level of text coverage.
Participants
The children who participated in the study attended 4th grade and were tested at the beginning of the second half of the school year. Fourth graders are typically required to comprehend texts independently, so vocabulary knowledge (i.e., knowing the meaning of words) and especially text comprehension become important. This study is a program evaluation of a project to promote language and literacy skills among fourth graders at public schools in six different German states. The program provided teachers with scientifically grounded teaching materials and handouts. Only students with parental consent were included in the present analysis. The study involved Ni = 949 fourth graders from Nc = 64 classes and Ns = 35 schools. About half of the participants were female (52.05%), and children were on average M = 10.28 years old (SD = 0.52). Overall, 64.91% of the students reported exclusively speaking German at home. The program was conducted in federal states where the share of public school students from immigrant backgrounds ranged from 28.9% to 50.1% (Stanat et al., 2017, p. 299). Thus, participants are relatively representative of these federal states. However, we conducted robustness checks to assess the impact of language background and discuss this in the limitations. We excluded 25 (2.63%) participants because they answered fewer than 50% of the items for either the vocabulary or the text comprehension test. Thus, we analyzed the test results of Ni = 924 participants.
Vocabulary knowledge test
Vocabulary knowledge was assessed with the synonym-based vocabulary knowledge test KFT 4-12+R V1 (Heller & Perleth, 2000). We used this test because we considered it a good measure of 'knowing' the meaning of words, in line with the theory that words represent nodes in an associative network. This paper-pencil test included 25 items presented in fixed order and administered under low time constraints. Thus, most students responded to all items. The items consisted of one item-stem word and five response options with one key (see Fig. 3). The distractors were orthographically similar (i.e., curved versus covered) and/or semantically related (e.g., antonyms or meronyms), but not synonyms.
Text comprehension test
The standardized text comprehension test was the Aspects of the Learning Situation and Learning Development Test (LAU; Lehmann et al., 2002). This test includes four texts with multiple-choice (MC) items: "Mosquito" (124 words, 11 sentences, 4 items), "Candle" (106 words, 8 sentences, 7 items), "I am not blind" (206 words, 8 sentences, 7 items), and "Plastic duck" (125 words, 7 sentences, 7 items). Figure 4 shows an example item. The test was administered with low time constraints; thus, most students completed all items.
Word frequencies
We derived word frequencies for the vocabulary test items and the reading comprehension texts from the 'childLex' corpus. The childLex corpus (www.childlex.de) includes 500 books classified as appropriate for children 6-12 years of age, comprising overall 9.85 million running words (i.e., "tokens") and 182 thousand unique words (i.e., "types"). Normalized lemma frequencies were used for all analyses.
All words in the vocabulary test, but not all words in the reading comprehension texts, were part of the childLex corpus (see Table 1). Most of the non-included words were proper nouns or compound words, and their frequencies were interpolated using Laplace approximations (Diependaele et al., 2013). Based on the interpolated normalized lemma frequencies, so-called Zipf values were computed (Van Heuven et al., 2014). This scale is logarithmic and scaled such that a value of 3 corresponds to the frequency of a word occurring once in a million words, a value of 4 corresponds to a frequency of ten times in a million words, a value of 5 to 100 times in a million words, etc. The word frequencies in the vocabulary test were on average M = 3.99, SD = 0.56, and ranged from 2.77 to 4.87. The "Mosquito" and "Candle" texts had similar word frequency distributions and a similar length. The "Plastic duck" text was similarly long as these two texts but encompassed more infrequent words. The "Blind" text was longer and encompassed more infrequent words than "Mosquito" and "Candle". The differences in word frequency distributions between texts indicate that the texts had different vocabulary knowledge requirements.
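For concreteness, here is a small R sketch of the Zipf-value computation with Laplace smoothing, following the formula given in the note to Table 1. Expressing the smoothed frequency per billion words is our reading of the Van Heuven et al. (2014) convention rather than something stated here; it is consistent with the scale described above, and a frequency of 0 then maps to a Zipf value of about 2.00, matching how not-found tokens were handled.

```r
# Zipf value with Laplace smoothing: log10 of frequency per billion words,
# using (lemma count + 1) over (corpus tokens + corpus types).
zipf_value <- function(lemma_count,
                       corpus_tokens = 9.85e6,   # childLex running words
                       corpus_types  = 182e3) {  # childLex unique words
  per_billion <- (lemma_count + 1) / ((corpus_tokens + corpus_types) / 1e9)
  log10(per_billion)
}

zipf_value(0)    # not-found token: ~2.00
zipf_value(10)   # roughly once per million words in childLex: ~3.04
```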
Procedure
The study was conducted in the morning hours in all classes and administered in paper-pencil format. First, the text comprehension test was administered (30 min), followed by the vocabulary knowledge test (KFT 4-12+R V1).
Table 1 Overview of word frequencies by text. Note: Token = running words in the text; types = unique words in the text; n = number of tokens; % = relative proportion of tokens. (1) Zipf value with Laplace transformation = log('lemma frequency + 1'/'number of unique lemmas in corpus + number of words in the corpus'); (2) lower boundary defined as larger than, and upper boundary as smaller than or equal to; (3) not-found tokens were assigned a value of 2.00 based on the Laplace transformation and were counted in the interval '2-3'.
Data quality
In a preparatory step, we conducted an item fit analysis because misfitting items in the text comprehension and vocabulary tests might lead to false interpretations of the test results. We applied the Rasch model (Adams & Wu, 2007) to the response data for the text comprehension and vocabulary tests using the Test Analysis Modules package (TAM; Robitzsch et al., 2021) within R (R Core Team, 2021). We identified three items in the text comprehension test with an outfit or infit below 0.7 or above 1.3 (Gustafsson, 1980). An inspection of these items suggested that they had somewhat ambiguous answers. Even readers with otherwise high reading comprehension abilities did not answer these items correctly. We decided to exclude these three items from the text comprehension test since they might not actually measure comprehension. No items were excluded from the vocabulary knowledge test based on this analysis. The relationship between item difficulty in the vocabulary knowledge test and the item's word frequency was very important for the text coverage estimation. Among the original 25 items, item difficulty and the minimum word frequency of the synonym pair correlated only at r(23) = −0.33. However, after excluding five items with highly synonymous and orthographically similar distractors, the correlation was r(18) = −0.64. We considered this to be more consistent with previous findings on word frequency effects in German 4th graders (e.g., r = −0.74: Trautwein & Schroeder, 2018).
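A minimal sketch of this screening step in R, assuming `resp` is a person-by-item matrix of 0/1 responses; the variable name and the exact column names of the TAM fit output are our assumptions, so the installed package's documentation should be checked:

```r
library(TAM)

mod <- tam.mml(resp)   # Rasch model for dichotomous responses
fit <- tam.fit(mod)    # infit/outfit statistics per item

# Flag items outside the 0.7-1.3 band used in the paper (Gustafsson, 1980).
bad <- with(fit$itemfit,
            Outfit < 0.7 | Outfit > 1.3 | Infit < 0.7 | Infit > 1.3)
fit$itemfit[bad, c("parameter", "Outfit", "Infit")]
```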
The overall rate of missing (i.e., omitted) responses was low (LAU: 2.97%; KFT: 5.65%). Missing responses were treated with the full information maximum likelihood (FIML) method.
Modeling vocabulary knowledge and text coverage
Text coverage is the intersection between a text's words and the reader's vocabulary knowledge. We estimated text coverage values for each child and each text. The rationale behind the estimation process was to reference children's vocabulary knowledge test scores to the word frequency level they are likely to know and then determine which words in a text were likely to be known by each child.
Based on these results, we transformed the Rasch scale, N(0, 1), so that the item parameters were on the same scale as the expected word frequency of each item. We refer to this as the Zipf scale because this scale represents students' vocabulary knowledge as a function of word frequency, N(4.59, 1.30).
Figure 5, panel a, shows vocabulary knowledge on a Rasch scale with a mean of 0 and a standard deviation of 1. Negative values represent low vocabulary knowledge because the probability of answering a vocabulary test item correctly is low. Positive values represent high vocabulary knowledge because the probability of answering a vocabulary test item correctly is high. Figure 5, panel b, shows the linear relationship between item difficulty and word frequency. An item of average difficulty has an expected word frequency of 4.59, a difficult item (i.e., M − 1 SD) an expected word frequency of 3.29, and an easy item (i.e., M + 1 SD) an expected word frequency of 5.90. Figure 5, panel c, shows the distribution of vocabulary on the Zipf scale. On this scale, an average person has a value (θ_wf) corresponding to the expected word frequency of an item with average item difficulty. The probability of a person knowing each word was then modeled as a logistic function of the difference between the word's frequency and the person's vocabulary knowledge on the Zipf scale, as described by Brysbaert et al. (2018) and illustrated in Fig. 6. The text coverage is the average probability of knowing each word in the text, or the proportion of words estimated to be known out of the total number of words in the text.
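The logistic equation itself is not reproduced at this point in the text, so the following R sketch is a reconstruction: a person's probability of knowing a word is 50% when their Zipf-scale score equals the word's Zipf frequency, and a logistic slope of roughly 1.31 (the absolute slope of the difficulty-to-frequency regression) reproduces the worked probabilities later shown in Table 3 to within a few percentage points. The slope value is our inference, not a figure stated by the authors.

```r
# Probability that a person with Zipf-scale vocabulary score `theta_wf`
# knows a word of Zipf frequency `wf`. On this scale LOWER theta_wf means
# HIGHER vocabulary knowledge (the person knows rarer words).
p_know <- function(wf, theta_wf, slope = 1.31) {
  plogis(slope * (wf - theta_wf))
}

# Text coverage = mean probability of knowing each token in the text.
text_coverage <- function(word_zipf, theta_wf) {
  mean(p_know(word_zipf, theta_wf))
}

p_know(7.7, theta_wf = 5.9)  # "die" for a low-skill reader: ~0.92, as in Table 3
p_know(2.8, theta_wf = 3.3)  # "Stechmuecken" for a high-skill reader: ~0.34
```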
Modeling text coverage and text comprehension
We addressed RQ1 by comparing linear and exponential models of text coverage explaining text comprehension. We used a latent regression Rasch model (De Boeck & Wilson, 2004) that, in the baseline model, explains the probability of correctly solving a test item based on random effects for item difficulty and a random effect for person ability. For the explanatory models, we additionally included linear and quadratic terms for text coverage as a text-by-person covariate, or vocabulary knowledge as a person covariate.
A latent regression Rasch model has the advantage that the regression coefficients represent the relationship between text coverage and measurement-error-adjusted text comprehension. This modeling approach increases interpretability, as the hypotheses relate to a text comprehension measure that is free of measurement error, and increases the reproducibility of our estimates, as imperfect reliability of the text comprehension test should bias the regression coefficients much less. The model was specified within the generalized linear mixed-effect model (GLMM) framework and fitted using the package 'lme4' (Bates et al., 2014) in the R environment (R Core Team, 2021).
The difference in the random variance in person ability (σ²θ) between a model without text coverage (i.e., the baseline model) and the explanatory models with text coverage was used to calculate the explained variance in person ability: R²θ = (σ²θ(baseline) − σ²θ(text coverage)) / σ²θ(baseline). We used marginal R² (mR²) to estimate the variance in the responses explained by the fixed effects (Nakagawa & Schielzeth, 2013).
We compared the model fits using the Bayesian information criterion (BIC). The BIC is a model fit indicator (i.e., goodness of fit) that prevents overfitting by penalizing the number of variables in the model (in contrast to deviance or pseudo-R²) and can be used to compare nested and unnested models. Lower values correspond to a better goodness of fit, and fit differences of 5 to 10 points can be considered substantive (Burnham & Anderson, 2002). Additionally, we use Akaike weights (w_i), which can be directly interpreted as conditional probabilities for each model (Wagenmakers & Farrell, 2004).
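As a rough illustration of this model-comparison pipeline, the R sketch below assumes a long-format data frame `d` with one row per person-item response (`resp` in {0, 1}), item and person identifiers, and the person-by-text covariate `coverage`; the variable names are hypothetical, and the random-effects structure follows the verbal description above rather than the authors' exact syntax.

```r
library(lme4)

# Baseline latent-regression Rasch model as a GLMM:
# random intercepts for items (difficulty) and persons (ability).
m_base <- glmer(resp ~ 1 + (1 | item) + (1 | person),
                data = d, family = binomial)

# Explanatory models adding linear and quadratic text-coverage terms.
m_lin  <- update(m_base, . ~ . + coverage)
m_quad <- update(m_base, . ~ . + coverage + I(coverage^2))

anova(m_lin, m_quad)        # likelihood ratio test of the quadratic term
BIC(m_base, m_lin, m_quad)  # lower BIC = better fit, overfitting penalized

# Explained variance in person ability: drop in the person variance
# relative to the baseline model.
v <- function(m) as.numeric(VarCorr(m)$person)
(v(m_base) - v(m_quad)) / v(m_base)
```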
We addressed RQ2 by estimating similar linear and quadratic models using vocabulary knowledge as the predictor variable. We evaluated which models (i.e., out of all text coverage and vocabulary knowledge models) fit the data better using the same fit indicators described above.
Modelling the lexical threshold
We addressed RQ3 using a broken-line regression (Muggeo, 2008) with average text coverage predicting the sum score on the text comprehension test. Broken-line regression is a statistical method that identifies a changepoint in a linear regression. It also provides a significance level and confidence interval for the changepoint (i.e., the threshold). Instead of estimating one regression slope, as in linear regression, broken-line regression estimates two regression slopes, divided at the identified changepoint. This method has been used in related research (O'Reilly et al., 2019; Wang et al., 2019). Based on our theoretical background regarding activation and propositional network connectivity, we expected an exponential increase and not necessarily a linear relationship with a changepoint. However, the changepoint is important from a practical perspective because decisions about factors such as text alignment are often binary (i.e., is the text too difficult for a particular student or not).
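Muggeo's method is implemented in the R package 'segmented' (Muggeo, 2008). A minimal sketch, assuming a person-level data frame `d_person` with the average text coverage and the comprehension sum score; the variable names and the starting guess for the breakpoint are our assumptions:

```r
library(segmented)

# Ordinary linear regression as the starting model.
lin <- lm(score ~ coverage, data = d_person)

# Broken-line model: one changepoint in `coverage`, started near 0.6.
seg <- segmented(lin, seg.Z = ~ coverage, psi = 0.6)

summary(seg)     # changepoint estimate with standard error
slope(seg)       # the two slopes below and above the changepoint
confint(seg)     # confidence interval for the changepoint
anova(lin, seg)  # does the broken line beat the straight line?
```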
Vocabulary knowledge and reading comprehension
The descriptive results on raw score means and standard deviations, ranges, reliabilities, and correlations indicate that the tests for vocabulary and text comprehension worked in the intended way (Table 2). Both the vocabulary knowledge (Rel_wle = 0.67) and text comprehension (Rel_wle = 0.69) tests had an acceptable reliability for a large-scale assessment context. The vocabulary and text comprehension tests correlated highly with each other, r(922) = 0.61, t = 23.2, p < 0.001.
Vocabulary knowledge and text coverage
The process of text coverage estimation might be best demonstrated with an example and a visual overview. Table 3 shows the words from one sentence of the "Mosquito" text together with the probability that each specific word will be known by children with varying levels of vocabulary knowledge. High-frequency words such as "die [the]" (WF = 7.7) had a high probability of being known by both high- (100%) and low-skilled readers (92%). The probability that a low-frequency word such as "Stechmücken [mosquitoes]" (WF = 2.8) would be known was 36% for a high-skilled reader and 2% for a low-skilled reader. Words with intermediate frequency such as "saugen [suck]" had a 79% probability of being known for a high-skilled reader and 11% for a low-skilled reader. Thus, the difference between children with high and low vocabulary knowledge was more pronounced for low- and average-frequency words. The text coverage is the average probability of knowing each word in a text.
The text coverage estimation yielded average text coverage scores ranging from 65% for "Plastic duck" to 74% for "Mosquito". Figure 7 shows the text coverage relative to vocabulary knowledge for each text. The estimated text coverage increases with vocabulary knowledge up to the mean + 2 SD and then nears 100% for all texts.
For instance, the "Plastic duck" text has the most infrequent words and "Candle" the fewest infrequent words. The differences in text coverage between texts are higher for students with high and mean vocabulary knowledge than for students with low vocabulary knowledge. Text coverage for students with high (i.e., M + 1 SD) vocabulary knowledge ranged from 78.27% for the "Plastic duck" text to 86.60% for the "Mosquito" text, while text coverage for students with low vocabulary knowledge (i.e., M − 1 SD) was around 42% for each text.
Table 3 Example sentence with probabilities of knowing each word relative to the WF of words and vocabulary knowledge. Note: Sentence from the "Mosquito" text. Ability estimates on the Zipf scale, N(4.59, 1.30): high = 3.3 (M − 1 SD), mean = 4.59, low = 5.9 (M + 1 SD). On the Zipf scale, low values correspond to high vocabulary knowledge because the value describes the probability of knowing infrequent words. The probability of knowing a word is 50% when the value of the vocabulary knowledge estimate equals the value of the word.
Text coverage and reading comprehension
RQ1: shape of the relationship between text coverage and text comprehension
The upper part of Table 4 summarizes the results comparing the baseline model (BLM) to a linear and a quadratic text coverage model. Both the linear and quadratic models had a substantively better fit than the BLM. This was indicated by a much lower BIC: Δ(BLM−TCL) BIC = 430, Δ(BLM−TCQ) BIC = 488. However, the quadratic text coverage model fit significantly better, χ² = 51.32, p < 0.001, and had the lowest BIC, Δ(TCQ−TCL) BIC = 42. Although the variance explained by the quadratic term was rather small, ΔR²θ = 0.026, the fit indices suggest that the quadratic text coverage model fits better than the linear model.
The model parameters for the quadratic model are provided in Table 5. Both the linear trend, β1 = −1.26, SE = 0.63, p = 0.046, and the quadratic trend, β2 = 3.90, SE = 0.54, p < 0.001, were significant. The signs of the predictors show that reading comprehension increased with text coverage, but that the effect leveled off in the low text coverage range.
RQ2: the better predictor for text comprehension
We performed the same analysis including linear and quadratic terms with vocabulary knowledge to determine whether text coverage was a better predictor of text comprehension than vocabulary knowledge. Both vocabulary knowledge models were better than the baseline model. As expected, both vocabulary knowledge models explain a significant amount of variance in the responses and thus in reading comprehension ability. In contrast to text coverage, the quadratic model for vocabulary knowledge was not substantively better than the linear model, χ² = 3.34, p = 0.068. This was most clearly indicated by the only marginally lower BIC, Δ(VKQ−VKL) BIC = −6, suggesting that the gains in explained variance and goodness of fit were due to overfitting. The linear vocabulary model showed a significant linear trend, β1 = 0.63, SE = 0.03, p < 0.001, that was in line with previous findings in terms of size and direction (see Table 5).
In a direct comparison of the two text coverage and the two vocabulary models, the quadratic text coverage model turned out to be the best model, as indicated by the much lower BIC. For better interpretability, we calculated the w_i, which represents the probability of each model being the best out of the five models (Wagenmakers & Farrell, 2004). The w_TCQ > 0.999 implies that there was a greater than 99.9% chance that the quadratic text coverage model was the best model among the five. In terms of explained variance, the differences between the quadratic text coverage model and the linear vocabulary knowledge model were small but significant for both outcomes concerning reading comprehension ability, ΔR²θ = 0.011.
RQ3: amount of text coverage that defines the lexical threshold
A broken-line regression was significantly better at explaining text comprehension scores than a linear regression, F(2, 920) = 11.121, p < 0.001. Below the estimated changepoint of 56% text coverage, text comprehension increases at a rate of β = 6.59, SE = 1.28, p < 0.001, and above the threshold, it increases at a rate of β = 7.89, SE = 1.68, p < 0.001. The expected test score at the threshold was 9.56, slightly below the mean test score of M = 11.1. Thus, the threshold occurs at an average reading comprehension level (Fig. 8).
Discussion
In the present study, we investigated the relationship between (1) vocabulary knowledge and reading comprehension, (2) vocabulary knowledge and text coverage, (3) text coverage and text comprehension, as well as associated lexical thresholds.
In line with previous studies, our findings show a strong association between vocabulary knowledge and reading comprehension. As expected, the association between text coverage and comprehension was best described by a non-linear relationship (i.e., exponential or broken-line). Text comprehension increases with text coverage exponentially rather than linearly, and we were able to identify a threshold at 56% text coverage, above which text comprehension increases more rapidly. Overall, text coverage outperformed vocabulary knowledge as a predictor of text comprehension.
Our study provides strong evidence for the view that text coverage and text comprehension are non-linearly related. This is in line with the construction-integration model (Kintsch, 1988), which conceptualizes text comprehension as building an associative network of links and nodes. A network loses connectivity exponentially when nodes are missing. Hence, there is a certain amount of text coverage that is necessary to activate relevant background knowledge, enable contextual inference, and subsequently build comprehension.
Our estimation of the lexical threshold differed from previous studies. Previous studies reporting higher values used self-reports (Hsueh-Chao & Nation, 2000) or word translations (Laufer, 1989). The study with the most comparable research method found the most similar results, but only considered content words (59%; O'Reilly et al., 2019). Due to methodological differences, there is probably no 'one' lexical threshold, as the estimation crucially depends on how word knowledge and text comprehension are assessed. As a consequence, thresholds should only be used within the context in which they were defined.
Our findings also have implications for research and practice. In particular, the relationship between text coverage and text comprehension can be used to determine theoretically and statistically justified cut-off values to inform the selection of adequate learning material and for standard-setting procedures.
The selection of appropriate reading materials could benefit from the text coverage model and thresholds if text coverage estimation were implemented in a software tool or within a readability analysis. In most situations, readers benefit most from reading activities that are neither too difficult nor too easy (e.g., Wolfe et al., 1998). Reading activities that are too difficult may be more detrimental to motivation and reading engagement than activities that are too easy (Kahmann et al., 2022). The lexical threshold might be helpful for identifying too little text coverage. It is probably advisable to match readers to texts so that text coverage is above 56%.
A similar application is standard-setting. In the context of educational monitoring, it is often of great interest to determine which test score on a vocabulary test best represents a "core vocabulary". Core vocabulary for reading is the vocabulary that allows students to understand a text on a basic level. Core vocabulary has been defined as the vocabulary that covers some (high) percentage of texts in corpora (e.g., Chujo & Utiyama, 2005). Thresholds for core vocabulary are usually defined based on expert ratings or norm values (Brown & Kappes, 2012). However, the text coverage function could be a model-based way to determine these thresholds. Wang et al. (2019) found a decoding threshold that is stable between Grades 5 and 10. They suggested that such thresholds are a function of skill rather than grade or age. According to our model, readers with a vocabulary corresponding to a text coverage above 56% start to gain comprehension. Whether this threshold is consistent across grades needs to be investigated in further research.
Limitations and outlook
One of the major limitations of the present study is that our results are based on a large group of participants but only a small number of texts. Our results should be replicated with a larger and more representative sample of authentic texts, different vocabulary tests, and other participants in order to test the generalizability of the reported results.
The difference in explained variance between the non-linear and linear models of the relationship between text coverage and text comprehension was significant but very small. On the one hand, this small effect could still be relevant because it could improve reader-text matching without requiring more test time and is based on already available information (vocabulary test and text word frequencies). On the other hand, several aspects of this study might have led to a particularly small effect. First, the vocabulary test used in the present study was not ideal for the purpose of estimating students' text coverage. Although it was a widely used, standardized instrument, items were not selected systematically based on their word frequency. Additionally, the test had relatively few items and a relatively low reliability. Future researchers may be advised to use tests that systematically manipulate word frequency and are more reliable (e.g., SET 5-10: subtest "lexicon"; Petermann, 2012). Second, the non-linear relationship between text coverage and text comprehension might have been more pronounced if we had investigated a broader range of texts, including texts with only frequent words or texts with very rare words, and a broader range of students, for instance, second to fifth graders.
There was relatively little precise information about the students' language backgrounds. About 35% of the students reported that they did not primarily speak German at home. This group of students could include recently arrived non-native-speaking students (about 4% of German fourth graders at the time of the study) or bilingual students. However, we performed robustness checks using the language spoken at home as a moderator of the relationship between text coverage and text comprehension and did not find a significant difference in effects. Thus, the relationship we described is probably generalizable to students with diverse language backgrounds. However, future studies should take a more in-depth look at language-background-specific effects.
There are also some theoretical problems that might need to be addressed in the future to further develop this technique. In particular, the present framework does not take into account the context in which words are encountered. Frequent words are usually useful in many contexts, whereas infrequent words are more context-specific. This issue is not addressed in our model. It is also not clear how different psychometric aspects such as measurement error, guessing, and slipping influence the text coverage estimation. However, these aspects might primarily influence the absolute text coverage scores.
Despite these limitations, our study demonstrated that text coverage and the lexical threshold are useful concepts that are not yet well established in elementary school reading research and could help to align reading materials with readers and to conduct standard-setting. Further research should refine the method and test whether the thresholds are actually useful.
Funding Open Access funding enabled and organized by Projekt DEAL.
Declarations
Conflict of interest We have no known conflict of interest to disclose.
Fig. 1 Diagram illustrating relationships between word frequency, vocabulary knowledge, and text coverage. Note: Illustrative diagram (no actual data displayed). Panel a is analogous to Brysbaert et al. (2018); panel b is analogous to Chujo and Utiyama (2005).
Fig. 2 Diagram summarizing the relationship between word frequency, vocabulary knowledge, text coverage, text comprehension, and the thresholds. Note: Solid circles: text; dashed circles: vocabulary knowledge; area with diagonal lines: text coverage; position of the solid circle on the y-axis: number of rare words in a text; x-axis with increasing diameter of dashed circles: increase in vocabulary knowledge from left to right. Color gradient from white (upper left corner) = low text comprehension to black (lower right corner) = high text comprehension.
Fig. 3 Example item of the vocabulary knowledge test (item stem: "Which word has the most similar meaning to the bold word?"). Note: Illustrative example of a typical item from the vocabulary knowledge test. This item was not actually in the test. The vocabulary knowledge test is protected by copyright.
Step 1: Referencing the vocabulary test score to word frequency. The vocabulary test responses were modeled with a Rasch model using the TAM package (Robitzsch et al., 2022) within R (R Core Team, 2021). Then, we regressed the minimum word frequency of the synonym pair on the item difficulty parameter σ (WF = b0 + b1·σ + ε). The regression revealed significant regression coefficients of b0 = 4.59, SE = 1.51, p = 0.007 and b1 = −1.31, SE = 0.37, p = 0.003. The intercept implies that an average item (σ = 0) has an expected word frequency of WF = 4.59. The slope indicates that a difficult item (σ = 1) has an expected word frequency of WF = 3.28 and an easier item (σ = −1) has an expected WF = 5.90.
Fig. 5 Distribution of vocabulary knowledge before and after the linear transformation. Note: Panel a shows the distribution of vocabulary knowledge before the linear transformation, panel b shows the relationship between vocabulary knowledge test scores on the z-standardized and Zipf scales, and panel c shows the distribution of vocabulary knowledge after the linear transformation.
Fig. 6 Relationship between word frequency and the probability of knowing a word relative to vocabulary knowledge. Note: Figure analogous to Fig. 2 in Brysbaert et al. (2018). Vocabulary: high = 1 (M + 1 SD), mean = 0, low = −1 (M − 1 SD). The probability of knowing a word is 50% when vocabulary knowledge on the Zipf scale is equal to the frequency of a word.
Fig. 7 Text coverage for each text in relation to vocabulary knowledge. Note: Percent text coverage estimate (y-axis). Vocabulary knowledge on the z-standardized scale and the Zipf scale (x-axis). The Zipf scale is inverse to a z-standardized scale; low values correspond to high vocabulary knowledge because the scale refers to the word frequencies individuals are likely to know. Vertical lines indicate low vocabulary knowledge (i.e., M − 1 SD), mean vocabulary knowledge, and high vocabulary knowledge (i.e., M + 1 SD).
Fig. 8 Relationship between text coverage and comprehension. Note: Broken-line regression with a changepoint at 56% text coverage. x-axis: average text coverage estimate for a student across texts. y-axis: text comprehension test score with a maximum of 20. The grey area is the 95% confidence interval of the expected text comprehension test score. The dots represent the distribution of the test scores.
We only used the text comprehension test and the vocabulary knowledge test in the analysis. The tests were administered in accordance with their test manuals.
Table 2 Results of the vocabulary knowledge and text comprehension tests. Note: The categorization (high, mean, low) was only used to derive illustrative examples and did not influence the actual estimation.
Table 4 Model comparisons between models with linear and quadratic terms explaining the probability of a correct answer in the text comprehension test. Note: Sample: N = 924; items: I = 20; texts: T = 4; observations = 17,932 (924 × 20 = 18,480; the difference is due to 2.97% omitted and not-reached responses). np_i = number of estimated parameters for model i; log(L_i) = natural logarithm of the maximum likelihood for model i; BIC = Bayesian information criterion; Δi BIC = BIC_i − min(BIC); w_i(BIC) = rounded Schwarz weights; R²θ = person variance explained by the fixed effects, obtained as (σ²θ of the baseline model − σ²θ of model i) / σ²θ of the baseline model. χ² and p are the test statistic and p-value of the likelihood ratio test comparing nested models; w_i can be interpreted as the probability of each model being the best model in a BIC sense among the compared models (Wagenmakers & Farrell, 2004).
"Computer Science"
] |
MiR‐16 regulates mouse peritoneal macrophage polarization and affects T‐cell activation
Abstract MiR-16 is a tumour suppressor that is down-regulated in certain human cancers. However, little is known about its activity in other cell types. In this study, we examined the biological significance and underlying mechanisms of miR-16 on macrophage polarization and subsequent T-cell activation. Mouse peritoneal macrophages were isolated and induced to undergo either M1 polarization with 100 ng/ml of interferon-γ and 20 ng/ml of lipopolysaccharide, or M2 polarization with 20 ng/ml of interleukin (IL)-4. The identity of polarized macrophages was determined by profiling cell-surface markers by flow cytometry and cytokine production by ELISA. Macrophages were infected with lentivirus expressing miR-16 to assess the effects of miR-16. Effects on macrophage-T cell interactions were analysed by co-culturing purified CD4+ T cells with miR-16-expressing peritoneal macrophages, and measuring the activation marker CD69 by flow cytometry and cytokine secretion by ELISA. Bioinformatics analysis was applied to search for potential miR-16 targets and understand its underlying mechanisms. MiR-16 induced M1 differentiation of mouse peritoneal macrophages from either the basal M0- or M2-polarized state, as indicated by the significant up-regulation of the M1 marker CD16/32, repression of the M2 markers CD206 and Dectin-1, and increased secretion of the M1 cytokine IL-12 and nitric oxide. Consistently, miR-16-expressing macrophages stimulate the activation of purified CD4+ T cells. Mechanistically, miR-16 significantly down-regulates the expression of PD-L1, a critical immune suppressor that controls macrophage-T cell interaction and T-cell activation. MiR-16 plays an important role in shifting macrophage polarization from M2 to M1 status, and functionally activating CD4+ T cells. This effect is potentially mediated through the down-regulation of the immune suppressor PD-L1.
Introduction
Macrophages, with the capabilities of phagocytosis, antigen presentation, tissue remodelling and the secretion of a variety of molecules including growth factors, cytokines, enzymes, complement components and prostaglandins, are important players in both the innate and adaptive immune systems [1]. Under a steady state or in response to inflammation, monocytes extravasate from the circulation, differentiate and mature into either dendritic cells (DCs) or macrophages [2]. Mouse macrophages are characterized and distinguished from DCs by the positive expression of surface markers F4/80 and CD11b, and intracellular antigen CD68 [3]. When monocyte precursors exit the circulation, depending on the local microenvironment, macrophages may undergo separate differentiation pathways and generate two states of polarized activation: classically activated macrophages (M1) and alternatively activated macrophages (M2) [4, 5]. These two subsets of macrophages are associated with their own in vitro differentiation inducers, cell-surface markers, secretion of cytokines and other molecules, interaction with T-cell subsets and subsequent functional consequences [6]. Lipopolysaccharide (LPS) and the Th1 cytokine interferon (IFN)-γ drive macrophage polarization towards M1 phenotypes in vitro, which are characterized by the surface expression of CD86 and CD16/32, the secretion of the pro-inflammatory cytokines tumour necrosis factor (TNF)-α, interleukin (IL)-12 and IL-23, the up-regulation of chemokines CXCL9 and CXCL10, and the enhanced activity of inducible nitric oxide synthase (iNOS) that stimulates NO production from macrophages. Functionally, M1 macrophages are key effector cells for antigen-specific Th1 and Th17 cellular immune responses. In contrast, M2 macrophages are differentiated in response to the Th2 cytokine IL-4, featuring the surface expression of mannose receptor (CD206), arginase 1 (Arg-1) and Dectin-1, the secretion of the anti-inflammatory cytokines IL-10 and IL-1RA, and the up-regulation of chemokines CCL17, CCL22 and CCL24. Functionally, M2 macrophages are mainly involved in immunosuppression and tissue repair [7, 8]. As a major type of infiltrating leucocytes associated with solid tumours, tumour-associated macrophages (TAMs) play an important role in tumour immunity, featuring an IL-10-high, IL-12-low phenotype similar to M2 macrophages and presenting potent immunosuppressive functions [5]. Consistently, the predominant expression of M2 macrophages is associated with the advanced stage of tumour progression, which promotes the idea of treating cancer by the repolarization of macrophages from the immunosuppressive M2 phenotype to the pro-inflammatory M1 phenotype [9]. The polarization to M1 or M2 macrophages is highly dynamic and plastic to external signals such as the cytokine environment [10]. However, the intracellular mechanisms regulating macrophage polarization plasticity remain to be elucidated.
MicroRNAs (miRNAs) are small (21-25 nucleotides in length) non-coding RNA molecules that control gene expression at the post-transcriptional level and target more than 60% of genes in mammals [11, 12]. The seed sequence of miRNAs, through base pairing with complementary sequences within the 3′-untranslated region (3′-UTR) of specific mRNA molecules, silences these mRNAs via the following mechanisms: cleavage or destabilization of target mRNA molecules (upon perfect or nearly perfect complementarity), or less efficient translation of the mRNA into proteins (for imperfect hybridization) [13-15]. MiRNAs play essential roles in various physiological and pathological processes, and their biological functions and regulatory mechanisms are under intensive investigation in biomedical fields.
MiR-16 and miR-15a are on the same gene cluster that maps to the human chromosome 13q14 region. The down-regulation and deletion of miR-16 and miR-15a has been reported in multiple cancers including chronic lymphocytic leukaemia (CLL), prostate cancer, multiple myeloma, pancreatic cancer, ovarian cancer, malignant melanoma, colorectal cancer and urinary bladder cancer [16], suggesting that the loss of these genes promotes tumorigenesis. Consistently, previous studies have revealed multiple targets for miR-16 including BCL2, CCND1 and WNT3A [17-20], which are involved in tumour cell apoptosis or cell-cycle regulation and thus directly regulate tumour growth. However, less is known about the action of miR-16 in macrophage polarization, its potential targets involved in this process, or its implication in tumour development. To address these questions, we established an in vitro cell system, in which primary macrophages were isolated from mouse peritoneum and induced to differentiate into M1 or M2 cells in response to different cytokines. Using this model system, we were able to examine the role of miR-16 in macrophage polarization and explore potential targets that regulate this process.
Isolation and treatment of mouse peritoneal macrophages
All animal experiments were approved by the Institutional Animal Care and Use Committee of Yangzhou University (Yangzhou, China). Peritoneal macrophages were isolated from healthy, female C57BL/6 mice (6-8 weeks old; purchased from the College of Veterinary Medicine, Yangzhou University), as previously described [3]. To characterize the purity of the isolated macrophages, cells were examined by flow cytometry 8 hrs after isolation, as detailed below.
To induce the differentiation of mouse peritoneal macrophages at 8-12 hrs after isolation, 100 ng/ml of IFN-γ (Peprotech, Rocky Hill, NJ, USA) with 20 ng/ml of LPS (Peprotech), or 20 ng/ml of IL-4 (Peprotech), was added to the cells and incubated at 37°C with 5% CO2 for 36 hrs.
ELISA
ELISA kits (Bio-Swamp, Shanghai, China) for mouse IL-2, IL-4, IL-10, IL-12 and IFN-c were used to detect cytokines secreted from cells into the culture medium, according to manufacturer's instructions.
Nitric oxide assay
Nitric oxide level in culture medium was determined using a Griess assay-based nitric oxide detection kit (Beyotime, Jiangsu, China), according to manufacturer's instructions.
Quantitative real-time PCR
To determine the endogenous level of miR-16, we performed quantitative RT-PCR. Briefly, total RNA was extracted from cells using Trizol reagent (Invitrogen, Carlsbad, CA, USA). cDNA synthesis and miRNA quantification were achieved using the Mir-X miRNA First-Strand Synthesis and qRT-PCR SYBR Kits (Takara, Mountain View, CA, USA) according to the manufacturer's instructions. The primers used were as follows: for miR-16, forward 5′-TAGCAGCACGTAAATATTGGCG-3′; for U6, forward 5′-CTCGCTTCGGCAGCACA-3′; the miR-16 and U6 reverse primers were included in the Mir-X miRNA First-Strand Synthesis Kit. All reactions were set up in triplicate, with each experiment repeated three independent times. The relative quantification of gene expression was determined using the 2^−ΔΔCt method [21].
Lentivirus infection
Lentiviruses were added to the target cells with polybrene (final concentration: 5 µg/ml; Genechem). After 72 hrs, cells were imaged under an inverted fluorescence microscope (Olympus, Tokyo, Japan) for EGFP expression. Cells were used for further analysis when more than 80% of cells were GFP-positive. For controls, cells that were not infected or those infected with LV-control were used.
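For readers less familiar with the 2^−ΔΔCt method cited above, the following minimal Python sketch illustrates the calculation; the Ct values are hypothetical, and U6 serves as the reference gene as in the protocol.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative quantification by the 2^-ddCt method.

    ct_target / ct_ref: Ct of miR-16 and U6 in the sample of interest;
    ct_target_cal / ct_ref_cal: Ct of miR-16 and U6 in the calibrator sample.
    """
    d_ct_sample = ct_target - ct_ref       # normalize to U6 within the sample
    d_ct_cal = ct_target_cal - ct_ref_cal  # normalize to U6 within the calibrator
    dd_ct = d_ct_sample - d_ct_cal
    return 2.0 ** (-dd_ct)

# Hypothetical mean Ct values from triplicate wells
fold = relative_expression(ct_target=24.1, ct_ref=18.3,
                           ct_target_cal=22.6, ct_ref_cal=18.1)
print(f"miR-16 level relative to the calibrator: {fold:.2f}-fold")
```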
Purification of CD4+ T cells from mouse spleen
To purify CD4+ T cells, mouse spleens were dissected from healthy, female C57BL/6 mice (6-8 weeks old; purchased from the College of Veterinary Medicine, Yangzhou University) under sterile conditions. A 200-µm cell strainer was placed in a sterile 6-cm Petri dish and the spleens were transferred into the cell strainer with 1 ml of PBS. The spleens were mashed with the plunger of a 2-ml syringe to release splenocytes into the Petri dish. The cell suspension was then transferred into 15-ml conical tubes and spun down at 650 × g for 5 min. at 4°C, and the supernatant was discarded. The pelleted cells were lysed in 3 ml of RBC lysis buffer (0.15 M NH4Cl, 1 mM KHCO3 and 0.1 mM ethylenediaminetetraacetic acid) at 4°C for 10 min. After washing twice with PBS, cells were incubated with PE-conjugated anti-mouse CD4 at 4°C in the dark for 30 min. and sorted for CD4+ T cells using a flow sorter (FACS Aria; BD Biosciences, San Jose, CA, USA).
Co-culture of macrophages with CD4+ T cells
Mouse macrophages and purified CD4+ T cells were seeded into 6-well plates at 2 × 10⁶ and 6 × 10⁶ cells/well, respectively. Anti-mouse CD3 (0.5 mg/l; eBioscience) and anti-mouse CD28 (0.5 mg/l; eBioscience) antibodies were also added into the co-culture system to stimulate the proliferation of CD4+ T cells. After 36 hrs of co-culture, the medium was collected and centrifuged (14,792 × g) at 4°C for 5 min. to remove cell debris, and the supernatant was stored at −80°C until further use.
Western immunoblot
Total protein was extracted using cell lysis buffer (KeyGEN, Nanjing, China) and the protein concentration was measured using a BCA kit (KeyGEN), according to the manufacturer's instructions. Equal amounts of total protein from the different samples were separated on SDS-PAGE gels and transferred onto polyvinylidene difluoride (PVDF) membranes. Membranes were incubated with anti-CD274/PD-L1 (Abcam, Cambridge, MA, USA) or anti-β-actin (internal control; KeyGEN) antibodies, followed by the corresponding secondary antibodies (Beyotime). The signal was then developed using an ECL kit (KeyGEN) and analysed with Gel-Pro32 software.
Statistical analysis
Statistical analysis was performed with SPSS 17.0 software. All experiments were performed at least three independent times. Quantitative data are presented as mean ± S.D. and were compared using Student's t-test. A P value < 0.05 was considered statistically significant.
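As a concrete illustration of the comparison described above, this sketch runs an unpaired Student's t-test on two hypothetical groups of triplicate measurements (any numerical readout from the assays above could be substituted):

```python
import numpy as np
from scipy import stats

# Hypothetical cytokine concentrations (pg/ml) from three independent experiments
group_a = np.array([412.0, 388.5, 401.2])  # e.g., M2-control supernatant
group_b = np.array([245.3, 267.8, 251.9])  # e.g., M2-miR-16 supernatant

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"{group_a.mean():.1f} ± {group_a.std(ddof=1):.1f} vs "
      f"{group_b.mean():.1f} ± {group_b.std(ddof=1):.1f} pg/ml")
print(f"t = {t_stat:.2f}, P = {p_value:.4f} (significant if P < 0.05)")
```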
Primary peritoneal macrophages polarize to either M1 or M2 cells in response to different cytokines
To examine the effects of miR-16 on macrophage polarization, an in vitro model system was first established, wherein mouse primary peritoneal macrophages were isolated and induced to differentiate either into M1 cells in response to IFN-γ with LPS (IFN-γ+LPS) or into M2 cells following IL-4 treatment. Macrophages of >90% purity were obtained following a well-established protocol, as demonstrated by surface staining for F4/80 (Fig. S1). Without any treatment (basal state, M0), these mouse primary peritoneal macrophages contained approximately 52% CD16/32+ cells, <2% CD206+ cells and approximately 10% Dectin-1+ cells (Fig. 1A). In response to IFN-γ+LPS treatment, these cells presented significant M1 features including a dramatic increase in CD16/32+ cells to >85%, higher nitric oxide production, and IL-12 secretion into the culture medium (Fig. 1B and C). In contrast, isolated peritoneal macrophages treated with IL-4 shifted to prevalent M2 phenotypes, with a dramatic increase in CD206+ and Dectin-1+ cells (to >30% and >50%, respectively; Fig. 1A) and prominent IL-10, but not NO, secretion (Fig. 1B and C).
MiR-16 induces M1 polarization of primary peritoneal macrophages from basal state
After testing the differentiation capability of isolated mouse primary peritoneal macrophages, we first examined the endogenous level of miR-16 during differentiation by quantitative real-time PCR. As shown in Figure S2, the endogenous miR-16 level was significantly lower in IL-4-induced M2 cells than in IFN-γ+LPS-induced M1 cells, suggesting that endogenous miR-16 might be functionally important for maintaining the M1 phenotype. To assess the biological activity of miR-16, we infected M0 cells with miR-16-expressing lentivirus (M0-miR-16); either non-infected parental cells (M0) or cells infected with control lentivirus (M0-control) were used as controls. The overexpression of miR-16 clearly shifted cells to M1 phenotypes (Fig. 2), as indicated by the increase in CD16/32+ cells from approximately 50% to >65% (P < 0.05, compared with M0 or M0-control cells), a significantly higher secretion of nitric oxide and IL-12 into the culture medium (P < 0.05, compared with M0 or M0-control cells), and the absence of dramatic alterations in CD206+ or Dectin-1+ cells.
MiR-16 induces M1 polarization of peritoneal macrophages from an M2-polarized state
Next, the capacity of miR-16 to induce M1 phenotypes in macrophages that already presented M2 features was examined. IL-4-treated macrophages (M2) were infected with miR-16-expressing lentivirus (M2-miR-16), and the phenotypes of these cells were compared with parental non-infected M2 cells or M2 cells infected with control lentivirus (M2-control). Similar to its effects on M0 peritoneal macrophages, miR-16 induced IL-4-treated macrophages to shift from an M2 state to an M1 state, with a significant increase in CD16/32+ cells, a decrease in CD206+ and Dectin-1+ cells, the stimulation of nitric oxide and IL-12 production, and the repression of IL-10 production (P < 0.05, compared with M2 or M2-control cells; Fig. 3).
MiR-16-expressing macrophages activate purified CD4+ T cells
CD4+ T cells purified from mouse spleen (purity, approximately 97%; Fig. S3) were co-cultured with M2, M2-control or M2-miR-16 cells to analyse the functional significance of the miR-16-induced M1 shift. Anti-CD3 and anti-CD28 antibodies were then added into the co-culture system to stimulate the activation/proliferation of CD4+ T cells. By quantifying the cell-surface activation marker CD69, the addition of anti-CD3 and anti-CD28 antibodies was found to significantly boost the activation of CD4+ T cells. Co-culturing with M2 or M2-control macrophages significantly inhibited CD4+ T-cell activation, as revealed by the reduced surface expression of CD69 (P < 0.05, compared with CD4+ T + anti-CD3 + anti-CD28 cells; Fig. 4A). In contrast, co-culturing with M2-miR-16 macrophages released the suppression of CD4+ T-cell activation (P < 0.05, compared with CD4+ T + anti-CD3 + anti-CD28 + M2 or CD4+ T + anti-CD3 + anti-CD28 + M2-control), although the level of activation did not reach that achieved with anti-CD3 + anti-CD28 alone (P < 0.05, compared with CD4+ T + anti-CD3 + anti-CD28 cells; Fig. 4A). Consistent with the alterations in CD4+ T-cell activation, the same pattern of changes was observed for the secretion of the pro-inflammatory cytokines IFN-γ and IL-2 in the co-culture system (Fig. 4B). Secretion of these cytokines, which was dramatically reduced by the co-culture of CD4+ T cells with M2 or M2-control macrophages (P < 0.05, compared with CD4+ T cells alone), was significantly restored following the co-culture of CD4+ T cells with M2-miR-16 macrophages (P < 0.05). In addition, when CD4+ T, M2, M2-control or M2-miR-16 cells were cultured alone, minimal production of IFN-γ or IL-2 was detected, suggesting that the cytokines detected in the co-culture system were mostly produced by activated CD4+ T cells.
MiR-16 down-regulates PD-L1 expression in peritoneal macrophages
To explore the potential mechanism by which miR-16 stimulates macrophage differentiation towards M1 phenotypes, we focused on PD-L1, a transmembrane protein expressed on macrophages that drives the activation state of macrophages towards M2 phenotypes [22]. Through bioinformatic analysis using TargetScan, miRanda and PicTar, a potential binding site for miR-16 within the 3′-UTR of PD-L1 mRNA was identified, which is identical in human, chimpanzee and mouse PD-L1 mRNA (Fig. 5A), suggesting that PD-L1 could be a potential target of miR-16. To test this possibility, the expression of PD-L1 in M2, M2-control and M2-miR-16 cells was examined. Western immunoblot results revealed that PD-L1 expression was reduced by approximately 50% in M2-miR-16 cells, as compared with M2 or M2-control cells (P < 0.05, Fig. 5B). Consistently, we detected similar changes in the surface expression of PD-L1 by flow cytometry (Fig. S4).
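The target prediction described above ultimately rests on seed complementarity between miR-16 and the PD-L1 3′-UTR. As a minimal sketch of the idea (not the TargetScan/miRanda/PicTar algorithms themselves), the code below scans a UTR sequence for the reverse complement of the miR-16 seed; the mature miR-16 sequence is taken from the qPCR forward primer listed in the Methods, and the example UTR fragment is hypothetical.

```python
MIR16 = "UAGCAGCACGUAAAUAUUGGCG"  # mature miR-16-5p (matches the qPCR forward primer)

def seed_sites(utr_rna, mirna, seed_start=1, seed_end=8):
    """Return positions in a 3'-UTR that pair with the miRNA seed (7mer match)."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[seed_start:seed_end]                    # nucleotides 2-8
    target = "".join(comp[nt] for nt in reversed(seed))  # reverse complement
    return [i for i in range(len(utr_rna) - len(target) + 1)
            if utr_rna[i:i + len(target)] == target]

# Hypothetical 3'-UTR fragment containing one candidate site
utr = "AAGCCUUGCUGCUAUUCACAGG"
print(seed_sites(utr, MIR16))  # -> [6], the position of the putative seed match
```

Real prediction tools additionally weigh site conservation, local AU content and pairing free energy, which is why a combined search across several of them was used above.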
Discussion
In this study, our findings revealed that miR-16 is sufficient to induce the differentiation of primary peritoneal macrophages, or to repolarize M2 macrophages, towards M1 phenotypes. The molecular mechanisms underlying this miR-16 effect involve at least the expressional regulation of PD-L1. This study is the first to demonstrate miR-16 action in macrophages. Furthermore, these findings may greatly impact our understanding of immune regulation and guide the development of novel immune therapies for cancer and other immune-related diseases. Macrophages present great plasticity for polarization/repolarization, and several transcription factors have been implicated in regulating this process [23]; however, the underlying mechanisms remain largely elusive. Darnell et al. found that signal transducer and activator of transcription 1 (STAT1) homodimers induced by IFN-γ engage the cis elements within the promoter regions of target genes including iNOS and IL-12, promote target gene expression and drive the M1 differentiation of macrophages [24]. Fujioka et al. revealed that although sequestered in an inactive state in quiescent monocytes, NF-κB is induced in response to inflammatory stress, activating the transcription of pro-inflammatory cytokines including TNF-α and IL-1β to promote M1 polarization [25]. In response to IFN-β stimulation, interferon regulatory factor 9 (IRF9) complexes with STAT2 homodimers and stimulates M1 polarization [26]. Toll-like receptor 4 signalling-activated IRF3 enhances the production of IFN-β, promoting the M1 phenotype [27]. Interferon regulatory factor 5 is required for IL-12 expression and contributes to M1 polarization [28]. The up-regulation of hypoxia-inducible factor (HIF)-1α in response to hypoxia and LPS down-regulates Krüppel-like factor 2, which in turn inhibits the recruitment of NF-κB to the promoters of target genes and thus stimulates the M1 polarization of macrophages [29]. In contrast, several other transcription factors have been shown to promote M2 polarization. HIF-2α competes with iNOS in L-arginine metabolism and thus inhibits nitric oxide production [30]. Ligand-dependent peroxisome proliferator-activated receptor-γ (PPAR-γ) associates with the NCoR repressor complex to inhibit the transcriptional activity of STATs, NF-κB and AP1, dampening M1 polarization [31]. IL-4/STAT6 signalling induces PPAR-δ expression and promotes M2 polarization [32]. Krüppel-like factor 4 cooperates with STAT6 to mediate Arg-1 transcription during M2 polarization [33]. In addition to transcription factors, miRNAs also modulate macrophage polarization [34,35]. Zhuang et al. reported aberrant miR-223 expression in chronic inflammatory diseases including rheumatoid arthritis and type 2 diabetes mellitus. MiR-223 knockout mice, when fed a high-fat diet, are vulnerable to adipose inflammation and insulin resistance. In response to LPS stimulation, miR-223 stimulates the M1 polarization of macrophages [36]. Moreover, LPS up-regulates miR-155, which in turn inhibits the protein expression of the transcription factor CCAAT/enhancer-binding protein-β (C/EBP-β) and promotes M1 polarization [37]. Consistently, C/EBP-β is up-regulated in TAMs, promotes M2 phenotypes in these cells and protects tumour cells from cytotoxic immunity [37]. Ponomarev et al. revealed that brain-specific miR-124 is expressed in microglia, but not in monocytes or macrophages.
When overexpressed in macrophages, miR-124 inhibits M1 macrophage polarization by inhibiting the translation of iNOS and TNF-α, and promotes M2-like phenotypes associated with Arg-1 expression [38].
MiR-16 is located in the human chromosome 13q14 region, within the same gene cluster as miR-15. This gene locus is frequently deleted or mutated in multiple cancers, suggesting a tumour suppressor activity in cancer development. Accordingly, a number of miR-16 targets have been identified that function in apoptosis [17] or cell-cycle regulation [20], supporting a direct role for miR-16 in tumorigenesis. In addition, miR-16 also directly or indirectly regulates other target genes to modulate cancer behaviour and invasiveness. In leukaemia, although miR-16 does not directly bind to the 3′-UTR of the Wilms tumour protein 1 (WT1) mRNA, WT1 down-regulation in response to miR-16 significantly correlates with the development of acute myeloid leukaemia [39]. In U937 lymphoma cells, miR-16 expression is up-regulated by LPS, which in turn negatively regulates NF-κB signalling and stimulates IL-8 production [40]. In mammary tumour stem cells, miR-16 negatively regulates the expression of wild-type p53-induced phosphatase 1 (Wip1), suppresses the self-renewal and growth of these cells, and sensitizes breast cancer cells to chemotherapeutic agents [41]. In colorectal carcinoma, miR-15a and miR-16-1 directly down-regulate the expression of AP4, a transcription factor critical for epithelial-mesenchymal transition (EMT) and cancer invasiveness/metastasis; in return, AP4 exerts a negative feedback to inhibit miR-15a/miR-16-1 expression [42]. To date, most studies on miR-16 have focused on its actions in cancer cells, with minimal information available on its potential roles in other cell types within the tumour microenvironment.
In this study, we identified a novel function of miR-16: its capability to promote the M1 phenotype from primary peritoneal macrophages at the basal state or from IL-4-induced M2 macrophages. Given the importance of M2-predominant TAMs in cancer development and the loss of miR-16 in multiple cancers, we propose that miR-16 may also function in macrophages to complement its roles in cancer cells. To test our hypothesis, we established an in vitro macrophage cell system that could be induced to differentiate into either M1 or M2 phenotypes in response to distinct cytokines, which allowed us to examine the effects of miR-16 on this process. Following established protocols, high-purity macrophages (M0) were isolated from the mouse peritoneum [3], and these cells were successfully polarized to either M1 or M2 phenotypes following IFN-γ+LPS or IL-4 stimulation [43]. Isolated macrophages were infected with lentivirus to assess the significance of miR-16 in this process, since preliminary studies using plasmid transfection did not yield satisfactory transfection efficiency. Lentiviral infection led to the stable expression of miR-16 in more than 80% of macrophages. Examining the phenotypes of these cells, their surface marker expression and their production of cytokines and nitric oxide, we found that the ectopic expression of miR-16 not only promotes the M1 polarization of primary peritoneal macrophages at the basal state but also repolarizes IL-4-induced M2 macrophages to M1 phenotypes. This study is the first to reveal the action of miR-16 in macrophage polarization. Interestingly, when examining the changes in endogenous miR-16 during differentiation, we found that it was decreased in M2 cells as compared with M1 cells, suggesting that macrophage-derived miR-16 might be important for maintaining the LPS+IFN-γ-induced M1 phenotype, but not the IL-4-induced M2 phenotype. This does not exclude the possibility that during cancer development, exogenous miR-16 produced by other cell types, such as cancer cells, may also contribute to differentiation towards the M1 phenotype. However, the frequent deletion of the miR-16 gene locus in many cancers would inactivate this anticancer mechanism, shifting the balance to M2-predominant phenotypes and promoting cancer development.
During the development of adaptive immunity, M1 and M2 macrophages, through antigen presentation, distinctively direct Th1 (cytotoxic) and Th2 (protective) responses, respectively [44]. These two responses are mainly carried out by two distinct subsets of CD4+ helper T cells that are divided based on the cytokines produced: Th1 cells are characterized by the secretion of IFN-γ, IL-1, TNF-β and IL-2, and participate in cellular immunity; Th2 cells mainly secrete IL-4, IL-6, IL-10 and IL-13, and regulate humoural (antibody-mediated) immunity [45]. In this study, we demonstrated that IL-4-differentiated M2 macrophages, when co-cultured with CD4+ T cells, inhibited the activation of the latter, which is consistent with the immunosuppressive activity of M2 macrophages. In contrast, after M2 macrophages were infected with miR-16-expressing lentivirus, the inhibition of CD4+ T-cell activation was significantly released, which is coherent with the repolarization to the M1 phenotype in response to miR-16 expression.
B cells are another cellular component important to T-cell activity. It has been demonstrated that B-cell expansion in response to antigen presentation plays a critical role in inducing T-cell tolerance [46,47]. In addition, the study by Klein et al. revealed that miR-15a/16-1 knockout is associated with B-cell expansion and the development of CLL in mice [48], suggesting that the up-regulation of miR-16 may promote B-cell death and subsequent T-cell activation. However, it remains unknown which cell type in miR-15a/16-1 knockout mice is responsible for B-cell expansion, or whether other genes within the locus are functionally more important than miR-16. Future studies should characterize the cell-specific activities of miR-16 in various cell types, as well as abnormalities in B cells or other immune components in the context of different diseases.
MiR-16 is highly conserved among multiple species [49]. In this study, we performed a bioinformatic analysis to identify potential miR-16 targets that carry out its actions in macrophage polarization. Through a combined search of TargetScan, miRanda and PicTar, we found a miR-16-binding site within the 3′-UTR of the mouse, human and chimpanzee PD-L1 mRNA. PD-L1, also known as B7-H1 and CD274, belongs to the B7 costimulatory family and is expressed on macrophages, DCs, immune cells including activated T cells and B cells, epithelial cells and tumour cells. By interacting with its receptor, programmed death-1 (PD-1), found on activated T cells, B cells and myeloid cells, PD-L1 induces a co-inhibitory signal and promotes T-cell apoptosis, anergy or functional exhaustion [50,51]. Thus, PD-1/PD-L1 signalling plays critical roles in regulating autoimmunity, immune responses after transplantation and cancer immunity. Consistent with these physiological functions, PD-1 deficiency has been found to induce macrophage polarization to the M1 phenotype after spinal cord injury in mice [52]. Blockade of the PD-1/PD-L1 pathway is currently being tested in the clinic as a therapeutic approach to target cancer [53,54]. In this study, we found that the ectopic expression of miR-16 in IL-4-induced M2 macrophages led to a significant reduction in PD-L1 expression in these cells, suggesting that PD-L1 might be a target of miR-16 that mediates its effect on macrophage polarization. In previous studies, miR-200 and miR-513 have been reported to regulate the expression of PD-L1 [55,56], which suggests that the expressional control of PD-L1 might be dependent on cell type/context.
Although this study was carried out in isolated peritoneal macrophages, the effects observed here may well translate to in vivo macrophages under physiological and various pathological situations, which obviously requires further studies for evaluation and characterization. Although miR-16 is well demonstrated to be down-regulated in multiple cancers, its expression status in TAMs and potentially other stromal cells within the tumour microenvironment should be carefully examined for any potential correlation with the clinicopathological features of tumours. Mechanistically, we found a correlation between the expression of miR-16 and PD-L1 in macrophages; however, it remains unknown whether PD-L1 is a direct target of miR-16. Furthermore, it would be desirable to perform a systematic analysis of gene expression profiles in macrophages with and without miR-16 expression, to obtain a more thorough picture of the potential gene targets of miR-16 and of functional implications of miR-16 beyond macrophage polarization.
In summary, we identified a novel function of miR-16 in macrophage polarization, namely promoting the M1 phenotype, with PD-L1 serving as a (direct or indirect) miR-16 target that mediates this process. Therefore, the significance of miR-16 in cancer treatment might be twofold: (i) shifting the macrophage balance from M2-dominant immunosuppression to an M1-mediated antitumour phenotype; and (ii) down-regulating PD-L1 to block immune evasion. Given the significance of M1 and M2 macrophages in various diseases other than cancer, including autoimmunity and resistance after transplantation, this study may provide a novel therapeutic tool for the immune regulation of various diseases.
Supporting information
Additional Supporting Information may be found online in the supporting information tab for this article:
Figure S1 The isolated primary peritoneal macrophages are of high purity. The purity of cells isolated from the mouse peritoneum was determined by flow cytometry for F4/80+ cells (red). As a negative control, cells were stained with isotype-matched IgG (grey).
Figure S2
The endogenous miR-16 level is reduced in M2 cells compared with M1 cells. Primary peritoneal macrophages were isolated and treated in the presence of IFN-γ+LPS or IL-4 for 36 hrs.
The expression of miR-16 was examined by quantitative RT-PCR. **P < 0.01.
Figure S3 The sorted CD4+ T cells from mouse spleen are of high purity. Cells isolated from mouse spleen were stained with PE-conjugated anti-CD4 antibody and examined by flow cytometry before (left) and after (right) sorting.
Figure S4 PD-L1 is down-regulated by miR-16 in macrophages. The surface expression of PD-L1 in M1, M2, M2-control and M2-miR-16 cells was examined by flow cytometry, with the percentage of PD-L1+ cells presented and compared between groups. *P < 0.05, compared with M2 or M2-control cells.
"Biology"
] |
Sex-Dependent Synaptic Remodeling of the Somatosensory Cortex in Mice With Prenatal Methadone Exposure
Rising opioid use among pregnant women has led to a growing population of neonates exposed to opioids during the prenatal period, but how opioids affect the developing brain remains to be fully understood. Animal models of prenatal opioid exposure have uncovered deficits in somatosensory behavioral development that persist into adolescence, suggesting opioid exposure induces long-lasting neuroadaptations in somatosensory circuitry such as the primary somatosensory cortex (S1). Using a mouse model of prenatal methadone exposure (PME) that displays delays in somatosensory milestone development, we performed an unbiased multi-omics analysis and investigated synaptic functioning in the S1, where touch and pain sensory inputs are received in the brain, of early adolescent PME offspring. PME was associated with numerous changes in protein and phosphopeptide abundances in the S1 that differed considerably between sexes. Although prominent sex effects were discovered in the multi-omics assessment, functional enrichment analyses revealed that the protein and phosphopeptide differences were associated with synapse-related cellular components and synaptic signaling-related biological processes, regardless of sex. Immunohistochemical analysis identified diminished GABAergic synapses in both layer 2/3 and layer 4 of PME offspring. These immunohistochemical and proteomic alterations were associated with functional consequences, as layer 2/3 pyramidal neurons revealed reduced amplitudes and a lengthened decay constant of inhibitory postsynaptic currents. Lastly, in addition to reduced cortical thickness of the S1, cell-type marker analysis revealed reduced microglia density in the upper layer of the S1 that was primarily driven by PME females. Taken together, our studies show lasting, sex-dependent changes in synaptic function and microglia of the S1 caused by PME.
INTRODUCTION
Despite efforts to curtail the opioid addiction crisis, opioid use and misuse continue to represent a major health concern. As the crisis has developed, opioid-exposed infants have emerged as a particularly vulnerable population that is relatively understudied. A significant rise in maternal opioid use disorder (OUD) at delivery has translated into a substantial increase in neonatal opioid withdrawal syndrome (NOWS) of 3.3 per 1,000 births, representing an 82% increase in NOWS between 2010 and 2017 (1). Indeed, nearly half of all states within the US witnessed at least a 100% increase in both NOWS and maternal OUD, with some states seeing a nearly fourfold increase in NOWS rates during this time period (1). Although often complicated by significant variations in the prenatal/postnatal environment, prenatal opioid exposure is associated with numerous physical and developmental impairments including poorer outcomes at birth and deficits in attention, behavioral regulation, motor skills, and cognitive performance throughout early childhood development (2-4).
In an effort to advance our understanding of the clinical implications of prenatal opioid exposure, there has been a growing interest in developing preclinical models of prenatal opioid exposure (5,6). These animal models have generally recapitulated findings described in clinical studies with prenatal opioid exposed animals demonstrating hyperactivity (7), cognitive dysfunction (8,9), and delayed neurodevelopment (10,11). To better replicate epidemiological trends in maternal opioid use (12,13), our laboratory developed a mouse model of prenatal methadone exposure (PME) as this recapitulates the growing proportion of prenatal opioid exposed cases resulting from treatment of OUD in reproductive age women (14). Rodent pups with PME exhibited withdrawal-like symptoms at birth, reduced growth, and altered behavior in an open field when repeatedly assessed throughout the weaning period (14). Additionally, several developmental milestones of sensorimotor-based behaviors were delayed in PME offspring including cliff aversion, surface righting, and the forelimb grasp task indicating offspring may struggle to translate multimodal sensory input into motor behaviors (14). Indeed, we discovered motor neurons of the primary motor cortex exhibited alterations in sub-threshold firing properties and local circuitry associated with this aberrant behavioral development of PME mice (14).
The maladaptive development of somatosensory circuitry may contribute to the sensorimotor behavioral phenotype of these PME mice. For instance, tactile information via whisker stimulation is necessary for the display of some developmental milestones such as the cliff aversion (15). These findings indicate the somatosensory system, specifically the primary somatosensory cortex (S1), may be an integral component of the neural circuit controlling reflexive behaviors during early development. To determine if pathological adaptations exist in the S1 of our PME model that may contribute to the impaired sensorimotor behavioral development (14), we performed quantitative global proteomics and phosphoproteomics of the S1 alongside electrophysiological and neuroanatomical assessments of the S1 excitatory and inhibitory synapses in early adolescent PME and prenatal saline exposed (PSE) offspring.
Animals and Model Generation
Protocols were approved by the Indiana University School of Medicine Institutional Animal Care and Use Committee, and guidelines established by the National Institutes of Health were used to conduct animal care and research. An extensive description and characterization of model generation have been published elsewhere (14). Female C57BL/6J mice were randomly assigned to receive either saline (10 mL/kg) or oxycodone treatments to model oxycodone dependence prior to initiating treatment for OUD. We have previously demonstrated this oxycodone dosing strategy induces robust opioid dependency (14). All saline or oxycodone doses were administered subcutaneously twice daily at least 7 hours apart. Following 9 days of oxycodone injections, oxycodone-dependent mice began receiving methadone (10 mg/kg s.c. b.i.d.) while saline-treated animals continued to receive saline injections. Five days following the start of methadone treatment, an 8-week-old C57BL/6J male mouse was placed into the cage of each female for 4 days. Methadone or saline treatments continued throughout the remainder of pregnancy and the postnatal period up to weaning. We previously demonstrated that this dose of methadone leads to plasma levels within the therapeutic range and produces dependency in both dams and offspring (14). Additionally, we find this dosing strategy only minimally impacts pregnancy characteristics and does not influence maternal care (14). Oxycodone and methadone were obtained from the National Institute on Drug Abuse Drug Supply Program. Offspring in both our previous study and the current one were weaned at approximately 3 weeks of age and group housed (3-5 per cage). Early adolescent offspring (P21-P36) were used for the proteomics, immunohistochemical, and electrophysiological studies described here.
Protein Preparation
Sample preparation, mass spectrometry analysis, bioinformatics, and data evaluation for quantitative proteomics and phosphoproteomics experiments were performed in collaboration with the Indiana University Proteomics Core similar to our previous studies (16).
Animals were rapidly decapitated without anesthesia between 1 p.m. and 4 p.m. by a blinded researcher and tissue was dissected bilaterally. Slices were cut in a 0.5 mm coronal mouse brain matrix and the whole S1 was carefully dissected from each slice. Tissue was immediately snap frozen in isopentane on dry ice and stored until later processing. Flash-frozen brain lysates were homogenized using a BeadBug™ 6 (Benchmark Scientific) (18). Labelling reactions were quenched with 0.3% hydroxylamine (v/v) at room temperature for 15 min. Labeled peptides were then mixed and dried by speed vacuum. The TMT-labeled peptide mix was desalted to remove excess label using a 100 mg Waters SepPak cartridge, eluted in 70% acetonitrile, 0.1% formic acid and lyophilized to dryness.
Phosphopeptide Enrichment
Phosphopeptides were enriched from the mixed, labeled peptides on one spin tip from a High-Select™ TiO2 Phosphopeptide Enrichment Kit (capacity of 1-3 mg; Thermo Fisher Scientific, catalog A32993). After preparing the spin tips, labeled and mixed peptides were repeatedly applied to the TiO2 spin tip, eluted and immediately dried as per the manufacturer's instructions. Prior to LC-MS/MS, the phosphopeptides were resuspended in 25 µL of 0.1% formic acid. The flow-through from each tip was saved for global proteomics.
Nano-LC-MS/MS Analysis
Nano-LC-MS/MS analyses were performed on an EASY-nLC™ HPLC system (SCR: 014993, Thermo Fisher Scientific) coupled to an Orbitrap Fusion™ Lumos™ mass spectrometer (Thermo Fisher Scientific). One fifth of the phosphopeptides and one tenth of each global peptide fraction were loaded onto a reversed-phase EasySpray™ C18 column (2 μm, 100 Å, 75 μm × 50 cm, Thermo Scientific Cat No: ES802A) at 400 nL/min. Peptides were eluted from 4 to 28% mobile phase B [mobile phases A: 0.1% FA, water; B: 0.1% FA, 80% acetonitrile (Fisher Scientific Cat No: LS122500)] over 160 min; 28-35% B over 5 min; 35-50% B for 14 min; and dropping from 50 to 10% B over the final 1 min. The mass spectrometer was operated in positive ion mode with a 4 s cycle time data-dependent acquisition method with advanced peak determination and Easy-IC (internal calibrant). Precursor scans (m/z 400-1750) were done with an Orbitrap resolution of 120,000, RF lens 30%, maximum injection time 50 ms, and standard AGC target, including charges of 2-6 for fragmentation with 60 s dynamic exclusion. MS2 scans were performed with a fixed first mass of 100 m/z, 34% fixed CE, 50,000 resolution, 20% normalized AGC target and dynamic maximum injection time. The data were recorded using Thermo Fisher Scientific Xcalibur (4.3) software (Thermo Fisher Scientific Inc.).
Proteome and Phosphoproteome Data Processing
Resulting RAW files were analyzed in Proteome Discoverer™ 2.5 (Thermo Fisher Scientific, RRID: SCR_014477) with a Mus musculus UniProt FASTA plus common contaminants. SEQUEST HT searches were conducted with a maximum of 3 missed cleavages, a precursor mass tolerance of 10 ppm, and a fragment mass tolerance of 0.02 Da. Static modifications used for the search were: 1) carbamidomethylation on cysteine (C) residues; 2) TMTpro label on lysine (K) residues and the N-termini of peptides. Dynamic modifications used for the search were TMTpro label on the N-termini of peptides, oxidation of methionine, phosphorylation on serine, threonine or tyrosine, and acetylation, methionine loss, or acetylation with methionine loss on protein N-termini. The Percolator false discovery rate was set to a strict setting of 0.01 and a relaxed setting of 0.05. The IMP-ptmRS node was used for all modification site localization scores. Values from both unique and razor peptides were used for quantification. In the consensus workflows, peptides were normalized by total peptide amount with no scaling. Quantification methods utilized TMTpro isotopic impurity levels available from Thermo Fisher Scientific. Reporter ion quantification was allowed with an S/N threshold of 7 and a co-isolation threshold of 50%. Data shown are PME/PSE abundance value ratios (ARs).
Electrophysiology Recordings
Whole-cell, voltage-clamp recordings from pyramidal neurons in layer 2/3 (L2/3) of the S1 barrel fields (between bregma −0.22 and −1.94 mm) were carried out at 29-32°C, and aCSF was continuously perfused at a rate of 1-2 mL/min. Recordings were made using a Multiclamp 700B amplifier (Axon Instruments). Slices were visualized on an Olympus BX51WI microscope (Olympus Corporation of America). Pyramidal neurons were identified by their size, membrane resistance, and capacitance. Patch pipettes were prepared from filament-containing borosilicate micropipettes (World Precision Instruments) using a P-1000 micropipette puller (Sutter Instruments) and had a 2.0-4.0 MΩ resistance. For both inhibitory and excitatory currents, tetrodotoxin (500 nM) was added to the aCSF. For excitatory currents, the internal solution contained (in mM) 120 CsMeSO3, 5 NaCl, 10 TEA-Cl, 10 HEPES, 5 lidocaine bromide, 1.1 EGTA, 0.3 Na-GTP, and 4 Mg-ATP, and picrotoxin (50 µM) was added to the aCSF to isolate excitatory transmission. For inhibitory currents, the internal solution contained (in mM) 120 CsCl, 10 HEPES, 10 EGTA, 4 MgCl2, 2 Mg-ATP, 0.5 Na-GTP, and 5 lidocaine, and 5 µM NBQX and 50 µM AP-5 were added to the aCSF to isolate inhibitory transmission. After a stabilization period of at least 5 min, miniature inhibitory or excitatory postsynaptic currents (mIPSCs and mEPSCs, respectively) were measured over the course of a 3-min gap-free recording for mEPSCs and 2 min for mIPSCs. Data were acquired using Clampex 10.3 (Molecular Devices).
Electrophysiology Data Processing
For all recordings, series resistance was monitored and only cells with a stable series resistance (less than 25 MΩ and that did not change more than 15% during recording) were included for data analysis. mEPSC and mIPSC data were processed via MiniAnalysis software (Synaptosoft Inc.).
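The inclusion criteria above translate naturally into a filtering step. Below is a minimal sketch, assuming per-cell series-resistance readings from the start and end of each recording are stored in a pandas DataFrame with hypothetical column names.

```python
import pandas as pd

# Hypothetical per-cell series resistance (MΩ) at the start and end of recording
cells = pd.DataFrame({
    "cell_id":  ["c1", "c2", "c3", "c4"],
    "rs_start": [12.4, 18.9, 24.0, 10.2],
    "rs_end":   [13.1, 26.5, 24.8, 12.1],
})

MAX_RS = 25.0      # exclude cells whose series resistance exceeds 25 MΩ
MAX_DRIFT = 0.15   # exclude cells that drift more than 15% during the recording

stable = (
    (cells[["rs_start", "rs_end"]].max(axis=1) < MAX_RS)
    & ((cells["rs_end"] - cells["rs_start"]).abs() / cells["rs_start"] <= MAX_DRIFT)
)
print(cells[stable])  # here c1 and c3 pass; c2 exceeds 25 MΩ, c4 drifts ~19%
```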
Immunohistochemistry
Offspring were anesthetized with isoflurane and perfused with 4% paraformaldehyde prepared in PBS for 10 min at a pump rate of 2 mL/min. Fixed brains were sectioned into 100 μm coronal sections (between bregma −0.1 and −1.94 mm) using a Leica VT-1000 vibrating microtome (Leica Microsystems) and stored in antigen-preservation solution (PBS with 50% ethylene glycol and 1% polyvinylpyrrolidone) at −20°C until later analysis. For synaptic marker (VGAT, gephyrin, VGluT1, VGluT2, and PSD95) staining, sections were permeabilized with 2% Triton X-100, incubated with a blocking solution (3% normal goat serum prepared in PBS with 0.3% Triton X-100) and then incubated overnight with primary antibody prepared in blocking solution (see Table 1 for concentrations and sources). For S100β and Iba1 staining, sections were permeabilized with 0.3% Triton X-100, incubated with blocking solution and then incubated overnight with primary antibody prepared in blocking solution. An appropriate secondary antibody conjugated with an Alexa series fluorophore was used to detect the primary antibody. DAPI (100 ng/ml, Thermo Fisher) or Draq5 (1:10,000 dilution, Cell Signaling) was included in the secondary antibody solution to stain nuclei.
For imaging synaptic marker staining, Z-stack confocal images were acquired from both hemispheres with a Nikon A1 confocal microscope with a 60X/NA 1.4 objective at 3× software zoom or a Leica SP8 confocal microscope with a 63X/NA 1.2 objective at 2.5× software zoom. The Z-stacks were taken at 0.1 µm intervals (for VGAT + gephyrin) or 0.2 µm intervals (for VGluT1/2 + PSD95), and 2-4 µm of total thickness was imaged. Two images per hemisphere were acquired, and both hemispheres were imaged for each animal. We utilized Imaris (Bitplane, Zurich, Switzerland) to quantify synaptic puncta at the three-dimensional level and established the data analysis workflow to quantify synaptic number according to the published literature (21-24). The volume occupied by nuclei and vasculature varied within each image, robustly impacting the density of synaptic marker quantification. To accurately estimate the neuropil-occupied volume, we first used the Surface module to create surface objects of nuclei and vasculature-like structures. Next, the gephyrin or PSD95 channel was masked by the nuclei and vasculature objects to exclude the volume they occupied. The post-masked gephyrin or PSD95 channel was used to generate a surface object containing the volume (neuropil object) to be analyzed. For spot detection, we followed similar procedures and parameter settings as described previously (21-24). Specifically, the pre-synaptic (VGluT1, VGluT2, and VGAT) and post-synaptic (gephyrin and PSD95) puncta were detected by the Imaris Spot module with 0.5 µm and 0.3 µm diameters according to the published literature; in general, the diameter of synaptic puncta is between 0.25 and 0.8 µm (25,26). In our experience analyzing all acquired images (~400 images), the automatic threshold set by Imaris was unable to detect synaptic puncta reliably. To find the optimal threshold for spot detection, we first manually defined the detection threshold for one image from each animal. The threshold that detected the most synaptic puncta without creating artifacts was applied to analyze all images and generate the spot layer for each synaptic marker. Only synaptic puncta inside the neuropil object were used for subsequent analysis. Next, we determined how many pre-synaptic spots were directly apposed to post-synaptic spots (defined as a synapse at the anatomical level) within a distance of 0.5 µm. The juxtaposed synaptic puncta of VGluT1/PSD95, VGluT2/PSD95, and VGAT/gephyrin were defined as intracortical excitatory, thalamocortical excitatory, and inhibitory neurochemical inputs, respectively. Synaptic density was calculated as the number of synapses detected in a dataset divided by the volume of the dataset. All image acquisition and data analysis were performed in a blinded manner. For visualizing S100β and Iba1 staining, Z-stack confocal images were acquired from both hemispheres with a Nikon A1 confocal microscope with a 10X/NA 0.45 objective or a Leica confocal microscope with a 10X/NA 0.75 objective. The Z-stacks were taken at 1 µm intervals, and 5 µm of total thickness was imaged. One image per hemisphere was acquired, and both hemispheres were imaged for each animal. Projection images of 5 µm thickness were used for image quantification using NIH ImageJ software. If a location was damaged or folded and thus unable to be quantified, the image was discarded. All image acquisition and data analysis were performed in a blinded manner.
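The synapse-counting rule described above (a pre-synaptic punctum directly apposed to a post-synaptic punctum within 0.5 µm) can be expressed compactly with a spatial index. The sketch below is not the Imaris pipeline itself; it assumes puncta centroids (in µm) exported from the spot detection, with hypothetical random coordinates standing in for real data.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
# Hypothetical 3D centroids (x, y, z in µm) of detected puncta inside the neuropil
pre_xyz = rng.uniform(0, 50, size=(300, 3))    # e.g., VGAT spots
post_xyz = rng.uniform(0, 50, size=(280, 3))   # e.g., gephyrin spots

tree = cKDTree(post_xyz)
# Nearest post-synaptic spot within 0.5 µm of each pre-synaptic spot (inf = no match)
dist, idx = tree.query(pre_xyz, distance_upper_bound=0.5)
n_synapses = np.isfinite(dist).sum()  # pre spots apposed to a post spot

volume_um3 = 50.0 ** 3  # neuropil volume of this hypothetical dataset
print(f"putative synapses: {n_synapses}, density: {n_synapses / volume_um3:.2e} per µm³")
```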
Gene Ontology Enrichment Analysis
All analyses are presented as PME relative to PSE (e.g., log2 abundance ratios of PME/PSE). For overrepresentation analysis of Gene Ontology (GO) terms, the UniProt accessions of all differentially abundant proteins (p < 0.05) were submitted to the g:Profiler g:GOSt functional profiling platform (27). For settings, "only annotated genes" was selected for the statistical domain scope and the significance threshold was set to Benjamini-Hochberg FDR < 0.05. Electronic GO annotations were excluded and the term size was filtered to between 5 and 2000. The full results of the GO analysis are provided in the Supplementary Material and at https://github.com/gggrecco/S1-Omics. The biological process (BP) and cellular component (CC) terms were exported and subsequently processed via REViGO (Reduce and Visualize Gene Ontology) to reduce redundancy, summarize, and better visualize GO enrichment as a network (28). This network was clustered using the AutoAnnotate plugin in Cytoscape and further formatted to generate a publication-ready figure.
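For readers who wish to reproduce this step programmatically, g:Profiler also provides a Python client (the gprofiler-official package) whose options roughly mirror the web settings described above. The sketch below uses a hypothetical gene list, and argument names should be verified against the installed client version.

```python
from gprofiler import GProfiler  # pip install gprofiler-official

gp = GProfiler(return_dataframe=True)
# Hypothetical differentially abundant proteins (gene symbols or UniProt accessions)
query = ["Dlg4", "Syt1", "Bsn", "Grin2b", "Camk2b", "Ank3"]

results = gp.profile(
    organism="mmusculus",
    query=query,
    sources=["GO:BP", "GO:CC"],           # biological process and cellular component
    significance_threshold_method="fdr",  # Benjamini-Hochberg, as in the text
    user_threshold=0.05,
)
# Keep terms of moderate size, mirroring the 5-2000 term-size filter used above
results = results[results["term_size"].between(5, 2000)]
print(results[["source", "name", "p_value"]].head())
```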
Kinase-Substrate Enrichment Analysis
A kinase-substrate enrichment analysis (KSEA) of the phosphoproteomics data was performed using the KSEA App (https://casecpb.shinyapps.io/ksea/) (29). All identified phosphopeptides with quantified abundance ratios (PME/PSE) and confirmed phosphosite modifications were utilized for the KSEA. PhosphoSitePlus + NetworKIN (NetworKIN score cutoff of 2) was used as the kinase-substrate dataset. Results were FDR-corrected (<0.05), and a z-score of enrichment was calculated to determine the normalized magnitude of up-regulation or down-regulation of kinases (PME vs. PSE). The full results of the KSEA analysis are provided in the Supplementary Material and at https://github.com/gggrecco/S1-Omics. The kinase scores resulting from the KSEA were exported to Coral and overlaid onto kinome trees to better visualize patterns in kinase regulation, where branch color represents the significance level, node color represents the z-score of enrichment, and node size represents the magnitude of enrichment (absolute value of the z-score) (30).
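The kinase score reported by the KSEA App is, in essence, a z-statistic comparing the mean log2 ratio of a kinase's annotated substrates against the full phosphoproteome background. A minimal sketch of that calculation, using hypothetical log2(PME/PSE) ratios, is shown below.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical log2(PME/PSE) ratios for all quantified phosphosites
all_log2fc = rng.normal(0.0, 0.6, size=2000)
# Hypothetical substrate set for one kinase, shifted to mimic activation
substrates = all_log2fc[:25] + 0.4

m = len(substrates)
z = (substrates.mean() - all_log2fc.mean()) * np.sqrt(m) / all_log2fc.std(ddof=1)
p = 2 * stats.norm.sf(abs(z))  # two-sided; FDR-correct across all kinases in practice
print(f"kinase z-score = {z:.2f}, p = {p:.3g}")
```

A positive z-score indicates the kinase's substrates are collectively more phosphorylated in PME than in PSE, which is what the node colors on the kinome trees encode.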
Electrophysiology and Immunohistochemistry Analysis
Data are graphically presented as the mean ± SEM for repeated measures or as dot plots displaying all individual data points. The level of significance was set a priori at p < 0.05. All experiments were performed using both male and female offspring. To minimize potential litter effects, no more than two males and two females per litter were utilized for any study. All studies were sufficiently powered to detect sex differences, with sex considered as a factor. Immunostaining and electrophysiology statistical analyses were conducted using GraphPad Prism 9 software. ANOVAs with Sidak's post hoc tests were used for analyzing all electrophysiology and immunostaining data.
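As an illustration of the exposure × sex design described above, the following sketch fits a two-way ANOVA with statsmodels on hypothetical data; Sidak-corrected post hoc comparisons could then be layered on top (e.g., pairwise t-tests adjusted with statsmodels.stats.multitest.multipletests using method='sidak').

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
# Hypothetical synapse densities for a 2 (exposure) x 2 (sex) design
df = pd.DataFrame({
    "density": np.concatenate([
        rng.normal(10.0, 1.5, 15),  # PSE male
        rng.normal(7.0, 1.5, 15),   # PME male (reduced, mimicking the reported effect)
        rng.normal(10.0, 1.5, 15),  # PSE female
        rng.normal(9.5, 1.5, 15),   # PME female
    ]),
    "exposure": ["PSE"] * 15 + ["PME"] * 15 + ["PSE"] * 15 + ["PME"] * 15,
    "sex": ["M"] * 30 + ["F"] * 30,
})

model = ols("density ~ C(exposure) * C(sex)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects of exposure/sex and their interaction
```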
Differential Protein and Phosphopeptide Expression
To initiate an exploration into the possible mechanisms underlying the aberrant behavioral development in PME offspring, we collected whole S1 cortices from adolescent male and female PME and PSE offspring for quantitative proteomic and phosphoproteomic analysis. Overall, we identified 10,333 proteins and 3,231 phosphopeptides in the S1 of offspring. For the global proteome, 83 proteins were differentially abundant in males while 52 were differentially abundant in females (p < 0.05; Figures 1A,B). For the phosphoproteome, 89 phosphopeptides were differentially abundant in males and 13 were differentially abundant in females (p < 0.05; Figures 1C,D). These differentially abundant proteins and phosphorylated proteins included synaptic vesicle release machinery (synaptotagmin, bassoon, VAT1, and RIMS1), ion channels (GluN2B, GlyR α4 subunit, and voltage-dependent L-type calcium channel β4), proteins associated with the postsynaptic signaling response (GRIP1, SAP90/PSD95-associated proteins, and CaMKIIβ), and various proteins associated with maintaining synaptic structure (microtubule-associated proteins, ankyrin 3, and NCAM). For a full list of these proteins, abundances for each sample, and p values, please see the Supplementary File spreadsheet. Surprisingly, there was very little overlap in the differentially abundant proteins or phosphopeptides between PME males and PME females (Figures 2A,B). These data suggest that PME has a sex-dependent impact on the S1 proteome and phosphoproteome.
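The differential-abundance screen described above amounts to a per-protein comparison of PME vs. PSE abundances. A minimal sketch of such a screen, assuming a matrix of normalized reporter-ion abundances with hypothetical sample columns (the published analysis used Proteome Discoverer), is shown below.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(3)
pme_cols = [f"PME_{i}" for i in range(1, 5)]
pse_cols = [f"PSE_{i}" for i in range(1, 5)]
# Hypothetical normalized abundances: rows = proteins, columns = samples
data = pd.DataFrame(rng.normal(10.0, 1.0, size=(1000, 8)),
                    columns=pme_cols + pse_cols,
                    index=[f"protein_{i}" for i in range(1000)])

t, p = stats.ttest_ind(data[pme_cols], data[pse_cols], axis=1)  # row-wise t-tests
log2_ratio = np.log2(data[pme_cols].mean(axis=1) / data[pse_cols].mean(axis=1))
hits = data.index[p < 0.05]
print(f"{len(hits)} proteins pass p < 0.05 in this simulated null dataset")
```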
Gene Ontology Functional Enrichment
To further probe differences in the proteome and phosphoproteome of the S1, a Gene Ontology (GO) enrichment analysis to identify enriched biological processes (BPs) and cellular components (CCs) was performed using g:Profiler on the networks of significantly differentially abundant proteins and phosphopeptides (27). The analysis of the global proteome revealed that 0 BPs and only 6 CCs in PME males, compared with 36 BPs and 28 CCs in PME females, were enriched in the network of differentially abundant proteins (FDR < 0.05). The phosphopeptide network appeared more enriched than the global proteome network: in the networks of differentially abundant phosphopeptides, 212 BPs and 50 CCs in PME males and 242 BPs and 70 CCs in PME females were identified as enriched. The full identities and descriptions of the terms can be found in the Supplementary Information. To facilitate the identification of patterns among the enriched terms, REViGO (28) was utilized to reduce redundancy and consolidate the similarities among the large lists of enriched BPs and CCs. In Cytoscape, these terms were clustered into larger groups based on shared identities and visualized as nodes, with edges indicating overlapping proteins associated with the BP or CC (Figures 3, 4, respectively). Many of the larger clusters of BPs enriched in the network of differentially abundant proteins and phosphorylated proteins were related to neuronal development, vesicle localization and transport, and synaptic organization (Figure 3). For the enriched CCs, large clusters were frequently associated with the synapse, dendrite, and axon of the neuron (Figure 4). Although there were few overlapping proteins/phosphopeptides identified as differentially abundant in both PME males and females, there were many more similarities in the BPs and CCs enriched in the proteome and phosphoproteome of both PME males and females (Figures 2C,D). These network analyses suggest PME disrupts neuronal development and synaptic function through wide-scale changes in the proteomic and phosphoproteomic landscape.
Kinase-Substrate Enrichment Analysis
Lastly, a kinase-substrate enrichment analysis (KSEA) was performed (29) to estimate changes in kinase pathways based on specific phosphorylation site modifications. The significant kinases predicted to be disrupted in the S1 of PME males and females can be seen in Figures 5A,B, respectively. The full enrichment results, including KSEA kinase scores and significance thresholds, are found in the Supplementary Material. The KSEA output was then overlaid onto kinome trees using Coral (30) to visualize patterns in enrichment among the various kinase families (Figures 6, 7). In both females (Figure 6) and males (Figure 7), prenatal exposure to methadone was associated with many changes in the CMGC kinases (cyclin-dependent, mitogen-activated, glycogen synthase and CDC-like kinase family), with notable differences in the cyclin-dependent kinases (Cdk9, Cdk5, Cdk1, and Cdk6). In postmitotic neurons, it is generally thought that most Cdks display low expression; however, Cdk5 has been shown to phosphorylate presynaptic and postsynaptic proteins in mature neurons, suggesting Cdks may impact plasticity and neurotransmission in postmitotic neurons (31,32). The AGC kinases (protein kinase A, G, and C family), including PKC, PKA, and PKG, and the CAMK kinases (calcium- and calmodulin-regulated kinase family), including CaMKII, CaMK1, and CaMK4, were also predicted to be disrupted based on the differential phosphopeptide expression data. Members of the AGC and CAMK kinase families are well-known regulators of second messenger signaling cascades involved in synaptic signaling.
In summary, these multi-omic data indicate PME induces persistent and widespread changes to the S1 proteome and phosphoproteome with many effects associated with processes related to neurotransmission at the synapse in a sex-dependent manner.
Neurochemical Assessment of GABAergic Synaptic Markers
Given the large number of differentially abundant proteins associated with synaptic functioning and the enrichment in terms associated with the synapse, we investigated GABAergic synapse density in layer 2/3 (L2/3) and layer 4 (L4) of the S1 using the co-localization of the presynaptic vesicular protein VGAT and the postsynaptic protein gephyrin (see Figures 8A,B for representative images). The co-localization of VGAT and gephyrin was used to infer the presence of a "functional" GABAergic synapse (Figure 8D). Although a main effect of exposure was present on the co-localization of gephyrin and VGAT in both L2/3 and L4, this exposure effect appeared to be driven by PME males, which exhibited a significantly reduced density of co-localization in both L2/3 (ANOVA: Exposure, F(1,63) = 29.14, p < 0.0001; Sex, F(1,63) = 17.75, p < 0.0001; Interaction, F(1,63) = 13.93, p = 0.0004; PME female vs. PSE female, p = 0.41; PME male vs. PSE male, p < 0.0001; Figure 8E, top) and L4 (Figure 8E, bottom). These findings indicate PME significantly reshapes GABAergic synapse development by reducing the number of putative functional GABAergic synapses, although this effect is more prominent in male offspring.
Electrophysiological Assessment of Inhibitory Neurotransmission
These neurochemical differences in GABAergic synaptic markers led us to functionally examine inhibitory neurotransmission (primarily GABAergic) in L2/3 pyramidal neurons using whole-cell patch clamp electrophysiology. Representative traces for mIPSCs in the S1 can be found in Figure 9A. The frequency of mIPSCs was significantly affected by sex but not by prenatal exposure (ANOVA: Exposure, F(1,38) = 0.017, p = 0.90; Sex, F(1,38) = 7.77, p = 0.0083; Interaction, F(1,38) = 0.349, p = 0.56; Figure 9B). PME offspring exhibited a significant decrease in the amplitude of mIPSCs compared to PSE offspring (ANOVA: Exposure, p = 0.024; Figure 9C) and a significantly lengthened decay constant (ANOVA: Exposure, p = 0.049; Figure 9E). These electrophysiological findings indicate the differences in neurochemical synaptic markers may have functional consequences for inhibitory transmission in L2/3 pyramidal neurons of the S1.
Neurochemical Assessment of Glutamatergic Synaptic Markers
To investigate if the impairments in inhibitory synapses in PME offspring also extended to excitatory synapses, we next assessed glutamatergic synapse density in L2/3 and L4 of the S1 using the co-localization of the presynaptic vesicular proteins VGluT1 or VGluT2 with the postsynaptic protein PSD-95. No exposure-related differences in VGluT1/PSD-95 co-localization were detected (Figure 10E), suggesting intracortical glutamatergic synapses are not significantly disrupted by PME. When assessing VGluT2 co-localization with PSD-95, we also did not discover any exposure-related effects on PSD-95 density in L2/3 (Figure 11E). This neurochemical assessment of putative functional glutamatergic synapses indicates PME may increase thalamocortical inputs in L4 and thalamocortical synaptic connections in L2/3.
Electrophysiological Assessment of Excitatory Neurotransmission
We followed up this neurochemical assessment of glutamatergic synaptic markers by assessing functional excitatory inputs to L2/3 pyramidal neurons using electrophysiology. Representative traces for mEPSCs in the S1 can be found in Figure 12A. No significant exposure-related differences were detected across mEPSC measures (Figure 12E). Similar to the limited effects of PME on the neuroanatomical data, PME does not appear to disrupt excitatory transmission in L2/3 pyramidal neurons of the S1.
Glial Cell Density and Cortical Thickness
Finally, we used the microglia-specific calcium-binding protein Iba1 and the calcium-binding protein S100β, which is primarily expressed in astrocytes and oligodendrocytes, to assess microglia and astrocyte density in the S1 (see Figures 13A-D). PME reduced microglia density in the upper layer of the S1 (Figure 13F). There was no effect of exposure on S100β density within the upper layer of the S1 (Figure 13J).
These results indicate PME reduces microglia densities in the upper layer of S1 with a greater impact in PME females but has minimal effects in the deep layer or in other glial cells. Lastly, in the process of assessing cell-type specific makers and synaptic markers in the S1, we also measured cortical thickness of the S1 in coronal slices and observed a significant reduction of cortical thickness as a result of PME which was primarily driven by PME females (ANOVA: Exposure, F (1,65) = 6.22, p = 0.015; Sex, F (1,65) = 0.617, p = 0.44; Interaction, F (1,65) = 2.83, p = 0.097; Figure 14). The results from kinase-substrate enrichment analysis were mapped onto kinome treeplots via Coral in which branch color corresponds to significance level, node color corresponds to z-score of enrichment, and node size correspond to magnitude of enrichment for kinase pathways in the S1 of males.
DISCUSSION
While the deleterious effects of opioids on the brain have canonically been described in regions associated with reward, the present study indicates prenatal exposure to opioids can impair S1 development. PME induced widespread changes to the proteomic and phosphoproteomic landscape of the S1. These multi-omic changes were associated with several differences in excitatory and inhibitory synaptic development and culminated in disrupted inhibitory neurotransmission in L2/3 pyramidal neurons. As these neurons represent a key node in the S1 microcircuit, excitatory/inhibitory imbalances in these cells could impair how inputs from L4 (the main recipient of thalamic information) are transferred to L5 pyramidal neurons (the main output neurons of the S1), which would likely lead to aberrant expression of sensorimotor behaviors. In addition to the alterations we have previously detailed in the motor cortex, these findings in the S1 could contribute to the impaired expression of various developmental milestones in our PME offspring (14). Impaired development of sensorimotor milestones and somatosensation has been widely observed in models of prenatal opioid exposure (11, 33-35). Similarly, deficits in S1 neurotransmission and in the morphology of S1 pyramidal cells that may contribute to these behaviors in opioid exposure models have also been described (35-38).
FIGURE 9 | PME impairs inhibitory transmission in L2/3 pyramidal neurons. (A) Schematic demonstrating a coronally sectioned brain slice used to acquire the S1 barrel fields (S1BF) for whole-cell voltage clamp recordings at approximately -0.82 mm bregma (left). Representative traces for miniature inhibitory postsynaptic currents (mIPSCs) in the S1 (right). Scale bars = 500 ms, 100 pA. (B) mIPSC frequency was not affected by PME. (C) The amplitude of mIPSCs was significantly reduced in PME offspring (ANOVA: Exposure, p = 0.024). (D) The rise time was not altered by prenatal exposure. (E) The decay constant was significantly lengthened in PME offspring (ANOVA: Exposure, p = 0.049). n = 8 PME mice (4M:4F), 21 neurons (10M:11F) and 8 PSE mice (4M:4F), 21 neurons (11M:10F). *p < 0.05.
Using a fentanyl exposure model in which dams can freely consume fentanyl orally, Alipio et al. have found that late adolescent mouse offspring exhibit numerous differences in the S1, including reduced excitatory transmission and increased inhibitory transmission in L5 pyramidal neurons (35). These L5 pyramidal neurons were also characterized by reduced dendritic branching and a smaller soma size, which occurred alongside reduced expression of the neurotrophic receptor TrkB (35). In L2/3 neurons, they discovered fentanyl exposure impaired long-term potentiation and increased the frequency of excitatory transmission, in contrast to our null effects of PME on L2/3 mEPSC frequency (36). The present findings add to their work in several important ways. First, we provide the first comprehensive assessment of the proteomic and phosphoproteomic impact of PME on the S1 in both males and females. As the study of how prenatal opioid exposure impacts the S1 and somatosensory behaviors progresses, these data will provide an excellent, freely available resource and a wealth of knowledge for any researcher to utilize. We identified numerous proteins and phosphopeptides of interest, and the enrichment analyses provide several molecular pathways that could serve to generate new hypotheses for future studies.
Additionally, our synaptic marker analysis provides useful neurochemical context to supplement the electrophysiology findings in the present study and in previous work (35,36). Although we did not observe alterations in excitatory transmission, we have identified changes in inhibitory transmission likely reflecting postsynaptic changes in L2/3 neurons. Lastly, our cell-type marker analysis is the first assessment to demonstrate that prenatal opioid exposure alters the density of glial cells in the S1, which could impact synaptogenesis and synaptic pruning. It is worth noting that our model of PME and the previously discussed model of perinatal fentanyl exposure differ in many ways (39). Our mouse model seeks to recapitulate a growing clinical scenario, prenatal exposure to the opioid agonists (e.g., methadone and buprenorphine) that treat OUD in pregnant women (12,13), whereas Alipio et al. are modeling recreational fentanyl misuse in women (39). This is important, as fentanyl and methadone differ in their potency for stimulating the mu opioid receptor (MOR; fentanyl >> methadone), their off-target activities (e.g., physiologically relevant NMDA receptor antagonism for methadone), and their pharmacokinetic profiles (methadone classically has a very long half-life, while fentanyl's is relatively short), all of which undoubtedly contribute to differences in their ability to cross the placenta and impact offspring development (40-43). Indeed, we and others have characterized the levels of methadone in offspring and determined that methadone tissue levels are quite high during the fetal period but drop to nearly undetectable levels in the first week of postnatal life (14,44), leading to withdrawal in offspring around postnatal day 1 (14), a withdrawal time course similar to clinical observations (45). Alipio et al. report classic opioid withdrawal symptoms shortly after weaning (postnatal day 22), suggesting that fentanyl passes into the breastmilk at high levels or that offspring have access to freely consume the fentanyl solution alongside the dam as they mature; however, an accompanying report of fentanyl tissue and/or plasma levels is not provided to lend any insights (39). Lastly, our recordings were completed in early adolescent mice (around 3-4 weeks of age), a period when synaptogenesis, gliogenesis, and myelination are still rapidly occurring, whereas Alipio et al. recorded from late adolescent mice (around 6-8 weeks of age), when brain development is more stable (46). While conflicting findings could be due to any combination of these key variations between studies, we believe the present findings will act in conjunction with those of Alipio et al. to bolster the current understanding of how prenatal exposure to opioids can disrupt somatosensory functioning. We initiated our exploration into the S1 by performing proteomics and phosphoproteomics of S1 bulk tissue from male and female PME and PSE offspring in an attempt to identify processes or pathways uniquely affected by PME. For the global proteome, more proteins were identified as differentially abundant in males (83) than in females (52), yet the network of differentially abundant proteins in males yielded minimal enrichment (only 6 CCs were identified), while the female network generated dozens of enriched BPs and CCs.
In females these BPs included "synaptic vesicle localization" and "synaptic vesicle clustering," and CCs included the "postsynapse," "dendrite," and "synaptic vesicle," indicating these terms are highly represented given the differentially abundant proteins in PME females. Although the differentially abundant proteins and their enrichment in the global proteome were quite distinct between males and females, the impact of PME on the phosphoproteome led to more similarities between males and females in the enrichment analyses. There were 38 CCs identified as enriched in both the male and female differential phosphopeptide networks, and these CCs included "dendritic spine," "presynaptic active zone," "postsynaptic density," "excitatory synapse," "glutamatergic synapse," and "inhibitory synapse." Similarly, 110 BPs were identified as enriched in both the male and female differential phosphopeptide networks, including processes such as "vesicle mediated transport in synapse," "synapse organization," "GABA secretion," and "GABA transport." The results from these enrichment analyses indicate there were many more alterations in proteins associated with these synaptic signaling processes and synaptic cellular locations than would be expected by chance. Interestingly, though, few proteins and phosphopeptides were identified as differentially abundant in both males and females, suggesting PME has unique effects on the male and female proteome/phosphoproteome. However, while the individual proteins/phosphopeptides may differ, in many cases the cumulative effect of these proteomic and phosphoproteomic changes still produced many similarities in GO and kinase enrichment. Nonetheless, these multi-omics findings served as an excellent source of hypothesis-generating data, as we later discovered several differences in GABAergic and glutamatergic synapses based on both anatomical and functional investigations.
There are some noteworthy limitations to bear in mind when considering this multi-omics analysis. First, although the differential protein/phosphopeptide expression in PME offspring indicated alterations in synaptic signaling were present, these enrichment analyses do not provide "directionality." For instance, while the phosphopeptide abundances in PME offspring indicate the "GABA transport" BP is enriched, a GO enrichment analysis does not tell us whether GABA transport is increased or decreased, only that this BP is significantly represented given the list of differentially abundant phosphopeptides. This may also explain why there was limited overlap in differentially expressed proteins/phosphopeptides yet greater overlap in GO enrichment. Additionally, one-to-one comparisons of the proteomics/phosphoproteomics data with the synaptic marker or electrophysiology findings remain difficult, as bulk S1 tissue was taken for the multi-omics analysis. Therefore, the quantified proteins/phosphopeptides may have originated in various S1 layers, glial cells, interneurons of the S1, or even presynaptic inputs from other brain regions. This likely explains why differences in the density of synaptic markers were discovered in the immunostaining analysis, but these protein markers were not identified as differentially abundant in the multi-omics analysis.
How methadone induces these sex-dependent effects on the S1 remains to be investigated. Both in vitro and in vivo work has determined that the developing central nervous system expresses opioid receptors and opioid peptides during the prenatal period (47-50). During embryonic development, MOR agonists, including methadone, appear to inhibit growth, differentiation, and proliferation of neural and glial progenitor cells (48,49). Additionally, the expression and functioning of the endogenous opioid system during early development is often transient, meaning the effects of opioid exposure on the developing brain may differ considerably from the effects of opioids on an adult brain (48). Therefore, exposure to the exogenous opioid methadone during this critical period of embryonic neurodevelopment could disrupt MOR-mediated signaling, leading to a lasting disruption of S1 neuronal development and synaptogenesis. Although we have previously identified sex-dependent effects on reward-related behavior in this PME model (51), the sex differences discovered across the various modalities of data collected in the present study add an additional layer of complexity that was unexpected. There is evidence for cross-talk between estrogen signaling and MOR expression (52,53). As hypothalamic-pituitary-adrenal axis dysfunction is observed in opioid-exposed offspring (54,55), it is possible PME also disrupts the hypothalamic-pituitary-gonadal axis, altering steroidal hormone concentrations or their receptors during the perinatal period, when the brain is uniquely sensitive to the enduring effects of these hormones (56), which may contribute to the observed sex differences in opioid-exposed offspring. The mechanisms underlying the sex-dependent effects of PME on offspring behavioral and brain development will require further investigation.
FIGURE 14 | Cortical thickness of the somatosensory cortex. PME significantly reduced the cortical thickness of the S1 (ANOVA: Exposure, p = 0.015). Although no significant main effect of sex or interaction with sex was present, this reduction in cortical thickness visually appears to be driven by PME females. n = 9 PME mice (4M:5F), 8 PSE mice (4M:4F); one image per hemisphere, two brain sections per animal.
In summary, our findings indicate PME induces prominent disruptions in the S1 in a sex-dependent manner. Dozens of proteins and phosphopeptides display differential abundance in PME offspring with functional enrichment in several relevant pathways including those related to synaptic transmission. PME offspring also exhibit layer-dependent differences in GABAergic markers, glutamatergic markers, and microglia density. Lastly, PME has functional consequences on S1 neurotransmission as L2/3 pyramidal neurons exhibit disrupted inhibitory transmission. These findings suggest deficits in sensorimotor development observed in models of prenatal opioid exposure may result from persistent neuroadaptations induced by opioid exposure during fetal development of the S1.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors without undue reservation. All supplementary files and raw data can be found at https://github.com/gggrecco/S1-Omics.
ETHICS STATEMENT
The animal study was reviewed and approved by Indiana University School of Medicine Institutional Animal Care and Use Committee.
AUTHOR CONTRIBUTIONS
GG, JH, BM, AM, H-CL, and BA designed experiments. GG, YG, and BR generated animals. ED and AM performed proteomics and phosphoproteomic studies. GG completed proteomic and phosphoproteomic analyses. JH and CH completed all immunostaining work. GG and BM completed electrophysiology studies and analyses. All authors discussed the results and contributed to all stages of manuscript preparation.
FUNDING
The mass spectrometry work performed in this study was done by the Indiana University Proteomics Core. Acquisition of the IUSM Proteomics Core instrumentation used for this project was provided by the Indiana University Precision Health Initiative.
High Order ADER Schemes for Continuum Mechanics
In this paper we first review the development of high order ADER finite volume and ADER discontinuous Galerkin schemes on fixed and moving meshes, since their introduction in 1999 by Toro et al. We present the modern variant of ADER based on a space-time predictor-corrector formulation in the context of ADER discontinuous Galerkin schemes with an a posteriori subcell finite volume limiter on fixed and moving grids, as well as on space-time adaptive Cartesian AMR meshes. We then present and discuss the unified symmetric hyperbolic and thermodynamically compatible (SHTC) formulation of continuum mechanics developed by Godunov, Peshkov, and Romenski (GPR model), which allows one to describe fluid and solid mechanics in one single unified first order hyperbolic system. In order to deal with free surface and moving boundary problems, a simple diffuse interface approach is employed, which is compatible with Eulerian schemes on fixed grids as well as with direct Arbitrary-Lagrangian-Eulerian methods on moving meshes. We show some examples of moving boundary problems in fluid and solid mechanics.
INTRODUCTION AND REVIEW OF THE ADER APPROACH
The development of high order numerical schemes for hyperbolic conservation laws has been one of the major challenges of numerical analysis in the last decades. Godunov [1] proved that no monotone linear scheme of second or higher order of accuracy can be constructed for the linear advection equation. Therefore, even if physical viscosity is considered, a linear high order scheme will exhibit spurious oscillations near discontinuities, as can be seen, for instance, for the Lax-Wendroff scheme, Lax and Wendroff [2]. A first idea to circumvent this theorem was proposed in Kolgan [3], where limited slopes are employed to produce a non-linear scheme of second order of accuracy in space. Since then, many high order numerical methods have been developed, such as the Total Variation Diminishing (TVD) methods and flux limiter methods (see, for instance, [4-9]). Although these methodologies were already well-established at the end of the last century, their major drawback was that they provided only global second order accuracy and reduced locally to first order in the vicinity of smooth extrema.
More advanced non-linear methods for advection dominated problems involve the family of ENO and WENO schemes, see Harten and Osher [10], Harten et al. [11], and Shu [12]. In particular, the method of Harten et al. [11] is a fully discrete high order scheme that can be re-interpreted in terms of the solution of a generalized Riemann problem (GRP), see Castro and Toro [13]. Moreover, it can be seen as a generalization of the MUSCL-Hancock method of van Leer, see van Leer [8], Toro [9], and Berthon [14].
Following the idea of solving a generalized Riemann problem (GRP), see also Ben-Artzi and Falcovitz [15], LeFloch and Tatsien [16], Ben-Artzi et al. [17], and Han et al. [18], the ADER approach (Arbitrary high order DErivative Riemann problem) was first put forward for the linear advection equation with constant coefficients by Millington et al. [19] and Toro et al. [20]. The first step of the methodology involves a piecewise polynomial data reconstruction, where a non-linear ENO reconstruction is applied in order to avoid spurious oscillations of the numerical solution. Then, a GRP is defined at each cell interface. Classically, the initial condition for the GRP was given by piecewise linear polynomials, and second order schemes could be obtained by constructing a space-time integral of the solution in an appropriate control volume [21,22], or by following a MUSCL approach, van Leer [23] and Colella [24]. An alternative methodology, proposed in Ben-Artzi and Falcovitz [25], consists in expressing the solution of the GRP as a Taylor series expansion in time. The ADER approach obtains the high order time derivatives of the GRP solution at the cell interface via the Cauchy-Kovalevskaya procedure, which replaces time derivatives by spatial derivatives using repeated differentiation of the differential form of the PDE. The spatial derivatives, which may also jump at the interface, are defined via the solution of linearized Riemann problems for the derivatives, where the linearization is carried out about the Godunov state obtained from the classical Riemann problem between the boundary-extrapolated values at the interface. In Figure 1, the classical piecewise constant data are plotted against a high order reconstruction, and the similarity solutions for both cases are sketched. Finally, these similarity solutions are used to construct the numerical flux. The resulting schemes are arbitrary high order accurate in both space and time, in the sense that they have no theoretical accuracy barrier.
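To make the Cauchy-Kovalevskaya step concrete, consider linear advection u_t + a u_x = 0, for which every time derivative can be exchanged for a space derivative, ∂_t^k u = (-a)^k ∂_x^k u. The following minimal Python sketch (an illustration of the principle only, not the production algorithm of the cited papers) turns the upwind reconstruction polynomial into the time-averaged interface state from which the ADER flux is formed:

```python
import numpy as np
from numpy.polynomial import Polynomial

def ader_interface_state(u_upwind: Polynomial, a: float, x_iface: float,
                         dt: float, order: int) -> float:
    """Time-average over [0, dt] of the Taylor expansion
    u(x, t) = sum_k t^k/k! * (-a d/dx)^k u(x, 0),
    evaluated at the cell interface, for linear advection u_t + a u_x = 0."""
    avg, deriv, fact = 0.0, u_upwind, 1.0
    for k in range(order):
        # the time average of t^k over [0, dt] is dt^k / (k + 1)
        avg += ((-a) ** k) * deriv(x_iface) * dt**k / (fact * (k + 1))
        fact *= k + 1
        deriv = deriv.deriv()
    return avg

# Example: quadratic reconstruction in the upwind cell (a > 0, so the left
# cell), interface at x = 1, third-order accurate state.
u_left = Polynomial([1.0, 0.5, -0.2])   # u(x, 0) = 1 + 0.5 x - 0.2 x^2
q = ader_interface_state(u_left, a=1.0, x_iface=1.0, dt=0.1, order=3)
print("ADER time-averaged interface state:", q)
```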
Since their introduction in Toro et al. [20] and Millington et al. [19], many extensions of the ADER methodology have been proposed. Regarding 2D linear PDEs, one may refer to Schwartzkopff et al. [26] and their simplification for the particular case of structured grids in Schwartzkopff et al. [27]. Moreover, non-linear systems have been initially addressed in Toro and Titarev [28] and Titarev and Toro [29]. Further applications of ADER on non-Cartesian meshes have been presented in Käser [30], Käser and Iske [31], Dumbser et al. [32], and Castro and Toro [13]. One should also mention the development of ADER schemes in the framework of discontinuous Galerkin (DG) finite element methods, see Qiu et al. [33], Dumbser and Munz [34] and Gassner et al. [35]. One of the main advantages of using DG is that the reconstruction step of classical ADER finite volume (ADER-FV) schemes can be skipped, since the discrete solution is already given by high order piecewise polynomials that can be directly evolved during each time step. Furthermore, ADER-DG schemes avoid the use of classical Runge-Kutta time stepping and thus provide efficient communication-avoiding schemes for parallel computing, see Fambri et al. [36] and allow for simple and natural time-accurate local time stepping (LTS), see Dumbser et al. [37].
An important step forward in the development of more general ADER schemes was achieved in Dumbser et al. [38], where a new class of ADER-FV methods was introduced. The main contribution of that paper is the introduction of a new element-local space-time DG predictor, which simultaneously allows the treatment of stiff source terms and the replacement of the cumbersome Cauchy-Kovalevskaya procedure. First, a high order WENO method is employed to compute a polynomial reconstruction of the data inside each spatial element; then, an element-local weak formulation of the conservation law is considered in space-time, and the predictor is applied to construct the time evolution of the WENO polynomials within each cell. Note that, in this step, the integration by parts is performed only in time, which differs from global space-time DG schemes [39,40], which are globally implicit. Finally, the cell averages are updated with an explicit fully discrete one-step scheme, considering the integral form of the equations. As a result, the proposed methodology maintains arbitrary high order of accuracy, while avoiding the issues related to the use of a Taylor series expansion in time. As already mentioned above, it naturally provides an approach for the treatment of stiff source terms (for further details on this topic, see [41] and references therein).
The above methodology can also be applied in the discontinuous Galerkin framework, as presented in Dumbser et al. [42], where a unified P_N P_M framework for arbitrary high order one-step finite volume and DG schemes was introduced. For other reconstruction-based DG schemes, see, e.g., Luo et al. [43,44]. Afterwards, the methodology was extended to solve a wide variety of different PDE systems, such as the resistive relativistic MHD equations, Dumbser and Zanotti [45]; non-conservative hyperbolic systems found in geophysical flows, Dumbser et al. [46], in which a well-balanced and path-conservative version of the scheme was developed; compressible multi-phase flows, Dumbser et al. [47]; the compressible Navier-Stokes equations, Dumbser [48]; and the compressible Euler equations and divergence-free schemes for MHD, Balsara et al. [49] and Balsara and Dumbser [50], where ADER schemes were used in combination with genuinely multidimensional Riemann solvers. The latest extensions concern the special and general relativistic MHD equations, see Zanotti et al. [51] and Fambri et al. [36], as well as the Einstein field equations of general relativity [52,53].
Later, ADER schemes have been extended to adaptive mesh refinement on Cartesian grids (AMR), in combination with time accurate local time stepping (LTS). This technique has initially been introduced in Dumbser et al. [54,55] for conservative and non-conservative hyperbolic systems, respectively. Moreover, the schemes of the ADER family were the first high order methods to be applied for the numerical solution of the unified first order hyperbolic formulation of continuum mechanics by Godunov, Peshkov and Romenski [56][57][58], see Dumbser et al. [59][60][61]. In the rest of this paper, we will refer to the Godunov-Peshkov-Romenski model of continuum mechanics as GPR model.
The ADER approach has also been extended to the direct Arbitrary-Lagrangian-Eulerian (ALE) framework, where the mesh moves with an arbitrary velocity, taken as close as possible to the local fluid velocity. Initially developed in one space dimension, it was soon extended to the two- and three-dimensional Euler equations on unstructured meshes, Boscheri and Dumbser [62,63], including the discretization of non-conservative products. Further works in this area involve the use of local time-stepping techniques [64,65]; the coupling with multidimensional HLL Riemann solvers, Boscheri et al. [66]; the solution of magnetohydrodynamics (MHD) problems [67,68]; the development of a quadrature-free approach to increase the computational efficiency of the overall method, Boscheri and Dumbser [69]; the use of curvilinear unstructured meshes, Boscheri and Dumbser [70]; and the extension to the GPR model, Boscheri et al. [71] and Peshkov et al. [72]. Furthermore, in Gaburro et al. [73] a novel algorithm to deal with moving non-conforming polygonal grids has been presented. The methodology reduces the typical mesh distortion arising in shear flows and provides high quality elements even for long-time simulations. An exactly well-balanced path-conservative version of this approach for the Euler equations with gravity can be found in Gaburro et al. [74]. Still in the ALE framework, within this article we will present new results for the family of ADER-FV and ADER-DG schemes on moving unstructured Voronoi meshes [75], as recently introduced in Gaburro [76] and Gaburro et al. [77].
It is well-known that, when dealing with high order schemes, special care must be paid to the limiting methodology employed. In most of the previously referenced papers, classical a priori limiters, such as the WENO reconstruction, have been used. Nevertheless, some alternative contributions to this topic can be found in the series of papers [51, 77-85], where a novel a posteriori sub-cell FV limiter for high order DG schemes, based on the MOOD paradigm of Clain et al. [86] and Diot et al. [87,88], has been employed.
Besides the references given above, which focus on the development of the ADER methodology with a local space-time Galerkin predictor, many recent papers have been devoted to the development of other families of ADER schemes, such as the classical ADER finite volume methods. Without pretending to be exhaustive, we may refer to Castro et al. [101], Dematté et al. [102], and references therein.
In this paper, as a promising application of the family of ADER schemes, we solve a diffuse interface formulation of the GPR model of continuum mechanics. In comparison with existing continuum mechanics models, the novel feature of the GPR model is that it incorporates the two main branches of continuum mechanics, fluid and solid mechanics, in one single unified PDE system. Recall that traditionally fluid and solid mechanics are described by PDE systems of different types, i.e., parabolic (viscous fluids) and hyperbolic (linear elasticity and hyperelasticity), which imposes many theoretical and technical difficulties if one wishes to model natural and industrial processes involving the co-existence of the fluid and solid states, such as fluid-structure interaction (FSI) problems; general solid-fluid transitions, as in melting and solidification processes, e.g., additive manufacturing, see for example Francois et al. [103]; flows of granular media [104]; and viscoplastic flows, e.g., debris flows, avalanches, mantle convection, and flows of many industrial Bingham-type fluids, see Balmforth et al. [105]. Due to the unified treatment of fluids and solids, the GPR model thus has a great potential for simplifying the modeling process and the code development for solving the aforementioned problems. Yet, before being applied to practical problems, the GPR model may require a coupling with an interface tracking/capturing technique for the modeling of moving material boundaries, such as in free surface flows or solid body motion. In particular, in this paper we couple the GPR model with a simple diffuse interface approach, see Tavelli et al. [85], Dumbser [106], Gaburro et al. [107], and Kemm et al. [108]. Very interesting computational results with similar diffuse interface approaches and level set techniques for compressible multi-material flows have been obtained, for example, in Gavrilyuk et al. [109], Favrie et al. [110], Favrie and Gavrilyuk [111], Ndanou et al. [112], de Brauer et al. [113], Michael and Nikiforakis [114], Jackson and Nikiforakis [115], and Barton [116]. Finally, we demonstrate that the ADER family of schemes is capable of resolving the GPR model in both the solid and fluid regimes.
The paper is organized as follows. In section 2 we present the family of ADER finite volume and ADER discontinuous Galerkin finite element schemes on fixed Cartesian and moving polygonal meshes in two space dimensions. Next, in section 3 we introduce the diffuse interface formulation of the GPR model. In section 4 we show some computational results obtained with different kinds of ADER schemes (ADER-FV and ADER-DG) on different mesh topologies, including moving unstructured Voronoi meshes, as well as fixed and adaptive Cartesian grids. The paper is rounded off by some concluding remarks and an outlook to future work in section 5.
Here, we briefly describe the key features of our numerical scheme, keeping the notation as general as possible and referring to the literature for further details. We start by introducing the general form of our governing PDE system and a moving unstructured discretization of two-dimensional domains (sections 2.1 and 2.2); next, in section 2.3 we describe the data representation of the discrete solution. Then, we explain how to obtain high order of accuracy in space: this is available by construction in the DG case, and is obtained via some variants of the well-known WENO procedure [32, 121-125] in the FV approach. Finally, we focus on the predictor-corrector version of the ADER scheme that allows us to achieve arbitrary high order of accuracy in space and time. Since it is beyond the scope of this paper to recall all the details, a general overview is given in sections 2.5 and 2.7, and a previously unpublished proof of the convergence of the predictor for a non-linear conservation law is presented in section 2.6.
We would like to emphasize that, besides this novel convergence proof, further progress is introduced in this work. Indeed, to the best of our knowledge, it is the first time that: (i) the ADER approach is used to solve a diffuse interface formulation of the GPR model that addresses the free surface problem in both the solid and fluid mechanics contexts (previously, a similar formulation was used only in the solid dynamics context [112,126,127]); (ii) non-conservative products are taken into account in the high order direct ALE scheme of Gaburro et al. [77], where they have to be integrated also over degenerate space-time control volumes (see section 2.5.2).
Governing PDE System
In this paper we consider high order fully-discrete schemes for non-linear systems of hyperbolic PDEs with non-conservative products and algebraic source terms of the form

∂Q/∂t + ∇ · F(Q) + B(Q) · ∇Q = S(Q),   (1)

where Q = Q(x, t) ∈ Ω_Q ⊂ R^m is the state vector, t ∈ R^+_0 is the time, x ∈ Ω ⊂ R^d is the spatial coordinate, d is the number of space dimensions, Ω_Q is the so-called state space or phase space, F(Q) is the non-linear flux tensor, B(Q) · ∇Q is a non-conservative product, and S(Q) is a purely algebraic source term. Introducing the system matrix A(Q) = ∂F/∂Q + B(Q), the above system can also be written in the quasi-linear form

∂Q/∂t + A(Q) · ∇Q = S(Q).   (2)

The system is said to be hyperbolic if, for all n ≠ 0 and for all Q ∈ Ω_Q, the matrix A(Q) · n has m real eigenvalues and a full set of m linearly independent right eigenvectors. The system (1) needs to be provided with an initial condition Q(x, 0) = Q_0(x) and appropriate boundary conditions on ∂Ω.
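In practice, hyperbolicity at a given state and in a given direction can be verified numerically from the spectrum of A(Q) · n. A minimal sketch, using the 1D shallow water equations as a hand-written stand-in for the general system matrix, could look as follows:

```python
import numpy as np

def is_hyperbolic(A: np.ndarray, tol: float = 1e-12) -> bool:
    """Check for real eigenvalues and a full set of linearly independent
    right eigenvectors of the system matrix A(Q) . n."""
    eigvals, eigvecs = np.linalg.eig(A)
    real_spectrum = np.max(np.abs(eigvals.imag)) < tol
    full_rank = np.linalg.matrix_rank(eigvecs, tol=1e-10) == A.shape[0]
    return real_spectrum and full_rank

# 1D shallow water equations with state Q = (h, hu): eigenvalues u +- sqrt(g h).
g, h, u = 9.81, 2.0, 1.0
A = np.array([[0.0, 1.0],
              [g * h - u**2, 2.0 * u]])
print(is_hyperbolic(A), np.linalg.eigvals(A))  # True, [u - c, u + c]
```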
In this paper we focus on a particular, but very general, example of a first-order system (1) describing elastic and viscoplastic heat-conducting media; it will be discussed in section 3.
Domain Discretization
In the general ALE case, we consider a moving two-dimensional (d = 2) domain Ω(t) and we cover it with an unstructured mesh made of N_P non-overlapping polygons P_i, i = 1, ..., N_P. The mesh is first built at time t = 0 and is then rearranged at each time step t^n: elements and nodes are moved following the local fluid velocity and, when necessary in order to prevent mesh distortion, the mesh topology (i.e., the shape of the elements and their connectivities) is also changed.
Given a polygon P^n_i, we denote by V(P^n_i) = {v^n_{i_1}, ..., v^n_{i_j}, ..., v^n_{i_{N^n_{V_i}}}} the set of its N^n_{V_i} Voronoi neighbors (the neighbors that share at least a vertex with P^n_i), by E(P^n_i) = {e^n_{i_1}, ..., e^n_{i_j}, ..., e^n_{i_{N^n_{V_i}}}} the set of its N^n_{V_i} edges, and by D(P^n_i) = {d^n_{i_1}, ..., d^n_{i_j}, ..., d^n_{i_{N^n_{V_i}}}} the set of its N^n_{V_i} vertexes, consistently ordered counterclockwise. Finally, the barycenter of P^n_i is denoted by x^n_{b_i} = (x^n_{b_i}, y^n_{b_i}). When necessary, by connecting x^n_{b_i} with each vertex of D(P^n_i), we can subdivide the polygon P^n_i into N^n_{V_i} subtriangles, denoted by T(P^n_i) = {T^n_{i_1}, ..., T^n_{i_j}, ..., T^n_{i_{N^n_{V_i}}}}. The coordinates of each node at time t^n are denoted by x^n_k, and V^n_k represents the velocity at which it moves, so that its new coordinates at time t^{n+1} are given by the relation

x^{n+1}_k = x^n_k + Δt V^n_k.   (3)

More details on how to obtain V can be found in Boscheri et al. [68] and Boscheri and Dumbser [63,119] for what concerns classical direct ALE schemes on conforming unstructured grids, in Gaburro et al. [73,74] for non-conforming unstructured grids, and in Boscheri and Dumbser [70] for curvilinear meshes; we refer in particular to sections 2.4 and 2.5 of Gaburro et al. [77] for what concerns moving unstructured polygonal grids allowing for topology changes, which is indeed the ALE case considered in this paper (see Case B below). Moreover, working in the ALE framework, we are also allowed to take V = 0, i.e., we can work in a fixed Eulerian frame where the initial mesh is never modified.
In particular, in this paper we will consider the following two situations for our domain discretization: A. A fixed Cartesian mesh made of N_P quadrilateral elements, which is not moved during the simulation but which can be successively refined, with a general space-tree-type data structure that allows element-by-element refinement with a general refinement factor r ≥ 2, in order to increase the resolution in the areas of interest, as can be seen in Figure 2 (for the details of the refinement procedure we refer to Dumbser et al. [54] and Fambri et al. [36]). To ease the description of the numerical method, we will associate to each quadrilateral element P^n_i a pair of indices (j, k) referring to its Cartesian position, such that P^n_i ≡ P^n_{jk}. B. A moving polygonal grid as the one described in Gaburro et al. [77] that (i) moves with the fluid flow in order to reduce the numerical dissipation associated with transport terms and (ii) allows for topology changes at any time step in order to always maintain a high quality of the moving mesh; in this case we remark that our method is also able to deal with degenerate space-time control volumes at arbitrary high order of accuracy.
Space-Time Connectivity
To better understand the context of moving meshes, we refer the reader to Figure 3: note that the tessellation at time t^n has been evolved, resulting in a slightly different tessellation at time t^{n+1}; for each element P^n_i the new vertex coordinates x^{n+1}_k, k = 1, ..., N^n_{V_i}, are connected to the old coordinates x^n_k via straight line segments, yielding the multidimensional space-time control volume C^n_i, which involves N^{n,st}_{V_i} + 2 space-time sub-surfaces. Specifically, the space-time volume C^n_i is bounded on the bottom and on the top by the element configurations at the current time level, P^n_i, and at the new time level, P^{n+1}_i, respectively, while it is closed by a total number of N^{n,st}_{V_i} lateral space-time surfaces ∂C^n_{i_j}, j = 1, ..., N^{n,st}_{V_i}, which are given by the evolution of each edge e^n_{i_j} of element P^n_i within the time step Δt = t^{n+1} - t^n. A priori, the ∂C^n_{i_j} are not parallel to the time direction: thus, to be treated numerically, they can be mapped to a reference square by using a set of bilinear basis functions (see Boscheri and Dumbser [62]).

FIGURE 2 | Sketch of the mesh refinement structure of three AMR levels with refinement factor r = 3. Solid lines indicate active cells, whereas the dashed ones are the virtual cells allowing interpolation between the coarse and the refined mesh, needed in the case of high order WENO reconstruction.

To summarize, the space-time volume C^n_i is bounded by its surface ∂C^n_i, which is given by

∂C^n_i = P^n_i ∪ P^{n+1}_i ∪ ( ⋃_{j=1}^{N^{n,st}_{V_i}} ∂C^n_{i_j} ).   (4)

Note that in the fixed Cartesian case, C^n_i reduces to a right parallelepiped with four lateral space-time surfaces ∂C^n_{i_j} parallel to the time direction, so many simplifications are possible.
We close this part by emphasizing that the family of direct ALE schemes proposed in this work, built on the ADER predictor-corrector approach, relies on the integration of the governing Equation (1) in space and time directly over these space-time control volumes, see section 2.7. Note that this procedure, which is more evident when C^n_i is an oblique prism, is also implicitly present when C^n_i is just a right parallelepiped.
Data Representation
The conserved variables Q in (1) are discretized in each polygon P^n_i at the current time t^n via piecewise polynomials of arbitrary high degree N, denoted by u^n_h(x, t^n) and defined as

u^n_h(x, t^n) = Σ_ℓ ϕ_ℓ(x) û^n_{ℓ,i} = ϕ_ℓ(x) û^n_{ℓ,i},   (5)

where in the last equality we have employed the classical tensor index notation based on the Einstein summation convention, which implies summation over repeated indices. The functions ϕ_ℓ(x) can be either:

i. Nodal spatial basis functions, given by a set of Lagrange interpolation polynomials of maximum degree N with the property

ϕ_ℓ(x^m_GL) = δ_{ℓm},   (6)

where {x^m_GL} is the set of Gauss-Legendre (GL) quadrature points on P^n_i (see Stroud [128] for the multidimensional case). In particular, when employing these basis functions on a Cartesian grid, each quadrilateral P^n_i is easily mapped to a reference square, we only need the tensor product of the GL quadrature points in the unit interval [0, 1], and the ϕ_ℓ are simply generated by multiplying one-dimensional nodal basis functions, i.e.,

ϕ_ℓ(ξ, η) = ϕ_{ℓ_1}(ξ) ϕ_{ℓ_2}(η),   (7)

with the ϕ_{ℓ_i} satisfying (6) with d = 1, and x = x_{j-1/2} + ξ Δx_j, y = y_{k-1/2} + η Δy_k being the mapping between the reference coordinates and the physical coordinates of P^n_i. In this case, the total number of GL quadrature points per polygon, as well as the total number of basis functions {ϕ_ℓ} and expansion coefficients û^n_{ℓ,i}, the so-called degrees of freedom (DOF), is (N + 1)^d. These basis functions are used on Cartesian grids, i.e., for Case A.

ii. Modal spatial basis functions, written through a Taylor series of degree N in the variables x = (x, y), directly defined on the physical element P^n_i, expanded about its current barycenter x^n_{b_i} and normalized by its current characteristic length h_i,

ϕ_ℓ(x) = ((x - x^n_{b_i})/h_i)^{p_ℓ} ((y - y^n_{b_i})/h_i)^{q_ℓ},  0 ≤ p_ℓ + q_ℓ ≤ N,   (8)

h_i being the radius of the circumcircle of P^n_i. In this case the total number of DOF is (N + 1)(N + 2)/2 in d = 2 space dimensions. We employ this kind of basis functions in the moving unstructured polygonal Case B.
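As a concrete illustration of the nodal construction, the short sketch below (illustrative only) builds the one-dimensional Lagrange basis at Gauss-Legendre nodes mapped to [0, 1] and verifies the interpolation property (6); the multidimensional basis (7) is then obtained by tensor products of these one-dimensional factors:

```python
import numpy as np

def gl_nodes_unit(n: int):
    """Gauss-Legendre nodes and weights mapped from [-1, 1] to [0, 1]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (x + 1.0), 0.5 * w

def lagrange_basis(nodes, k, x):
    """k-th Lagrange polynomial on the given nodes, evaluated at x."""
    val = np.ones_like(np.asarray(x, dtype=float))
    for j, xj in enumerate(nodes):
        if j != k:
            val *= (x - xj) / (nodes[k] - xj)
    return val

N = 3                                   # polynomial degree
nodes, weights = gl_nodes_unit(N + 1)
# Interpolation property phi_l(x_m) = delta_lm:
V = np.array([[lagrange_basis(nodes, k, xm) for xm in nodes]
              for k in range(N + 1)])
assert np.allclose(V, np.eye(N + 1))
```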
The discontinuous finite element data representation (5) leads naturally to discontinuous Galerkin (DG) schemes if N > 0, but also to finite volume (FV) schemes in the case N = 0. Indeed, for N = 0 we have ϕ_ℓ(x) = 1, with ℓ = 0, and (5) reduces to the classical piecewise constant data representation typical of finite volume methods. In the case N > 0 (DG), the form (5) already provides a spatially high order accurate data representation with accuracy N + 1, whereas in the case N = 0 (FV), if we are interested in increasing the spatial order of accuracy, up to M + 1 for example, we need to perform a spatial reconstruction. With this notation, our method falls within the more general class of P_N P_M schemes introduced in Dumbser et al. [42] for fixed unstructured meshes.
Data Reconstruction
In this section we focus on the reconstruction procedure needed in the finite volume context (N = 0, M > 0) in order to obtain order of accuracy M + 1 in space, starting from the piecewise constant values of u^n_h(x, t^n) in P^n_i and its neighbors, i.e., in order to obtain a high order polynomial of degree M representing our solution in each P^n_i,

w^n_h(x, t^n) = ψ_ℓ(x) ŵ^n_{ℓ,i},   (9)

where the ψ_ℓ functions simply coincide with the ϕ_ℓ basis functions of (5). Our reconstruction procedures are based on the WENO algorithm in its polynomial formulation, as presented in Dumbser et al. [38], Dumbser and Käser [32,123], Titarev et al. [129], Tsoutsanis et al. [130], Levy et al. [131], Dumbser et al. [132], and Semplice et al. [133], and not on the original version of WENO proposed in Jiang and Shu [121], Balsara and Shu [122], Hu and Shu [134], and Zhang and Shu [124], which provides only point values. For each P^n_i, the basic idea consists in (i) selecting a central stencil S^0_i with a total number of elements

n_e = f · n_dof(M),  n_dof(M) = (M + 1)(M + 2)/2,   (10)

where n_dof(M) is the number of degrees of freedom of a polynomial of degree M in d = 2 and f ≥ 1 is a safety factor, containing the cell P^n_i itself and its first layer of Voronoi neighbors V(P^n_i), and filled by recursively adding neighbors of those elements that have already been included in the stencil; and (ii) using the cell-average values of the elements of S^0_i to reconstruct a polynomial of degree M by imposing the integral conservation criterion, i.e., by requiring that its average on each cell match the known cell average. If f > 1 (which occurs in the unstructured case, where we take f = 1.5), this of course leads to an overdetermined linear system, which is solved using a constrained least-squares technique (CLSQ) [123], i.e., the reconstructed polynomial has exactly the cell average û^n_{0,i} on the polygon P^n_i and matches all the other cell averages of the remaining stencil elements in the least-squares sense.
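The CLSQ step can be prototyped in a few lines: the central cell average is enforced exactly through a linear constraint, while the remaining stencil averages are matched in the least-squares sense via the associated KKT system. The 1D toy below, with illustrative sizes and data, reconstructs a quadratic from five cell averages:

```python
import numpy as np

def cell_average_matrix(edges, degree):
    """Row r = exact averages of the monomials 1, x, ..., x^degree over cell r."""
    M = np.zeros((len(edges) - 1, degree + 1))
    for r in range(len(edges) - 1):
        a, b = edges[r], edges[r + 1]
        for p in range(degree + 1):
            M[r, p] = (b**(p + 1) - a**(p + 1)) / ((p + 1) * (b - a))
    return M

# Five unit cells, central cell index 2; reconstruct a quadratic (3 unknowns).
edges = np.arange(6.0)
ubar = np.array([1.0, 1.8, 3.1, 4.9, 7.2])       # illustrative cell averages
A = cell_average_matrix(edges, degree=2)

# Constraint: exact conservation on the central cell; least squares elsewhere.
C, d = A[2:3, :], ubar[2:3]
rows = [0, 1, 3, 4]
Als, bls = A[rows], ubar[rows]

# KKT system for: minimize ||Als c - bls||^2 subject to C c = d.
n = A.shape[1]
KKT = np.block([[2 * Als.T @ Als, C.T],
                [C, np.zeros((1, 1))]])
rhs = np.concatenate([2 * Als.T @ bls, d])
c = np.linalg.solve(KKT, rhs)[:n]
print("reconstructed polynomial coefficients:", c)
```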
However, as is well-known thanks to the Godunov theorem [1], the use of only one central stencil (which is indeed a linear procedure) would introduce oscillations in the presence of shock waves or other discontinuities. So, in order to make the reconstruction procedure non-linear, we compute the final reconstruction polynomial as a non-linear combination of more than one reconstruction polynomial, each one defined on a different reconstruction stencil S^s_i. We refer to the cited literature for further details, and here we just highlight the main characteristics of the two reconstruction procedures adopted in this work; a small sketch of the non-linear weighting mechanism is given below.
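The non-linear combination itself is simple to prototype: each candidate polynomial receives a weight that collapses when its oscillation indicator is large. The sketch below uses an illustrative indicator (the integrated squares of all derivatives over the cell) and illustrative linear weights, not the exact indicator matrices of the cited papers, to show the mechanism:

```python
import numpy as np
from numpy.polynomial import Polynomial

def weno_weights(candidates, linear_weights, eps=1e-14, r=8):
    """Non-linear WENO weights from oscillation indicators; here the indicator
    of a candidate polynomial is the sum of the L2 norms (squared) of all its
    derivatives over the reference cell [0, 1]."""
    sigmas = []
    for p in candidates:
        sigma, d = 0.0, p.deriv()
        while len(d.coef) > 0 and np.any(d.coef):
            sq = (d ** 2).integ()
            sigma += sq(1.0) - sq(0.0)
            d = d.deriv()
        sigmas.append(sigma)
    alphas = np.array([lw / (s + eps) ** r
                       for lw, s in zip(linear_weights, sigmas)])
    return alphas / alphas.sum()

# Three candidate quadratics: the oscillatory one is almost switched off.
cands = [Polynomial([1.0, 0.1, 0.05]), Polynomial([1.0, 0.2, 0.0]),
         Polynomial([1.0, 3.0, -5.0])]
print(weno_weights(cands, linear_weights=[100.0, 1.0, 1.0]))
```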
Case A: Cartesian Mesh
In Case A of a fixed Cartesian mesh, we employ the polynomial WENO procedure given in Dumbser et al. [54], which is implemented in a dimension-by-dimension fashion. For each cell, we define its related sets of one-dimensional reconstruction stencils as

S^{s,x}_{jk} = ⋃_{m = j - L}^{j + R} P^n_{mk},

where L = L(M, s) and R = R(M, s) denote the order- and stencil-dependent spatial extension of the stencil to the left and to the right. For odd order schemes we consider three stencils, one central, one fully left-sided, and one fully right-sided in each space dimension (see Figure 4 for a graphical interpretation for M = 2), while for even order schemes we have four stencils, two of which are central, while the remaining two are again the fully left-sided and fully right-sided ones in each space dimension. In both cases the total number of elements in each stencil is always n_e = M + 1, the order of the scheme. Focusing on the reconstruction procedure in the x direction, given an element P^n_{jk}, we start by expressing, for each stencil, the reconstruction polynomial in terms of one-dimensional basis functions,

w^{n,s}_h(x, t^n) = ψ_{ℓ_1}(ξ) ŵ^{n,s}_{jk,ℓ_1}.

Then, we integrate over the stencil elements, obtaining an algebraic system for the polynomial coefficients:

(1/Δx_m) ∫_{P^n_{mk}} ψ_{ℓ_1}(ξ(x)) ŵ^{n,s}_{jk,ℓ_1} dx = ū^n_{mk},  ∀ P^n_{mk} ∈ S^{s,x}_{jk},

with ū^n_{mk} the average value obtained by integrating the solution at the previous time step over the cell P^n_{mk}. Once the coefficients, and thus the polynomials, related to all the stencils are obtained, we compute a reconstruction polynomial in the x direction as the data-dependent non-linear combination

w^n_h(x, t^n) = Σ_{s=1}^{n_s} ω_s w^{n,s}_h(x, t^n),

where n_s is the number of stencils, n_s = 3 if M = 2 and n_s = 4 otherwise, and the ω_s denote the non-linear weights (see Dumbser et al. [54] for further details).
To complete the reconstruction, we now repeat the above procedure in the y direction for each degree of freedom ŵ^n_{jk,ℓ_1}. First, we write the reconstruction polynomial in terms of the basis functions,

w^n_h(x, y, t^n) = ψ_{ℓ_1}(ξ) ψ_{ℓ_2}(η) ŵ^n_{jk,ℓ_1 ℓ_2}.

Then, we solve the corresponding algebraic system

(1/Δy_m) ∫_{P^n_{jm}} ψ_{ℓ_2}(η(y)) ŵ^{n,s}_{jk,ℓ_1 ℓ_2} dy = ŵ^n_{jm,ℓ_1},  ∀ P^n_{jm} ∈ S^{s,y}_{jk},

and finally we obtain the WENO reconstruction polynomial as the non-linear combination of the stencil contributions, exactly as in the x direction. In order to enforce bounds on the WENO reconstruction polynomial, such as the condition 0 ≤ α ≤ 1 on the volume fraction function α of, for example, (56a), we rescale the reconstruction coefficients ŵ^n_{jk,ℓ_1 ℓ_2} around the cell average as follows:

ŵ^{*,n}_{jk,ℓ_1 ℓ_2} = ū^n_{jk} + ϕ_{jk} (ŵ^n_{jk,ℓ_1 ℓ_2} - ū^n_{jk}),

where the scaling factor ϕ_{jk} is computed via the Barth and Jespersen limiter (see Barth and Jespersen [135]) applied to the volume fraction function α in all Gauss-Legendre and Gauss-Lobatto quadrature nodes, i.e., ϕ_{jk} = min_p(ϕ_{jk,p}) is the global minimum in each element, with the nodal limiter values given by

ϕ_{jk,p} = (α_max - ᾱ)/(α_p - ᾱ)  if α_p > α_max,
ϕ_{jk,p} = (α_min - ᾱ)/(α_p - ᾱ)  if α_p < α_min,
ϕ_{jk,p} = 1  otherwise.

Here α_max = 1 - ε ≤ 1 is the upper bound of the volume fraction function and α_min = ε ≥ 0 is its lower bound; ᾱ denotes the cell average of α, and α_p denotes the value of α at the quadrature point x_p under consideration. As already mentioned above, this strategy is inspired by the Barth and Jespersen limiter [135], but also by the new bound-preserving polynomial approximation introduced in Després [136] and Campos-Pinto et al. [137]. Since the physical solution for α must satisfy 0 ≤ α ≤ 1, the above bound-preserving limiter does not reduce the formal order of accuracy of the reconstruction, as proven in Després [136].
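In code, the rescaling around the cell average reduces to computing a single scaling factor per cell. The sketch below (illustrative, operating directly on nodal values of α, with the cell average assumed to lie within the bounds) mirrors the Barth-Jespersen-type construction described above:

```python
import numpy as np

def bound_preserving_scale(alpha_nodes, alpha_bar, eps=1e-7):
    """Scaling factor phi so that alpha_bar + phi*(alpha_p - alpha_bar) stays
    in [eps, 1 - eps] at every quadrature node (alpha_bar assumed in range)."""
    amin, amax = eps, 1.0 - eps
    phi = 1.0
    for ap in alpha_nodes:
        if ap > amax:
            phi = min(phi, (amax - alpha_bar) / (ap - alpha_bar))
        elif ap < amin:
            phi = min(phi, (amin - alpha_bar) / (ap - alpha_bar))
    return max(phi, 0.0)

# Example: a reconstruction that overshoots alpha = 1 near an interface.
alpha_bar = 0.9
nodes = np.array([0.84, 0.95, 1.04])     # one node violates the upper bound
phi = bound_preserving_scale(nodes, alpha_bar)
limited = alpha_bar + phi * (nodes - alpha_bar)
print(phi, limited)                      # all limited values within bounds
```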
Case B: Moving Polygonal Mesh
In Case B of our moving and topology-changing polygonal mesh, we adopt a CWENO reconstruction algorithm, first introduced in Levy et al. [138-140] and Semplice et al. [133], which can be cast in the general framework described in Cravero et al. [141]. We closely follow the work outlined in Dumbser et al. [132] and Boscheri et al. [142] for unstructured triangular and tetrahedral meshes, and extend it to moving polygonal grids as in Gaburro et al. [77]. We emphasize that the main advantage of such a procedure is that only one stencil (the central one) is required to contain the total number of elements stated in (10), and only this one is used to construct a polynomial of degree M; the other stencils are used to compute polynomials of lower degree. In particular, we consider N^n_{V_i} sectorial stencils S^s_i, each of them containing exactly n̂_e = d + 1 cells, i.e., the central cell P^n_i and two consecutive neighbors belonging to V(P^n_i). Refer to Figure 5 for a graphical description of the stencils. For each stencil S^s_i we compute a linear polynomial by solving a simple reconstruction system which is not overdetermined. According to the above mentioned literature, the reconstructed polynomial, obtained via a non-linear combination of the polynomial of degree M computed over S^0_i and of the N^n_{V_i} linear polynomials computed over the S^s_i, maintains the order of convergence of the method and avoids unwanted spurious oscillations. In particular, in the case of moving meshes with topology changes, where the set of neighbors may change at any time step, the use of these smaller so-called sectorial stencils significantly speeds up computations.
For the sake of uniform notation, in the DG case, i.e., when N > 0 and M = N, we trivially impose that the reconstruction polynomial is given by the DG polynomial itself, i.e., w^n_h(x, t^n) = u^n_h(x, t^n), which automatically implies that in the case N = M the reconstruction operator is simply the identity.
Space-Time Predictor Step
In this section we focus on the key feature of our ADER FV and DG schemes: the element-local space-time predictor step. This part of the algorithm (the predictor) produces a high order approximation of Q in both space and time in each P^n_i, which allows us to obtain a fully discrete one-step scheme that is uniformly high order accurate in both space and time.
The predictor step consists of a completely local procedure which solves the governing PDE (1) "in the small," see Harten et al. [11], inside each space-time element C^n_i; it only considers the geometry of the volume C^n_i, the initial data w^n_h on P^n_i, and the governing Equations (1), without taking into account any interaction between C^n_i and its neighbors. Because of this absence of communication, we refer to it as local. The procedure finally provides, for each C^n_i, a space-time polynomial data representation q^n_h, which serves as a predictor solution, valid only inside C^n_i, to be used for evaluating the numerical fluxes, the non-conservative products, and the algebraic source terms when integrating the PDE in the final corrector step (see section 2.7) of the ADER scheme.
The predictor q^n_h is a polynomial of degree M, which takes the form

q^n_h(x, t) = θ_ℓ(x, t) q̂^n_ℓ,   (22)

where the θ_ℓ(x, t) can be either:

i. For fixed and adaptive Cartesian grids (Case A), nodal space-time basis functions of degree M, given by the product of one-dimensional nodal basis functions verifying (6) (with d = 1), two of them mapped to the unit interval [0, 1] as in (7) and with the time coordinate mapped to the reference time τ ∈ [0, 1] via t = t^n + τ Δt. In this case, the total number of GL quadrature points per cell, as well as the total number of DOF, is (M + 1)^{d+1}.

ii. For the moving polygonal meshes of Case B, modal space-time basis functions, i.e., a Taylor expansion of degree M in the d space dimensions plus time, with a total number of DOF equal to (M + 1)(M + 2)(M + 3)/6 for d = 2.

Since we are only interested in an element-local predictor solution, i.e., we do not need to consider the interactions with the neighbors, we do not yet take into account the jumps of q^n_h across the lateral space-time surfaces, because this will be done in the final corrector step (section 2.7). Instead, we insert the known discrete solution w^n_h(x, t^n) at time t^n in order to impose a weak initial condition for our PDE; note that w^n_h(x, t^n) uses information coming from the past only (following an upwind-in-time approach), so that the causality principle is correctly respected. To this purpose, the governing equations are multiplied by the test functions θ_k and integrated over the space-time control volume C^n_i,

∫_{C^n_i} θ_k ∂q^n_h/∂t dx dt + ∫_{C^n_i} θ_k (∇ · F(q^n_h) + B(q^n_h) · ∇q^n_h) dx dt = ∫_{C^n_i} θ_k S(q^n_h) dx dt,   (24)

and the first term is integrated by parts in time, introducing the initial datum w^n_h. This leads to

∫_{P^{n+1}_i} θ_k(x, t^{n+1}) q^n_h dx - ∫_{P^n_i} θ_k(x, t^n) w^n_h dx - ∫_{C^n_i} (∂θ_k/∂t) q^n_h dx dt + ∫_{C^n_i} θ_k (∇ · F(q^n_h) + B(q^n_h) · ∇q^n_h) dx dt = ∫_{C^n_i} θ_k S(q^n_h) dx dt.   (25)

Equation (25) results in an element-local non-linear system for the unknown degrees of freedom q̂^n_ℓ of the space-time polynomials q^n_h. The solution of (25) can be found via a simple and fast converging fixed point iteration (a discrete Picard iteration), as detailed, e.g., in Dumbser et al. [42] and Hidalgo and Dumbser [41]. For linear homogeneous systems, the discrete Picard iteration converges in a finite number of at most N + 1 steps, since the involved iteration matrix is nilpotent, see Jackson [143]. Moreover, a proof of the convergence of this procedure in the case of a non-linear homogeneous conservation law in 1D is given in section 2.6 below.
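To expose the structure of the discrete Picard iteration in the simplest non-trivial setting, the sketch below solves the predictor problem for the source-only limit of (1), i.e., the ODE dq/dt = S(q) in a single element, using a nodal Gauss-Legendre basis in time. The matrices K1, F0, and M play the same roles as in the full space-time predictor; the basis degree and the source term are illustrative choices:

```python
import numpy as np
from numpy.polynomial import Polynomial
from numpy.polynomial.legendre import leggauss

M_deg = 2                                   # polynomial degree in time
tau, _ = leggauss(M_deg + 1)
tau = 0.5 * (tau + 1.0)                     # Gauss-Legendre nodes on [0, 1]

def lagrange(k):
    p = Polynomial([1.0])
    for j in range(len(tau)):
        if j != k:
            p = p * Polynomial([-tau[j], 1.0]) / (tau[k] - tau[j])
    return p

theta = [lagrange(k) for k in range(M_deg + 1)]
xq, wq = leggauss(M_deg + 2)                # quadrature for the integrals
xq, wq = 0.5 * (xq + 1.0), 0.5 * wq

n = M_deg + 1
K1, Mmat, F0 = np.zeros((n, n)), np.zeros((n, n)), np.zeros(n)
for k in range(n):
    F0[k] = theta[k](0.0)
    for l in range(n):
        # K1 = boundary term at tau=1 minus integral of theta_k' theta_l
        K1[k, l] = theta[k](1.0) * theta[l](1.0) - np.sum(
            wq * theta[k].deriv()(xq) * theta[l](xq))
        Mmat[k, l] = np.sum(wq * theta[k](xq) * theta[l](xq))

# Predictor for dq/dt = S(q) = -q with initial datum w over one time step dt.
S = lambda q: -q
w, dt = 1.0, 0.2
q = np.full(n, w)
for _ in range(n + 5):                      # discrete Picard iteration
    q = np.linalg.solve(K1, F0 * w + dt * Mmat @ S(q))

q_end = sum(theta[l](1.0) * q[l] for l in range(n))
print(q_end, np.exp(-dt))                   # predictor vs exact solution
```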
Simplification in the Case of a Fixed Cartesian Mesh
The space-time predictor step presented above can be simplified in the case of a Cartesian mesh with nodal basis functions, resulting in a more efficient algorithm. Under these assumptions, the governing PDE (1) can be rewritten in the reference coordinates (ξ, η, τ) ∈ [0, 1]^3 as

∂Q/∂τ + ∂F*_1/∂ξ + ∂F*_2/∂η + B*_1 ∂Q/∂ξ + B*_2 ∂Q/∂η = S*(Q),   (27)

with the rescaled quantities

F*_1 = (Δt/Δx_j) F_1,  F*_2 = (Δt/Δy_k) F_2,  B*_1 = (Δt/Δx_j) B_1,  B*_2 = (Δt/Δy_k) B_2,  S* = Δt S.   (28)

Now, by substituting the discrete space-time predictor solution q^n_h with its expansion on the nodal basis and integrating by parts in time, we obtain

∫_0^1 ∫_0^1 θ_k(ξ, η, 1) θ_ℓ(ξ, η, 1) dξ dη q̂^n_ℓ - ∫_0^1 ∫_0^1 θ_k(ξ, η, 0) w^n_h(ξ, η) dξ dη - ∫_0^1 ∫_0^1 ∫_0^1 (∂θ_k/∂τ) θ_ℓ dξ dη dτ q̂^n_ℓ = - ∫_0^1 ∫_0^1 ∫_0^1 θ_k ( ∂F*_1(q^n_h)/∂ξ + ∂F*_2(q^n_h)/∂η + B*_1 ∂q^n_h/∂ξ + B*_2 ∂q^n_h/∂η - S*(q^n_h) ) dξ dη dτ.   (29)

To recover the values of the unknown degrees of freedom q̂^n_ℓ, it is sufficient to solve this equation locally in each element. One important advantage of using the nodal Gauss-Legendre basis is that the terms in (29) can be evaluated in a dimension-by-dimension fashion.
Space-Time Predictor for Sliver Space-Time Elements
When a topology change occurs, some space-time sliver elements, like those shown on the right side of Figure 8, are originated (see Gaburro et al. [77]), and the predictor procedure over them needs particular care. The problem connected with sliver elements is the fact that their bottom face, which consists only of a line segment, is degenerate; hence the spatial integral over P^n_i vanishes, i.e., there is no possibility to introduce an initial condition for the local Cauchy problem at time t^n into their predictor. Thus, in order to nevertheless couple (24) with some known data from the past, we will end up with a formula different from (25). We underline that we first carry out the space-time predictor for all standard elements using (25), which can be computed independently of each other, and only subsequently we process the remaining space-time sliver elements.

FIGURE 8 | Space-time connectivity with topology changes and sliver elements. Left: at time t^n the polygons P^n_2 and P^n_3 are neighbors and share the highlighted edge, whereas at time t^{n+1} they do not touch each other; the opposite situation occurs for polygons P^n_1 and P^n_4. This change of topology causes the appearance of degenerate elements of different types (refer to Gaburro et al. [77] for all the details). In particular, so-called space-time sliver elements (right) need to be taken into account when considering the space-time framework, so the predictor and the corrector step have to be adapted to their special features. Sliver elements (right) are indeed completely new control volumes which exist neither at time t^n nor at time t^{n+1}, since they coincide with an edge of the tessellation and, as such, have zero area in space. However, they have a non-negligible volume in space-time. The difficulties associated with this kind of element are due to the fact that w_h is not clearly defined for it at time t^n (thus the predictor has to be modified) and that contributions across it should not be lost at time t^{n+1} in order to guarantee conservation (thus the corrector has to be modified).

Then, when
considering a sliver, we use the upwind-in-time approach on the entire space-time surface ∂C^n_i that closes the sliver control volume and, again respecting the causality principle, we take the information to feed the predictor only from the past, i.e., only from those space-time neighbors C^n_j whose common surface ∂C^n_{ij} exhibits a negative time component of the outward pointing space-time normal vector (ñ_t < 0). In this way, we can introduce information from the past into the space-time sliver elements.
As a consequence, the predictor solution q^n_h is again obtained by means of (24), but treating the entire ∂C^n_i with the upwind-in-time approach, i.e., also considering the jump terms between the still unknown predictor of the sliver (call it q^{n,-}_h) and the already known predictors of its neighbors (call them q^{n,+}_h) on ∂C^-_i, where ∂C^-_i is the part of the space-time boundary ∂C^n_i that has a negative time component of the space-time normal vector (ñ_t < 0). Note that here we have taken into account also the jump of the non-conservative terms, and that these contributions have been added entirely, i.e., not only half of them, as in (49). Indeed, in (49) half of the jump contribution goes to one element, while the other half goes to the neighboring element; here instead, since the interaction between neighbors is only computed from the side of the sliver element, the entire jump contributes to the predictor of the sliver element.
Convergence Proof of the Predictor Step for a Non-linear Conservation Law
In this section, the convergence proof of the predictor for a non-linear conservation law is given. The proof is provided, for simplicity, in the case of a fixed mesh in one space dimension, following the nomenclature already employed in section 2.5.1, but it still holds in higher dimensions. Let us consider a general hyperbolic system of conservation laws of the form

∂q/∂t + ∂f(q)/∂x = 0.   (31)

Then, the corresponding space-time DG predictor used in the ADER-DG framework reads

∫_0^1 ∫_0^1 θ_k ∂q_h/∂τ dξ dτ + (Δt/Δx) ∫_0^1 ∫_0^1 θ_k ∂f_h/∂ξ dξ dτ = 0.   (32)

For convenience, all derivatives and integrals in (32) have been transformed to the reference space-time element [0, 1]^2. Moreover, the discrete solution is given by q_h = θ_ℓ(ξ, τ) q̂_ℓ, and the flux is expanded in the same basis as f_h = θ_ℓ(ξ, τ) f̂_ℓ. When using a nodal basis, we can compute the degrees of freedom of the flux interpolant f_h simply as f̂_ℓ = f(q̂_ℓ). We also recall that the initial condition given by the DG scheme at time t^n reads w_h = ϕ_ℓ(ξ) ŵ_ℓ. Then, integration of the first term in (32) by parts in time and insertion of the definitions of the discrete solution lead to an element-local algebraic system; the iterative scheme employed to find the space-time degrees of freedom q̂ at any Picard iteration r can therefore be written in compact matrix-vector notation as

K_1 q̂^{r+1} = F_0 ŵ - (Δt/Δx) K_ξ f̂(q̂^r),   (35)

with

K_1 = ∫_0^1 θ_k(ξ, 1) θ_ℓ(ξ, 1) dξ - ∫_0^1 ∫_0^1 (∂θ_k/∂τ) θ_ℓ dξ dτ,  F_0 = ∫_0^1 θ_k(ξ, 0) ϕ_ℓ(ξ) dξ,  K_ξ = ∫_0^1 ∫_0^1 θ_k (∂θ_ℓ/∂ξ) dξ dτ,

where we have dropped the indices to ease the notation. After inverting K_1 (this matrix is built from linearly independent basis functions, so it is invertible), we obtain the explicit iteration formula

q̂^{r+1} = K_1^{-1} ( F_0 ŵ - (Δt/Δx) K_ξ f̂(q̂^r) ).   (38)

To prove that this iterative formula converges, we introduce the operator

Φ(q̂) = K_1^{-1} ( F_0 ŵ - (Δt/Δx) K_ξ f̂(q̂) )   (39)

and the induced matrix norm

‖K‖ = max_{x ≠ 0} ‖K x‖ / ‖x‖.   (40)

Furthermore, we assume the flux to be Lipschitz continuous with Lipschitz constant L > 0, so that

‖f̂(p̂) - f̂(q̂)‖ ≤ L ‖p̂ - q̂‖.   (41)

We now need to show that the operator Φ is a contraction:

‖Φ(p̂) - Φ(q̂)‖ = (Δt/Δx) ‖K_1^{-1} K_ξ (f̂(p̂) - f̂(q̂))‖ ≤ (Δt/Δx) ‖K_1^{-1} K_ξ‖ L ‖p̂ - q̂‖ < ‖p̂ - q̂‖.   (42)

The operator is therefore a contraction under the CFL-type condition on the time step

Δt < Δx / ( L ‖K_1^{-1} K_ξ‖ ),   (43)

which connects the Lipschitz constant L with the mesh spacing Δx and the matrix norm of K_1^{-1} K_ξ. Since the operator is contractive under the above assumptions, the Banach fixed point theorem, Banach [144], guarantees convergence of the iterative method.
In the previous reasoning, we have assumed that the inequality on the right-hand side of (43) is strict. Thus, to conclude the proof, let us consider the case in which equality holds; this is true if and only if ‖K₁⁻¹ K_ξ‖ = 0. Taking into account the definition of the induced matrix norm (40), this implies K₁⁻¹ K_ξ x = 0 for any x in the metric space, and thus K₁⁻¹ K_ξ = 0. Direct substitution in (38) then yields the solution in a single step, so that no iterative procedure is needed. Note: the matrix K₁⁻¹ K_ξ has been proven to be nilpotent, so that all its eigenvalues are zero, see Jackson [143], which guarantees convergence to the exact solution in a finite number of steps for linear homogeneous PDEs.
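To make the structure of the Picard iteration (38) concrete, the following is a self-contained sketch of our own making (not the authors' code): it assembles K₁, K_ξ, and F₀ for a degree-1 nodal tensor-product basis on Gauss-Legendre points and runs the fixed-point loop for Burgers' equation. The basis choice, matrix names, and time-step values are illustrative assumptions; a small Δt keeps the map contractive, as required by the proof above.

```python
import numpy as np

# Sketch: Picard iteration (38) for the space-time DG predictor of Burgers'
# equation q_t + (q^2/2)_x = 0 on the reference element [0,1]^2.
xg, wg = np.polynomial.legendre.leggauss(2)     # 1D Gauss-Legendre rule
xg, wg = 0.5 * (xg + 1.0), 0.5 * wg             # mapped from [-1,1] to [0,1]
n1, ndof = len(xg), len(xg) ** 2                # 1D basis size, space-time DOFs

def lag(j, x):                                  # Lagrange basis at the nodes
    return np.prod([(x - xg[m]) / (xg[j] - xg[m])
                    for m in range(n1) if m != j], axis=0)

def dlag(j, x):                                 # its derivative (degree 1 only)
    return np.ones_like(x) / (xg[j] - xg[1 - j])

# Assemble K1, Kxi, F0 of K1 qhat = F0 what - (dt/dx) Kxi fhat by quadrature
K1, Kxi = np.zeros((ndof, ndof)), np.zeros((ndof, ndof))
F0 = np.zeros((ndof, n1))
for k in range(n1):
    for l in range(n1):
        a = k * n1 + l                          # test function (xi-index k, tau-index l)
        F0[a, k] = wg[k] * lag(l, 0.0)          # injects the initial datum w_h
        for p in range(n1):
            for q in range(n1):
                b = p * n1 + q                  # trial function
                mx = np.sum(wg * lag(k, xg) * lag(p, xg))
                sx = np.sum(wg * lag(k, xg) * dlag(p, xg))
                mt = np.sum(wg * lag(l, xg) * lag(q, xg))
                dt_ = np.sum(wg * dlag(l, xg) * lag(q, xg))
                # upwind-in-time coupling after integration by parts in tau
                K1[a, b] = mx * (lag(l, 1.0) * lag(q, 1.0) - dt_)
                Kxi[a, b] = sx * mt

flux = lambda q: 0.5 * q * q                    # Burgers flux, locally Lipschitz
dt, dx = 0.1, 1.0                               # small dt keeps the map contractive
what = np.array([0.2, 0.4])                     # nodal initial data from the DG scheme
qhat = np.repeat(what, n1)                      # initial guess: constant in time
K1inv = np.linalg.inv(K1)                       # K1 is invertible (see text)
for r in range(50):                             # Picard loop of (38)
    qnew = K1inv @ (F0 @ what - (dt / dx) * (Kxi @ flux(qhat)))
    if np.linalg.norm(qnew - qhat) < 1e-12:
        break
    qhat = qnew
print(f"Picard iteration converged after {r} sweeps")
```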
Corrector Step
The corrector step is the last step of our path-conservative ADER FV-DG scheme, where the update of the solution from time t^n up to time t^{n+1} can take place in a single-step procedure thanks to the use of the predictor q_h^n. The update formula is recovered starting from the space-time divergence form of the PDE, ∇̃ · F̃(Q) + B̃(Q) · ∇̃Q = S(Q), with the space-time flux F̃ = (F, Q), which is multiplied by a set of space-time test functions φ̃_k and integrated over each space-time control volume C_i^n. Note that the employed test functions φ̃_k coincide with the θ_k of (22) for the Cartesian Case A. Instead, for the moving polygonal Case B, they need to be tied to the motion of the barycenter x_{b_i}(t) and must be moved together with P_i(t), in such a way that at time t = t^n they refer to the current barycenter x_{b_i}^n and at time t = t^{n+1} they refer to the new barycenter x_{b_i}^{n+1}. These moving modal basis functions are essential to the moving approach presented in Gaburro et al. [77] and used in this paper. They naturally allow for topology changes, without the need for any remapping steps, which we want to avoid in a direct ALE formulation. Now, by applying the Gauss theorem to the flux-divergence term and by splitting the non-conservative products into their volume and surface contributions, (46) becomes the update formula (48), where Q on P_i^{n+1} is represented by the unknown u_h^{n+1}, on P_i^n it is taken to be the current representation of the conserved variables u_h^n, in the interior of C_i^n it is given by the predictor q_h^n, and on the space-time lateral surfaces ∂C_ij^n it is given by q_h^{n,−} and q_h^{n,+}, the so-called boundary-extrapolated data, i.e., the values assumed respectively by the predictors of the two neighbor elements C_i^n and C_j^n on the shared space-time lateral surface ∂C_ij^n. Furthermore, we have employed a two-point path-conservative numerical flux function of Rusanov type, where s_max is the maximum eigenvalue of the ALE Jacobian matrices A_V^n(q_h^{n,+}) and A_V^n(q_h^{n,−}), and the path ψ = ψ(q_h^−, q_h^+, s) is a straight-line segment path connecting q_h^{n,−} and q_h^{n,+}, which allows us to treat the jump of the non-conservative products following the theory introduced in Dal Maso et al. [145], Parés [146], and Castro et al. [147], and extended to ADER FV-DG schemes of arbitrary high order in Dumbser et al. [46] and Dumbser and Toro [148]. Although in this paper we only consider the Rusanov flux, the above methodology can be extended to different flux functions, adopting new flux splitting techniques like the ones presented in Toro and Vázquez-Cendón [149]. Finally, the time step size Δt is given by (52), where h_min is the minimum characteristic mesh size, ℓ_ij is the length of the edge j of P_i^n, and |λ_max| is the spectral radius of the Jacobian of the flux F. Stability on unstructured meshes is guaranteed by the satisfaction of the inequality CFL < 1/d, see Dumbser et al. [42].
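The following short sketch, written by us for illustration, shows the structure of a two-point Rusanov-type numerical flux and of the CFL time-step restriction (52) for a generic 1D conservative system; the path-conservative jump terms of (50) are omitted for brevity, and `flux` and `max_eig` are placeholders for the physical flux F and the spectral radius |λ_max| of its Jacobian.

```python
# Sketch: Rusanov flux (central average plus dissipation) and CFL time step.
def rusanov_flux(qL, qR, flux, max_eig):
    """Two-point flux: dissipation scaled by the fastest signal speed s_max."""
    s_max = max(max_eig(qL), max_eig(qR))
    return 0.5 * (flux(qL) + flux(qR)) - 0.5 * s_max * (qR - qL)

def cfl_time_step(h_min, lam_max, cfl, d):
    """Time step in the spirit of (52); stability requires cfl < 1/d."""
    assert cfl < 1.0 / d, "CFL condition violated for d space dimensions"
    return cfl * h_min / lam_max

# Example: Burgers' equation, f(q) = q^2/2, |f'(q)| = |q|
f = lambda q: 0.5 * q * q
a = lambda q: abs(q)
print(rusanov_flux(1.0, 0.2, f, a))                     # flux at one interface
print(cfl_time_step(h_min=0.01, lam_max=1.0, cfl=0.45, d=2))
```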
We close this section by remarking that the integration of the governing PDE over closed space-time volumes C_i^n automatically satisfies the geometric conservation law (GCL) for all test functions φ̃_k. This simply follows from the Gauss theorem, and we refer to Boscheri and Dumbser [63] for a complete proof.
A Posteriori Subcell Finite Volume Limiter
Up to now, we have presented a family of FV and DG type schemes which achieve arbitrary high order of accuracy in space and time; the main difference between the FV and the DG approach lies in the fact that FV schemes, thanks to the WENO-type non-linear reconstruction procedure, are robust in the presence of shocks and discontinuities, while the DG formulation as presented so far, being linear in the sense of Godunov, is subject to the appearance of spurious oscillations. Thus, in order to employ a DG scheme for solving hyperbolic partial differential equations, where discontinuities usually develop, a technique able to limit spurious oscillations (a so-called limiter) should be introduced. Several attempts in that direction can be found in the literature. For example, we recall the artificial viscosity technique used in Hartmann and Houston [150], Persson and Peraire [151], and Cesenek et al. [152], which consists in adding a small parabolic term to the equation in order to smooth out the discontinuities.
Here, instead, we follow a different approach based on exploiting the respective strengths of FV and DG schemes, i.e., the resolution of DG in smooth regions and the robustness of FV across discontinuities. Thus, we first evolve the solution everywhere using our DG scheme; then, we check a posteriori, at the end of each time step, whether the obtained DG solution in each cell respects some criteria (such as density and pressure positivity, a relaxed discrete maximum principle, specific physical bounds, or more elaborate choices as those of Guermond et al. [153]), and we mark as troubled those cells where the obtained DG solution is not acceptable; a sketch of such a detector is given below. Only for these troubled cells do we repeat the time step using, instead of the DG scheme, a second order TVD FV method, which always assures a robust solution.
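The sketch below, an illustration of our own rather than the paper's implementation, combines the two detection criteria named above: physical admissibility (positive density and pressure) and a relaxed discrete maximum principle evaluated on the data at time t^n over the cell and its neighbors. The relaxation parameter `delta` is an assumed value.

```python
import numpy as np

# Sketch: a posteriori troubled-cell detector. `u_new` holds the candidate DG
# cell averages at t^{n+1}, `u_old` the data at t^n; `neighbors[i]` lists the
# Voronoi/Cartesian neighbors of cell i.
def is_troubled(i, u_new, u_old, rho_new, p_new, neighbors, delta=1e-4):
    if rho_new[i] <= 0.0 or p_new[i] <= 0.0:      # physical admissibility
        return True
    stencil = [i] + list(neighbors[i])
    u_min = min(u_old[j] for j in stencil)        # bounds from the t^n data
    u_max = max(u_old[j] for j in stencil)
    eps = max(delta, delta * (u_max - u_min))     # relaxation of the DMP
    return not (u_min - eps <= u_new[i] <= u_max + eps)
```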
This idea is founded on works such as those of Cockburn and Shu [154], Qiu and Shu [155,156], Balsara et al. [157], Luo et al. [158], Krivodonova [159], Zhu et al. [160], Zhu and Qiu [161], Clain et al. [86], Diot et al. [87,88], Loubère et al. [79], Boscheri et al. [162], and Boscheri and Loubère [83]; but in particular, here, we adopt a so-called subcell approach aimed at not losing the resolution of the DG scheme when switching to the FV method, as put forward in Sonntag and Munz [163], Dumbser et al. [78], Zanotti et al. [80], Dumbser and Loubère [81], Boscheri and Dumbser [119], Fambri et al. [84], Rannabauer et al. [164], de la Rosa and Munz [165], and Boscheri et al. [142]. Indeed, at the beginning of the time step we project the DG solution u_h^n of a troubled cell P_i^n onto a subdivision of it into sub-cells s_{i,α}^n, obtaining the cell averages v_{i,α}^n = (1/|s_{i,α}^n|) ∫_{s_{i,α}^n} u_h^n(x, t^n) dx at time t^n. We then evolve the cell averages up to time t^{n+1} using a classical TVD FV scheme, obtaining v_{i,α}^{n+1}. Finally, we recover a DG polynomial representation of the solution at time t^{n+1} over P_i^{n+1} from the values on the sub-grid level v_{i,α}^{n+1} by applying a reconstruction operator, where the reconstruction is imposed to be conservative on the main cell P_i^{n+1}, yielding an additional linear constraint. Thus, the limited solution on a troubled cell is robust thanks to the use of a TVD scheme and accurate thanks to the subcell resolution; a one-dimensional sketch of this projection-reconstruction pair follows this paragraph. For all the details of the a posteriori subcell FV limiter used in this work, we refer to Dumbser et al. [78] and Fambri et al. [36] for the fixed Cartesian Case A and to Gaburro et al. [77] for the moving polygonal Case B.
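As announced, here is a deliberately simplified 1D illustration (our own, assuming linear DG data on [0,1] and equal subcells): the projection computes exact subcell averages of the DG polynomial, and the reconstruction fits a linear polynomial through the evolved averages while enforcing conservation of the cell mean, i.e., the linear constraint mentioned above.

```python
import numpy as np

# Sketch: subcell projection and conservative reconstruction for u(x) = a + b*x.
def project_to_subcells(u_coeffs, n_sub):
    """Exact averages of a linear DG polynomial on n_sub equal subcells."""
    a, b = u_coeffs
    edges = np.linspace(0.0, 1.0, n_sub + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return a + b * centers            # average of a linear = value at center

def reconstruct_conservative(v, n_sub):
    """Linear fit through subcell averages, preserving the cell mean."""
    edges = np.linspace(0.0, 1.0, n_sub + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    b = np.polyfit(centers, v, 1)[0]  # slope from the evolved averages
    a = np.mean(v) - b * 0.5          # conservation constraint on the mean
    return np.array([a, b])

v = project_to_subcells(np.array([1.0, 0.5]), n_sub=4)
print(reconstruct_conservative(v, n_sub=4))   # recovers [1.0, 0.5]
```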
Governing PDE System
A simplified diffuse interface formulation of the unified continuum fluid and solid mechanics model [57,59,60,166], which can be used for modeling moving boundary problems of fluids and solids of arbitrary geometry, is given by the PDE system (56) (throughout this paper we make use of the Einstein summation convention over repeated indices). Here, (56a) is the evolution equation for the color function α that is needed in the diffuse interface approach, as introduced in Tavelli et al. [85] for the description of linear elastic solids of arbitrary geometry and as used in Dumbser [106] and Gaburro et al. [107] for a simple diffuse interface method for the simulation of non-hydrostatic free surface flows. We assume that the color function α equals 1 in the regions of the computational domain occupied by the material and 0 outside these regions. In the computational code, α = 1 − ε inside the material and α = ε outside the material, where ε ≪ 1 is a small parameter, see section 4. Inside the diffuse interface, α may then take any value between 0 and 1 (between ε and 1 − ε in the computational code). Equation (56b) is the mass conservation law, where ρ is the material density; (56c) is the momentum conservation law, where v_i is the velocity field and g_i is the gravity vector; (56d) is the evolution equation for the distortion field A_ik (non-holonomic basis triad, see Peshkov et al. [167]); (56e) is the evolution equation for the specific thermal impulse J_k constituting the heat conduction in the matter via a hyperbolic (non-Fourier-type) model. Finally, (56f) is the entropy balance equation and (56g) is the energy conservation law. The other thermodynamic parameters are defined via the total energy potential E = E(α, ρ, S, v, A, J): Σ_ik = p δ_ik − σ_ik is the total stress tensor (δ_ik is the Kronecker delta); p = ρ² E_ρ is the thermodynamic pressure; σ_ik = −ρ A_jk E_{A_ji} is the non-isotropic part of the stress tensor; T = E_S is the temperature; and notations such as E_ρ, E_{A_ik}, etc. stand for the partial derivatives of the energy potential with respect to the corresponding state variables, e.g., E_ρ = ∂E/∂ρ. The dissipation in the medium includes two relaxation processes: the shear stress relaxation, characterized by the scalar function θ₁(τ₁) > 0 depending on the relaxation time τ₁, and the thermal impulse relaxation, characterized by θ₂(τ₂) > 0 depending on the relaxation time τ₂. Both relaxation processes then contribute to the entropy production term [the source on the right-hand side of (56f)], which is positive because it is quadratic in E_{A_ik} and E_{J_k}.
From the mathematical standpoint, the unification of the model (56) consists in the use of only first-order hyperbolic equations for both dissipative and non-dissipative processes, in contrast to classical continuum mechanics, which relies on mixed hyperbolic-parabolic formulations such as the famous Navier-Stokes-Fourier equations. From the physical standpoint, the unification of Equations (56) consists in treating solid and fluid states of matter from the solid-dynamics viewpoint. Indeed, as discussed in Peshkov and Romenski [57] and Dumbser et al. [59,166], similarly to standard continuum solid-dynamics, the distortion field introduces additional degrees of freedom (in comparison to classical continuum fluid mechanics) which characterize the deformation and rotational degrees of freedom of the continuum particles, represented not as scaleless mathematical points but characterized by a finite length scale, or equivalently, a time scale τ₁, e.g., see Dumbser et al. [166]. In such a formulation, solid-type behavior corresponds to relaxation times τ₁ such that T_problem ≪ τ₁, while fluid-type behavior corresponds to τ₁ ≪ T_problem, where T_problem is the characteristic time scale of the problem under consideration.
In order to close system (56), that is, in order to define the pressure p = ρ² E_ρ, the stresses σ_ik = −ρ A_jk E_{A_ji}, the temperature T = E_S, and the dissipative source terms, one needs to provide the energy potential E. In this paper, we rely on a rather simple choice of E, which is, however, enough to deal with Newtonian fluids and simple hyperelastic solids. Thus, we assume that the specific total energy can be written as a sum of three contributions (57), with the specific internal energy given by the ideal gas equation of state in the case of gases, and by either the so-called stiffened gas equation of state or the well-known Mie-Grüneisen equation of state in the case of solids and liquids. Here, c_v is the specific heat capacity at constant volume, γ is the ratio of the specific heats, p₀ is the reference (atmospheric) pressure, ρ₀ is the reference material density, and Γ₀ and s are material parameters. The specific energy stored in material deformations and in the thermal impulse involves G̊_ij = G_ij − (1/3) G_kk δ_ij, the trace-free part of the metric tensor G_ij = A_ki A_kj, which is induced by the mapping from Eulerian coordinates to the current stress-free reference configuration. The coefficients ĉ_s(α) and ĉ_h(α) in (61) are the characteristic velocities for the propagation of shear and thermal perturbations, respectively. In the present diffuse interface model, we choose a simple linear mixture rule for the computation of the shear sound speed and of the heat wave propagation speed as a function of the volume fraction α, where c_s and c_h are the material parameters inside the continuum and c_h^g ≪ 1 and c_s^g ≪ 1 are free parameters that can be chosen for the region outside the continuum. The specific kinetic energy is contained in the third contribution to the total energy and reads E₃(v_k) = ½ v_i v_i. With the equation of state chosen above, we obtain explicit expressions for the stress tensor, the heat flux, and the dissipative terms E_{A_ik} and E_{J_k} present in the relaxation source terms. The functions θ₁ and θ₂ are chosen in such a way that a constant viscosity and a constant heat conduction coefficient are obtained in the stiff relaxation limit, see Dumbser et al. [59] for a formal asymptotic analysis. Thus, following the procedure detailed in Dumbser et al. [59], one can show via a formal asymptotic expansion that in the stiff relaxation limit τ₁ → 0, τ₂ → 0 the stress tensor and the heat flux reduce to those of the compressible Navier-Stokes-Fourier equations; that is, the effective shear viscosity and the effective heat conductivity of model (56) are expressed through the relaxation times and the reference density ρ₀ and temperature T₀, see Dumbser et al. [59], where an explanation has also been provided of how the relaxation times τ could be obtained experimentally via ultrasound measurements.
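The following is a hedged sketch of the two ingredients just described: the linear mixture rule for the wave speeds and the recovery of τ₁ from a target kinematic viscosity ν = µ/ρ₀. The relation µ = ρ₀ c_s² τ₁/6 used here is the one reported for the GPR model in Dumbser et al. [59]; treat it as an assumption and consult (68) in the paper for the exact form actually used.

```python
# Sketch: mixture rule and relaxation time from a target viscosity.
def mixed_speed(alpha, c_material, c_gas):
    """Linear blending with the color function: full speed where alpha = 1."""
    return alpha * c_material + (1.0 - alpha) * c_gas

def tau1_from_viscosity(nu, rho0, c_s):
    mu = nu * rho0                      # dynamic viscosity from nu = mu/rho0
    return 6.0 * mu / (rho0 * c_s**2)   # assumed GPR relation mu = rho0*cs^2*tau1/6

print(mixed_speed(alpha=0.5, c_material=6.0, c_gas=1e-3))
print(tau1_from_viscosity(nu=1e-3, rho0=1000.0, c_s=6.0))  # dambreak values
```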
Symmetric Godunov Form of the Model
It is important to note an interesting structural feature of Equations (56) that may affect future developments of ADER schemes attempting to respect such structural properties at the discrete level, which may help to improve the physical consistency of the numerical solution. Like many PDE systems studied in some of our other papers [59,60,168,169], system (56) belongs to the class of so-called Symmetric Hyperbolic Thermodynamically Compatible (SHTC) PDE systems originally studied by Godunov [170,171] and later by Godunov and Romenski [172], Godunov et al. [173], Romenski [168], and Romensky [174]. Indeed, by simply rescaling the quantities ρ̃ = αρ, p̃ = αp = ρ̃² E_ρ̃, and σ̃_ik = ασ_ik = −ρ̃ A_jk E_{A_ji}, and replacing the non-conservative Equation (56a) by an equivalent (on smooth solutions) conservative form (69a), system (56) can be written as (69), where we have omitted the energy equation. This system then looks exactly like the system studied in Dumbser et al. [59], apart from the additional Equation (69a), which has the same structure as (69b) and does not change the essence. Then, after denoting Ẽ = ρ̃E and introducing new variables P, which are thermodynamically conjugate to the conservative variables Q = (αρ, ρ̃, ρ̃v_i, A_ik, ρ̃J_i, ρ̃S), and a new thermodynamic potential L(P) = Q · Ẽ_Q − Ẽ = Q · P − Ẽ, system (69) can be written in the symmetric form (71). In this PDE system, the first two terms in each equation form the canonical Godunov form introduced in Godunov [170], which can be immediately written as a quasilinear symmetric form, e.g., see Peshkov et al. [169], Romenski [168], and Romensky [174]. The other (non-conservative) terms obviously form a symmetric matrix. Therefore, the entire system (71) can be written in a symmetric quasi-linear form and hence is a symmetric hyperbolic system if the thermodynamic potential L is convex. We note that an understanding of the structural properties of the continuous equations might be beneficial for developing so-called structure-preserving numerical integrators (e.g., symplectic integrators). Thus, the energy conservation law (56g) is in fact a consequence of the other Equations (56) or (71), e.g., see Dumbser et al. [59] and Peshkov et al. [169], and can be viewed as a constraint of system (71). Its non-violation at the discrete level cannot be guaranteed by the general-purpose ADER family of schemes studied in this paper; hence, usually, as well as in our implementation, it is included in the set of discretized PDEs instead of the entropy equation. In principle, a structure-preserving scheme which satisfies all SHTC properties [169] of the continuous equations at the discrete level should guarantee the automatic satisfaction of the energy conservation law, without its explicit discretization. We hope to cover this topic in future work.
NUMERICAL RESULTS
In this section, we present some numerical results in order to illustrate the capabilities and potential applicability of the proposed numerical approach in non-linear continuum mechanics. The first three test problems are carried out without making explicit use of the diffuse interface approach, i.e., setting α = 1 everywhere in the entire computational domain. The last three test problems illustrate the full potential of the diffuse interface extension of the GPR model in the context of moving free boundary problems. Gravity effects are neglected in all test cases, apart from the dambreak problem shown in subsection 4.6. Whenever values for ν = µ/ρ 0 and c s are provided, the corresponding relaxation time τ 1 is computed according to (68).
Numerical Convergence Studies in the Stiff Relaxation Limit
In order to verify the high order of our ADER schemes in both space and time in the stiff relaxation limit, we first reproduce the numerical convergence study that was already carried out in Dumbser et al. [59] on a smooth unsteady flow for which an exact analytical solution is known for the compressible Euler equations, i.e., in the stiff relaxation limit τ₁ → 0 and τ₂ → 0 of the GPR model. The problem setup is that of the classical isentropic vortex, see Hu and Shu [175]. The initial condition consists of a stationary isentropic vortex, whose exact solution can easily be found by solving the compressible Euler equations in cylindrical coordinates. Due to the Galilean invariance of the Euler equations and of the GPR model, one can then simply superimpose a constant velocity field on this stationary vortex solution in order to obtain an unsteady version of the test problem. The vortex strength is chosen as ε = 5 and the perturbation of the entropy S = p/ρ^γ is assumed to be zero. For details of the setup, see Hu and Shu [175] and Dumbser et al. [59]; a sketch of the initial condition is given below. In this test the distortion field is initialized accordingly (see [59]); the measured errors are reported in Table 1, together with the chosen values for the effective viscosity µ and the effective heat conductivity coefficient κ. From Table 1 one can observe that high order of convergence of the numerical method is achieved also in the stiff limit of the governing PDE system.
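For reference, the standard initial condition of the Hu-Shu isentropic vortex reads as follows; this sketch uses the textbook formulas (vortex centered at (5,5), strength ε = 5), not code from the paper, and the superimposed velocity (u0, v0) exploits the Galilean invariance mentioned above.

```python
import numpy as np

# Sketch: isentropic vortex initial condition (Hu and Shu [175]).
gamma, eps = 1.4, 5.0

def vortex(x, y, u0=1.0, v0=1.0, x0=5.0, y0=5.0):
    r2 = (x - x0)**2 + (y - y0)**2
    du = -eps / (2*np.pi) * np.exp(0.5*(1 - r2)) * (y - y0)
    dv =  eps / (2*np.pi) * np.exp(0.5*(1 - r2)) * (x - x0)
    dT = -(gamma - 1) * eps**2 / (8*gamma*np.pi**2) * np.exp(1 - r2)
    T = 1.0 + dT
    rho = T**(1.0 / (gamma - 1))   # from constant entropy S = p/rho^gamma = 1
    p = rho**gamma                 # isentropic relation
    return rho, u0 + du, v0 + dv, p

print(vortex(5.0, 5.0))            # state at the vortex center
```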
Circular Explosion Problem in a Solid
In this section, we simulate a circular explosion problem in an ideal elastic solid. We compare the results obtained with a third order ADER-WENO finite volume scheme on moving unstructured Voronoi meshes with possible topology changes, Gaburro et al. [77], using a mesh with 82,919 control volumes, with those obtained with a fourth order ADER discontinuous Galerkin finite element scheme on a very fine uniform Cartesian mesh composed of 512 × 512 elements, which will be taken as the reference solution for this benchmark.
[Table 1 | Experimental errors and order of accuracy at time t = 1 for the density ρ for ADER-DG schemes applied to the GPR model (c_s = 0.5, α = 1) in the stiff relaxation limit (µ ≪ 1, κ ≪ 1).]
The computational results obtained with the unstructured ADER-WENO ALE scheme and those obtained with the high order Eulerian ADER-DG scheme are presented and compared with each other in Figure 9.
We can note a very good agreement between the two results. The high quality of the ADER-WENO finite volume scheme on coarse grids is mainly due to the natural mesh refinement around the shock, which is typical for Lagrangian schemes. Furthermore, Lagrangian schemes are well known to capture material interfaces and contact discontinuities very well: since the mesh moves with the fluid, the numerical dissipation at linearly degenerate fields moving with the fluid velocity is significantly lower than with classical Eulerian schemes.
Rotor Test Problem
A second solid mechanics benchmark consists in the simulation of a plate on which a rotational impulse is initially impressed, in a circular region centered with respect to the computational domain. This rotor will initially move according to the rotational impulse, while emitting elastic waves which ultimately determine the formation of a set of concentric rings with alternating direction of rotation. The test is analogous to the rotor problem shown in Peshkov et al. [72], but with a weakened material in order to show stronger motion of the Voronoi grid.
The results of the third order ADER-WENO finite volume method on a moving Voronoi grid with variable connectivity, composed of 150,561 cells, are compared against a reference solution obtained with a fourth order ADER discontinuous Galerkin scheme on a very fine uniform Cartesian mesh counting 512 × 512 elements, for a total of over four million spatial degrees of freedom.
The computational domain is the square Ω = [−1, 1] × [−1, 1] and the final simulation time is set to t = 0.5. With the exception of the velocity field, all variables are initially constant throughout the domain. Specifically, we set α = 1, ρ = 1, p = 1, A = I, J = 0, while the velocity field is v = [−y/R, x/R, 0] if r = √(x² + y²) ≤ R, and v = 0 otherwise, that is, outside of the circle of radius R = 0.2; this way, the initial tangential velocity at r = R is one (see the sketch below). The solid is taken to be elastic (τ₁ → ∞), heat wave propagation is neglected (c_h = 0), and the characteristic speed of shear waves is c_s = 0.25. The constitutive law is chosen to be the stiffened-gas EOS with γ = 1.4 and p₀ = 0. We can see in Figure 10 that, although some of the finer features are lost (specifically the small central counterclockwise-rotating ring) due to the lower resolution of the finite volume method on a coarser grid, the shear waves travel outwards with the correct velocity, and the moving Voronoi finite volume simulation can be said to be in agreement with the high resolution discontinuous Galerkin results. Figure 10 also shows that the central region of the computational grid has undergone significant motion, but thanks to the absence of constraints on the connectivity between elements, the Voronoi control volumes have not been stretched excessively, as would instead happen for a similar moving unstructured grid with fixed connectivity.
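As announced, the rotor velocity field is trivially reproduced; this one-function sketch encodes exactly the initial condition stated above (rigid rotation inside R = 0.2, rest outside).

```python
import numpy as np

# Sketch: rotor initial velocity; tangential speed equals 1 at r = R.
def rotor_velocity(x, y, R=0.2):
    r = np.hypot(x, y)
    if r <= R:
        return np.array([-y / R, x / R, 0.0])
    return np.zeros(3)

print(rotor_velocity(0.1, 0.0))   # |v| = 0.5 at half radius
print(rotor_velocity(0.5, 0.5))   # at rest outside the rotor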
Elastic Vibrations of a Beryllium Plate
The first benchmark for our new diffuse interface version of the GPR model consists in the purely elastic vibrations of a beryllium plate, subject to an initial velocity distribution, see for example Sambasivan et al. [176], Maire et al. [177], Burton et al. [178], Boscheri et al. [71], and Peshkov et al. [72] for a setup of the same test problem in the framework of Lagrangian and ALE schemes.
Unlike in Lagrangian schemes, no boundary conditions need to be imposed on the surface of the bar. We simply use transmissive boundaries on ∂Ω. The entire computational domain is initialized with the reference density for beryllium as ρ(x, 0) = ρ₀, while the pressure is set to p(x, 0) = 0. The distortion field is initialized with A = I. According to Burton et al. [178], the final time is set to t_f = 53.25, so that it corresponds approximately to two complete flexural periods. The simulations are carried out with a third order ADER-WENO scheme on two uniform Cartesian meshes composed of 256 × 128 and 512 × 256 elements, respectively.
For the fine grid simulation in Figure 11, we present the temporal evolution of the color contour map of the volume fraction function α, which represents the moving geometry of the bar. Here, dark gray color indicates the regions with α > 0.5 and white color the regions with α < 0.5. In the same figure, we also depict the pressure field in the region α > 0.5 at times t = 5, t = 14, t = 23, and t = 28. These time instants cover approximately one flexural period. The time evolution of the vertical velocity component v(0, 0, t) at the origin is depicted in Figure 12. For comparison, in the same figure we also show the results obtained on the coarse mesh for the same test problem with a fourth order ADER-DG scheme with a second order TVD subcell finite volume limiter (red line).
Our computational results compare visually well against available reference solutions in the literature, see Sambasivan et al. [176], Maire et al. [177], Burton et al. [178], Boscheri et al. [71], and Peshkov et al. [72], which were all carried out with pure Lagrangian or Arbitrary-Lagrangian-Eulerian schemes on moving meshes, while here we use a diffuse interface approach on a fixed Cartesian grid.
Taylor Bar Impact Problem
The Taylor bar impact problem is a classical benchmark for an elasto-plastic aluminium projectile that hits a rigid solid wall, see Sambasivan et al. [176], Maire et al. [177], Dobrev et al. [181], and Boscheri et al. [71]. The projectile is initially moving with velocity v = (−0.015, 0) toward a wall located at x = 0. This velocity field is imposed within the subregion Ω_b, while in the rest of the domain we set v = 0. The remaining initial conditions are chosen as ρ = ρ₀, p = p₀, A = I, J = 0, with the parameters τ₀ = 1 and m = 20 for the computation of the relaxation time (73). Unlike in Lagrangian schemes, we do not need to set any boundary conditions on the free surface of the moving bar; we only apply reflective slip wall boundary conditions on the wall at x = 0. According to Maire et al. [177], Dobrev et al. [181], and Boscheri et al. [71], the final time of the simulation is t = 5,000. The computational domain is discretized on a regular Cartesian grid composed of 512 × 256 elements using a third order ADER-WENO finite volume scheme. As in Boscheri et al. [71], we employ a classical source splitting for the treatment of the stiff sources that arise in the regions of plastic deformations, i.e., when σ ≫ σ₀.
[Figure 13 | Geometry of the Taylor bar at time t = 1,000 (top) and at the final time t = 5,000 (bottom) obtained with a third order ADER-WENO finite volume scheme applied to the diffuse interface GPR model. We plot the contour colors of the volume fraction function α, where black regions denote α > 0.5 and white regions α < 0.5.]
In Figure 13, we show the computational results at t = 1,000 and at the final time t = 5,000. The obtained solution is in agreement with the results presented in Maire et al. [177], Boscheri et al. [71], and Peshkov et al. [72]. At time t = 5,000, we measure a final length of the projectile of L_f = 456, which matches the results achieved in Maire et al. [177] and Boscheri et al. [71] to within 2%.
Dambreak Problem
In this last section on numerical test problems, we solve a two-dimensional dambreak problem with different relaxation times in order to show the entire range of potential applications of the GPR model. For this purpose, we also activate the gravity source term, setting the gravity vector to g = (0, −g) with g = 9.81. The computational domain is chosen as Ω = [0, 4] × [0, 2] and is discretized with a fourth order ADER discontinuous Galerkin finite element scheme with polynomial approximation degree N = 3 and the a posteriori subcell TVD finite volume limiter. Computations are run on a uniform Cartesian mesh composed of 128 × 64 elements until the final time t = 0.5. The initial condition is chosen as follows: we set ρ = ρ₀, v = 0, A = I, and J = 0 in the entire computational domain, and we impose a slip boundary condition on the bottom. In the subdomain Ω_d = [0, 2] × [0, 1] we set α = 1 − ε and p = ρ₀g(y − 1), while in the rest of the domain we set α = ε and p = 0 (see the sketch below). In this test problem we set ε = 10⁻² and use a stiffened gas equation of state with parameters ρ₀ = 1,000, p₀ = 5 × 10⁴, γ = 2, c_h = 0, and a shear sound speed c_s = 6.
[Figure 15 | Dambreak problem at t = 0.4, simulated with a fourth order ADER-DG scheme using a space-time adaptive Cartesian AMR mesh applied to the GPR model with ν = 10⁻³ (top), and reference solution, computed with a third order ADER-WENO finite volume scheme on a very fine uniform Cartesian grid, solving the inviscid and barotropic reduced Baer-Nunziato approach presented in [106,107] (bottom).]
Simulations are run in three different regimes, characterized only by a different choice of the strain relaxation time τ₁. In the first simulation, we set τ₁ so that a kinematic viscosity ν = µ/ρ₀ = 10⁻³ is reached in the stiff relaxation limit, i.e., the GPR model in this case describes an almost inviscid fluid. In the second simulation we choose τ₁ so that ν = 0.1, i.e., a high viscosity Newtonian fluid behavior is reached. In the last simulation we set τ₁ → ∞, i.e., the strain relaxation term is switched off, so that an ideal elastic solid with low shear resistance is described, similar to a jelly-type medium. In all cases, we apply solid slip wall boundary conditions on the left and on the bottom of the computational domain, while on the right and upper boundaries, transmissive boundary conditions are set. The temporal evolution of the volume fraction function α, together with the coarse mesh used in this simulation, is depicted in Figure 14. The results for the almost inviscid fluid agree qualitatively well with those shown in Ferrari et al. [182], Dumbser et al. [106], and Gaburro et al. [107] for non-hydrostatic dambreak problems. In order to corroborate this statement quantitatively, we now repeat the simulation with ν = 10⁻³ using a fourth order ADER-DG scheme on a coarse AMR grid composed of only 32 × 16 elements on the level-zero grid. We then apply two levels of AMR refinement with refinement factor r = 3, i.e., we employ a general space-tree rather than a simple quad-tree. We note that the simulations on the AMR grid are run in combination with time-accurate local time stepping (LTS), which is trivial to implement in high order ADER-DG and ADER-FV schemes due to their fully-discrete one-step nature. For details on LTS, see Dumbser et al. [37,54], Dumbser [64], and Gaburro et al. [65].
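As a compact restatement of the setup, the sketch below encodes the dambreak initial condition exactly as stated in the text (our own illustration; the pressure law p = ρ₀g(y − 1) inside the water column is reproduced verbatim from the paper).

```python
# Sketch: dambreak initial condition on Omega = [0,4] x [0,2].
def dambreak_ic(x, y, eps=1e-2, rho0=1000.0, g=9.81):
    """Returns (alpha, p) at a point; water column occupies [0,2] x [0,1]."""
    inside = (0.0 <= x <= 2.0) and (0.0 <= y <= 1.0)
    if inside:
        return 1.0 - eps, rho0 * g * (y - 1.0)   # pressure law from the text
    return eps, 0.0

print(dambreak_ic(1.0, 0.5))   # interior of the water column
print(dambreak_ic(3.0, 1.5))   # surrounding "gas" region
```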
As a reference solution of this almost inviscid flow problem, we solve the reduced barotropic and inviscid Baer-Nunziato model introduced in Dumbser [106] and Gaburro et al. [107], using a third order ADER-WENO finite volume scheme on a very fine uniform Cartesian grid composed of 1024 × 512 elements. The direct comparison of the two simulations at time t = 0.4 is shown in Figure 15. Overall we can indeed note an excellent agreement between the behavior of the diffuse interface GPR model in the stiff relaxation limit and the weakly compressible inviscid non-hydrostatic free surface flow model of Dumbser [106] and Gaburro et al. [107].
CONCLUSIONS AND OUTLOOK
In the first part of this paper we have provided a review of the ADER approach, whose development started about 20 years ago with the seminal works of Toro et al. [20], Millington et al. [19], Titarev and Toro [29], and Toro and Titarev [28] in the context of approximate solvers for the generalized Riemann problem (GRP). The ADER method provides fully discrete explicit one-step schemes that are in principle arbitrary high order accurate in both space and time. The most recent developments include ADER schemes for stiff source terms, as well as ADER finite volume and discontinuous Galerkin finite element schemes on fixed and moving meshes, which are all based on a space-time predictor-corrector approach. The fact that ADER schemes are fully discrete makes the implementation of time-accurate local time stepping (LTS) particularly simple, both on adaptive Cartesian AMR meshes [54] and in the context of Lagrangian schemes on moving grids [64,65]. The fully discrete space-time formulation also allows the treatment of topology changes during one time step in a very natural way [77]. In the second part of the paper we have then shown several applications of high order ADER finite volume and discontinuous Galerkin finite element schemes to the novel unified hyperbolic model of continuum mechanics (GPR model) proposed by Godunov, Peshkov, and Romenski [56,57,59]. The presented test problems cover the entire range of continuum mechanics, from ideal elastic solids over plastic solids to viscous fluids. The use of a diffuse interface approach also allows the simulation of moving boundary problems on fixed Cartesian meshes. Future developments will concern the extension of the mathematical model to non-Newtonian fluids [183] and to free surface flows with surface tension, see Schmidmayer et al. [184] and Chiocchetti et al. [185], as well as to the conservative multi-phase model of Romenski et al. [186,187]. In future work we will also consider the use of novel all-speed schemes [188] and semi-implicit space-time discontinuous Galerkin finite element schemes [189-191] for the diffuse interface version of the GPR model used in this paper.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.
AUTHOR CONTRIBUTIONS
The governing PDE system was developed by IP. The numerical method and the computer codes were developed by MD, EG, and SC. The test problems were computed by MD, EG, and SC. The analysis of the method was performed by SB. All authors discussed the results and contributed to the final manuscript. | 19,338.2 | 2019-12-04T00:00:00.000 | [
"Engineering",
"Physics"
] |
Anticancer derivative of the natural alkaloid, theobromine, inhibiting EGFR protein: Computer-aided drug discovery approach
A new semisynthetic derivative of the natural alkaloid theobromine has been designed as a lead antiangiogenic compound targeting the EGFR protein. The designed compound is an (m-tolyl)acetamide theobromine derivative (T-1-MTA). Molecular docking studies have shown a great potential for T-1-MTA to bind to EGFR. MD studies (100 ns) verified the proposed binding. By MM-GBSA analysis, the exact binding with optimal energy of T-1-MTA was also identified. Then, DFT calculations were performed to identify the stability, reactivity, electrostatic potential, and total electron density of T-1-MTA. Furthermore, ADMET analysis indicated T-1-MTA's general drug-likeness and safety. Accordingly, T-1-MTA was synthesized to be examined in vitro. Intriguingly, T-1-MTA inhibited the EGFR protein with an IC50 value of 22.89 nM and demonstrated cytotoxic activities against the two cancer cell lines A549 and HCT-116, with IC50 values of 22.49 and 24.97 μM, respectively. Interestingly, T-1-MTA's IC50 against the normal cell line WI-38 was much higher (55.14 μM), indicating selectivity degrees of 2.4 and 2.2, respectively. Furthermore, the flow cytometry analysis of A549 cells treated with T-1-MTA showed significantly increased ratios of early apoptosis (from 0.07% to 21.24%) as well as late apoptosis (from 0.73% to 37.97%).
Introduction
The World Health Organization (WHO) anticipated that during the next few years, cancer will dominate all other causes of death [1]. Developing treatments that suppress the growth of cancer by interacting with specific molecular targets and damaging the cancer cells is a major strategy; for example, compounds built on a fused [3,4-d]pyrimidine scaffold showed excellent efficacy for inhibiting EGFR-TK at nanomolar doses [34,35]. Our team previously synthesized compound V (a thieno[2,3-d]pyrimidine derivative) that was a promising anti-proliferative agent and EGFR inhibitor [36] (Fig 1). These compounds possess some pharmacophoric features of EGFR-TKIs. These features are a planar heterocyclic system, an NH spacer, a terminal hydrophobic head, and a hydrophobic tail. The key roles of the above-mentioned structural moieties are to occupy the adenine binding pocket [37], interact with amino acid residues in the linker region [38], insert into the hydrophobic region I [39], and occupy the hydrophobic region II [40,41], respectively (Fig 1).
In this work, as an extension of our previous efforts in the discovery of new anti-EGFR agents [36,42-44], compound V was used as a lead compound to reach a more promising anticancer agent targeting EGFR. Several chemical modifications were carried out at four positions. The first position is the planar heterocyclic system: we applied the ring variation strategy, as the thieno[2,3-d]pyrimidine moiety was replaced by a xanthine derivative (3-methyl-3,7-dihydro-1H-purine-2,6-dione). Its six hydrogen bond (HB) acceptors may facilitate HB interactions in the adenine binding pocket. A chain extension strategy was applied in the linker region through the replacement of the NH linker with an acetamide moiety. The terminal hydrophobic head (3-iodobenzoic acid) of the lead compound was replaced by a toluene moiety via the ring variation strategy. A simplification strategy was applied to the hydrophobic tail (cyclohexene) of the lead compound: it was replaced by a methyl group at the 7-position of the xanthine moiety (Fig 2).
The ¹H NMR spectrum of T-1-MTA showed a singlet signal at δ = 8.07 for the imidazole CH and multiplet signals ranging from δ 7.41 to 6.87 for the aromatic protons, besides remarkable singlet signals for the CH₃ (of the m-tolyl group) and CH₂ groups at δ = 2.27 and 4.67, respectively. The IR spectrum of the same product revealed absorption bands at 1711 and 1662 cm⁻¹ corresponding to the carbonyl groups and an absorption band at 3255 cm⁻¹ corresponding to NH. Regarding the ¹³C NMR spectrum, four shielded signals appeared at 43.84, 33.66, 29.90, and 21.62 ppm, corresponding to the CH₂ and the three CH₃ groups, respectively.
Molecular docking
The examined proteins' X-ray structures (EGFR^WT, PDB: 4HJO; EGFR^T790M, PDB: 3W2O) were acquired from the Protein Data Bank (PDB, http://www.pdb.org). First, the docking protocol was validated for both the wild-type and mutant EGFR, with RMSD results of 1.20 and 1.15 Å, respectively. Erlotinib, as a native inhibitor of EGFR^WT, revealed an affinity value of −20.50 kcal/mol. The binding pattern of erlotinib revealed a key HB with Met769 (2.11 Å), in addition to four hydrophobic interactions (HIs) in the adenine pocket and three HIs with Ala719, Val702, and Lys721 in the hydrophobic pocket (Fig 4). TAK-285, as a native inhibitor of EGFR^T790M, presented a binding energy of −7.20 kcal/mol. The binding pattern of TAK-285 revealed a key HB with Met793 (2.44 Å) through the pyrimidine moiety in the adenine pocket. The latter moieties (the 3-(trifluoromethyl)phenoxy and N-ethyl-3-hydroxy-3-methylbutanamide moieties)
were fixed in the hydrophobic pocket via a network of HIs with Lys745, Ile759, Met790, Val726, Ala743, and Leu844 (Fig 5).
Regarding EGFR^WT, a binding affinity comparable to that of erlotinib was obtained by T-1-MTA (−20.45 kcal/mol). Additionally, it interacts with the EGFR^WT active site similarly to erlotinib and adopts the same orientation. The 3,7-dimethyl-2,6-dioxo-2,3,6,7-tetrahydro-1H-purine arm formed a crucial HB with Met769, besides two HIs with Leu694, inside the adenine pocket. On the other hand, five HIs with Leu764, Ala719, Val702, and Lys721 were achieved via the m-tolyl moiety in the conserved hydrophobic pocket. The methyl group at the 7-position of the xanthine moiety failed to form HIs in the hydrophobic pocket II (Fig 6).
MD simulations
The MD analyses, obtained from a 100 ns production run, show overall system stability. The RMSD plot (Fig 8A) showed a stable trend for the EGFR alone and for the EGFR_T-1-MTA complex, represented as blue and green curves with averages of 2.16 Å and 2.97 Å, respectively. Moreover, the RMSD of T-1-MTA (red) showed three states during the whole trajectory: the first 10 ns show an average of 2.16 Å before spiking to an average of 9.43 Å for the next 30 ns, while the last 60 ns show a large but stable average value of 17.72 Å. The reason for this increase in the RMSD values of compound T-1-MTA is the translational movement of the compound relative to the protein, as shown in Fig 8G, which compares the positions of the ligand at 1.5 ns (green sticks), 29.5 ns (cyan sticks), 83.9 ns (magenta sticks), and 94 ns (yellow sticks). The RoG (Fig 8B) and SASA (Fig 8C) plots show stable protein behavior, with averages of 19.51 Å and 15,285 Ų, respectively. The evolution of the HBs between T-1-MTA and EGFR (Fig 8D) shows that approximately one HB is formed during the first 40 ns, increasing to at least two bonds during the rest of the simulation. The amino acids' fluctuation is depicted in the RMSF plot (Fig 8E), showing low fluctuation values (less than 2 Å) except for the free C-terminus and the loop region E842:Y845, which reach 7 Å and 3.5 Å, respectively. During the simulation, the distance between the center of mass of compound T-1-MTA and the center of mass of the EGFR protein shows a trend similar to the RMSD values of the ligand (three states) (Fig 8F): it started with an average of 16.72 Å for the first 15 ns, before slightly decreasing to an average of 14.02 Å for the next 25 ns (from 15 ns to 40 ns); finally, the last 60 ns showed an average value of 11.87 Å, indicating a stable interaction (Fig 8G).
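For readers wishing to reproduce such analyses, here is a hedged sketch using the open-source MDAnalysis package; the file names "topol.tpr"/"traj.xtc" and the ligand residue name "LIG" are hypothetical placeholders, not files or identifiers from this study.

```python
import MDAnalysis as mda
from MDAnalysis.analysis import rms

# Sketch: backbone and ligand RMSD plus radius of gyration from a GROMACS run,
# mirroring the quantities plotted in Fig 8A/8B.
u = mda.Universe("topol.tpr", "traj.xtc")
R = rms.RMSD(u, select="backbone",             # protein RMSD after fitting
             groupselections=["resname LIG"])  # ligand RMSD in the same frame
R.run()
rgyr = [u.select_atoms("protein").radius_of_gyration()
        for _ in u.trajectory]
print(R.results.rmsd[:3])                      # columns: frame, time, backbone, LIG
```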
MM-GBSA studies
The binding free energy of the EGFR_T-1-MTA complex was further analyzed in depth by the MM-GBSA approach. As shown in Fig 9, the EGFR_T-1-MTA complex exhibited a favorable total binding energy.
Protein-Ligand Interaction Profiler (PLIP) studies
After that, to obtain a representative frame for each cluster of the EGFR_T-1-MTA complex, the obtained trajectory was clustered. The elbow method was used to automatically choose the number of clusters, as described in the methodologies section (see the sketch below), and this resulted in four clusters. The PLIP website was used to determine the number and types of interactions between T-1-MTA and EGFR for each cluster representative (Table 1). As can be seen, the overall numbers of HIs and HBs are similar across the clusters (7 HIs vs. 6 HBs). Additionally, a .pse file was generated to visualize the 3D conformations of T-1-MTA as well as its interactions with EGFR (Fig 11).
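The following is an illustrative sketch of an automated elbow criterion with scikit-learn; the feature matrix `X` is random placeholder data (in practice it would hold, e.g., ligand heavy-atom coordinates per frame), and the elbow heuristic shown here is one of several possible choices, not necessarily the one used by the authors.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch: k-means inertia curve and a simple automatic elbow pick.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))            # placeholder per-frame features

inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(1, 10)]
drops = np.diff(inertias)                # successive inertia decreases (< 0)
# Heuristic: the elbow is where the relative gain slows down the most.
elbow = int(np.argmax(drops[:-1] / drops[1:])) + 2
print(f"elbow at k = {elbow}")
```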
DFT studies
In an attempt to clarify the inhibitory activity of T-1-MTA, theoretical DFT studies were carried out. Conceptual DFT has been used for understanding the electronic structure of the prepared molecule and determining its structural features, which have far-reaching consequences for the molecule's reactivity. Hence, the DFT-based global reactivity descriptors, the frontier molecular orbital (FMO) analysis, and the surface potential maps have been investigated to explore the reactivity of the prepared compound.
Geometry optimization
The reactivity of T-1-MTA is mainly determined by its chemical structure, so the structure was fully optimized and computed using DFT. The single bond length N2-C14 is 1.4765 Å.
Frontier molecular orbital (FMO) analysis
Frontier molecular orbitals play a vital role in a molecule's electronic properties, as a system with a smaller energy gap between the frontier orbitals (E_gap = E_LUMO − E_HOMO) should be more reactive than one having a greater E_gap. Fortunately, T-1-MTA reported a small E_gap value, so electronic movement between the frontier orbitals, LUMO and HOMO, can occur easily [47]. The nodal properties of the HOMO-LUMO orbitals of the studied heterocyclic molecule indicate that electronic movement within the molecule is easy [48]. The quantum chemical parameters such as the ionization potential (IP) and electron affinity (EA) were calculated and listed in Table 2.
Global reactivity indices and total density of states (TDOS). Based on the density functional theory (DFT) concept, global reactivity parameters are essential tools for comprehending the behavior of any molecular structure. Such global reactivity indices depend on the value of E_gap. In Table 2, the static global properties of T-1-MTA, namely the electrophilicity (ω), maximal charge acceptance (N_max), energy change (ΔE), chemical potential (μ), global chemical softness (σ), global electronegativity (χ), and global chemical hardness (η), are listed; they indicate that T-1-MTA can be treated as soft within the nucleophilicity and electrophilicity scales [49]. The density of states and the distribution function probability, determined by the occupied states per unit volume, are important to provide a more accurate description than the frontier molecular orbitals alone. The TDOS spectrum of T-1-MTA in Fig 14 shows that the highest electronic intensity is located in the occupied orbitals below the HOMO orbital. The TDOS spectrum also confirms the narrow HOMO-LUMO gap.
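These descriptors follow directly from the frontier orbital energies through the standard Koopmans-type approximations (IP = −E_HOMO, EA = −E_LUMO); the sketch below shows the textbook formulas, with placeholder orbital energies rather than the published values for T-1-MTA, and assuming the σ = 1/η convention for softness.

```python
# Sketch: conceptual-DFT global reactivity indices from HOMO/LUMO energies (eV).
def global_indices(e_homo, e_lumo):
    ip, ea = -e_homo, -e_lumo
    eta = (ip - ea) / 2                # global hardness
    mu = -(ip + ea) / 2                # chemical potential
    return {
        "E_gap": e_lumo - e_homo,
        "hardness": eta,
        "softness": 1.0 / eta,         # sigma (1/eta convention assumed)
        "electronegativity": -mu,      # chi
        "electrophilicity": mu**2 / (2 * eta),   # omega
        "N_max": -mu / eta,            # maximal charge acceptance
    }

print(global_indices(e_homo=-6.1, e_lumo=-1.9))  # placeholder energies
```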
2.6.4. Molecular surface potential maps. The molecular electrostatic surface potential reveals the relationship between the electronic distribution over the molecule's surface and its binding ability. The molecular electrostatic potential explains and predicts the noncovalent interaction behavior of the molecule.
ADMET profiling study
The approval of any new compound as a marketed drug is based on a pharmacokinetic evaluation in addition to its biological activity. Thus, analyzing the ADME properties of a compound at the early stages should keep the discovery process from being delayed [50]. Although in vitro ADMET studies can investigate the absorption, distribution, metabolism, excretion, and toxicity properties of drugs, in silico studies are advantageous because of their ability to save cost, time, and effort, in addition to complying with the regulations restricting the use of animals [51]. Discovery Studio was used to compute the ADMET parameters of T-1-MTA against erlotinib. Interestingly, the obtained results for T-1-MTA in comparison with erlotinib (Fig 16 and Table 3) showed a high likeness degree, as it was anticipated to have a low potential to pass the BBB. Additionally, hepatotoxicity (HT) and inhibition of cytochrome P-450 (CYP2D6-I) were expected to be absent. Also, T-1-MTA's levels of aqueous solubility (AS) and intestinal absorption (IA) were computed as good.
In silico toxicity studies
For a drug to be developed successfully, toxicity assessment at the early stages must be done in order to control the possibility of failure in the clinical stage [52]. The in silico approach to toxicity assessment is promising, being accurate and avoiding the ethical and resource constraints of the in vitro and in vivo phases of toxicity testing [53]. In silico prediction of toxicity basically uses structure-activity relationships (SAR) to predict toxicity. In detail, the computer compares the chemical properties of the examined molecules against the structural properties of tens of thousands of compounds of reported safety or toxicity [54]. Employing the Discovery Studio software, eight toxicity models were used to estimate T-1-MTA's toxicity (Table 4).
2.9. Biological evaluation
2.9.1. In vitro EGFR inhibition. For the purpose of examining the design and the computational outcomes that clearly demonstrated T-1-MTA's significant affinity for EGFR, T-1-MTA's inhibitory ability was assessed in vitro against the EGFR protein (Fig 17). The obtained inhibition value (22.89 nM) was close to that of erlotinib, and the resulting in vitro results confirmed T-1-MTA's suppressive potential.
Cytotoxicity and safety
In vitro cytotoxicity assessment was performed for T-1-MTA in comparison with erlotinib, as demonstrated in Table 5. The obtained IC₅₀ values of T-1-MTA against the A549 and HCT-116 malignant cells were 22.49 and 24.97 μM, respectively. T-1-MTA's anticancer potential was close to that of erlotinib.
As a confirmation of the computed safety pattern of T-1-MTA and to explore its selectivity, T-1-MTA was tested against the WI-38 human normal cell line. T-1-MTA showed a high IC₅₀ value of 55.14 μM as well as high selectivity indices (SI) of 2.4 and 2.2 against the two cancer cell lines, respectively (Fig 18).
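The selectivity index is simply the ratio of the IC₅₀ on the normal cell line to the IC₅₀ on each cancer line; the two-line check below reproduces the reported values of about 2.4 and 2.2 from the numbers given in the text.

```python
# Sketch: selectivity index SI = IC50(normal) / IC50(cancer), values in uM.
ic50_normal = 55.14                       # WI-38
ic50_cancer = {"A549": 22.49, "HCT-116": 24.97}
for line, ic50 in ic50_cancer.items():
    print(f"SI vs {line}: {ic50_normal / ic50:.2f}")   # ~2.45 and ~2.21
```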
2.9.3. Cell cycle analysis and apoptosis assay. Firstly, the cell cycle phases of A549 cells after T-1-MTA treatment were analyzed by flow cytometry according to previously reported methods [55,56]. A concentration of 22.49 μM of T-1-MTA was added to A549 cells for 72 h, and the cell cycle was then investigated. Interestingly, T-1-MTA decreased the percentage of A549 cells in the Sub-G1 and S phases from 0.75% and 68.17% to 0.36% and 28.60%, respectively. Conversely, in the G2/M phase, the A549 percentage was significantly increased from 18.69% to 49.20% after T-1-MTA treatment (Table 6 and Fig 19).
To verify the apoptotic effects of T-1-MTA, the apoptosis percentage in A549 cells was examined by Annexin V and PI double staining after the cells were exposed to 22.49 μM of T-1-MTA for 72 h [57,58]. Interestingly, T-1-MTA reduced the viable cancer cell count. Compared to the control, T-1-MTA induced a higher ratio of apoptotic cells, significantly increasing the percentage of apoptotic cells in the early stage of apoptosis (from 0.07% to 21.24%) as well as in the late stage of apoptosis (from 0.73% to 37.97%). Also, the necrosis percentage was elevated to 1.78%, compared to 0.04% in the control cells (Fig 19 and Table 7). In conclusion, T-1-MTA successfully arrested the A549 cell cycle at the G2/M phase, causing cytotoxic potentialities that may be connected to apoptosis.
Conclusion
According to the essential structural features of EGFR inhibitors, a new lead theobromine-derived candidate, T-1-MTA, has been designed. The anti-EGFR potential of T-1-MTA was shown by molecular docking and verified by MD simulations (over 100 ns), MM-GBSA, and DFT studies. Likewise, computational ADMET studies indicated general drug-likeness and safety. The biological evaluation confirmed the in silico results, as T-1-MTA showed EGFR inhibitory activity with an IC₅₀ value of 22.89 nM.
Synthesis of T-1-MTA
The corresponding reagent (0.001 mol, 0.21 g) was added to a solution of potassium 3,7-dimethyl-3,7-dihydro-1H-purine-2,6-dione 2 (0.001 mol, 0.25 g) in DMF (10 mL), and the mixture was heated in a water bath for 8 h. After being poured onto ice water (200 mL), the reaction mixture was gently stirred for a certain time. To afford T-1-MTA (Fig 20), the obtained precipitate was filtered, washed with water, and crystallized from methanol.
[Fig 17 | In vitro EGFR-inhibition potentialities of T-1-MTA (A) and erlotinib (B).]
Docking studies
Performed for T-1-MTA using the MOE2014 software. The supplementary section (S2) in S1 Data includes a detailed explanation.
MD simulations
Performed for T-1-MTA using the CHARMM-GUI web server and GROMACS 2021 [24,59]. The supplementary section (S3) in S1 Data includes a detailed explanation.
MM-GBSA
Performed for T-1-MTA using the gmx_MMPBSA package [60]. The supplementary section (S4) in S1 Data includes a detailed explanation.
DFT
Performed for T-1-MTA using the Gaussian 09 and GaussSum 3.0 programs. The supplementary section (S5) in S1 Data includes a detailed explanation.
ADMET studies
Performed for T-1-MTA using Discovery Studio 4.0. The supplementary section (S6) in S1 Data includes a detailed explanation.
Toxicity studies
Performed for T-1-MTA using Discovery Studio 4.0. The supplementary section (S7) in S1 Data includes a detailed explanation.
In vitro EGFR inhibition
Performed for T-1-MTA using a human EGFR ELISA kit. The supplementary materials (S8) in S1 Data show a comprehensive explanation.
In vitro antiproliferative activity
Performed for T-1-MTA by the MTT procedure. The supplementary materials (S9) in S1 Data show a comprehensive explanation.
Safety assay
Performed for T-1-MTA by the MTT procedure utilizing WI-38 cell lines. The supplementary section (S10) in S1 Data includes a detailed explanation.
Cell cycle analysis and apoptosis
Performed for T-1-MTA using the flow cytometry analysis technique. The supplementary sections (S11 and S12) in S1 Data include a detailed explanation. | 3,995.6 | 2023-03-09T00:00:00.000 | [
"Medicine",
"Chemistry",
"Computer Science"
] |
In Silico Study on Sulfated and Non-Sulfated Carbohydrate Chains from Proteoglycans in Cnidaria and Their Interaction with Collagen
Proteoglycans and collagen molecules interact with each other, thereby forming various connective tissues. The sulfation pattern of proteoglycans differs depending on the kind of tissue and/or the degree of maturation. Tissues from Cnidaria are suitable examples for exploring the effects related to the presence and absence of sulfate groups when studying characteristic fragments of the long proteoglycan carbohydrate chains in silico. It has been described that a non-sulfated chondroitin appears as a scaffold in the early morphogenesis of all nematocyst types in Hydra. On the other hand, sulfated glycosaminoglycans play an important role in various developmental processes of Cnidaria. In order to understand this biological phenomenon on a sub-molecular level, we have analysed the structures of sulfated and non-sulfated proteoglycan carbohydrate chains as well as the structure of diverse collagen molecules with computational methods including quantum chemical calculations. The strong interactions between the sulfate groups of the carbohydrate moieties in proteoglycans and positively charged regions of collagen are essential in stabilizing various Cnidaria tissues but could hinder nematocyst formation and its proper function. The results of our quantum chemical calculations show that the sulfation pattern has a significant effect on the conformation of the chondroitin structures under study.
Introduction
Proteoglycans represent a special class of glycoproteins, which are heavily glycosylated. These bio-macromolecules consist of a core protein with one or more covalently attached glycosaminoglycan (GAG) chain(s). The glycosaminoglycan chains are long, linear carbohydrate polymers that are negatively charged under physiological conditions due to the occurrence of sulfate and uronic acid groups. Chondroitin sulfate is the most prevalent GAG. Its linkage geometry between the predominant disaccharide units is GlcAβ1-3GalNAcβ1. The monomeric residues are GlcA or GlcA(2S), and GalNAc, GalNAc(4S), GalNAc(6S), or GalNAc(4S,6S) [1]. Besides chondroitin, hyaluronan is another GAG, which consists of the same disaccharide units as chondroitin sulfate but is the only GAG that is exclusively non-sulfated. Since the sulfation pattern in Cnidaria differs depending on its locus, tissue, or organelle type [2], we used various tissue probes from these organisms and analysed them with AFM methods as well as with optical microscopy techniques. This strategy allows an optimal preparation of collagen-proteoglycan samples and enables us to estimate the influence of sulfation on tissue differentiation. Cnidaria comprise Anthozoa (corals, sea anemones, sea fans), Hydrozoa (hydra), Cubozoa (box jellyfishes), and Scyphozoa (jellyfishes). As in tissues of other animals, sulfated and non-sulfated proteoglycan carbohydrate chains have been found in Cnidaria [2,3]. The proteoglycan carbohydrate chains consist of numerous saccharide moieties. In order to design reliable models of fragments of these large carbohydrate chains, we have generated their diverse building blocks with the highest possible degree of accuracy in the calculation process. First, five different hexasaccharides were constructed, which can be considered as characteristic fragments of the carbohydrate proteoglycan chains. The impact of the presence and the absence of sulfate groups on the hydrogen bond network in these carbohydrate chains was studied with quantum chemical calculations.
Methods for theoretical calculation and modeling
At first, we performed a conformational analysis of the trisaccharide building blocks using the CHARMM27 force field and refined the conformations with the semi-empirical method AM1 within the HyperChem 8.0 Professional package [9]. The results are a prerequisite for our density functional theory (DFT) calculations. The models were used as starting structures for DFT calculations at a deeper level of theory with the B3LYP/6-31G* approach implemented in Gaussian 03 [10]. We also performed calculations with B3LYP/6-31+G* and could show that the only significant effect was an increase in computation time by a factor of 5 to 10; considerable alterations in the calculated structures did not occur. The constructed fragments were grouped together into five different hexasaccharides (I)-(V) and calculated again with DFT using B3LYP/6-31G*. The docking calculations between the glycans and collagen were performed with the Molegro trial version [11]. The subsequent geometric and energy optimizations were carried out again with the CHARMM27 force field included in the HyperChem 8.0 Professional package [9].
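To illustrate the level of theory used here, the sketch below expresses a B3LYP/6-31G* geometry optimization with the open-source Psi4 package rather than the authors' Gaussian 03 setup; a water molecule stands in for a trisaccharide fragment purely to keep the example short and runnable.

```python
import psi4

# Sketch: DFT geometry optimization at the B3LYP/6-31G* level with Psi4.
psi4.set_memory("2 GB")
mol = psi4.geometry("""
0 1
O  0.000  0.000  0.117
H  0.000  0.757 -0.467
H  0.000 -0.757 -0.467
""")
e_opt = psi4.optimize("b3lyp/6-31g*", molecule=mol)   # energy minimization
print(f"optimized energy: {e_opt:.6f} Hartree")
```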
Preparation of exumbrella tissue samples
Exumbrella of the salted eatable jellyfish Rhopilema esculentum was washed several times in distilled water and then equilibrated for at least one hour in PBS. Small pieces were prepared for cryo-cutting by embedding in Tissue Tek® (Sakura) and frozen at -20˚C. Cryo-cutting was carried out with a Leica CM 3050S machine. For DAPI staining (Roche Diagnostics) a 1 µg/ml solution in PBS was dropped on the slices, incubated at room temperature for half an hour in darkness and then rinsed with PBS. The optical microscopy study was performed using fluorescence dyes. The AFM measurements were performed as described in the literature [12]. For the AFM analysis of the exumbrella tissue of the salted eatable jellyfish Rhopilema esculentum, we dissolved 2.8 mg in 1 ml PBS buffer and diluted with pure water to a concentration corresponding to 10 ng/ml. From this probe we placed 50 µl on the mica plates and dried it for about 20 min under nitrogen.
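The overall dilution factor implied by these numbers is easy to verify with a one-line arithmetic check (values taken from the text):

```python
# From a 2.8 mg/ml stock in PBS down to ~10 ng/ml for AFM deposition.
stock_mg_per_ml = 2.8
target_ng_per_ml = 10.0

stock_ng_per_ml = stock_mg_per_ml * 1e6              # 1 mg = 1e6 ng
dilution_factor = stock_ng_per_ml / target_ng_per_ml
print(f"overall dilution 1:{dilution_factor:,.0f}")  # 1:280,000
```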
Calculations
The quantum-chemical calculations were started from core glycan structures consisting of trisaccharide building blocks. These building blocks were used for the construction of five different glycan structures (Figure 1, I-V), which differ by their sulfation pattern as described above. It was our aim to calculate precisely defined fragments of the huge proteoglycan carbohydrate chains in order to generate the building blocks for an extensive in silico interaction study with diverse collagen molecules occurring in different tissues of Cnidaria. The calculated energy minimum structures are shown in Figures 2 (I)-(V).
In a first step, the energies and conformations of trisaccharides with and without sulfate groups were calculated with Gaussian 03/HyperChem 8.0 (preliminary calculations: AM1, Gnorm = 0.00001).
Figure 2. Minimal-energy conformations of hexasaccharides (I)-(V).
Under physiological pH conditions the COO- and sulfate groups are deprotonated. However, in our DFT calculations using B3LYP/6-31G* we have chosen the non-ionized state. It turned out that the protonated state corresponds much better to the conditions in a water environment than the deprotonated state in vacuum. Certain energy minima are overestimated and others are underestimated when quantum chemical calculations are carried out under vacuum conditions. Independent of the protonation state, certain energy minima are adopted under classical conditions when running MD simulations in a water environment. For the in silico molecular docking studies the deprotonated glycan forms have been used, since the electrostatic interactions between the glycans and the proteins are of major importance.
The calculated glycosidic linkages are shown in Table 1. The corresponding trisaccharides were constructed and energetically minimized with semi-empirical methods (AM1). The models were used as starting structures for DFT (density functional theory) calculations at a deeper level of theory with the RB3LYP/6-31G* approach.
The trimeric saccharides with the lowest energies were combined into the larger hexameric saccharide chains of Glycans I-V. The energies and conformational angles Φ and Ψ were again calculated with the DFT approach at the RB3LYP/6-31G* level of theory.
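The glycosidic angles Φ and Ψ are torsion (dihedral) angles around the glycosidic linkage and can be computed from four atomic positions with a standard dihedral routine. The Python sketch below is generic; the choice of atom quadruples defining Φ and Ψ follows the usual carbohydrate convention and is an assumption, since the text does not spell it out.

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Signed dihedral angle in degrees for four points, e.g. the atoms
    O5-C1-O-Cx' (Phi) or C1-O-Cx'-C(x+1)' (Psi) of a glycosidic linkage."""
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    b0 = p0 - p1
    b1 = p2 - p1
    b2 = p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    v = b0 - np.dot(b0, b1) * b1   # component of b0 orthogonal to b1
    w = b2 - np.dot(b2, b1) * b1   # component of b2 orthogonal to b1
    x = np.dot(v, w)
    y = np.dot(np.cross(b1, v), w)
    return np.degrees(np.arctan2(y, x))

# trans arrangement of four atoms -> 180 degrees
print(dihedral((1, 1, 0), (1, 0, 0), (2, 0, 0), (2, -1, 0)))
```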
The conformation of Glycan I (Figure 2), listed in Table 2, is mainly stabilized by hydrogen bonds, which are displayed for all five glycan structures in Table 3. In the case of Glycan II (Figure 2), the sulfate group on GalNAc in position 6 has no significant effect on the Φ and Ψ angles due to the high mobility of the sulfate group. Therefore, Φ and Ψ angles similar to those calculated for Glycan I occur in Table 2. The sulfate group in position 4 of GalNAc, which is present in Glycan III (Figure 2), suppresses hydrogen bond formation. Therefore, the Φ1 and Ψ1 angle values differ significantly from those of Glycan I. The sulfate group also influences the CH2OH group of the GalNAc, which has an impact on the Φ2 and Ψ2 angle values. When calculating the glycosidic angles of Glycan IV (Figure 2) it is obvious that the presence of a sulfate group on the Gal residue influences the glycosidic angles Φ3 and Ψ3 between GlcA and Gal. In the case of the disulfated Glycan V (Figure 2) the glycosidic angles Φ3 and Ψ3 are similar to those of Glycan IV, as expected. However, the second sulfate group of the second galactose (unit E) causes an alteration in the Φ4 and Ψ4 glycosidic angles due to the hydrogen bonding between this sulfate group and the CH2OH group of the other galactose (unit D). As has already been shown for fucobiosides and fucoidans [13], it is possible to determine conformational differences depending on the sulfation profile. In contrast to the polysaccharides from Cnidaria, which are discussed here, fucoidans are a group of highly sulfated polysaccharides of brown seaweeds, which have received increasing interest as readily available biopolymers with many promising biological activities. Another source of sulfated carbohydrates are marine fishes. The structural characterization and the anti-inflammatory and anticoagulant activity of chondroitin sulfates from cartilage of Atlantic salmon (Salmo salar), Greenland shark (Somniosus microcephalus), Blackmouth catshark (Galeus melastomus), Birdbeak dogfish (Deania calcea) and Arctic skate (Amblyraja hyperborea) have recently been described [14].
The absence or presence of a sulfate group is not only important for the energetic minima of the glycans; the interactions of the glycans with collagen molecules also depend on the position of the sulfate groups. Therefore, we have highlighted these functional parts of the proteoglycan molecules in their HOMO-orbital presentations (Figure 3).
In relation to our docking studies it is remarkable that the sulfate groups with their significant HOMO orbitals are essential contact groups for various collagen molecules (Figures 4 and 5).
Molecular interaction partners of proteoglycans in Cnidaria tissues are mini-collagens with charged ends (Figure 4) [15] as well as triple-helical collagen fragments [12,16]. The positively charged contact points consist of Arg and Lys residues [12,15,16]. In the case of triple-helical collagen structures, Arg residues are likewise the best-suited contact points (Figure 5).
We have performed in silico interaction studies of the five designed glycan structures (Glycans I-V) and mini-collagen (Figure 4). In the same way we have studied the interactions between Glycans I-V and triple-helical collagen structures (Figure 5). It turned out that the glycosidic angles Φ and Ψ, which correspond to the low-energy conformation of the ligand-free state, differ slightly from those of the components interacting with the collagen molecules.
Complexes between mini-collagen fragments and characteristic glycans (Figure 4) were generated with a suitable molecular docking program (the Molegro trial version [11]). The same software was used for the docking studies in which triple-helical collagen fragments interact with the glycan hexamers under study (Figure 5). The triple-helical structure was taken from an X-ray model of an integrin-collagen complex (1dzi.pdb) [16]. The glycans and the Arg residues, which preferentially interact with the proteoglycan sulfate groups, are highlighted in their van der Waals representation.
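As an illustration of how such sulfate-arginine contacts can be screened in docking poses, a minimal Python sketch follows. It assumes coordinate arrays extracted from the poses; the 4 Å cutoff is an illustrative hydrogen-bond/salt-bridge range, not a value from the paper.

```python
import numpy as np

def close_contacts(sulfate_xyz, arg_xyz, cutoff=4.0):
    """Index pairs (i, j) of sulfate-group atoms (rows of sulfate_xyz)
    and arginine guanidinium atoms (rows of arg_xyz) that lie within
    `cutoff` Angstrom of each other."""
    s = np.asarray(sulfate_xyz, dtype=float)   # shape (ns, 3)
    a = np.asarray(arg_xyz, dtype=float)       # shape (na, 3)
    dist = np.linalg.norm(s[:, None, :] - a[None, :, :], axis=-1)
    return list(zip(*np.where(dist < cutoff)))

# Toy coordinates: one sulfate oxygen 3.1 A away from a guanidinium N.
print(close_contacts([[0.0, 0.0, 0.0]],
                     [[3.1, 0.0, 0.0], [8.0, 0.0, 0.0]]))  # -> [(0, 0)]
```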
Mini-collagen molecules and small collagen fragments [12,15,17-19] as well as longer collagen fragments [12,17] interact in multiple manners with the glycan structures under study. These results differ significantly from our observations concerning collagen-integrin interactions [12]. Sophisticated microscopic techniques are needed to clarify how the absence and presence of sulfate groups in proteoglycans from Cnidaria trigger morphogenesis. Furthermore, the nematocyst discharge processes can be explained in a better way when the interaction mechanisms between non-sulfated proteoglycans and mini-collagen molecules can be described on a sub-molecular level. Poly-gamma-glutamate-rich mini-collagens are synthesized during the formation of nematocyst capsules in Hydra [20]. Together with the available information about the differentiation process [21-25], a first step is made towards simulating nematocyst discharge processes (Coulomb explosions) as well as proteoglycan-collagen interactions in cartilage tissues with computational methods. As outlined in two recent publications, a solid knowledge of the structural properties of glycan sulfate groups is a prerequisite for a detailed understanding with respect to their biological function [26,27].
Experimental Part
For the AFM analysis, probes of the exumbrella tissue of salted eatable Rhopilema jellyfish were placed on mica plates (Figure 6, top). We compared the AFM probes with probes from the same exumbrella tissue, which were prepared for light microscopic analysis by DAPI staining (Figure 6, bottom). After treatment with an aprotic solvent we recognized that the addition of DMSO can disturb and destroy the collagen-proteoglycan network so that only small tissue pieces remain (Figure 7).
In our present study the quantum chemical calculations and the computational docking analysis of the five glycans are flanked by an AFM analysis of various jellyfish tissues. It is described in the literature that hydrolyzed collagen can induce chondrogenic differentiation of equine adipose tissue-derived stromal cells [28]. Although the impact of proteoglycan fragments in such differentiation processes is unknown, it is now possible to combine the theoretically derived data about the collagen-proteoglycan interactions with results obtained by AFM and light microscopic techniques (Figure 8). This study provides the theoretical background that allows a better-defined discussion of the influence of collagen fragments and proteoglycans from various species on a sub-molecular level. Equine mesenchymal stem cells can be cultured on various media consisting of collagen (Figure 8) or of collagen-proteoglycan mixtures. Thereby, considering studies of stem cells in Hydra [29,30], we recognized that the differentiation processes depend strongly on the proteoglycan-collagen ratio of the growth media.
Beside the unspecific proteoglycan-collagen interaction, our theoretical and experimental results argue that a specific carbohydrate-binding protein, i.e., nematogalectin [31], must also be present for the stabilization of the nematocyst tissue. However, this kind of galectin is only specific for non-sulfated carbohydrates, as is the case for galectins in general [32]. To summarize, our studies have opened new routes to investigate proteoglycan-collagen interactions at an atomic size level, as described for collagen hydrolysates and collagen fragments from various species [33].
Figure 3. HOMO-orbital presentation of GalNAc, non-sulfated (top) and sulfated at position 6 (middle) as well as at position 4 (bottom). The sulfate groups are charged. Orbitals have been calculated for the whole molecules. When a sulfate group is present, the orbitals are located at the corresponding position.
Figure 4. Mini-collagen fragment (1sop.pdb) [15,17] in complex with non-sulfated Glycan I (top) and sulfated Glycan II (below the top) in their backbone presentations with highlighted Arg23 and Lys24 residues. The surface presentations are given in the same orientation: Glycan I (above bottom) and sulfated Glycan II (bottom).
Figure 5. Triple-helical collagen structure in complex with Glycan I (top) and Glycan II (bottom).
Figure 6. AFM and fluorescence pictures of the exumbrella tissue of salted eatable Rhopilema jellyfish.
Figure 7. Impact of DMSO on the exumbrella tissue of the jellyfish Rhopilema esculentum. Only small pieces of the jellyfish tissue remain in the aprotic solvent. The sample was dried under nitrogen.
Figure 8. AFM pictures of an equine mesenchymal stem cell (cultured on a collagen surface). The middle and right pictures show a detailed presentation of its cilia. | 3,383.8 | 2012-05-22T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Social Media, Ethics and the Privacy Paradox
Today’s information/digital age offers widespread use of social media. The use of social media is ubiquitous and cuts across all age groups, social classes and cultures. However, the increased use of these media is accompanied by privacy issues and ethical concerns. These privacy issues can have far-reaching professional, personal and security implications. Ultimate privacy in the social media domain is very difficult because these media are designed for sharing information. Participating in social media requires persons to ignore some personal privacy constraints, resulting in some vulnerability. The weak individual privacy safeguards in this space have resulted in unethical and undesirable behaviors, leading to privacy and security breaches, especially for the most vulnerable group of users. An exploratory study was conducted to examine social media usage and the implications for personal privacy. We investigated how some of the requirements for participating in social media and the unethical use of social media can impact users’ privacy. Results indicate that if users of these networks pay attention to privacy settings and the type of information shared, and adhere to universal, fundamental moral values such as mutual respect and kindness, many privacy and ethical issues can be avoided.
Introduction
The use of social media is growing at a rapid pace and the twenty-first century could be described as the "boom" period for social networking. According to reports provided by Smart Insights, as of February 2019 there were over 3.484 billion social media users. The Smart Insights report indicates that the number of social media users is growing by 9% annually and this trend is estimated to continue. Presently the number of social media users represents 45% of the global population [1]. The heaviest users of social media are "digital natives", the group of persons who were born or have grown up in the digital era and are intimate with the various technologies and systems, and the "Millennial Generation", those who became adults at the turn of the twenty-first century. These groups of users utilize social media platforms for just about anything, ranging from marketing, news acquisition, teaching, health care, civic engagement and politicking to social engagement.
The unethical use of social media has resulted in the breach of individual privacy and impacts both physical and information security. Reports in 2019 [1] reveal that persons between the ages of 8 and 11 years spend an average of 13.5 hours weekly online and 18% of this age group are actively engaged on social media. Those between ages 12 and 15 spend on average 20.5 hours online and 69% of this group are active social media users. While children and teenagers represent the largest Internet user groups, for the most part they do not know how to protect their personal information on the Web and are the most vulnerable to cyber-crimes related to breaches of information privacy [2,3].
In today's IT-configured society, data is one of the most valuable assets, if not the most valuable, for most businesses/organizations. Organizations and governments collect information via several means, including invisible data gathering, marketing platforms and search engines such as Google [4]. Information can be obtained from several sources and fused using technology to develop complete profiles of individuals. The information on social media is very accessible and can be of great value to individuals and organizations for reasons such as marketing; hence, data is retained by most companies for future use.
Privacy
Privacy, or the right to enjoy freedom from unauthorized intrusion, is the negative right of all human beings. Privacy is defined as the right to be left alone, to be free from secret surveillance, or unwanted disclosure of personal data or information by government, corporation, or individual (dictionary.com). In this chapter we will define privacy loosely as the right to control access to personal information. Supporters of privacy posit that it is a necessity for human dignity and individuality and a key element in the quest for happiness. According to Baase [5] in the book titled "A Gift of Fire: Social, Legal and Ethical Issues for Computing and the Internet," privacy is the ability to control information about one's self as well as the freedom from surveillance, that is, from being followed, tracked, watched and eavesdropped on. In this regard, ignoring privacy rights often leads to encroachment on natural rights.
Privacy, or even the thought that one has this right, leads to peace of mind and can provide an environment of solitude. This solitude can allow people to breathe freely in a space that is free from interference and intrusion. According to Richards and Solove [6], legal scholar William Prosser argued that privacy cases can be classified into four related "torts," namely: 1. Intrusion - encroachment (physical or otherwise) on one's liberties/solitude in a highly offensive way.
2. Private facts - making public private information about someone that is of no "legitimate concern" to anyone.
3. False light - making public false and "highly offensive" information about others.
4. Appropriation - stealing someone's identity (name, likeness) to gain advantage without the permission of the individual.
Technology, the digital age, the Internet and social media have redefined privacy, however, as surveillance is no longer limited to a certain pre-defined space and location. An understanding of the problems and dangers of privacy in the digital space is therefore the first step to privacy control. While there can be clear distinctions between informational privacy and physical privacy, as pointed out earlier, intrusion can be both physical and otherwise. This chapter will focus on informational privacy, which is the ability to control access to personal information. We examine privacy issues in the social media context, focusing primarily on personal information and the ability to control external influences. We suggest that breach of informational privacy can impact: solitude (the right to be left alone), intimacy (the right not to be monitored), and anonymity (the right to have no public personal identity and, by extension, physical privacy). The right to control access to facts or personal information is, in our view, a natural, inalienable right, and everyone should have control over who sees their personal information and how it is disseminated.
In May 2018 the General Data Protection Regulation (GDPR) clearly outlined that it is unlawful to process personal data without the consent of the individual (subject). It is a legal requirement under the GDPR that privacy notices be given to individuals that outline how their personal data will be processed and the conditions that must be met for the consent to be valid. These are:
1. "Freely given - an individual must be given a genuine choice when providing consent and it should generally be unbundled from other terms and conditions (e.g., access to a service should not be conditional upon consent being given)."
2. "Specific and informed - this means that data subjects should be provided with information as to the identity of the controller(s), the specific purposes, types of processing, as well as being informed of their right to withdraw consent at any time."
3. "Explicit and unambiguous - the data subject must clearly express their consent (e.g., by actively ticking a box which confirms they are giving consent - pre-ticked boxes are insufficient)."
4. "Under 13s - children under the age of 13 cannot provide consent and it is therefore necessary to obtain consent from their parents."
Arguments can be made that privacy is a cultural, universal necessity for harmonious relationships among human beings and creates the boundaries for engagement and disengagement. Privacy can also be viewed as an instrumental good because it is a requirement for the development of certain kinds of human relationships, intimacy and trust [7]. However, achieving privacy is much more difficult in light of constant surveillance and the inability to determine the levels of interaction with various publics [7]. Some critics argue that privacy provides protection against anti-social behaviors such as trickery, disinformation and fraud, and is thought to be a universal right [5]. However, privacy can also be viewed as relative, as privacy rules may differ based on several factors such as "climate, religion, technological advancement and political arrangements" [8,9]. The need for privacy is an objective reality, though it can be viewed as "culturally rational", where the need for personal privacy is considered relative based on culture. One example is the push by the government, businesses and Singaporeans to make Singapore a smart nation. According to GovTech 2018 reports, there is a push by the government in Singapore to harness the data "new gold" to develop systems that can make life easier for its people. The report [10] points out that Singapore is using sensor robots (Smart Water Assessment Network, SWAN) to monitor water quality in its reservoirs, and is seeking to build a smart health system and a smart transportation system, to name a few. In this example privacy can be described as "culturally rational" and the rules in general could differ based on technological advancement and political arrangements.
In today's networked society it is naïve and ill-conceived to think that privacy is over-rated and that there is no need to be concerned about privacy if you have done nothing wrong [5]. The effects of information flow can be complex and may not simply be about protection for people who have something to hide. Inaccurate information flow can have adverse long-term implications for individuals and companies. Consider a scenario where someone's computer or tablet is stolen. The perpetrator uses identification information stored on the device to access the victim's social media page, which could lead to access to their contacts, friends and friends of their "friends"; the perpetrator may then participate in illegal activities and engage in anti-social activities such as hacking, spreading viruses, fraud and identity theft. The victim is now in danger of being accused of criminal intentions, or worse. These kinds of situations are possible because of technology and networked systems. Users of social media need to be aware of the risks that are associated with participation.
Social media
The concept of social networking pre-dates the Internet and mass communication, as people are said to be social creatures who, when working in groups, can achieve results of a value greater than the sum of their parts [11]. The explosive growth in the use of social media over the past decade has made it one of the most popular Internet services in the world, providing new avenues to "see and be seen" [12,13]. The use of social media has changed the communication landscape, resulting in changes in ethical norms and behavior. The unprecedented level of growth in usage has resulted in the reduction in the use of other media and changes in areas including civic and political engagement, privacy and safety [14]. Alexa, a company that keeps track of traffic on the Web, indicates that as of August 2019 YouTube, Facebook and Twitter are among the top four (4) most visited sites, with only Google, the most popular search engine, surpassing these social media sites.
Social media sites can be described as online services that allow users to create profiles which are "public, semi-public" or both. Users may create individual profiles and/or become a part of a group of people with whom they may be acquainted offline [15]. They also provide avenues to create virtual friendships. Through these virtual friendships, people may access details about their contacts ranging from personal background information and interests to location. Social networking sites provide various tools to facilitate communication. These include chat rooms, blogs, private messages, public comments, ways of uploading content external to the site and sharing videos and photographs. Social media is therefore drastically changing the way people communicate and form relationships.
Today social media has proven to be one of the most, if not the most, effective media for the dissemination of information to various audiences. The power of this medium is phenomenal and ranges from its ability to overturn governments (e.g., Moldova), to mobilize protests, assist with getting support for humanitarian aid, organize political campaigns, organize groups to delay the passing of legislation (as in the case with the copyright bill in Canada) to making social media billionaires and millionaires [16,17]. The enabling nature and the structure of the media that social networking offers provide a wide range of opportunities that were nonexistent before technology. Facebook and YouTube marketers and trainers provide two examples. Today people can interact with and learn from people thousands of miles away. The global reach of this medium has removed all former pre-defined boundaries, including geographical, social and any other that existed previously. Technological advancements such as Web 2.0 and Web 4.0, which provide the framework for collaboration, have given new meaning to life from various perspectives: political, institutional and social.
Privacy and social media
Social media and the information/digital era have "redefined" privacy. In today's information-technology-configured societies, where there is continuous monitoring, privacy has taken on a new meaning. Technologies such as closed-circuit television (CCTV) cameras are prevalent in public spaces and in some private spaces, including our work and home [7,18]. Personal computers and devices such as our smart phones enabled with the Global Positioning System (GPS), geo-locations and geo-maps connected to these devices make privacy as we know it a thing of the past. Recent reports indicate that some of the largest companies, such as Amazon, Microsoft and Facebook, as well as various government agencies, are collecting information without consent and storing it in databases for future use. It is almost impossible to say privacy exists in this digital world (@nowthisnews).
The open nature of social networking sites and the avenues they provide for sharing information in a "public or semi-public" space create privacy concerns by their very construct. Information that is inappropriate for some audiences is many times inadvertently made visible to groups other than those intended and can sometimes result in future negative outcomes. One such example is a well-known case recorded in an article entitled "The Web Means the End of Forgetting" that involved a young woman who was denied her college license because of backlash from photographs of her private engagements posted on social media.
Technology has reduced the gap between professional and personal spaces and often results in information exposure to the wrong audience [19]. The reduction in the separation of professional and personal spaces can affect image management especially in a professional setting resulting in the erosion of traditional professional image and impression management. Determining the secondary use of personal information and those who have access to this information should be the prerogative of the individual or group to whom the information belongs. However, engaging in social media activities has removed this control.
Privacy on social networking sites (SNSs) is heavily dependent on the users of these networks because sharing information is the primary way of participating in social communities. Privacy in SNSs is "multifaceted." Users of these platforms are responsible for protecting their information from third-party data collection and managing their personal profiles. However, participants are usually more willing to give personal and more private information in SNSs than anywhere else on the Internet. This can be attributed to the feeling of community, comfort and family that these media provide for the most part. Privacy controls are not the priority of social networking site designers and only a small number of the young adolescent users change the default privacy settings of their accounts [20,21]. This opens the door for breaches especially among the most vulnerable user groups, namely young children, teenagers and the elderly. The nature of social networking sites such as Facebook and Twitter and other social media platforms cause users to re-evaluate and often change their personal privacy standards in order to participate in these social networked communities [13].
While there are tremendous benefits that can be derived from the effective use of social media, there are some unavoidable risks involved in its use. Much attention should therefore be given to what is shared in these forums. Social platforms such as Facebook, Twitter and YouTube are said to be the most effective media to communicate with Generation Y's (Gen Y's), as teens and young adults are the largest user groups on these platforms [22]. However, according to Bolton et al. [22], Gen Y's use of social media, if left unabated and unmonitored, will have long-term implications for privacy and engagement in civic activities, as this continuous use is resulting in changes in behavior and social norms as well as increased levels of cyber-crime.
Today social networks are becoming the platform of choice for hackers and other perpetrators of antisocial behavior. These media offer large volumes of data/information ranging from an individual's date of birth, place of residence and place of work/business to information about family and other personal activities. In many cases users unintentionally disclose information that can be both dangerous and inappropriate. Information regarding activities on social media can have far-reaching negative implications for one's future. A few examples of situations which can, and have been, affected are employment, visa acquisition and college acceptance. Indiscriminate participation has also resulted in situations such as identity theft and bank fraud, just to list a few. Protecting privacy in today's networked society can be a great challenge. The digital revolution has indeed distorted our views of privacy; however, there should be clear distinctions between what should be seen by the general public and what should be limited to a selected group. One school of thought is that the only way to have privacy today is not to share information in these networked communities. However, achieving privacy and control over information flows and disclosure in networked communities is an ongoing process in an environment where contexts change quickly and are sometimes blurred. This requires intentional construction of systems that are designed to mitigate privacy issues [13].
Ethics and social media
Ethics can be loosely defined as "the right thing to do" or it can be described as the moral philosophy of an individual or group, usually reflecting what the individual or group views as good or bad. It is how they classify particular situations by categorizing them as right or wrong. Ethics can also be used to refer to any classification or philosophy of moral values or principles that guides the actions of an individual or group [23]. Ethical values are intended to be guiding principles that, if followed, could yield harmonious results and relationships. They seek to give answers to questions such as "How should I be living? How do I achieve the things that are deemed important, such as knowledge and happiness, or the acquisition of attractive things?" If one chooses happiness, the next question that needs to be answered is "Whose happiness should it be; my own happiness or the happiness of others?" In the domain of social media, some of the ethical questions that must be contemplated and ultimately answered are [24]:
• Can this post be regarded as oversharing?
• Has the information in this post been distorted in anyway?
• What impact will this post have on others?
As previously mentioned, users within the 8-15 age range represent one of the largest social media user groups. These young persons are still learning how to interact with the people around them and are deciding on the moral values that they will embrace. These moral values will help to dictate how they interact with the world around them. The ethical values that guide our interactions are usually formulated from some moral principle taught to us by someone or a group of individuals, including parents, guardians, religious groups and teachers, just to name a few. Many of the Gen Y's/"Digital Babies" are "newbies", yet they are required to determine for themselves the level of responsibility they will display when using the various social media platforms. This includes considering the impact a post will have on their lives and/or the lives of other persons. They must also understand that when they join a social media network, they are joining a community in which certain behavior must be exhibited. Such responsibility requires a much greater level of maturity than can be expected from them at that age.
It is not uncommon for individuals to post even the smallest details of their lives, from the moment they wake up to when they go to bed. They will openly share their location, what they eat at every meal or details about activities typically considered private and personal. They will also share likes and dislikes, thoughts and emotional states, and for the most part this has become an accepted norm. Oftentimes, however, these shares do not only contain information about the person sharing but information about others as well. Many times these details are shared on several social media platforms as individuals attempt to ensure that all persons within their social circle are kept updated on their activities. With this openness of sharing, risks and challenges arise that are often not considered but can have serious impacts. The speed and scale with which social media creates information and makes it available, almost instantaneously and on a global scale, added to the fact that once something is posted there is really no way of truly removing it, should prompt individuals to think of the possible impact a post can have. Unfortunately, more often than not, posts are made without any thought of the far-reaching impact they can have on the lives of the person posting or others that may be implicated by the post.
Why do people share?
According to Berger and Milkman [25] there are five (5) main reasons why users are compelled to share content online, whether it is every detail or what they deem as the highlights of their lives. These are:
• cause-related content
• personal connection to content
• to feel more involved in the world
• to define who they are
• to inform and entertain
People generally share because they believe that what they are sharing is important. It is hoped that the shared content will be deemed important to others, which will ultimately result in more shares, likes and followers. Figure 1 below sums up the findings of Berger and Milkman [25], which show that the main reason people feel the need to share content on the various social media platforms is that the content relates to what is deemed a worthy cause; 84% of respondents highlighted this as the primary motivation for sharing. Seventy-eight percent said that they share because they feel a personal connection to the content, while 69 and 68%, respectively, said the content either made them feel more involved with the world or helped them to define who they were. Forty-nine percent share because of the entertainment or information value of the content. A more in-depth look at each reason for sharing follows.
Content related to a cause
Social media has provided a platform for people to share their thoughts and express concerns with others for what they regard as a worthy cause. Cause-related posts are dependent on the interest of the individual. Some persons might share posts related to causes and issues happening in society. In one example, the parents of a baby with an aggressive form of leukemia, who had been told that their child had only 3 months to live unless a suitable donor for a blood stem cell transplant could be found, made an appeal on social media. The appeal was quickly shared and a suitable donor was soon found. While that was for a good cause, many view social media merely as platforms for freedom of speech, because anyone can post any content one creates. People think the expression of their thoughts on social media regarding any topic is permissible. The problem with this is that the content may not be acceptable in law, or it could violate the rights of someone, thus giving rise to ethical questions.
Content with a personal connection
When social media users feel a personal connection to their content, they are more inclined to share the content within their social circles. This is true of information regarding family and personal activities. Content created by users also invokes a deep feeling of connection as it allows the users to tell their stories, and it is natural to want the world, or at least friends, to know of the achievement. This natural need to share content is not new, as humans have been doing this in some form or other, from oral history to the medium of the day: social media. Sharing self-created content gives the user the opportunity to satisfy some fundamental human needs: to be heard, to matter, to be understood and emancipated. The problem with this, however, is that in an effort to gratify these fundamental needs, borders are crossed because the content may not be sharable (can this content be shared within the share network?), it may not be share-worthy (who is the audience that would appreciate this content?) or it may be out of context (does the content fit the situation?).
Content that makes them feel more involved in the world
One of the driving factors that pushes users to share content is the need to feel more in tune with the world around them. This desire is many times fueled by jealousy. Many social media users are jealous when their friends' content gets more attention than their own, and so there is a lot of pressure to maintain one's persona in social circles, even when the information is unrealistic, as long as it gets as much attention as possible. Everything has to be perfect. In the case of a photo, for example, there is lighting, camera angle and background to consider. This need for perfection puts a tremendous amount of pressure on individuals to ensure that posted content is "liked" by friends. They often give very little thought to the amount of their friends' work that may have gone on behind the scenes to achieve that perfect social post.
Social media platforms have provided everyone with a forum to express views, but, as a whole, conversations are more polarized, tribal and hostile. With Facebook for instance, there has been a huge uptick in fake news, altered images, dangerous health claims and cures, and the proliferation of anti-science information. This is very distressing and disturbing because people are too willing to share and to believe without doing their due diligence and fact-checking first.
Content that defines who they are
Establishing one's individuality in society can be challenging for some persons because not everyone wants to fit in. Some individuals will do all they can to stand out and be noticed. Social media provides the avenue for exposure, and many individuals will seek to leverage the media to stand out of the crowd and not just be a fish in the school. Today many young people are being brought up in a culture that defines people by their presence on social media, whereas in previous generations persons were taught to define themselves by their career choices. These lessons would start from childhood by asking children what they wanted to be when they grew up and then rewarding them based on the answers they gave [27]. In today's digital era, however, social media postings and the number of "likes" or "dislikes" they attract signal what is appealing to others. Therefore, posts that are similar to those that receive a large number of likes, but which are largely unrealistic, are usually made for self-gratification.
Content that informs and entertains
The acquisition of knowledge and skills is a vital part of human survival, and social media has made this process much easier. It is not uncommon to hear persons who realize that they need a particular skill they do not possess say "I need to learn to do this. I'll just YouTube it." Learning and adapting to change in as short a time as possible is vital in today's society, and social media coupled with the Internet puts it all at our fingertips. Entertainment has the ability to bring people together and is a good way for people to bond. It provides a diversion from the demands of life and fills leisure time with amusement. Social media is an outlet for the fun, pleasurable and enjoyable activities that are so vital to human survival [28]. It is now commonplace to see persons watching a video, viewing images or reading amusing text on any of the available social media platforms. Quite often these videos, images and texts can be both informative and entertaining; however, at times they can cross ethical lines and lead to conflict.
Ethical challenges with social media use
The use of modern-day technology has brought several benefits. Social media is no different, and chief amongst its benefits is the ability to stay connected easily and quickly as well as to build relationships with people with similar interests. As with all technology, there are several challenges that can make the use of social media off-putting and unpleasant. Some of these challenges appear to be minor, but they can have far-reaching effects on the lives of the users of social media, and it is therefore advised that care be taken to minimize the challenges associated with the use of social media [29].
A major challenge with the use of social media is oversharing, because when persons share on social media they tend to share as much as possible, which is oftentimes too much [24]. When persons are out and about doing exciting things, it is natural to want to share this with the world, as many users will post a few times a day when they head to lunch, visit a museum, go out to dinner or other places of interest [30]. While this all seems relatively harmless, by using location-based services, which pinpoint users with surprising accuracy and in real time, users place themselves in danger of laying out a pattern of movement that can be easily traced. While this seems more like a security or privacy issue, it stems from an ethical dilemma: "Am I sharing too much?" Oversharing can also lead to damage to a user's reputation, especially if the intent is to leverage the platform for business [24]. Photos of drunken behavior, drug use, partying or other inappropriate content can change how you are viewed by others.
Another ethical challenge users of social media often encounter is that they have no way of authenticating content before sharing, which becomes problematic when the content paints people or establishments negatively. Oftentimes content is shared with them by friends, family and colleagues. The unauthenticated content is then reshared without any thought, but sometimes this content may have been maliciously altered, so the user unknowingly participates in maligning others. Even if the content is not altered, the fact that the content paints someone or something in a bad light should send off warning bells as to whether or not it is right to share the content, which is the underlying principle of ethical behavior.
Conflicting views
Some of the challenges experienced with social media posts are a result of a lack of understanding, and sometimes a lack of respect for, the varying ethical and moral standpoints of the people involved. We have established that it is typical for persons to post to social media sites without any thought as to how it can affect other persons, but many times these posts are a cause of conflict because of a difference of opinion that may exist and the effect the post may have. Each individual will have his or her own ethical values, and if they differ then this can result in conflict [31]. When an executive of a British company made an Instagram post with some racial connotations before boarding a plane to South Africa, it started a frenzy that resulted in the executive's immediate dismissal. Although the executive said it was a joke and there was no prejudice intended, this difference in views as to the implications of the post resulted in an out-of-work executive and a company scrambling to maintain its public image.
Impact on personal development
In this age of sharing, many young persons spend a vast amount of time on social media checking the activities of their "friends" as well as posting their own activities so their "friends" are aware of what they are up to. Apart from interfering with their academic progress, time spent on these posts can have long-term repercussions. An example is provided by a student of a prominent university who posted pictures of herself having a good time at parties while in school. She was denied employment because of some of her social media posts. While the ethical challenge here is the question of the employee's right to privacy and whether an individual's social media profile should affect their ability to fulfill their responsibilities as an employee, the impact on the individual's long-term personal growth is clear.
Conclusion
In today's information age, a person's digital footprint can make or break them; it can be the deciding factor in whether or not one achieves one's life-long ambitions. Unethical behavior and interactions on social media can have far-reaching implications both professionally and socially. Posting on the Internet means the "end of forgetting"; therefore, responsible use of this medium is critical. The unethical use of social media has implications for privacy and can result in security breaches both physically and virtually. The use of social media can also result in the loss of privacy, as many users are required to provide information that they would not divulge otherwise. Social media use can reveal information that can result in privacy breaches if not managed properly by users. Therefore, educating users about the risks and dangers of the exposure of sensitive information in this space, and encouraging vigilance in the protection of individual privacy on these platforms, is paramount. This could result in the reduction of unethical and irresponsible use of these media and facilitate a more secure social environment. The use of social media should be governed by moral and ethical principles that can be applied universally and result in harmonious relationships regardless of race, culture, religious persuasion and social status.
Analysis of the literature and the findings of this research suggest that achieving acceptable levels of privacy is very difficult in a networked system and will require much effort on the part of individuals. The largest user groups of social media are unaware of the processes that are required to reduce the level of vulnerability of their personal data. Therefore, educating users about the risks of participating in social media is the social responsibility of these social network platforms. Adopting universal ethical behaviors can mitigate the rise in the number of privacy breaches in the social networking space. This recommendation coincides with philosopher Immanuel Kant's assertion that the Biblical principle which states "Do unto others as you would have them do unto you" can be applied universally and should guide human interactions [5]. This principle, if adhered to by users of social media and owners of these platforms, could raise the awareness of unsuspecting users and reduce unethical interactions and undesirable incidents that could negatively affect privacy, and by extension security, in this domain.
Author details
Nadine Barrett-Maitland and Jenice Lynch University of Technology, Jamaica, West Indies *Address all correspondence to<EMAIL_ADDRESS>© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | 8,376.2 | 2020-02-05T00:00:00.000 | [
"Computer Science"
] |
On the optimality of the neighbor-joining algorithm
The popular neighbor-joining (NJ) algorithm used in phylogenetics is a greedy algorithm for finding the balanced minimum evolution (BME) tree associated to a dissimilarity map. From this point of view, NJ is "optimal" when the algorithm outputs the tree which minimizes the balanced minimum evolution criterion. We use the fact that the NJ tree topology and the BME tree topology are determined by polyhedral subdivisions of the space of dissimilarity maps $\mathbb{R}_+^{\binom{n}{2}}$ to study the optimality of the neighbor-joining algorithm. In particular, we investigate and compare the polyhedral subdivisions for n ≤ 8. This requires the measurement of volumes of spherical polytopes in high dimension, which we obtain using a combination of Monte Carlo methods and polyhedral algorithms. Our results include a demonstration that highly unrelated trees can be co-optimal in BME reconstruction, and that NJ regions are not convex. We obtain the $\ell_2$ radius for neighbor-joining for n = 5 and we conjecture that the ability of the neighbor-joining algorithm to recover the BME tree depends on the diameter of the BME tree.
Introduction
The popular neighbor-joining algorithm used for phylogenetic tree reconstruction [1] has recently been "revealed" to be a greedy algorithm for finding the balanced minimum evolution tree associated to a dissimilarity map [2]. This means the following: Let $d = (d_{ij})$ be a dissimilarity map (this is an n × n symmetric matrix with zeroes on the diagonal and nonnegative real entries). The balanced minimum evolution problem is to find the unrooted binary tree T with n leaves that minimizes
$$v_T \cdot d = \sum_{i<j} (v_T)_{ij}\, d_{ij}, \qquad (2)$$
where $(v_T)_{ij} = 2^{1 - t_{ij}(T)}$ if i ≠ j and $t_{ij}(T)$ is the number of edges on the path from leaf i to leaf j in T. In [3], Day shows that choosing a minimizing tree for (2) from among the (2n-5)!! unrooted binary trees is an NP-hard problem. Yet it is desirable to find algorithms for minimizing (2) because of the following statistical interpretation:
Theorem 1.2 ([4]) Let T be a binary tree with edge lengths given by $l: E(T) \to \mathbb{R}_+$ and let d be a dissimilarity map. If the variance of $d_{ij}$ is proportional to $2^{t_{ij}(T)}$ (i.e., $\mathrm{var}(d_{ij}) = c\, 2^{t_{ij}(T)}$ for some constant c), then (2) is the minimum variance tree length estimator of T. Moreover, the weighted least squares tree length estimate is equal to (2).
This result provides a weighted least squares rationale for the minimization of (2), and highlights the importance of understanding the balanced minimum evolution polytope:
Definition 1.3 The balanced minimum evolution polytope is the convex hull of the vectors $v_T \in \mathbb{R}^{\binom{n}{2}}$, where T ranges over all trees with n leaves and $(v_T)_{ij}$ is as in (2).
Example. There are four trees with n = 4 leaves: the 3 binary trees and the star-shaped tree. In this case the balanced minimum evolution polytope is the convex hull of the vectors (coordinates ordered $(d_{12}, d_{13}, d_{14}, d_{23}, d_{24}, d_{34})$):
$$v_{12|34} = (\tfrac12, \tfrac14, \tfrac14, \tfrac14, \tfrac14, \tfrac12), \quad v_{13|24} = (\tfrac14, \tfrac12, \tfrac14, \tfrac14, \tfrac12, \tfrac14), \quad v_{14|23} = (\tfrac14, \tfrac14, \tfrac12, \tfrac12, \tfrac14, \tfrac14).$$
The balanced minimum evolution polytope in this case is a triangle in $\mathbb{R}^6$. Note that the vector of the star-shaped tree, $(\tfrac13, \dots, \tfrac13)$, is in the interior of the triangle.
For any dissimilarity map, the trees which minimize (2) will be vertices of the balanced minimum evolution polytope; these are always the binary trees. In fact, for such trees the value of (2) equals the weighted least squares tree length; this is Pauplin's formula [5].
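To make the reconstructed criterion (2) concrete, the following Python sketch evaluates it from a table of pairwise topological distances; the dictionary-based tree representation is our own illustrative choice, not notation from the paper.

```python
from itertools import combinations

def bme_length(d, t):
    """Balanced minimum evolution criterion (2), Pauplin's formula:
    sum over leaf pairs {i, j} of 2**(1 - t_ij) * d_ij, where t_ij is
    the number of edges on the tree path between leaves i and j."""
    return sum(2.0 ** (1 - t[p]) * d[p] for p in d)

# Quartet tree 12|34 with all five branch lengths equal to 1: the tree
# metric has d = 2 within each cherry and d = 3 across the split.
cherries = {frozenset({1, 2}), frozenset({3, 4})}
pairs = [frozenset(p) for p in combinations(range(1, 5), 2)]
d = {p: 2.0 if p in cherries else 3.0 for p in pairs}
t = {p: 2 if p in cherries else 3 for p in pairs}
print(bme_length(d, t))   # 5.0 = the total branch length, as Pauplin's formula predicts
```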
The BME polytope lies in $\mathbb{R}^{\binom{n}{2}}$ and has dimension $\binom{n}{2} - n$. The normal fan [6] of the BME polytope gives rise to BME cones which form a polyhedral subdivision of the space of dissimilarity maps $\mathbb{R}_+^{\binom{n}{2}}$. They describe, for each tree T, those dissimilarity maps for which T minimizes (2). We provide an introduction to the necessary polyhedral combinatorics in Section 2, and discuss the polytope in more detail in Section 3.
The neighbor-joining algorithm is a greedy algorithm for finding an approximate solution to (2). We omit a detailed description of the algorithm here - readers can consult [2] - but we do mention the crucial fact that the selection criterion is linear in the dissimilarity map [7]. Thus, the NJ algorithm will pick pairs of leaves to merge in a particular order and output a particular tree T if and only if the pairwise distances satisfy a system of linear inequalities, whose solution set forms a polyhedral cone in $\mathbb{R}_+^{\binom{n}{2}}$. We call such a cone a neighbor-joining cone, or NJ cone. The NJ algorithm will output a particular tree T if and only if the distance data lies in a union of NJ cones. In Section 4 we show that the NJ cones partition $\mathbb{R}_+^{\binom{n}{2}}$, but do not form a fan. This has important implications for the behavior of the NJ algorithm.
Our main result is a comparison of the neighbor-joining cones with the normal fan of the balanced minimum evolution polytope. This means that we characterize those dissimilarity maps for which neighbor-joining, despite being a greedy algorithm, is able to identify the balanced minimum evolution tree. These results are discussed in Section 5.
Polyhedral preliminaries
In this section we will introduce some of the elementary polyhedral combinatorics necessary for this paper. For more details see [8].
A set $A \subseteq \mathbb{R}^d$ is called affine if it can be written as $A = a + V = \{a + v : v \in V\}$, where $V \subseteq \mathbb{R}^d$ is a subspace and $a \in A$. V is uniquely determined by A, and the affine dimension of A is defined to be the dimension of V.
Given two distinct points $x, y \in \mathbb{R}^d$, the set $[x, y] = \{\alpha x + (1 - \alpha) y : 0 \le \alpha \le 1\}$ of all convex combinations of x and y is called the interval with endpoints x and y.
Given a matrix $M \in \mathbb{R}^{N \times d}$ and a vector $b \in \mathbb{R}^N$, the set $P = \{x \in \mathbb{R}^d : Mx \le b\}$ is called a polyhedron. The convex hull of a finite set of points in $\mathbb{R}^d$ is called a polytope, and the Weyl-Minkowski Theorem says that a polytope is a bounded polyhedron [9]. Polytopes are familiar objects in geometry. In the plane, polytopes are precisely the convex polygons. In $\mathbb{R}^3$, examples of polytopes are shown in Figure 1. The dimension dim P of a polytope or polyhedron P is defined to be the dimension of the affine hull of P.
A (d-1)-dimensional affine set in $\mathbb{R}^d$ is called a hyperplane, and every hyperplane can be represented as $\{x \in \mathbb{R}^d : n \cdot x = b\}$ for some $n \in \mathbb{R}^d$, $n \ne 0$, and $b \in \mathbb{R}$, where $n \cdot x$ is the dot product of n and x. We call n a normal vector of this hyperplane.
A polyhedron C is a cone if it can be written as $C = \{\lambda_1 y_1 + \cdots + \lambda_N y_N : \lambda_1, \dots, \lambda_N \ge 0\}$ for some $y_1, \dots, y_N \in \mathbb{R}^d$. This is equivalent to the existence of a matrix M such that $C = \{x \in \mathbb{R}^d : Mx \le 0\}$.
Figure 1. The four types of facets of P.
Given a face F of a polytope P, the normal cone N(F) is the set of all vectors c for which $c \cdot v = \max_{x \in P} c \cdot x$ for all $v \in F$.
The collection of relative interiors of normal cones of faces of P partitions $\mathbb{R}^d$, and for each face we have dim(F) + dim(N(F)) = d. The collection of normal cones of faces of P is called the normal fan of P.
Given a polyhedron P, a vector v lies in the lineality space of P if $y + cv \in P$ for all $y \in P$ and $c \in \mathbb{R}$; the set of all such vectors is a subspace, called the lineality space of P. If a polyhedron P has lineality space V, we can let V' be the orthogonal complement of V (i.e., $V \oplus V' = \mathbb{R}^d$) and consider the polyhedron $P' := P \cap V'$, which has lineality space {0}.
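As a small illustration of the generator description of a cone, membership can be tested with a feasibility linear program. A Python sketch follows; SciPy's linprog is only one possible solver, and the example generators are our own.

```python
import numpy as np
from scipy.optimize import linprog

def in_cone(y_gens, x):
    """Test whether x lies in the cone {sum_i lam_i * y_i : lam_i >= 0}
    spanned by the rows of y_gens, by solving a feasibility LP."""
    Y = np.asarray(y_gens, dtype=float)       # shape (N, d)
    res = linprog(c=np.zeros(len(Y)),         # any feasible point will do
                  A_eq=Y.T, b_eq=np.asarray(x, dtype=float),
                  bounds=[(0, None)] * len(Y), method="highs")
    return bool(res.success)

print(in_cone([[1, 0], [1, 1]], [2, 1]))   # True:  (2,1) = 1*(1,0) + 1*(1,1)
print(in_cone([[1, 0], [1, 1]], [0, 1]))   # False: would need a negative coefficient
```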
The balanced minimum evolution polytope
Throughout this paper we work with binary unrooted trees on n leaves labeled {1, ..., n}. Such trees are also known as phylogenetic X-trees. We refer the reader to [10] for more detail about such trees, and for related definitions. Recall there are 2n - 3 edges in a binary unrooted tree with n leaves. For a fixed tree topology T, let $B_T$ be the $\binom{n}{2} \times (2n-3)$ matrix, with rows indexed by pairs of leaves and columns indexed by edges in T, defined by $(B_T)_{\{i,j\},e} = 1$ if edge e lies on the path from leaf i to leaf j, and $(B_T)_{\{i,j\},e} = 0$ otherwise. For example, for the tree in Figure 2, the rows of $B_T$ are indexed by the pairs of leaves (1,2), (1,3), (2,3), (1,4), (2,4), (3,4), (1,5), (2,5), (3,5), (4,5) and its columns are indexed by the edges (1,a), (2,a), (3,b), (4,c), (5,c), (a,b), (b,c), where a is an internal node adjacent to leaves 1 and 2, c is an internal node adjacent to leaves 4 and 5, and b is an internal node adjacent to leaf 3 and to nodes a and c. Given edge lengths $l : E(T) \to \mathbb{R}_+$ we let b be the vector with components l(e) as e ranges over E(T). Any dissimilarity map d (encoded as a column vector) can now be written as $d = B_T\, b + e$, where e is a vector of "error" terms that are zero when d is a tree metric.
The weighted least squares solution for the edge lengths b, assuming a variance matrix V with zero off-diagonal entries (as defined in the introduction) and dissimilarity map d, is given by b̂ = (B_T^t V^{-1} B_T)^{-1} B_T^t V^{-1} d, where ^t denotes matrix transpose. The length of T with respect to the least squares edge lengths is then l_T(d) = 1 · b̂ = v_T · d, where v_T = 1 (B_T^t V^{-1} B_T)^{-1} B_T^t V^{-1} and 1 is the vector of all 1's. We call the vectors v_T the balanced minimum evolution vectors (or BME vectors). In the case of Figure 2, with pairs ordered as above, the BME vector is v_T = (1/2, 1/4, 1/4, 1/8, 1/8, 1/4, 1/8, 1/8, 1/4, 1/2). The BME method is equivalent to minimizing the linear functional v_T · d over all BME vectors for all tree topologies T. The BME polytope is the convex hull of all BME vectors in ℝ^(n(n−1)/2). The following facts follow from the definition of the balanced minimum evolution tree: the vertices of the BME polytope are the BME vectors of binary trees; the BME vector of the star phylogeny lies in the interior of the BME polytope, and all other BME vectors lie on the boundary of the BME polytope.
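The entries of v_T can also be read off the tree topology: by Pauplin's formula (equivalent to the least-squares expression above), (v_T)_ij = 2^(1−p_ij), where p_ij is the number of edges on the path between leaves i and j. A small sketch for the tree of Figure 2 (our adjacency encoding, not from the source):

```python
from itertools import combinations
from collections import deque

# Tree of Figure 2: cherries {1,2} and {4,5}, leaf 3 attached to the middle node b.
adj = {
    1: ["a"], 2: ["a"], 3: ["b"], 4: ["c"], 5: ["c"],
    "a": [1, 2, "b"], "b": [3, "a", "c"], "c": [4, 5, "b"],
}

def path_length(u, v):
    """Number of edges on the unique u-v path (BFS; the graph is a tree)."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return dist[x]
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    raise ValueError("nodes not connected")

# Pauplin coefficients (v_T)_ij = 2^(1 - p_ij)
v_T = {p: 2.0 ** (1 - path_length(*p)) for p in combinations(range(1, 6), 2)}
print(v_T[(1, 2)], v_T[(1, 4)])  # 0.5, 0.125
print(sum(v_T.values()))         # 2.5 = n/2: each leaf's coefficients sum to 1
```

The last line mirrors the shift-vector property used below: the coefficients incident to any fixed leaf sum to 1.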
The normal fan of a BME polytope partitions the space of dissimilarity maps into cones, one for each tree. We call these BME cones. They completely characterize the BME method: T is the BME tree topology if and only if the dissimilarity map D lies in the BME cone of T.
For a leaf node a in a binary unrooted tree, the shift vector s_a is the dissimilarity map in which a is at distance 1 from all other leaves, and all other distances are 0 (see [11] for the description of shift vectors). According to [5], for a tree T, (v_T)_ab gives the probability that a will immediately precede b in a random circular ordering of T. Thus the dot product of a BME vector with a shift vector must necessarily equal 1, and in fact the lineality space of the BME cones is spanned by shift vectors. So when we describe a BME cone we will always describe just the pointed component, i.e., modulo the lineality space of shift vectors.
As part of our computational study, we computed the BME polytope and BME cones for trees with n = 4, 5, 6, 7, 8 leaves using the software polymake [12]. In Table 1 we display some of the components of f-vectors we were able to compute. This provides information about the polytopes: Recall that the ith component of the f-vector of a polytope is the number of faces of dimension i -1. For example, the first component in each vector in Table 1 is the number of 0-dimensional faces (vertices) of the corresponding BME polytope, i.e., the number of binary trees.
We found that the edge graph of the BME polytope is the complete graph for n = 4, 5, 6 which means that for every pair of trees T 1 and T 2 with the same number (≤ 6) of leaves, there is a dissimilarity map for which T 1 and T 2 are (the only) co-optimal BME trees. However, for n = 7, the BME polytope does in fact have one combinatorial type of non-edge. Namely, two bifurcating trees with seven leaves and three cherries (two leaves adjacent to the same node in the tree) will form a non-edge if and only if they are related by two leaf exchanges as depicted in Figure 3. This completely characterizes the non-edges for n = 7. It is an interesting open problem to characterize the non-edges of the BME polytope in general.
Neighbor-joining cones
The neighbor-joining algorithm takes as input a dissimilarity map and outputs a tree. The tree is constructed "one cherry at a time". In each step the algorithm chooses a pair of leaves a and b that minimize the Q-criterion, which is defined by the formula Q(a, b) = (n − 2)d(a, b) − Σ_k d(a, k) − Σ_k d(b, k).
Figure 3. The non-edges on the BME polytope for n = 7. Two trees will form a non-edge if and only if they are trees that have three cherries, and differ by the pair of leaf exchanges shown in the figure. There are two ways to perform each leaf-exchange, so each binary tree with three cherries is not adjacent to 4 trees.
For each order σ in which cherries can be picked, the set of dissimilarity maps for which NJ picks cherries in the order σ forms a polyhedral cone C_σ; the intersection of two of these cones is a subset - but not necessarily a face - of the boundary of each of the cones. Given an input from the interior of C_σ, the NJ algorithm will pick the cherries in the order σ and output the corresponding tree. For inputs d on the boundary of one (and therefore at least two) of the cones, the order in which NJ picks cherries is undefined, because at some point there will be two cherries both of which have minimal Q-criterion. We call the cones C_σ neighbor-joining cones, or NJ cones. See [11] for the hyperplane representation of the NJ cones and descriptions of how to construct each cone.
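For concreteness, here is a minimal sketch (our illustration, not the authors' code) of one Q-criterion step on a distance matrix stored as a nested dict; it selects the cherry that NJ would merge first:

```python
import itertools
import random

def q_criteria(d, taxa):
    """Q(a,b) = (n-2)*d[a][b] - sum_k d[a][k] - sum_k d[b][k] for all pairs."""
    n = len(taxa)
    row = {a: sum(d[a][k] for k in taxa if k != a) for a in taxa}
    return {(a, b): (n - 2) * d[a][b] - row[a] - row[b]
            for a, b in itertools.combinations(taxa, 2)}

taxa = list(range(1, 6))
random.seed(0)
d = {a: {} for a in taxa}
for a, b in itertools.combinations(taxa, 2):
    d[a][b] = d[b][a] = random.uniform(1.0, 2.0)  # toy symmetric distances

q = q_criteria(d, taxa)
cherry = min(q, key=q.get)  # the pair NJ merges first; a tie in the minimum
print(cherry, q[cherry])    # means d lies on the boundary of two NJ cones
```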
Example. There is only one unlabeled binary tree with 5 leaves and there are 15 distinct labeled trees. For each labeled tree, there are two ways in which a cherry might be picked by the NJ algorithm in the first step. For instance, neighbor-joining applied to any dissimilarity map in C 12,45 or C 45,12 will produce the tree in Figure 2. There are a total of 30 NJ cones for n = 5.
We note that all Q-criteria for shift vectors equal −2, so adding any linear combination of shift vectors to a dissimilarity map does not change the relative values of the Q-criteria. Also, after picking a cherry, the reduced distance matrix of a shift vector is again a shift vector. Thus, for any input vector d, the behavior of the NJ algorithm on d will be the same as on d + s if s is any linear combination of shift vectors. In fact, it can be shown that the lineality space of the NJ cones is spanned by shift vectors, just as for the BME cones [11]. So from now on, when we refer to NJ cones, we will mean the pointed portion of the cone, i.e., modulo the lineality space.
Theorem 4.1
The NJ cones in ℝ^(n(n−1)/2) do not form a fan. In particular, they are not the normal fan of any polytope for n ≥ 5.
The theorem follows from the fact that the NJ cones have rays which are on the boundary of other cones but are not rays of them. Thus there are pairs of cones whose intersection is not a face of both cones. We describe the case n = 5 in detail; it suffices to prove the theorem.
We begin by noting that all of the NJ cones are equivalent under the action of the symmetric group on five elements (S_5), where an element of S_5 permutes the five taxa or, equivalently, the rows and columns of the input distance matrix. Each NJ cone is defined by inequalities that are implied by the Q-criteria as the NJ algorithm picks the two cherries. The cones are 5-dimensional, and their intersection with a suitable hyperplane yields a four-dimensional polytope P. The f-vector of P is (14, 32, 27, 9).
The 30 cones share many of their rays, giving a total of 82 rays which decompose into three orbits under the action of S 5 . We refer to the types of rays as Type I, Type II and Type III. Each cone has 6 rays of type I, 4 rays of type II and 4 rays of type III. Each ray of type I is the common ray of 3 cones, and belongs to 2 other cones of which it is not a ray (i.e. it is in the interior of a face). Note that this implies that the cones cannot form a fan. The type II rays are contained in 10 cones each, and the type III rays in 12. Type II and III rays are rays of all cones which contain them. For the cone C 23,45 , this information is tabulated in Table 2.
We note that the rays of NJ cones are minimal intersections of NJ cones, and thus give dissimilarity maps for which the NJ algorithm is least stable.
Example. Consider two alignments of 5 sequences that are to be used to construct a tree. These may consist of two different genes and, for each of them, the homologs among 5 genomes. Suppose that distances are estimated using the Jukes-Cantor correction [6,13], d_ij = −(3/4) ln(1 − (4/3) f_ij), where f_ij is the fraction of different nucleotides between sequences i and j in the first set and g_ij is the corresponding fraction in the second set.
Given the fractions f_ij and g_ij, we obtain distance matrices D_1 and D_2. Notice that the vector representation of D_1 lies in the cone C_12,45 and the vector representation of D_2 lies in the cone C_45,12. Thus NJ returns the same tree topology for both D_1 and D_2.
If we concatenate the alignments and combine the data to build one tree, then we estimate the distances using the average of f and g:
Table 2. Rays of the NJ cone C_23,45. Each ray is determined by a vector shown in the second column. The third column shows, for each ray, which cones it belongs to. If a cone is starred then the ray is on the boundary of that cone, but not a ray of it.
Using this frequency matrix we obtain the distance matrix D_3 via the Jukes-Cantor correction. However, the vector representation of D_3 lies in the cone C_24,15, which means that neighbor-joining returns a different tree topology for D_3. This example provides a distance-based reconstruction analog to the recent mixture model results of [14].
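A sketch of the distance computation in this example, with made-up mismatch fractions (the original matrices are not reproduced here); the key point is that the Jukes-Cantor correction is convex, so the concatenated-alignment distance is not the average of the two gene distances:

```python
import math

def jukes_cantor(f):
    """Jukes-Cantor corrected distance for a mismatch fraction f < 3/4."""
    return -0.75 * math.log(1.0 - 4.0 * f / 3.0)

f_ij, g_ij = 0.10, 0.30                  # hypothetical fractions for one pair (i, j)

d1 = jukes_cantor(f_ij)                  # entry of D_1 (first gene)
d2 = jukes_cantor(g_ij)                  # entry of D_2 (second gene)
d3 = jukes_cantor((f_ij + g_ij) / 2.0)   # entry of D_3 (concatenated alignment)

# By convexity d3 <= (d1 + d2) / 2: concatenation bends the dissimilarity map,
# which is how D_3 can land in a different NJ cone than D_1 and D_2.
print(d1, d2, d3, (d1 + d2) / 2.0)
```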
An analysis of the rays of the NJ cones suffices to prove Theorem 4.1, but the facet structure of each cone is also informative, and we were able to obtain complete information for n = 5. The types of facets constituting each cone are shown in Figure 1. We used our description of the NJ cones to examine the l_2 distance between tree metrics and the boundaries of NJ cones. Without loss of generality, by shifting the leaves in the cherries, we can assume the tree metric is of the form D_T(α, β), where α and β are the internal branch lengths, α ≥ 0, β ≥ 0, and α + β = 1. It is easy to see that D_T ∈ C_12,45, confirming the consistency of neighbor-joining. The cone C_12,45 has 9 facets, but we may ignore one of them (namely the one shared with C_45,12) because it is shared with a cone resulting in the same tree topology. The distance to the closest of the remaining eight facets is given by expression (4). The l_2 radius is obtained by dividing (4) by min(α, β), so the minimum is attained at α = β = 1/2.
Theorem 4.2
The l_2 radius of neighbor-joining for 5 taxa is ≈ 0.5773. This is slightly larger than the l_∞ radius of 1/2 given by Atteson's theorem [15]. It is an interesting problem to compute the l_2 radius for neighbor-joining with more taxa.
The description of the NJ cones we have provided can also be used in practice to evaluate the robustness of the algorithm when used with a specific dataset. For n = 5, we examined data simulated from subtrees of the two tree models T_1 and T_2 in [16] with the Jukes-Cantor and Kimura 2-parameter models [6]. For each of 40,000 simulations, we calculated the l_2-distance between the NJ cone of the given tree and the maximum likelihood estimates of the pairwise distances (see supplementary material). These show that in many cases the maximum likelihood estimates lie very close to the boundary. In such cases, one must conclude that the NJ tree is possibly incorrect due to the variance in the distance estimates.
Optimality of the neighbor-joining algorithm
In order to study the optimality of the neighbor-joining algorithm, we compared the BME cones with the NJ cones. Such a comparison involves intersecting the cones with the (n(n−1)/2 − 1)-sphere (in the first orthant) and then studying the volumes of their intersections by computing the standard Euclidean volume of the resulting surfaces. These surfaces are intersections of closed hemispheres, i.e., spherical polytopes. Computing Euclidean volumes of (non-spherical) polytopes is a standard problem that is usually solved by triangulating and summing the volumes of the simplices. However, there has been no publicly available software for computing or approximating volumes of spherical polytopes of dimension > 3 using this method. One possible reason for this is that in higher dimensions the volumes of spherical simplices are given by complicated analytical formulas [17] whose computational complexities are unknown.
We implemented two approaches in MATLAB (using polymake as a preprocessing step) for approximating the volume of a spherical polytope P. One approach is trivial: it simply samples uniformly from the sphere, and counts how many points are inside P. This approach is particularly suitable if P has large volume, or if many spherical polytopes are being simultaneously measured which partition the sphere, as is the case for NJ and BME cones. The second approach is suitable for spherical polytopes having small volume. We used this approach for computing the volumes of consistency cones [18] which we discuss briefly in the Discussion section. Our main results on the optimality of NJ for n = 5, 6, 7, 8 taxa are summarized in Table 3. Each row of the table describes one type of tree. Trees are classified by their topology. A k-cherry tree is a tree with k cherries. The NJ volume column shows the volume of that part of the positive orthant of dissimilarity maps for which the NJ tree is of the specified type. Similarly, the BME volume column shows the same statistic for BME trees. Finally, NJ accuracy shows the fraction of the BME cone that overlaps the NJ cone. In other words, NJ accuracy is a measure of how frequently NJ will find the BME tree for a dissimilarity map that is chosen at random.
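A minimal sketch of the first (rejection-sampling) approach, assuming the cone is given by a matrix A of facet normals with C = {x : Ax ≤ 0}; sample directions uniformly on the sphere and count membership:

```python
import numpy as np

def spherical_volume_fraction(A, dim, n_samples=100_000, seed=0):
    """Estimate the fraction of the unit sphere lying in {x : A @ x <= 0}.

    Uniform points on the sphere are obtained by normalizing Gaussian
    samples; the estimate is the hit frequency.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_samples, dim))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    hits = np.all(x @ A.T <= 0, axis=1)
    return hits.mean()

# Toy check: the nonpositive orthant in R^3 (A = I) occupies 1/8 of the sphere.
A = np.eye(3)
print(spherical_volume_fraction(A, dim=3))  # ~0.125
```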
We also classified and measured the intersections of NJ and BME cones in which the NJ tree differs from the BME tree. Many of these intersection cones are equivalent under the action of S_n on the leaf labels, particularly as the stabilizer of the BME tree permutes the leaf labels in the NJ tree. In fact, for n = 5 taxa there are only three types of mistakes that the NJ algorithm can make when it fails to reproduce the BME tree. These are depicted in Figure 4, together with the normalized spherical volumes of the corresponding NJ/BME intersection cones.
Figure 4. Frequencies of all three possible types of NJ trees that may be picked instead of the BME tree for n = 5 leaves. Neighbor-joining agrees with the BME tree 98.06% of the time.
Figure 4 can be interpreted as follows: for a random dissimilarity map, if the NJ algorithm does not produce the BME tree, then with probability 0.67 it produces the tree on the right, and if not then it almost always produces the tree in the middle. This tree differs from the BME tree significantly. A surprising result is that the tree on the left is almost never the NJ tree. We believe that a deeper understanding of the "mistakes" NJ makes when it does not optimize the balanced minimum evolution criterion may be important in interpreting the results, especially for large trees.
We also computed analogous results for n = 6, 7, 8, 9, 10. They are available, together with the software for computing volumes at [19].
Discussion
Theoretical studies of the neighbor-joining algorithm have focused on statistical consistency and the robustness of the algorithm to small perturbations of tree metrics. The paper by [20] established the consistency of NJ, that is, if D T is a tree metric then NJ outputs the tree T. This result was then extended in [15] and more recently by [18] who show that if D is "close" to a tree metric D T for some T, then NJ outputs T on input D.
Our results provide a different perspective on the NJ algorithm. Namely, we address the question of the accuracy of the greedy approach for the underlying linear programming problem of BME optimization. This led us to the study of BME polytopes, and the combinatorics of these polytopes is interesting in its own right; see [21] for an example of a local search approach to finding minimum evolution trees.
Similarly, a better understanding of the combinatorics of the NJ cones will lead to a clearer view of the strengths and weaknesses of the neighbor-joining algorithm. A basic problem is the following:
Question 6.2
Find a combinatorial description of the NJ cones for general n. How many facets/rays are there?
Our computational results lend new insights into the performance of the NJ and BME algorithms for small trees. We have measured the relative sizes of cones for different shapes of trees, and measured the frequencies of all combinatorial types of discrepancies between BME and NJ trees. In particular, we have observed that the NJ algorithm is least likely to reproduce the BME tree when the BME tree is the caterpillar tree.
Conjecture 6.3
For n > 6, it is the caterpillar tree that yields the smallest ratio of spherical cone volumes vol(NJ ∩ BME)/vol(BME), where NJ is the spherical cone volume of the union of the NJ cones and BME is the spherical cone volume of the BME cone for a fixed tree. In other words, the caterpillar tree is the most difficult BME tree topology for the NJ algorithm to reproduce.
Another problem we believe is very important is to extend the results shown in Figure 4 to large trees. In other words, to understand how neighbor-joining can fail when it does not succeed in finding the balanced minimum evolution tree.
Question 6.4
What tree topologies is neighbor-joining likely to pick when it fails to construct the balanced minimum evolution tree?
There are many other interesting cones related to distance-based methods that can be considered in this context. For example, in [18], it is shown that the quartet consistency condition is sufficient for neighbor-joining to reconstruct a tree from a dissimilarity map for n ≤ 7 leaves. The quartet consistency conditions define polyhedral cones (consistency cones); see [18] for details. For n = 4 taxa the consistency cones cover all of the space of dissimilarity maps, showing that quartet consistency explains the behavior of neighbor-joining for all dissimilarity maps. Using the second method outlined in Section 4, we succeeded in computing the volumes of the consistency cones intersected with the first orthant of the sphere for n = 5 taxa (there are 15 such cones, one for each tree). Such computations are pushing the boundary of computational polyhedral geometry. For n ≥ 6 taxa, triangulating a consistency cone is too unwieldy, although we are confident that spherical volumes could still be computed using polynomial time hit-and-run sampling methods for volume approximation [22]. Such methods are complicated and not yet implemented.
Finally, we comment on the example in Section 3 that shows how different alignments may lead to the same neighbor-joining tree, whereas the neighbor-joining tree constructed from a concatenation of the alignments is different. This result has significant implications for studies where species trees are constructed from multiple gene families by combining the data.
Multiset Analysis of Consequences of Natural Disasters Impacts on Large-Scale Industrial Systems
The paper is dedicated to a new approach to the sustainability/vulnerability assessment of distributed industrial systems (IS). This approach is based on unitary multiset grammars (UMG) as a flexible and convenient tool designed specially for large-system analysis and optimization. The UMG description of the IS technological base, as well as the multiset representation of an order completed by the IS, of its resource base and of an impact on the IS, are presented. A criterion for recognition of IS sustainability to an impact is formulated. A UMG extension for the representation of natural disaster impacts (NDI) is introduced, and a criterion for recognition of IS sustainability to an NDI is also presented. The solution of the reverse problem, concerning the part of the order which may be completed by the affected IS, is described. Implementation issues are considered.
Introduction
Modern industrial systems (global, regional, national, transnational, etc.) are complicated, distributed, strongly interconnected networks of local facilities producing, transporting and utilizing various products and resources. Because of this strong interconnectivity and the multiple chain effects associated with it, these systems are often vulnerable to natural disasters in such a way that the destruction of a single facility may cause consequences far beyond the area or place where this facility is located. So one of the most topical, important and at the same time difficult problems of modern system analysis is the development of widely available mathematical toolkits, models and computer technologies able to provide decision makers from government and corporate staffs with comprehensive and precise prognostic information about such consequences - first of all, whether the system affected by a natural disaster impact (NDI) is vulnerable or sustainable to it.
An industrial system (IS) in the general case contains a technological base (various industrial complexes and devices) producing (manufacturing) various objects (cars, computers, buildings, etc.) and utilizing various resources (materials, microchips, etc.) necessary for this process (Fig. 1a). A natural disaster destroys some segments of the technological and resource bases, which is why the amounts of objects produced decrease (Fig. 1b).
From this we may formulate a criterion for IS robustness/vulnerability assessment (Fig. 2).
If the amounts of objects produced by the IS before and after a natural disaster impact are equal, then the IS is sustainable to the NDI; otherwise it is vulnerable to the impact. In the latter case there is also one more important "reverse" problem: what part of the work may be executed by the vulnerable IS whose facilities are affected by the NDI?
Let us consider the formulated problems in more detail.
As was said above, an IS includes industrial (manufacturing) devices (complexes). Each such device may be represented as a "black box" B with m inputs and one output (Fig. 3). Every i-th input is marked by a_i - the name of an object (item, resource) type - and n_i - the amount (volume, quantity) of this object. So n_1 objects (of type) a_1, ..., n_m objects (of type) a_m are required for manufacturing one object (of type) a. The IS as a whole may be represented by k such "black boxes" B_1, ..., B_k interconnected by the "logistical ring" L, providing transport of the objects manufactured by devices from their outputs to the inputs of other devices, thus forming an integrated manufacturing process (Fig. 4).
This process, however, is driven by orders, whose sources are external systems or persons consuming the objects produced by the IS. Each order q in the general case defines the types and quantities of objects to be manufactured (n_1 objects of type a_1, ..., n_l objects of type a_l in Fig. 4). On the other side, the IS itself consumes resources, which are represented in Fig. 4 as a set I containing n_{l_1} objects (of type) a_{l_1}, ..., n_{l_p} objects (of type) a_{l_p}. By this, the IS along with an order may also be considered as a "black box", i.e., a device producing the object set q after being applied to the initial resource set I. Any impact on the IS eliminates some initial resources and devices entering this system, thus reducing its producing capability.
Let us emphasize that there are no static interconnections between the devices B_1, ..., B_k, and the structure representing an IS in the described manner is not a graph, which is usually considered a canonical form for the description and modelling of such systems (Burkart, 1997; Mills and Dabrowski, 2006; Hespanha et al., 2007; Levin et al., 2009; Mills et al., 2012; Carreras et al., 2005; Mills et al., 2011; Lade and Gross, 2012; Scheffer, 2009; Sheffer et al., 2009; D'Andrea and Dullerud, 2003; Dullerud and Paganini, 2005; Goh and Yang, 2002; Horn and Johnson, 1991; Jadbabiae et al., 2003; Klavins et al., 2006; Mesbahi and Egerstedt, 2010; Mills and Dabrowski, 2008; Olfati-Saber et al., 2007; Stewart, 1994). Here every order completion provides its own tree, including manufacturing operations and transfers of their results among the devices incorporated into this process. Moreover, there may be many variants of each order completion, because the IS may contain devices manufacturing similar objects in different alternative ways. So every order completion process generates, in fact, its own cooperation, or contract set, providing the manufacturing of the necessary items.
The described formalization is basic for a primary consideration of the direct and reverse problems verbally formulated above. But it is not sufficient for a strict mathematical description and, further, for the design of the necessary algorithms. A constructive mathematical toolkit providing solutions of the mentioned problems is needed. However, multiple attempts to apply well-known general-purpose tools based on vector-matrix calculus, graph theory, Markovian chains, Petri nets, etc. (Burkart, 1997; Mills and Dabrowski, 2006; Hespanha et al., 2007; Levin et al., 2009; Mills et al., 2012; Carreras et al., 2005; Mills et al., 2011; Lade and Gross, 2012; Scheffer, 2009; Sheffer et al., 2009; D'Andrea and Dullerud, 2003; Dullerud and Paganini, 2005; Goh and Yang, 2002; Horn and Johnson, 1991; Jadbabiae et al., 2003; Klavins et al., 2006; Mesbahi and Egerstedt, 2010; Mills and Dabrowski, 2008; Olfati-Saber et al., 2007; Stewart, 1994) in the described area were not successful, for reasons of problem dimension, difficulties of precise representation of IS manufacturing processes, and computational complexity. This led us to a new approach which is, in our opinion, more flexible and simple in application to large-scale IS representation, analysis and synthesis, as well as more efficient from the computational complexity point of view, especially in highly parallel computing environments. This new approach is strongly based on the theory of recursive multisets (MS) (Sheremet, 2010, 2011) - more precisely, on multiset grammars (MG) - and is therefore called "multigrammatical". Multiset grammars, namely one of their possible dialects - constraint multiset grammars (CMG) - were first proposed in (Marriott, 1994; Marriott and Meyer, 1997; Marriott, 1996) as a tool for recognition of visual objects with complex structure. CMG may also be considered one of the problem-oriented constraint logic programming languages (Marriott and Stucky, 1998; Apt, 2003; Fruhkwirth and Abdennadher, 2003).
The authors' main contribution to this area is the so-called unitary multiset metagrammars (UMMG) (Sheremet, 2010, 2011), which are a specific knowledge representation model providing deep integration and convergence of classical optimization theory and modern knowledge engineering. As shown in (Sheremet, 2010, 2011), various subsets (subclasses) of the UMMG family provide efficient solutions of various problems from the system analysis and classical optimization areas.
The problem considered in this paper was announced in (Sheremet, 2016), and it is solved mainly by applying one of the simplest classes of the UMMG family - unitary multiset grammars (UMG). General-form multigrammars are also applied to the study of the mentioned reverse problem.
We present the main results of UMG/MG application to the problem verbally formulated above in the following way. Section 2 is dedicated to basic notions and definitions. The multiset/multigrammatical representation of the main elements of the approach (technological base, resource base, impact on the industrial system), as well as the IS sustainability (vulnerability) conditions, are described in Section 3, while Section 4 is dedicated to NDI representation and the transformation of the mentioned conditions to the corresponding form. The reverse problem, concerning recognition of the abilities of a vulnerable IS, is considered in Section 5. Implementation issues are discussed briefly in Section 6. Further directions of development of practical applications of the multigrammatical approach are described in the conclusion.
Basic Notions and Definitions
According to (Calude et al., 2001; Petrovskiy, 2003; Banatre and Le Metoyer, 1993; Singh et al., 2007; Red'ko et al., 2015), a multiset is a set of so-called multiobjects of the form n • a, which means that there are n identical objects (of type) a. A multiset is written as

v = {n_1 • a_1, ..., n_m • a_m},   (1)

where v is the multiset name, n_1 • a_1, ..., n_m • a_m are the multiobjects entering v, and the integer numbers n_1, ..., n_m are called the multiplicities of the objects a_1, ..., a_m, all of which are different. (1) means that there are n_1 objects (of type) a_1, ..., n_m objects (of type) a_m in the multiset v.
A unitary multiset grammar (UMG) is a couple S = <a_0, R>, where a_0 is the title object, and R is the scheme - a set of so-called unitary rules (UR) of the form

a → n_1 • a_1, ..., n_m • a_m,   (2)

where the object a is called the head and the list n_1 • a_1, ..., n_m • a_m the body of the UR. UMG were designed specially for the representation of hierarchical systems and objects, so the most valuable in the considered problem area is the so-called structural interpretation of unitary rules. According to the structural interpretation, UR (2) means that object a consists of n_1 objects a_1, ..., n_m objects a_m.
Example 1. Let S = <car, R>, where R contains two URs:

car → 1 • frame, 1 • engine, 4 • door, 4 • wheel,
engine → 1 • motor, 1 • accumulator, 1 • transmission.

As seen, a car consists of one frame, one engine, four doors and four wheels. The engine, in turn, contains a motor, an accumulator and a transmission. ▪

The technological interpretation of unitary rules extends the structural one in such a way that the UR

a → n_1 • a_1, ..., n_m • a_m, n′_1 • b_1, ..., n′_k • b_k

represents not only the structural components (spare parts) of object a, which are the multiobjects n_1 • a_1, ..., n_m • a_m, but also the resources necessary for assembling object a from these components, being the multiobjects n′_1 • b_1, ..., n′_k • b_k.

Example 2. Consider the UR

car → 1 • frame, 1 • engine, 4 • door, 4 • wheel, 400 • kwt, 300 • usd, 10 • min.

Here the first four multiobjects of the UR body are the same as in the first UR of Example 1, while the last three multiobjects define the amounts of electrical energy (400 kilowatt), money (300 dollars) and time (10 minutes of assembly line operation) necessary for assembling a car from its spare parts (frame, engine, 4 doors, 4 wheels). ▪

Unitary multiset grammars define systems (devices, processes, etc.) in an easily understood top-down manner, and the result of UMG application is a set of multisets, each containing multiobjects whose multiplicities define the total amounts of specific elementary (non-decomposed) objects present in the system (device) or utilized while manufacturing it. The degree of decomposition is regulated by the analyst applying this tool during problem solving.
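To make the technological interpretation concrete, here is a minimal sketch (our illustration, not the author's software) of a UMG as a dict of unitary rules, with a full decomposition of the title object into terminal multiobjects; `Counter` plays the role of a multiset:

```python
from collections import Counter

# Scheme R of Example 2: each head maps to the body of its (single) unitary rule.
R = {
    "car": Counter({"frame": 1, "engine": 1, "door": 4, "wheel": 4,
                    "kwt": 400, "usd": 300, "min": 10}),
    "engine": Counter({"motor": 1, "accumulator": 1, "transmission": 1}),
}

def decompose(v):
    """Apply unitary rules until only terminal objects remain (one rule per head)."""
    v = Counter(v)
    while True:
        head = next((a for a in v if a in R), None)
        if head is None:
            return v
        n = v.pop(head)
        # replace n*head by n copies of the rule body: v - {n*head} + n * body
        for b, m in R[head].items():
            v[b] += n * m

print(decompose({"car": 3}))
# e.g. Counter({'kwt': 1200, 'usd': 900, 'door': 12, 'wheel': 12, 'min': 30,
#               'frame': 3, 'motor': 3, 'accumulator': 3, 'transmission': 3})
```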
To define UMG semantics formally, i.e., the process of generation of the set of multisets V_S represented by a UMG S, we shall use some operations and relations on multisets (Sheremet, 2010, 2011; Petrovskiy, 2003). Below, +, − and * are the symbols of the multiset addition, subtraction and multiplication operations correspondingly, defined as follows:

v + v′ = {(n_a + n′_a) • a : a occurs in v or v′},   (4)
v − v′ = {(n_a − n′_a) • a : a occurs in v and n_a > n′_a},   (5)
n * v = {(n × n_a) • a : a occurs in v},   (6)

where n_a and n′_a are the multiplicities of object a in v and v′ respectively, and the symbols +, − and × in the right-hand sides denote the usual arithmetic operations. In (4)-(5) we assume that the absence of object a in a multiset is equivalent to its zero-value multiplicity.
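In Python, these three operations map directly onto `collections.Counter` (a sketch; `Counter` discards non-positive counts on subtraction, matching the truncated difference above):

```python
from collections import Counter

def ms_add(v, w):      # v + v'
    return v + w

def ms_sub(v, w):      # v - v' (truncated: objects with n_a <= n'_a disappear)
    return v - w

def ms_mul(n, v):      # n * v
    return Counter({a: n * m for a, m in v.items()})

v = Counter({"a1": 3, "a2": 1})
w = Counter({"a1": 1, "a3": 2})
print(ms_add(v, w))   # Counter({'a1': 4, 'a3': 2, 'a2': 1})
print(ms_sub(v, w))   # Counter({'a1': 2, 'a2': 1})
print(ms_mul(2, v))   # Counter({'a1': 6, 'a2': 2})
```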
Below we shall designate by β(v) the set of all objects occurring in the multiset v (it is called the basis of the multiset):

β(v) = {a : n • a ∈ v, n > 0}.

The symbol "⊆" denotes the multiset inclusion relation; if v′ ⊆ v, then v′ is called a submultiset of v. Formally,

v′ ⊆ v ⇔ (∀ n′ • a ∈ v′)(∃ n • a ∈ v) n′ ≤ n.

We shall also use the multiset intersection operation, denoted by the bold symbol "∩" and defined as follows:

v ∩ v′ = {min(n_a, n′_a) • a : a ∈ β(v) ∩ β(v′)}.

As in (4), a ∉ v and 0 • a ∈ v are equivalent. By A_S we shall designate the set of all objects occurring in the UMG S = <a_0, R>, and by Ā_S the set of all terminal objects of S, i.e., objects which are not heads of unitary rules r ∈ R and are present only in UR bodies; as seen, Ā_S ⊆ A_S. A multiset v ∈ V_S is called a terminal multiset (TMS) if no UR of the scheme R may be applied to it, i.e., it contains only terminal objects; the set of terminal multisets generated by S is denoted V̄_S. The strict mathematical definition of UMG application, i.e., of the generation of the set of multisets, which we shall use here, is as follows (Sheremet, 2010, 2011):

V^(0) = {{1 • a_0}},
V^(i+1) = V^(i) ∪ ⋃_{v ∈ V^(i)} ⋃_{r ∈ R} π(v, r),

where, for unambiguity, the UR a → n_1 • a_1, ..., n_m • a_m is represented in angle brackets, i.e., as <a → n_1 • a_1, ..., n_m • a_m>. As seen, multiset generation is implemented by applying to the set of multisets V^(i), created in the previous i steps, all unitary rules r ∈ R. In turn, every such UR is applied to every multiset v ∈ V^(i) by the special function π of two arguments, the first being v and the second the UR r in the form a → n_1 • a_1, ..., n_m • a_m. If v contains a multiobject n • a, this multiobject is replaced by the multiset n * {n_1 • a_1, ..., n_m • a_m} (this, of course, is followed by summing the multiplicities of identical objects in the multiset sum); otherwise the result is the empty multiset. The described iterative process is in the general case infinite, and the set of multisets defined by the UMG S = <a_0, R> is its fixed point V_S = V^(∞), while the set of terminal multisets defined by S is the corresponding subset V̄_S of V_S.
In practical implementations an equivalent definition is used, in which at every generation step only one multiobject n • a is selected from the current multiset v (written $∈, meaning selection of any one multiobject n • a from v), and only the unitary rules with head a are applied to it. That provides a sharp reduction of the computational complexity of generation (Sheremet, 2010, 2011).
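A sketch of the generation process with alternative unitary rules (several rules per head), which is what produces more than one terminal multiset below; selection of one multiobject per step is modelled by picking the first non-terminal object (assumes an acyclic scheme, otherwise generation does not terminate):

```python
from collections import Counter

def terminal_multisets(title, rules):
    """Generate the set of terminal multisets of a UMG <title, rules>.

    rules: dict mapping each head to a *list* of alternative bodies (Counters).
    """
    results, frontier = set(), [Counter({title: 1})]
    while frontier:
        v = frontier.pop()
        head = next((a for a in v if a in rules), None)
        if head is None:
            results.add(frozenset(v.items()))  # terminal multiset reached
            continue
        n = v.pop(head)
        for body in rules[head]:               # internal alternativity
            w = v.copy()
            for b, m in body.items():
                w[b] += n * m
            frontier.append(w)
    return results

rules = {"a": [Counter({"a1": 3, "a2": 2}), Counter({"a2": 2, "a4": 3})]}
for tms in terminal_multisets("a", rules):
    print(dict(tms))   # two alternative resource variants for one object a
```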
Let us continue multigrammatical formalization of basic notions concerning the main subject of the paper.
An industrial system may now be represented by the UMG S = <tb, R>, where tb (an acronym for "technological base") is the title object and R is a set of unitary rules in the technological interpretation. An order completed by the industrial system may be represented by a multiset q = {n_1 • a_1, ..., n_l • a_l}, and, evidently, the set of resource amounts necessary for completion of order q is the set of terminal multisets generated by the UMG S_q = <q, R>, i.e., V̄_{S_q} (for simplicity we shall write V_q instead of V̄_{S_q} below). Because of the possibility of multiple ways of manufacturing some objects, i.e., the so-called internal alternativity of S_q (Sheremet, 2010, 2011), there may be more than one TMS in V_q.

Example 3. Let S be the same as in the previous example, and q = {3 • car}, i.e., the IS must manufacture 3 cars. Then V_q = {{3 • frame, 3 • motor, 3 • accumulator, 3 • transmission, 12 • door, 12 • wheel, 1200 • kwt, 900 • usd, 30 • min}}. ▪

The resource base of the industrial system may be represented by a multiset whose multiobjects define the amounts of objects available to the technological base. Evidently, a resource base n is sufficient for completion of order q by the IS S = <tb, R> if there exists at least one set of resource amounts necessary for this completion which is a submultiset of n:

(∃v ∈ V_q) v ⊆ n.   (20)

Example 4. A resource base containing one frame, with all other resources present in amounts equal to or exceeding the values of Example 3, is not sufficient, because the number of frames in the resource base, i.e., one, is less than is necessary for assembling 3 cars, i.e., three, although all other resource amounts are sufficient (they even exceed the necessary values). ▪

An impact on the industrial system may be represented as a multiset Δn, which defines the resource amounts eliminated from the resource base of the IS, so the latter after the impact would be n − Δn.
All the introduced notions and definitions provide for formulation of the condition of IS sustainability/vulnerability to an impact.
Let the resource base n be sufficient for completion of order q by the IS S = <tb, R>, i.e., let it satisfy (20). Then the IS S completing order q with resource base n is sustainable to the impact Δn if

(∃v ∈ V_q) v ⊆ n − Δn.   (21)

Otherwise the IS S is vulnerable to the impact Δn.

Example 5. The IS S = <tb, R> from Example 3, completing order q = {3 • car} with a resource base sufficient for this order completion, is sustainable to an impact that eliminates only surplus resources, because (21) still holds; it is vulnerable, for instance, to an impact eliminating engines, after which the remaining resource base is not sufficient for completion (the number of engines is less than necessary). ▪
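The sustainability criterion (21) reduces to a submultiset test between each terminal multiset in V_q and the post-impact resource base; a sketch with `Counter` (illustrative names):

```python
from collections import Counter

def is_submultiset(v, n):
    """v ⊆ n : every multiplicity in v is covered by n."""
    return all(n[a] >= m for a, m in v.items())

def sustainable(V_q, n, delta_n):
    """Criterion (21): some resource variant fits into the impacted base n - Δn."""
    remaining = n - delta_n          # truncated multiset subtraction
    return any(is_submultiset(v, remaining) for v in V_q)

V_q = [Counter({"frame": 3, "engine": 3, "door": 12, "wheel": 12})]
n = Counter({"frame": 5, "engine": 4, "door": 12, "wheel": 16})
print(sustainable(V_q, n, Counter({"wheel": 4})))   # True: only surplus wheels lost
print(sustainable(V_q, n, Counter({"engine": 2})))  # False: only 2 engines remain
```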
Natural Disasters Impacts on Industrial System
Until now we have considered impacts without their specific features. The most general feature of natural disaster impacts is their localization, i.e., connection with some fixed areas (points) affected by the NDI. However, the UMG representation of industrial systems considered above does not include any information about the locations of the elements of the technological base and the resource base.
Let us extend the unitary rules in the technological interpretation by geospatial information in such a way that every object occurring in a UR has the form a/z, where z is a locator defining the area (point) where this object is present, and "/" is a divider not used anywhere in the a and z strings. So a UR would have the following form:

a/z → n_1 • a_1/z_1, ..., n_m • a_m/z_m,   (22)

which means that object a at location z may be produced (assembled) if there are n_1 objects a_1 at location z_1, ..., n_m objects a_m at location z_m. (Objects like a/z, a_1/z_1, ..., a_m/z_m are called "composite objects", or "composits".)
Similarly, we shall extend the resource base and order representations, whose elements become composite objects as well. After that, the simplest way of NDI representation is by the set Z = {z_1, ..., z_k} of locations destroyed by this NDI completely. The corresponding relation for the NDI in multiset form is constructed directly: the impact is the submultiset Δn(Z) of the resource base n containing all multiobjects n_i • a_i/z_i of n with z_i ∈ Z (relations (23)-(24)). From (23)-(24) and the definition of Z we may write the condition of IS sustainability/vulnerability, which is an evident generalization of (21). If

(∃v ∈ V_q) v ⊆ n − Δn(Z),   (25)

then the IS S = <tb, R>, completing the order with resource base n, is sustainable to the NDI Z. Otherwise the IS is vulnerable to Z.
Example 6. Let us consider an IS whose resource base is n = {10 • a_1/z_1, 5 • a_2/z_2, 19 • a_3/z_3, 7 • a_4/z_4}, whose technological base is represented by the scheme R containing two unitary rules

a/z_1 → 3 • a_1/z_1, 2 • a_2/z_2, 7 • a_3/z_3,
a/z_1 → 2 • a_2/z_2, 3 • a_4/z_4,

and which completes the order q = {2 • a/z_1}. The natural disaster impact Z = {z_3} causes destruction of the subset of the resource base located at z_3, i.e., Δn(Z) = {19 • a_3/z_3}. The second unitary rule does not use objects located at z_3, and completing the order by it requires {4 • a_2/z_2, 6 • a_4/z_4} ⊆ n − Δn(Z), which holds; so, according to (25), the IS is sustainable to the NDI Z. ▪

Note that an NDI may destroy facilities not always completely, but very often partially. In this case some objects located in the area affected by the NDI may remain in an undestroyed state and thus may be used as elements for manufacturing some products.
As seen, in such cases it is the remaining, rather than the nominal, resource base that must be checked against (25); if it is insufficient for every variant of order completion, the IS is vulnerable to the NDI Δn(Z). ▪
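Composite objects make computation of the NDI impact Δn(Z) a straightforward filter over the resource base; a sketch using the a/z locator syntax of (22) and the data of Example 6 (illustrative, Counter-based):

```python
from collections import Counter

def ndi_impact(n, Z):
    """Δn(Z): the submultiset of resource base n whose locators lie in Z."""
    return Counter({obj: m for obj, m in n.items()
                    if obj.rsplit("/", 1)[1] in Z})

n = Counter({"a1/z1": 10, "a2/z2": 5, "a3/z3": 19, "a4/z4": 7})
delta = ndi_impact(n, {"z3"})
print(delta)      # Counter({'a3/z3': 19})
print(n - delta)  # resource base remaining after the impact
```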
Reverse Problem
Let us consider the case when the IS S = <tb, R>, completing order q with resource base n, is vulnerable to an NDI (Z or Δn(Z)).
The question is as follows: what part of order q may be completed by the IS affected by the NDI Z? A simple, though not self-evident, approach to this question is based on the use of general-form multiset grammars for constructing the solution.
A multiset grammar (MG) is a couple S = <v_0, R>, where v_0 is a multiset called the kernel, and R is, as in UMG, a scheme, whose elements are called rules and have the structure

v → v′,

where v and v′ are multisets. The semantics of multigrammars is similar to UMG semantics (Sheremet, 2010, 2011): a rule v → v′ may be applied to any already generated multiset w such that v ⊆ w, and the result of its application is the multiset (w − v) + v′. As seen, an MG provides generation of new multisets from those already generated, beginning from the kernel, by replacement of their submultisets in accordance with the rules of the scheme. Generation is executed while it is possible, so, in the general case, V_S and V̄_S may be infinite. However, if there exists i such that V^(i) = V^(i+1), then V^(i) = V_S, both V_S and V̄_S are finite sets, and the MG is called finitary.

Let us consider a UMG S = <tb, R> representing an IS with technological base R, and this IS's resource base n. The multiset grammar S*_n = <n, R*> will be called dual to the unitary multiset grammar S = <tb, R> if

R* = {<{n_1 • a_1, ..., n_m • a_m} → {1 • a}> : <a → n_1 • a_1, ..., n_m • a_m> ∈ R},

i.e., every unitary rule a → n_1 • a_1, ..., n_m • a_m of the UMG scheme R is substituted by one and only one "mirror" rule in the MG scheme R*. As may be seen, the set of multisets V_{S*_n} generated by the MG S*_n contains all variants of production which may be manufactured by the IS S beginning from the resource base n. The set of variants of partial completion of order q is thus the subset of the set generated by the MG S*_n whose terminal multisets contain at least one object from order q (here and below we denote this set of variants by V(R, n, q)):

V(R, n, q) = {v : v ∈ V̄_{S*_n}, v ∩ q ≠ ∅}.   (36)

As a consequence, the total set of solutions of the reverse problem is

V(R, n − Δn(Z), q).   (37)

Example 8. For an IS with mirror rules r_1, r_2 and post-impact resource base v_0 = n − Δn(Z), according to (36), V(R, n − Δn(Z), q) = {v_1, v_2, v_3}, where

v_1 = {7 • a_1/z_1, 3 • a_2/z_2, 5 • a_4/z_4, 1 • a/z_1},
v_2 = {5 • a_1/z_1, 3 • a_2/z_2, 2 • a_4/z_4, 2 • a/z_1},
v_3 = {10 • a_1/z_1, 3 • a_2/z_2, 7 • a_3/z_3, 2 • a_4/z_4, 1 • a/z_1}.
As seen, v_1 is the result of application of r_1 to the kernel v_0; v_2 is the result of application of r_2 to v_1; while v_3 is the result of application of r_2 to v_0.
So a valuable part of the order (2 of 3 objects a located at z_1) may be completed by the affected IS, and a valuable part of the resource base would even remain after following this way of order completion. ▪ However, in the general case some of the multisets entering the set V(R, n − Δn(Z), q) may be of no practical use (such as v_1 and v_3 in Example 8), because the only purpose of the assessment is to get the best ("valuable" in the common sense) variants of order completion, which are not improvable from the practical point of view.
To filter the set of terminal multisets constructed in accordance with (37), we shall define one useful function max, whose value max(V) is the subset of V including only the so-called non-dominated multisets entering V:

max(V) = {v : v ∈ V, ¬(∃v′ ∈ V) (v ⊆ v′ ∧ v ≠ v′)},   (38)

i.e., no multiset in max(V) ⊆ V is a proper submultiset of some other multiset entering V; max(V) is thus the set of maximal elements of V.
Example 9. For a set V = {v_1, v_2, v_3, v_4} in which, as may be seen, v_2 ⊆ v_1 and v_3 ⊆ v_1, according to (38) max(V) = {v_1, v_4}. ▪ From here the set of valuable ("precise") solutions of the reverse problem, denoted V̄(R, n − Δn(Z), q), may be constructed in the following evident way:

V̄(R, n − Δn(Z), q) = max({v ∩ q : v ∈ V(R, n − Δn(Z), q)}).   (39)

As seen, the set of valuable solutions includes only those intersections of elements of V(R, n − Δn(Z), q) with the order q which are non-dominated multisets.
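A direct sketch of (38)-(39) with `Counter` (quadratic scan, fine for small solution sets; illustrative data):

```python
from collections import Counter

def is_submultiset(v, w):
    return all(w[a] >= m for a, m in v.items())

def ms_max(V):
    """(38): keep only non-dominated multisets (maximal under inclusion)."""
    return [v for v in V
            if not any(v != w and is_submultiset(v, w) for w in V)]

def valuable_solutions(V, q):
    """(39): non-dominated intersections of the solutions with the order q."""
    inter = [Counter({a: min(m, q[a]) for a, m in v.items() if a in q}) for v in V]
    return ms_max(inter)

V = [Counter({"a/z1": 1, "a1/z1": 7}), Counter({"a/z1": 2, "a1/z1": 5})]
q = Counter({"a/z1": 3})
print(valuable_solutions(V, q))   # [Counter({'a/z1': 2})]
```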
Example 10. Consider V(R, n − Δn(Z), q) = {v_1, v_2, v_3} and q = {3 • a/z_1} from Example 9. According to (39), V̄(R, n − Δn(Z), q) = {{2 • a/z_1}}. ▪ Note that application of general-form multigrammars instead of unitary MG provides generation of all possible variants of work distribution between alternative facilities (devices), not only monopolistic ones. However, this opportunity leads, as a consequence, to a significant increase of the computational complexity of the mentioned generation. The theory and implementation of the reduction of this complexity will be considered in separate publications.
Implementation issues
The UMG knowledge base (the scheme R) is stored in a database whose records are couples <a, B>, where the object a is the key and B is the current set of bodies of URs with head a. During generation, the set B is selected from the database by a query whose processing is implemented by the special function READ(a). Similarly, insertion of a new unitary rule a → w into the knowledge base is implemented by the function INSERT(a, w), and deletion of a UR a → w from the KB (DB) is implemented by the function DELETE(a, w). There is also a function DELALL(a), eliminating from the KB all unitary rules with head a, i.e., deleting from the database the record <a, B>.
The associative internal organization of the described database, along with a physical block cache supporting DB management via the virtual memory space, provides fast execution of the mentioned functions without redundant search (Sheremet, 2013).
The software implementation of the assessment of the part of the order which may be completed by the affected vulnerable IS (the "reverse problem") employs representation and storage of the knowledge base (the scheme R* of the MG dual to the initial UMG) in the form of a database similar to the one considered above.
However, this database contains records of two types: 1) triples of the form <r, w, a>, where r is the unique identifier (key of the record) of the "mirror" rule w → {1 • a}, with the multiset w represented as in (41); 2) couples of the form <a, w>, where a is an object (key of the record), while w is a set of couples <r, n>, where r is the identifier of a rule whose left part contains the multiobject n • a.
Records of the first type are used during generation of the TMS (i.e., solutions) set in accordance with (36)-(38) by application of the "mirror" rules. If this were done without any improvements, a full search over all |R*| rules would be required at every generation step in order to apply some of them to the current multiset generated during the previous steps. So for practically interesting knowledge bases with |R*| beginning from 10^5-10^6, a direct implementation of the generation algorithmics is impractical. To avoid this principal difficulty, records of the second type are introduced. They are stored and selected by key (object a) in such a way that, by one access to the database, the identifiers are obtained of all "mirror" rules which possibly may be applied to the current multiset, because their left parts contain a multiobject n • a with n ≤ m, where m • a enters the current multiset. This technique cuts off all the remaining "mirror" rules, non-applicable to this MS by reason of the absence of object a in their left parts. There are several implementations of this basic idea, and they will be considered separately.
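A sketch of the two record types as in-memory indexes (dict-based; the real storage is the associative database described above): rule bodies keyed by rule identifier, plus an inverted index from each object to the rules mentioning it on the left:

```python
from collections import Counter, defaultdict

# Record type 1: r -> (w, a) for the mirror rule w -> {1*a}
rules = {
    "r1": (Counter({"a1/z1": 3, "a2/z2": 2, "a3/z3": 7}), "a/z1"),
    "r2": (Counter({"a2/z2": 2, "a4/z4": 3}), "a/z1"),
}

# Record type 2: object -> {(r, n) : n*object occurs in the left part of r}
index = defaultdict(set)
for r, (w, _) in rules.items():
    for obj, n in w.items():
        index[obj].add((r, n))

def candidate_rules(v):
    """Rules whose left parts share an object with v (full applicability w ⊆ v
    is checked afterwards); one index access per object of v."""
    cands = set()
    for obj, m in v.items():
        cands |= {r for r, n in index.get(obj, ()) if n <= m}
    return cands

v = Counter({"a2/z2": 5, "a4/z4": 7})
print(candidate_rules(v))   # {'r1', 'r2'}; r1 is then rejected (needs a1, a3)
```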
The described approach to software implementation of the proposed algorithmics is quite efficient from the practical point of view, as verified by real software-intensive IS management experience (Karasev and Sheremet, 2008). Such software operates on CALS-originated knowledge bases, which contain structural descriptions of all types of objects manufactured by the various facilities entering a distributed IS. Due to the "granularity" of the described knowledge representation, all accumulation and correction of such KB from the distributed local sources is performed without any difficulties in the online regime. Assessment of the possibility of completion of orders at the moments when they enter the IS is performed in soft real-time mode (minutes per message) on knowledge bases containing 10-12 million unitary rules with 2-3 million types of objects (items) used and manufactured by the industrial system, and this mode does not require special-purpose or highly parallel hardware. The reverse-problem software is activated when various impacts, such as manufacturing equipment malfunctions and delays of necessary resources, occur, and the problem is solved on the same hardware on practically the same time scale. A more detailed description of the software implementation of the UMG toolkit in various hardware environments may be the subject of a separate paper.
Conclusion
The multigrammatical approach briefly presented in this paper may be efficiently used for the assessment of consequences of natural disaster impacts (as well as human-implemented impacts) on large-scale industrial systems, no matter what production they manufacture, what infrastructure they use, what kinds of producing devices they contain, etc., due to the generality and flexibility of the MG/UMG toolkit used for the assessment.
The main four directions of further development of the approach, in our opinion, are: 1) application of so-called temporal MG/UMG with parallel-process modelling features to a deeper and more adequate assessment of NDI consequences; 2) implementation of "what-if" regimes for decision makers; 3) minimization of the computational complexity of multiset generation as well as development of highly parallel generation algorithms for grid/cloud hardware; 4) development of the "Big Knowledge" paradigm and unconventional computational models for its hardware implementation.
The background of the first direction is as follows. As is easy to see, the multigrammatical approach is strongly based on the additivity of quantities of objects, represented by their multiplicities. However, time is additive only with respect to one processing (manufacturing) unit (for example, an assembly line): if there are two or more such units, they may operate in parallel, which is why the total work may be completed by the system faster than in the sequential mode with additive operating time. Temporal MG/UMG are a generalization of the mathematical tools considered above that provides a simple description of industrial systems and their elements extended by the time intervals necessary for these elements to execute operations. The basic construction of a temporal unitary multiset grammar is the so-called temporal unitary rule of the form

a → n • t, n_1 • a_1, ..., n_m • a_m,

where t is a fixed object denoting the time measurement unit (for example, a minute), while the multiplicity n is the duration of the time interval (the number of t objects) necessary for assembling object a by utilizing n_1 units of resource a_1, ..., n_m units of resource a_m. This extension provides, as a final purpose, construction of Gantt charts of manufacturing (production) processes involving all devices necessary for order completion, which is why temporal MG/UMG algorithmics is in fact the algorithmics of optimal (rational) scheduling (Conway et al., 2003; Hermann, 2006). It is much deeper and more sophisticated in comparison with MG/UMG, while providing a practically useful generalization of well-known scheduling problems and their solutions. The second of the listed directions is very important from the practical point of view, because it provides end users (decision makers) with the opportunity to prepare for possible NDIs before they really occur.
The third direction, being quite evident, follows from the non-procedural, declarative representation of knowledge about industrial systems and their operation items. The "granularity" of multigrammatical knowledge bases provides easy, in fact "additive", accumulation of knowledge as well as its correction.
However, like any other knowledge representation model, MG/UMG need special algorithmics providing minimization of redundant search during multiset generation, especially for the so-called filtering MG/UMG (Sheremet, 2010, 2011). This direction is, perhaps, the most critical, because it eliminates or mitigates the sharp growth of redundant generation steps caused by the well-known combinatorial explosion accompanying knowledge base volume expansion. A combination of two basic approaches is needed for the solution of such problems: elimination of redundant steps by exploiting smart cutting-off conditions at the earliest possible stages of multiset generation, and application of "brute force" by parallel generation of alternative branches on asynchronously operating processor units. The state of the art in this area is described in (Sheremet, 2010, 2011). The most general, in our opinion, is the fourth direction. As may easily be estimated, the technological bases of real large-scale industrial systems and IS production (manufactured objects) would be represented by UMG knowledge bases containing many millions of unitary rules. Creating, maintaining and utilizing such amounts of knowledge is a great problem, being the next step of computer technology application after Big Data, which is in fact an everyday reality, although not yet fully understood (Roberts, 2016) (as may be seen from the implementation section, when UMG are used, Big Knowledge is implemented by Big Data tools). In many respects it would be a new way of thinking, which must lead us to a new understanding of the global technosphere as an interconnected set of devices, joined with one another by an information infrastructure (that is already the "Internet of Things") and logistical networks. Such understanding, in turn, may optimize humanity's behavior and its relations with nature.
Effects of Sulfuric Acid Treatment on the Performance of Ga-Al2O3 for the Hydrolytic Decomposition of 1,1,1,2-Tetrafluoroethane (HFC-134a)
HFC-134a, one of the representative hydrofluorocarbons (HFCs) used as a coolant gas, is a known greenhouse gas with high global warming potential. Catalytic decomposition is considered a promising technology for the removal of fluorinated hydrocarbons. However, systematic studies on the catalytic decomposition of HFC-134a are rare compared to those for other fluorinated hydrocarbon gases. In this study, Ga-Al2O3 and S/Ga-Al2O3 catalysts were prepared and the change in their properties post-acid treatment was investigated by X-ray diffraction (XRD), Brunauer-Emmett-Teller (BET) analysis, temperature-programmed desorption of ammonia (NH3-TPD), in situ Fourier-transform infrared spectroscopy (FT-IR), scanning electron microscopy combined with energy-dispersive X-ray spectroscopy (SEM-EDS), and X-ray photoelectron spectroscopy (XPS). The S/Ga-Al2O3 catalyst achieved a much higher HFC-134a conversion than Ga-Al2O3, which was ascribed to the promotional effect of the sulfuric acid treatment on the Lewis acidity of the catalyst surface, as confirmed by NH3-TPD. Furthermore, the effect of hydrogen fluoride (HF) gas produced by HFC-134a decomposition on the catalyst was investigated. The S/Ga-Al2O3 maintained a more stable and higher HFC-134a conversion than Ga-Al2O3. Combining the results of the stability test and characterization, it was established that the sulfuric acid treatment not only increased the acidity of the catalyst but also preserved the partially reduced Ga species.
Introduction
Chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs) are two classes of coolants which have been found to directly contribute to the destruction of the stratospheric ozone layer [1,2]. The Montreal Protocol in 1987 banned the use of these coolants, and hydrofluorocarbons (HFCs) were developed to replace them [1,2]. With the increase in the use of air conditioning, the concentration of HFCs in the atmosphere has risen significantly [1-3]. HFCs do not deplete the ozone layer, but as greenhouse gases, their global warming potential is ~12,000 times higher than that of CO2 [1,3]. HFC-134a is one of the most commonly used HFC refrigerants today, and measures to remove it from the atmosphere are urgently required to prevent global warming [1].
Various catalysts, such as waste concrete, supported catalysts, and metal phosphate catalysts, have been investigated [7-10]. Alumina (Al2O3)-based catalysts have been commonly applied for the decomposition of HFC-134a because Al2O3 is inexpensive and a representative acid catalyst [8,10]. Han et al. reported that an Al2O3-based catalyst exhibits very high activity and shows higher stability when water is used as a hydrogen donor. Swamidoss et al. tested the catalytic decomposition of HFC-134a over Mg-supported Al2O3 catalysts [8]. They found that the Mg/Al2O3 catalyst calcined at 650 °C has a higher amount of weak acid sites, an important factor for HFC-134a decomposition [8]. Song et al. tested CF4 decomposition over metal-supported Al2O3 and elucidated that modification of the catalyst by metal impregnation preserves its active sites [10]. They found that using a metal-sulfate precursor could further enhance the catalytic performance by increasing the acid sites [10]. Takita et al. investigated metal sulfate catalysts for CCl2F2 decomposition [11]. The authors insisted that metal oxides were not stable for CCl2F2 decomposition, due to weak resistance to HF, while metal sulfate catalysts, especially Zr(SO4)2, achieved complete conversion above 350 °C in the presence of water vapor [11]. Previous research on the use of acid-treated catalysts for the decomposition of other fluorinated hydrocarbons suggests that the catalytic efficiency and stability of Al2O3-based catalysts can be increased by acid treatment, but there has been little systematic investigation of using alumina-based catalysts for HFC-134a decomposition [8,10-13].
In this study, Ga-Al2O3 and S/Ga-Al2O3 catalysts were prepared to investigate the change in the properties of the catalyst on acid treatment. Furthermore, the effect of the HF gas produced by HFC-134a decomposition on the catalyst was investigated.
Improvement in Catalytic Performance in HFC-134a Decomposition
Pristine Ga-Al2O3 and sulfuric acid-treated Ga-Al2O3 catalysts were synthesized and tested for the HFC-134a decomposition reaction. To confirm the elemental composition of the as-prepared catalysts, the amounts of Ga and Al were estimated by inductively coupled plasma analysis, and that of S was measured by an elemental analyzer; the results are given in Table 1. Figure 1 shows the temperature dependence of HFC-134a conversion over the Ga-Al2O3 and S/Ga-Al2O3 catalysts. The catalysts exhibited markedly different performances. S/Ga-Al2O3, having a small H2SO4 loading (1 wt.% of S), exhibited a higher HFC-134a conversion (90.5% at 450 °C) than Ga-Al2O3 (62% at 450 °C). It has been reported that large amounts of HF molecules are inevitably produced during HFC-134a decomposition, which negatively affects the catalyst performance because of halogenide formation on the catalyst surface (Reaction (1)) [7]. In particular, the activity of an alumina-based catalyst is remarkably decreased by the formation of AlF3 (Reaction (2)) [7,14]. Thus, it is necessary to observe the catalyst stability during the HFC-134a decomposition reaction.
It has been reported that the catalytic properties such as crystallinity, surface area, and acidity are drastically influenced by pretreatment with sulfuric, hydrofluoric, nitric, and phosphoric acids [10,15,16]. XRD analysis was performed to confirm the crystal structure of our catalysts. Figure 2 presents the XRD patterns of Ga-Al2O3 and S/Ga-Al2O3 catalysts, revealing that both catalysts contained γ-Al2O3 (JCPDS #29-63) [17,18]. No peaks of Ga2O3 were observed in any case, which was ascribed to the high dispersion of Ga or the formation of Ga nanoparticles [17]. Therefore, only the γ-Al2O3 phase was detected by XRD [17,18]. The absence of sulfate-related peaks was attributed to the good dispersion of these species on the catalyst surface [19]. As shown in Table 2, Ga-Al2O3 and S/Ga-Al2O3 had BET surface areas of 227.5 and 187.4 m 2 g −1 and total pore volumes of 0.35 and 0.29 m 3 g −1 , respectively. The inset of Figure 1 presents the results of the catalyst stability test. It reveals that with time on stream, HFC-134a conversion over Ga-Al 2 O 3 decreased much faster than that over S/Ga-Al 2 O 3 , retaining~40% and 83% after 30 h, respectively. As both catalysts used the same amount of Ga (15 wt.%), it could be said that the large difference and good stability in HFC-134a decomposition performance are likely due to the pretreatment with sulfuric acid [14].
It has been reported that the catalytic properties such as crystallinity, surface area, and acidity are drastically influenced by pretreatment with sulfuric, hydrofluoric, nitric, and phosphoric acids [10,15,16]. XRD analysis was performed to confirm the crystal structure of our catalysts. Figure 2 presents the XRD patterns of Ga-Al 2 O 3 and S/Ga-Al 2 O 3 catalysts, revealing that both catalysts contained γ-Al 2 O 3 (JCPDS #29-63) [17,18]. No peaks of Ga 2 O 3 were observed in any case, which was ascribed to the high dispersion of Ga or the formation of Ga nanoparticles [17]. Therefore, only the γ-Al 2 O 3 phase was detected by XRD [17,18]. The absence of sulfate-related peaks was attributed to the good dispersion of these species on the catalyst surface [19]. As shown in Table 2 When H2SO4 is doped in the mixed oxide, it generates acid sites on the catalyst [14,19]. Moreover, as sulfate ions are Lewis acids, they attract electrons to create new Lewis acid sites that could further improve the catalytic performance for HFC-134a decomposition [14,19]. Temperature-programmed desorption of ammonia (NH3-TPD) and in situ FT-IR analysis were conducted to observe the acidic strength and type of surface acidity on Ga-Al2O3 and S/Ga-Al2O3 catalysts. Figure 3 presents the NH3-TPD profiles of the two catalysts recorded at 55-700 °C. According to desorption temperature T, the sites could be grouped into those with weak (T < 250 °C), medium (250 °C < T < 400 °C), and strong (400 °C < T) acid sites, which implied the presence of sites with different acidic strengths [10,20]. Sulfuric acid treatment increased the amount of weak and medium acid sites, whereas that of strong acid sites was not significantly affected [21,22]. This finding indicates that the addition of sulfate strongly influences the acid properties of alumina-based catalysts [10,14,19]. The total amounts of acid sites of both catalysts are also listed in Table 2. The total acid sites were higher for S/Ga-Al2O3, indicating that sulfate addition increased the surface acidity. Table 2.
Characterization results of catalysts: BET surface area, pore volume, and temperature-programmed desorption of ammonia (NH 3 -TPD). When H 2 SO 4 is doped in the mixed oxide, it generates acid sites on the catalyst [14,19]. Moreover, as sulfate ions are Lewis acids, they attract electrons to create new Lewis acid sites that could further improve the catalytic performance for HFC-134a decomposition [14,19]. Temperature-programmed desorption of ammonia (NH 3 -TPD) and in situ FT-IR analysis were conducted to observe the acidic strength and type of surface acidity on Ga-Al 2 O 3 and S/Ga-Al 2 O 3 catalysts. Figure 3 presents the NH 3 -TPD profiles of the two catalysts recorded at 55-700 • C. According to desorption temperature T, the sites could be grouped into those with weak (T < 250 • C), medium (250 • C < T < 400 • C), and strong (400 • C < T) acid sites, which implied the presence of sites with different acidic strengths [10,20]. Sulfuric acid treatment increased the amount of weak and medium acid sites, whereas that of strong acid sites was not significantly affected [21,22]. This finding indicates that the addition of sulfate strongly influences the acid properties of alumina-based catalysts [10,14,19]. The total amounts of acid sites of both catalysts are also listed in Table 2. The total acid sites were higher for S/Ga-Al 2 O 3 , indicating that sulfate addition increased the surface acidity. Figure 4 shows in situ FT-IR spectra of Ga-Al2O3 and S/Ga-Al2O3 catalysts exposed to a flow of NH3 at 25 °C for 1 h and then purged with He for 30 min to remove physically adsorbed species. In the case of Ga-Al2O3, peaks at 1262, 1462, 1612, and 1689 cm −1 were detected, losing intensity with increasing temperature. The bands at 1262 and 1612 cm −1 corresponded to the bending vibrations of N-H bonds in coordinated NH3 + on Lewis acid sites, and the peaks at 1462 and 1689 cm −1 were attributable to NH4 + species on Lewis acid sites [14,19,23]. The spectra of S/Ga-Al2O3 were different from those of the Ga-Al2O3 catalyst, featuring adsorption bands at 1386, 1486, 1620, and 1693 cm −1 . The band at 1620 cm −1 on S/Ga-Al2O3 was assigned to coordinated ammonia species, the same as 1612 cm −1 on the Ga-Al2O3 catalyst [14,19]. The bands at 1486 and 1693 cm −1 were due to NH4 + species on Lewis acid sites. These IR bands of NH4 + species (1486 and 1693 cm −1 ) were blue-shifted by ~20 cm −1 compared to those of the non-sulfated catalyst because of the higher NH4 + -catalyst bonding strength. Furthermore, a new peak at 1386 cm −1 in S/Ga-Al2O3 was observed at 250 °C, which was not detected below 200 °C, because of the nearby overlapping band. This could be assigned to the presence of medium Lewis acid sites, which are stable up to 500 °C. Thus, the NH3-TPD and FT-IR results imply that the amount of acid sites on the Ga-Al2O3 catalyst could be increased by sulfuric acid treatment. Like the use of acid-treated catalysts for the decomposition of other fluorinated hydrocarbons, the catalytic activity for HFC-134a decomposition could be enhanced by acid treatment of the catalyst. Although sulfate treatment decreases the surface area and pore volume, it apparently increases the amount of Lewis acid sites that positively influence the HFC-134a decomposition.
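Where the TPD profiles are available as raw signal-versus-temperature traces, the classification and quantification described above are easy to reproduce numerically. The following Python sketch is illustrative only: the synthetic trace, the calibration of the signal, and the function names are mine, while the temperature cut-offs follow the weak/medium/strong grouping used in the text.

```python
import numpy as np

# Illustrative TPD trace: temperature (°C) and calibrated NH3 desorption
# signal (mmol NH3 per g catalyst per °C); replace with measured data.
T = np.linspace(55, 700, 500)
signal = 0.004 * np.exp(-((T - 180) / 60) ** 2) + 0.002 * np.exp(-((T - 320) / 70) ** 2)

def acid_site_amounts(T, signal):
    """Integrate an NH3-TPD profile over the weak/medium/strong windows."""
    bands = {"weak": T < 250,
             "medium": (T >= 250) & (T < 400),
             "strong": T >= 400}
    # Trapezoidal integration of the desorption signal over each window
    return {name: np.trapz(signal[mask], T[mask]) for name, mask in bands.items()}

amounts = acid_site_amounts(T, signal)
print(amounts, "total:", sum(amounts.values()))
```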
Observation of Change in Surface Properties by HF Poisoning
As mentioned above, in the catalytic decomposition of HFC-134a, poisoning by HF is the main cause of catalyst deactivation. However, most studies so far have aimed only at improving the catalytic activity by increasing the acidity of the catalyst, and the physicochemical changes of the used catalysts have not been investigated in detail. We therefore analyzed the changes in the surface of the fresh and used catalysts by XRD, SEM-EDS, and XPS. For these analyses, a catalyst tested for 30 h in the HFC-134a decomposition reaction is referred to as a used catalyst.
The XRD patterns of the used Ga-Al 2 O 3 and S/Ga-Al 2 O 3 catalysts are given in Figure 5. The used catalysts showed γ-Al 2 O 3 (JCPDS #29-63)-related peaks and, as for the fresh catalysts, no peaks corresponding to Ga 2 O 3 or sulfate species were detected [10,24]. However, the XRD patterns of the used catalysts showed higher crystallinity than those of the fresh catalysts, and characteristic peaks of AlF 3 were clearly detected for both used catalysts. Thus, it was qualitatively confirmed that, regardless of the acid treatment, AlF 3 was formed on the catalyst surface during HFC-134a decomposition.

To investigate the AlF 3 formed on the catalyst surface, SEM-EDS analysis was conducted on the used catalysts. Figure 6 shows the SEM images of the used catalysts, revealing the presence of AlF 3 on both catalyst surfaces, in agreement with the XRD analysis in Figure 5. More AlF 3 was observed on the surface of Ga-Al 2 O 3 than on that of S/Ga-Al 2 O 3 . Table 3 shows the elemental compositions as determined by EDS. Although both used catalysts had similar Ga content, Ga-Al 2 O 3 contained almost twice as much F as S/Ga-Al 2 O 3 . Therefore, in good agreement with the stability test, it can be concluded that sulfuric acid treatment not only improves the catalytic performance but also inhibits the formation of AlF 3 on the catalyst surface.

The surface electronic state and the atomic concentrations of Ga and Al in the fresh and used catalysts were investigated by XPS analysis. Curve fitting was carried out after Shirley-type background subtraction using a combination of Gaussian and Lorentzian functions. Figure 7A depicts the Ga 2p 3/2 spectra of the fresh Ga-Al 2 O 3 and S/Ga-Al 2 O 3 catalysts. The XPS peaks of Ga 2p 3/2 at 1117.4 and 1118.7 eV can be ascribed to Ga 0 and Ga 3+ , respectively [25-27].
The Ga 0 peak increased upon sulfuric acid treatment of the Ga-Al 2 O 3 catalyst, indicating that the acid sites on the Ga-Al 2 O 3 catalyst can partially reduce Ga 3+ to Ga 0 because they attract electrons to create more Lewis acid sites. Figure 7B presents the Ga 2p 3/2 spectra of the used Ga-Al 2 O 3 and S/Ga-Al 2 O 3 catalysts. There was little change in peak position compared to the fresh catalysts. The Ga 0 /(Ga 0 + Ga 3+ ) values given in Table 4 differ for the used catalysts because the HFC-134a decomposition occurs in a highly oxidative atmosphere and at high temperature. The Ga 0 /(Ga 0 + Ga 3+ ) value of the Ga-Al 2 O 3 catalyst decreased from 0.30 to 0.11, while the S/Ga-Al 2 O 3 catalyst retained its Ga 0 species after the HFC-134a decomposition reaction. This result clearly indicates that the sulfuric acid treatment not only increases the acidity of the catalyst but also generates and preserves partially reduced Ga 0 species. The Al 2p spectra of the fresh catalysts are shown in Figure 7C. Both catalysts have a well-developed Al 2p peak located at 74.2 eV, indicating the formation of an Al-O bond [28,29]. No shift of the Al 2p peak upon sulfuric acid treatment was observed. However, in Figure 7D, another set of peaks, attributed to the Al-F bond, appeared in the range of 75.9-76.6 eV for the used Ga-Al 2 O 3 and S/Ga-Al 2 O 3 catalysts [28]. According to the literature, the binding energy of the Al-F bond lies in the range of 75.6-76.6 eV [28]. The Al 2p signals of the used catalysts in Figure 7D fall within this range and can therefore be assigned to Al-F bonding. In the case of the Ga-Al 2 O 3 catalyst, moreover, the peak intensity of the Al-O bond is significantly decreased by the formation of the Al-F bond [28-30]. This indicates that the Al-F bond of AlF 3 was formed by replacement of the Al-O bond of Al 2 O 3 during the HFC-134a decomposition reaction. The appearance of the Al-F peak after the reaction indicates that F incorporation occurs only on the Al 2 O 3 surface and not on Ga 2 O 3 . Furthermore, this result suggests that the sulfuric acid treatment of the Ga-Al 2 O 3 catalyst can alleviate the compositional change from Al 2 O 3 to AlF 3 . Table 4. Surface atomic concentration Ga 0 /(Ga 0 + Ga 3+ ) and binding energies of the Ga 0 and Ga 3+ components in Ga 2p 3/2 .
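The Shirley-type background subtraction mentioned above is an iterative construction in which the background under a peak grows in proportion to the integrated peak area on one side of it. A minimal sketch is given below; the iteration count, endpoint handling, and array names are my assumptions rather than settings reported for this analysis.

```python
import numpy as np

def shirley_background(energy, intensity, n_iter=20):
    """Iterative Shirley background for an XPS region.

    The background at each point is proportional to the cumulative
    peak area (signal minus background) integrated across the region;
    the endpoints of `intensity` anchor the two background levels.
    """
    I = np.asarray(intensity, dtype=float)
    i_lo, i_hi = I[0], I[-1]
    B = np.full_like(I, i_hi)  # initial guess: flat background
    for _ in range(n_iter):
        # cumulative trapezoidal area of (signal - background)
        area = np.concatenate(([0.0], np.cumsum(
            0.5 * (I[1:] + I[:-1] - B[1:] - B[:-1]) * np.diff(energy))))
        total = area[-1]
        # interpolate between the endpoint levels in proportion to area
        B = i_hi + (i_lo - i_hi) * (total - area) / total
    return B
```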
Catalyst Preparation
Ga-Al 2 O 3 was synthesized by co-precipitation, with the Ga loading fixed at 15 wt.%. Stoichiometric quantities of gallium nitrate (99.9%, Aldrich, St. Louis, MO, USA) and aluminum nitrate (98%, Aldrich) were dissolved in distilled water, and the resulting solution was slowly treated with 15 wt.% aqueous NH 4 OH under vigorous agitation until the pH reached 9.1. The resulting slurry was aged for 24 h at room temperature, and the precipitate was thoroughly washed to remove impurities, dried at 110 °C for 24 h, and calcined at 600 °C for 5 h to finally obtain the Ga-Al 2 O 3 catalyst. S/Ga-Al 2 O 3 was prepared by impregnating the Ga-Al 2 O 3 catalyst with an appropriate amount of H 2 SO 4 (1 wt.% S), followed by drying at 110 °C for 24 h and calcination at 600 °C for 5 h.
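As a rough back-calculation of the co-precipitation recipe, the precursor masses for a target Ga loading can be estimated as below. This is a sketch under stated assumptions: the 15 wt.% loading is taken on a Ga-metal basis with the balance as Al 2 O 3 , the aluminum precursor is assumed to be the nonahydrate, and purities are ignored; none of these details are specified in the text.

```python
# Precursor masses for a target Ga loading (metal basis) in Ga-Al2O3.
M_GA = 69.72            # g/mol, Ga
M_AL2O3 = 101.96        # g/mol, Al2O3
M_GA_NITRATE = 255.73   # g/mol, Ga(NO3)3 (anhydrous basis, assumption)
M_AL_NITRATE = 375.13   # g/mol, Al(NO3)3·9H2O (assumed hydrate)

def precursor_masses(m_catalyst_g, ga_loading=0.15):
    m_ga = ga_loading * m_catalyst_g              # Ga metal in the product
    m_al2o3 = (1.0 - ga_loading) * m_catalyst_g   # balance taken as Al2O3
    n_ga = m_ga / M_GA                            # mol Ga
    n_al = 2.0 * m_al2o3 / M_AL2O3                # mol Al (2 per Al2O3)
    return n_ga * M_GA_NITRATE, n_al * M_AL_NITRATE

ga_salt, al_salt = precursor_masses(10.0)  # for 10 g of catalyst
print(f"Ga(NO3)3: {ga_salt:.2f} g, Al(NO3)3·9H2O: {al_salt:.2f} g")
```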
Catalytic Reaction
The catalytic reaction was performed in a fixed-bed Inconel reactor (10.5 mm i.d.) under atmospheric pressure. The reaction temperature was monitored with a thermocouple inserted directly into the catalyst bed. Prior to the reaction, the catalyst powders were pressed into pellets, crushed, and sieved to 40-60 mesh. The reactant gas mixture (1 vol.% HFC-134a, 25 vol.% H 2 O, and balance air) was introduced into the reactor at a gas hourly space velocity (GHSV) of 2362 h⁻¹. Water, quantitatively introduced using a syringe pump, was passed through a pre-heater at 200 °C before being injected into the reactor. To remove HF, the product gas was passed through aqueous KOH and then analyzed by an online gas chromatograph equipped with a thermal conductivity detector (iGC 7200, DS Science, Gwangju, Gyeonggi, R. Korea).
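For reference, the two quantities reported for such a fixed-bed test, the gas hourly space velocity and the HFC-134a conversion, follow from the standard definitions sketched below. The 1 vol.% feed concentration is from the text, while the flow rate and bed volume are placeholders, not the actual values of this setup.

```python
def ghsv(total_flow_ml_min, bed_volume_ml):
    """Gas hourly space velocity in h^-1 (volumetric flow / bed volume)."""
    return total_flow_ml_min * 60.0 / bed_volume_ml

def conversion(c_in, c_out):
    """Fractional HFC-134a conversion from inlet/outlet concentrations."""
    return (c_in - c_out) / c_in

# Placeholder flow and bed volume; 1 vol.% inlet concentration as in the text
print(f"GHSV = {ghsv(total_flow_ml_min=200.0, bed_volume_ml=5.0):.0f} h^-1")
print(f"X = {conversion(c_in=1.00, c_out=0.095):.1%}")
```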
Characterization
The crystal structure of the catalysts was probed by X-ray diffraction (XRD, Rigaku D/MAX-2500, Cu Kα radiation). Brunauer-Emmett-Teller (BET) surface areas were determined from N 2 adsorption-desorption isotherms recorded at −196 °C (BELSORP-max, BEL Japan, Inc., Osaka, Japan). The morphology of the catalyst samples was observed by scanning electron microscopy (SEM, Hitachi S-4300, Tokyo, Japan) coupled with energy-dispersive X-ray spectroscopy (EDS, Horiba EX-200, Horiba, Tokyo, Japan). Prior to the temperature-programmed desorption of ammonia (NH 3 -TPD) experiments (BELCAT II, BEL Japan, Inc.), samples were pretreated in helium flow at 400 °C for 1 h to remove impurities, cooled to 50 °C, exposed to excess 5% NH 3 /He for 1 h, and purged with He. NH 3 -TPD was performed at temperatures of up to 700 °C in helium flow. In situ Fourier-transform infrared spectroscopy (FT-IR) was carried out in a ceramic IR cell equipped with ZnSe windows, using a diffuse-reflectance infrared (IR) accessory (PIKE Technologies, Madison, WI, USA) connected to a Nicolet iS10 (Thermo Scientific, Waltham, MA, USA) IR spectrometer with an MCT-A detector. Spectra were recorded by averaging 64 scans at a resolution of 8 cm⁻¹. Before the IR spectral observation, samples were pretreated in a flow of He at 400 °C for 1 h to remove impurities and then cooled down to 25 °C to probe the NH 3 adsorption behavior in the temperature range of 25-600 °C. X-ray photoelectron spectroscopy (XPS) was conducted on an ESCALAB Mark II spectrometer (Vacuum Generators, Sussex, UK) using Al Kα radiation (hν = 1486.6 eV) at a constant pass energy of 50 eV. The binding energy scale was aligned based on the C 1s transition at 285 eV.
Conclusions
To investigate the effect of sulfuric acid treatment on catalysts for HFC-134a decomposition, Ga-Al 2 O 3 and S/Ga-Al 2 O 3 catalysts were prepared by co-precipitation and impregnation methods. The S/Ga-Al 2 O 3 catalyst achieved a much higher HFC-134a conversion than Ga-Al 2 O 3 , which was ascribed to the promotional effect of sulfuric acid treatment on catalytic activity, as reported in many earlier studies on the catalytic decomposition of other fluorinated hydrocarbons. The effects of sulfuric acid treatment were probed by NH 3 -TPD and in situ FT-IR analysis. Treatment with sulfuric acid was shown to increase the amount of Lewis acid sites and to improve the catalytic activity for HFC-134a decomposition. Furthermore, the S/Ga-Al 2 O 3 catalyst retained its efficiency with only minor fluctuations over the 30 h test, its HFC-134a conversion being maintained at ~80%.
The changes in the surface structure of the used catalysts were characterized by XRD, SEM-EDS, and XPS analyses. Both catalysts contained AlF 3 after 30 h of the HFC-134a decomposition reaction, as confirmed by XRD. In particular, almost twice as much F was detected in the Ga-Al 2 O 3 catalyst as in the S/Ga-Al 2 O 3 catalyst. Based on the XPS analysis results, the sulfuric acid treatment not only increased the acidity of the catalyst but also preserved the partially reduced Ga species. Moreover, this treatment could alleviate the compositional change from Al 2 O 3 to AlF 3 .
"Chemistry",
"Engineering"
] |
Landau-Khalatnikov-Fradkin transformation in three-dimensional quenched QED
We study the gauge covariance of the massless fermion propagator in three-dimensional quenched Quantum Electrodynamics in the framework of dimensional regularization in d = 3 − 2ε. Assuming the finiteness of the quenched perturbative expansion, that is, the existence of the limit ε → 0, we show that, exactly in d = 3, all odd perturbative coefficients, starting with the third-order one, should be zero in any gauge.
I. INTRODUCTION
Quantum electrodynamics in three space-time dimensions (QED 3 ) with N flavors of four-component massless Dirac fermions has been attracting continuous attention as a useful field-theoretical model for the last forty years. It served as a toy model to study several key quantum field theory problems such as infrared singularities in low-dimensional theories with massless particles, non-analyticity in coupling constant, dynamical symmetry breaking and fermion mass generation, phase transition and relation between chiral symmetry breaking and confinement.
In the last three decades, QED 3 has found many applications in condensed matter physics, in particular in high-T c superconductivity [1-3], planar antiferromagnets [4], and graphene [5], where quasiparticle excitations have a linear dispersion at low energies and are described by the massless Dirac equation in 2+1 dimensions (for graphene, see the reviews in Ref. [6]). QED 3 is described by the action (in Euclidean formulation)

S = ∫ d³x [ ψ̄_i γ_µ D_µ ψ_i + (1/4) F_µν F_µν ],  (1)

where D_µ = ∂_µ − ieA_µ, i = 1, 2, . . . , N, the Euclidean gamma matrices satisfy γ_µ† = γ_µ and {γ_µ, γ_ν} = 2δ_µν, and the gauge coupling constant has dimension [e] = (mass)^{1/2}. The theory is super-renormalizable, with a mass scale e²N/8. As such, the model is plagued with severe infrared (IR) singularities in the loop expansion in e² of various Green's functions since, for dimensional reasons, higher-order diagrams contain higher powers of momentum in the denominator. For example, the fermion propagator is affected by IR divergences starting at two loops. The problem of IR divergences in QED 3 was intensively investigated by various methods in the 1980s [7-11].
In order to better appreciate how these IR singularities may be cured, let us recall that the effective dimensionless coupling of QED 3 may be written as

α(q) = e² / [q (1 + Π(q²))] = { e²/q for q ≫ e²N/8 ; 8/N for q ≪ e²N/8 },  (2)

where the one-loop polarization operator Π(q²) = N e²/(8q) was used. The corresponding beta function reads

β(α) ≡ q dα(q)/dq = −α(q) [1 − (N/8) α(q)],  (3)

and displays two stable fixed points: an asymptotically free UV fixed point (α → 0) and an interacting IR fixed point (α → 8/N). The presence of an infrared fixed point in QED 3 is intriguing, especially because of the possible existence of an analogous fixed point in four-dimensional SU(N_c) gauge theories with N massless fermions. In this case, there exists a so-called conformal window: a region of values of the number of colors N_c and flavors N for which the beta function has a form resembling Eq. (3). Thus, the theory is asymptotically free at short distances, while the long-distance physics is governed by a non-trivial fixed point [15] (for studies on the lattice and an extensive list of references, see Ref. [16]).
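As a quick numerical illustration of Eqs. (2) and (3) (not part of the original analysis), one can check that the running coupling interpolates between the two fixed-point behaviors; the values of e² and N below are arbitrary.

```python
N = 4            # number of four-component flavors (arbitrary choice)
e2 = 1.0         # gauge coupling squared, sets the scale e^2 N/8

def alpha(q):
    """Effective coupling with the one-loop polarization Pi = N e^2/(8 q)."""
    return e2 / (q * (1.0 + N * e2 / (8.0 * q)))

def beta(a):
    """Beta function q d(alpha)/dq = -alpha (1 - N alpha / 8)."""
    return -a * (1.0 - N * a / 8.0)

for q in [1e3, 1.0, 1e-3]:           # UV, intermediate, IR momenta
    print(f"q = {q:>8}: alpha = {alpha(q):.4f}, beta = {beta(alpha(q)):+.4f}")
print("IR fixed point 8/N =", 8.0 / N)   # alpha(q -> 0) approaches this value
```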
At the fixed points of the beta-function (3), IR singularities of QED 3 are cured. In the IR limit, q ≪ e 2 N/8, this goes through an infrared softening of the photon propagator, due to vacuum-polarization insertions, together with the disappearance of the dimensionful e 2 in favour of a dimensionless coupling constant proportional to 1/N . The theory has then the same power counting as the (renormalizable) four-dimensional one. In the following, we shall refer to this limit as the large-N limit of QED 3 . In this limit, the fermion propagator is affected by (standard) UV singularities that can be renormalized [17].
Massless QED 3 also plays an important role in studying the problems of dynamical symmetry breaking and fermion mass generation in gauge theories. This comes from the fact that the properties (3) are reminiscent of those of QCD: the intrinsic dimensionful parameter e 2 N/8 plays a role similar to the QCD scale Λ QCD , and the effective coupling α(q) approaches zero at large momenta q. The main question that has been debated for a long time is then whether there exists a critical fermion flavor number, N cr , separating the chiral-symmetric and the chiral-symmetry-broken phases [17][18][19][20][21][22][23][24][25][26][27] (for recent studies in this direction see, e.g., Refs. [28][29][30][31][32]).
Analytical studies of chiral-symmetry breaking and mass generation in QED 3 are based on truncated Schwinger-Dyson (SD) equations with some approximations (ansatzes) for the full fermion-photon vertex. For example, the simplest one is the replacement of the full vertex by the bare vertex γ_µ (the ladder approximation). A more sophisticated way is to use a vertex consistent with the vector Ward-Takahashi identity and satisfying several other requirements, such as the absence of kinematic poles and multiplicative renormalizability. The best known among them are the Ball-Chiu [33] and Curtis-Pennington [34] vertices, constructed for the quenched approximation of four-dimensional QED (QED 4 ); for similar vertices in QED 3 , see [35].
Another crucial requirement for truncated SD equations, not yet implemented, is the covariance of the fermion propagator and the vertex under the Landau-Khalatnikov-Fradkin (LKF) transformations [36-38]. These transformations have a simple form in the coordinate-space representation and allow us to evaluate Green's functions in an arbitrary covariant gauge if we know their value in any particular gauge. However, in momentum space, the LKF transformations have a rather complicated form for the vertex, and this is the reason why they have not yet been fully exploited to restrict the form of the vertex, even in the quenched approximation (for work in this direction, see the papers by Bashir and collaborators [35,39,40] and the review in Ref. [41]).
In the present paper, we study the gauge covariance of the massless fermion propagator in quenched QED 3 in a linear covariant gauge. The LKF transformation for unquenched QED 3 , including a non-local gauge, will be considered in the future. At this point, let us recall that the quenched limit of QED is the approximation in which the effects of closed fermion loops are neglected. The approximation came from investigations of the lattice representation of QED 4 (see [42,43]), which showed that a reasonable estimate of the hadron spectrum could be obtained by eliminating all internal quark loops. Moreover, the quenched approximation in QED 4 is now used to include QED effects within lattice QCD calculations (see the recent paper [44] and the references and discussions therein). Soon after its introduction, the quenched approximation in QED 4 was also used in Refs. [45-47] within the formalism of the SD equations.
The paper is organized as follows. In Sec. II, we consider the LKF transformation for the fermion propagator in momentum space in exactly three dimensions. We notice that it cannot be applied to terms higher than e⁴ in the perturbative expansion, since they lead to infrared singularities in the LKF relation. Therefore, in Sec. III, we apply dimensional regularization (following [47-49]) to deal with higher-order terms. The obtained general expressions, valid for terms of arbitrary order, are then analyzed, revealing the self-consistency of the LKF transformation in Secs. IV and V. The analysis leads to the conclusion that, exactly in three dimensions, d = 3, all odd perturbative coefficients, starting with the third-order one, should be zero in any gauge. The results are summarized and discussed in Sec. VI. In the three Appendices A, B, and C, we present the details of the calculations.
II. LKF TRANSFORMATION
In the following, we shall consider a Euclidean space of dimension d = 3 − 2ε. The general form of the fermion propagator S_F(p, ξ) in some gauge ξ reads

S_F(p, ξ) = (p̂/p²) P(p, ξ),  (4)

where the tensorial structure, i.e., the factor p̂ containing the Dirac γ-matrices, has been extracted and P(p, ξ) is a scalar function of p = √(p²). It is also convenient to introduce the x-space representation S_F(x, ξ) of the fermion propagator, Eq. (5). The two representations, S_F(x, ξ) and S_F(p, ξ), are related by the Fourier transform, defined as

S_F(p, ξ) = ∫ d^d x e^{i p·x} S_F(x, ξ),  S_F(x, ξ) = ∫ [d^d p/(2π)^d] e^{−i p·x} S_F(p, ξ).  (6)

In momentum space, the photon propagator can be written in the following general form:

D_µν(q, ξ) = (δ_µν − q_µ q_ν/q²) D_T(q²) + ξ (q_µ q_ν/q²) D_L(q²),  (7)

where the functions D_T(q²) and D_L(q²) encode the transverse and longitudinal parts of the photon propagator, respectively, and the ξ-dependence has been made explicit. In the linear covariant gauges that we shall focus on in this paper, radiative corrections affect transverse photons but not longitudinal ones, i.e.,

D_T(q²) = 1/[q² (1 + Π(q²))],  D_L(q²) = 1/q²,  (8)

where Π(q²) is the polarization part.
The LKF transformation expresses the covariance of the fermion propagator under a gauge transformation. It can be derived by standard arguments, see, e.g., [36-38], and its most general form, Eq. (9), relates the fermion propagator in two arbitrary covariant gauges ξ and η. For d = 3 and D_L(q²) given by Eq. (8), the LKF transformation takes the form (11) of Refs. [39,41]. As can be seen from (11), the transformation has a very simple form in configuration space. It is much more complicated in momentum space, where the Fourier transform of Eq. (9) is given by a convolution,

S_F(p, ξ) = ∫ [d³k/(2π)³] S_F(k, η) G(p − k, Δ).  (12)

Here, G(k, Δ) is the Fourier transform of G(x, Δ), given for d = 3 by Eq. (13), and is such that G(k, Δ = 0) = (2π)³ δ⁽³⁾(k). In terms of the (scalar) function P of Eq. (4), the LKF transform can be written as Eq. (14) (still in d = 3), and integrating over the angles in Eq. (14) yields a more explicit equation, Eq. (15), relating the P-functions in two different gauges. It is well known that perturbation theory in α in massless QED 3 suffers from infrared divergences since, for dimensional reasons, the expansion is effectively in α/p, and infrared divergences are encountered as a consequence of the momenta in the denominator. Potentially infrared-divergent diagrams contain insertions of the vacuum polarization diagram into the gauge propagator. For the fermion propagator, an infrared-divergent contribution first appears at order α². However, there are weighty arguments that quenched perturbation theory is infrared finite [7], and they have recently been confirmed by lattice studies of the quenched approximation [50]. In the present paper, we therefore focus on the quenched case only.
Let us use the Landau gauge as the initial η-gauge in Eq. (15). At two-loop order in quenched perturbation theory, we have Eq. (16) [39], where the second term comes from the two-loop self-energy diagram with crossed photon lines. Inserting the last expression into Eq. (15) and evaluating the integral, we get Eq. (17). Expanding the r.h.s. up to O(α²), we reproduce the perturbative expansion (18) for the massless fermion propagator in an arbitrary covariant gauge. Eqs. (17) and (18) are in agreement with Ref. [39]. Beyond α², the terms of the P(p, ξ) expansion have not yet been calculated in perturbation theory. The LKF transformation, being non-perturbative in nature, contains, at a given order of the loop expansion, important information about higher-order terms. Note, for example, the absence of the α²ξ term; it has also been suggested [39] that the contributions α³ξ^m (m = 1, 2, 3) are absent upon further expanding the expression (17).
Although Eq. (15) is exact, it cannot be applied to the terms of order (α/p)³ and higher in the loop expansion of the massless fermion propagator. This is because the kernel of Eq. (15) (in square brackets) behaves as ~k² as k → 0, which leads to infrared divergences for higher-order terms in the expansion of P. In the following, we will show that studying the LKF transformation in dimensional regularization allows one to get around this difficulty and to obtain explicit finite expressions at any order in quenched perturbation theory for d = 3. In general, this approach can also be applied to the unquenched expansion with dimensionally regularized perturbation theory, which we postpone to a future study.
Following Refs. [48] and [49], from now on we use dimensional regularization, for which D(0) = 0 (D(0) is a massless tadpole and is thus eliminated in dimensional regularization), and Eq. (9) simplifies to Eq. (19). In order to proceed, it is useful to first recall the Euclidean-space Fourier transforms of massless propagators (see, for example, the recent review [51]), which have very simple and symmetric forms, e.g.,

∫ [d^d p/(2π)^d] e^{i p·x} / (p²)^α = Γ(d/2 − α) / [4^α π^{d/2} Γ(α)] × 1/(x²)^{d/2 − α}.  (20a)

With the help of (20a), we may evaluate D(x), expressed as Eq. (22) in d = 3 − 2ε. The calculation yields Eq. (23) [47], which, for ε = 0, leads to the exponent in Eq. (11). Notice that Eq. (23) is finite in the limit ε → 0, i.e., QED 3 is free from UV singularities. Moreover, in the quenched case that we consider, IR singularities arising from fermion loops, e.g., in gauge-invariant contributions that appear in the pre-exponential factor of (19), are suppressed. Nevertheless, as anticipated in the Introduction, the super-renormalizable nature of quenched QED 3 may still give rise to IR singularities at high orders. In dimensional regularization, these IR singularities take the form of poles in 1/ε, just as UV singularities do. In principle, in order to keep track of them, the full ε-dependence of Eq. (23) has to be taken into account.
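The master transform (20a) can be cross-checked numerically in d = 3, where the angular integration reduces it to a one-dimensional oscillatory integral; the sketch below (an illustration, with arbitrary test values of α and x) uses mpmath's quadosc for the oscillatory tail.

```python
import mpmath as mp

alpha, x = mp.mpf("0.8"), mp.mpf("1.3")   # test values with 1/2 < alpha < 3/2

# l.h.s. of (20a) in d = 3: angular integration reduces the transform to
# (1/(2 pi^2 x)) * int_0^inf dp p^(1 - 2 alpha) sin(p x)
lhs = mp.quadosc(lambda p: p**(1 - 2*alpha) * mp.sin(p*x),
                 [0, mp.inf], period=2*mp.pi/x) / (2 * mp.pi**2 * x)

# r.h.s. of (20a) for d = 3
rhs = mp.gamma(1.5 - alpha) / (4**alpha * mp.pi**1.5 * mp.gamma(alpha)) \
      * x**(2*alpha - 3)

print(lhs, rhs)   # the two numbers should agree
```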
B. Momentum space LKF transformation
Following the usual steps of the derivation of the LKF transformation, we then consider the fermion propagator S_F(p, η) with external momentum p in some gauge η. The latter has the form (4), with P(p, η) given by the loop expansion (24), where the a_m(η) are the expansion coefficients of the propagator and μ̃ is the scale (25), intermediate between the MS scale μ and the MS-bar scale μ̄. In Eq. (24), the expansion is written in terms of the dimensionless ratio α/p, with an additional conventional factor of 1/(2√π); its exact form comes from the consideration of four-dimensional QED in [48] (see also App. A and the discussion just after Eq. (A2)). We now derive the exact formulas for the transformation a_m(η) → a_m(ξ). We will do this in two different ways. Following Refs. [48,49], in the first sub-subsection we obtain it from the x-space LKF evolution (19), transforming the ansatz (24) to x-space. Since only a few details of the approach were given in [48], the full demonstration is given in App. A for a Euclidean space of arbitrary dimension d. The second sub-subsection, together with App. B for a more detailed analysis, provides the corresponding derivation in momentum space, with the help of a direct evaluation of G(p, Δ) followed by the evaluation of Eq. (12).
x-space analysis
Using the ansatz (24) and the Fourier transform (A4), we obtain Eq. (26) for S_F(x, η) (see App. A for a more extended analysis in arbitrary d). Expanding the LKF exponent and factorizing all the x-dependence yields Eq. (30). Hence, from the correspondence between the results for the propagators P(p, η) and S_F(x, η) in (24) and (26), respectively, together with the result (30) for S_F(x, η), we obtain for P(p, ξ) the representation (32), with coefficients (33) and (34). In this way, we have derived the expression for a_k(ξ) using a simple expansion of the LKF exponent in x-space. From this representation of the LKF transformation, we see that the magnitude a_k(ξ) is determined by the a_m(η) with 0 < m < k.
We would like to note that Eq. (32) exactly reproduces our initial ansatz (24), which shows that the ansatz is correct. Moreover, the representation (32) can be used as a starting point for the evolution to another gauge (this will be discussed in Sec. V).
Very often, however (see Eqs. (17) and (18) and Ref. [39]), the subject of study is not the magnitude a_m(ξ) but the p- and Δ-dependence of each magnitude a_l(η) as it evolves from the η-gauge to the ξ-gauge. The corresponding result for the p- and Δ-dependence of â_m(ξ, p) can be obtained by interchanging the order of the sums on the r.h.s. of (32). Performing this interchange yields Eq. (35), where the coefficients now transform as in Eq. (36).
p-space analysis
Here we present the basic steps of the direct derivation of Eq. (36). A more extended analysis can be found in Appendix B.
To evaluate the Fourier transform of e^{D(x)}, we use the Mellin integral representation, integrate over x, and obtain Eq. (39) (see App. B for details). The two-parameter function E_{ν,α}(z) is a special case of the generalized Wright function 1Ψ1 [52]. Since the LKF transformation is non-perturbative in nature, the exact expression (39) can be used when solving, for example, truncated Schwinger-Dyson equations.
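For readers who wish to experiment with such closed forms, the generalized Wright function 1Ψ1 can be evaluated directly from its defining series (cf. Eq. (B18) in Appendix B). The sketch below is a naive illustration of that series with an ad hoc truncation; the A = B = 1 reduction to Kummer's confluent hypergeometric function serves as a consistency check.

```python
import mpmath as mp

def wright_1psi1(a, A, b, B, z, tol=mp.mpf("1e-25"), max_terms=500):
    """Series 1Psi1[(a,A);(b,B); z] = sum_n Gamma(a+A n)/Gamma(b+B n) z^n/n!."""
    s = mp.mpf(0)
    for n in range(max_terms):
        term = mp.gamma(a + A*n) / mp.gamma(b + B*n) * mp.mpf(z)**n / mp.factorial(n)
        s += term
        if n > 5 and abs(term) < tol:   # naive truncation criterion
            break
    return s

# Consistency check: for A = B = 1 the Wright function reduces to
# Gamma(a)/Gamma(b) * 1F1(a; b; z).
a, b, z = 0.7, 1.9, 0.35
print(wright_1psi1(a, 1, b, 1, z))
print(mp.gamma(a) / mp.gamma(b) * mp.hyp1f1(a, b, z))
```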
To evaluate the convolution integral (12), we use the Mellin-Barnes representation of this function, Eq. (40), where the contour separates the poles of the gamma functions in the numerator. Using the expansion (24) together with Eqs. (39) and (40), we can integrate over k and obtain the final answer for P(p, ξ) in the form (35), with the coefficients â_m(ξ, p) representing the gauge evolution of the magnitude a_m(η), Eq. (41). Using the asymptotic expansion of the Wright function at large values of its argument, we can write the LKF relation between the coefficients â_m(ξ, p) and a_m(η) in the form (36), (37).
A. Coefficients â_k(ξ, p) at ε = 0

As discussed in Sec. I, quenched QED 3 is a priori free from both IR and UV singularities. Because of this finiteness, it is tempting to set ε = 0 in Eq. (41). We consider this possibility here as a prerequisite to the more complete analysis of the following (sub-)sections.
In the case ε = 0, the fermion propagator (35) takes the simpler form (42), where the coefficients (41) are greatly simplified since the function 1Ψ1 reduces to the Gauss hypergeometric function; hence, we get Eq. (43). The last expression can be rewritten in a slightly different form, Eq. (44), using analytic continuation through the connection formulas for 2F1 hypergeometric functions (see, e.g., [53]).³ Eqs. (43) and (44) are suitable for describing series expansions at small and large values of p, respectively. Let us end this subsection by giving the explicit formulas (45) for the first values of m from Eq. (43), e.g., â_2(ξ, p) = a_2(η)/(1 + B²)² up to a hypergeometric factor (see, e.g., [54] for the evaluation of 2F1 hypergeometric functions).

³ We would like to note that these equations can also be obtained by considering even and odd l-values in Eq. (36) at ε = 0.
The results (45) are in exact agreement with those derived from the integral relation (15). Thus, substituting (45) back into the fermion propagator (42), the LKF transformation is found to be in perfect agreement with the perturbative result (18). Starting from m ≥ 3, both Eqs. (43) and (44) are subject to singularities. This agrees with the analysis based on (15), which shows that such singularities are of IR nature and arise from the super-renormalizability of (quenched) QED 3 . Moreover, from Eq. (43) (and equivalently from (44)), a parity effect is clearly displayed, whereby the even coefficients â_{2s}(ξ, p) are finite for all s, while the odd coefficients â_{2s+1}(ξ, p) are singular for all s ≥ 1.
Such an observation calls for a more careful treatment of quenched QED 3 . In particular, the parameter ε has to be kept non-zero in order to regulate the singularities. This will be the subject of the following subsection.
B. Coefficients â_m(ξ, p) at ε → 0

We would like to complete the results of the previous subsection by computing the coefficients â_m(ξ, p) as ε → 0. A subtlety of the related ε-expansion is that only the leading-order term as ε → 0 (whether a constant, a pole 1/ε and/or a contribution ~ε) is needed in order to analyze the previously described parity effect (between odd and even coefficients â_m(ξ, p)). Such an expansion strategy will be fully appreciated (and detailed) in the next subsection.
Interestingly, from Eq. (41), we see quite clearly that singularities do affect the odd coefficients, m = 2s + 1 for s ≥ 1, and that these singularities are entirely located in the first gamma function in the numerator. The other terms, including the Wright function 1Ψ1, are finite. Postponing the details of the derivation to App. B, in the following we shall be brief and present only the final results.
The finite even coefficients â_{2s} may be rewritten for ε = 0 as Eq. (46) (cf. Eq. (44)), where δ^s_0 is the Kronecker symbol. In the case of the odd coefficients, m = 2s + 1 (s ≥ 1), the leading-order result takes the form of a simple pole in ε; it follows from Eq. (41), see (47). This can be rewritten in a slightly different form as

â_{2s+1}(ξ, p) = − [2sB / ((2s + 1)πε)] a_{2s+1}(η).  (48)

In agreement with the previous subsection, the above results confirm that the odd coefficients â_{2s+1}(ξ, p) are singular for s ≥ 1, see (47), while the even coefficients are finite. Assuming that quenched QED 3 is finite (see Ref. [7] and the discussion therein), we find that the requirement for the limit ε → 0 to be well-defined is

a_{2s+1}(ξ) = 0, s ≥ 1,  (49)

i.e., the identical vanishing of all odd coefficients (magnitudes) other than a_1(ξ). In the next section, we will provide a refined proof of (49), based on a direct analysis of the coefficients a_k(ξ) (rather than of their gauge evolution â_m(ξ, p)).
A. Coefficients a_k(ξ) at ε → 0

The analysis of the parity effect may be pushed further with the help of the first representation of the LKF transform, Eq. (32), which allows us to study the momentum-independent magnitudes a_k(ξ) of Eq. (33). As in the study of their gauge evolution â_m(ξ, p), the ε-expansion of a_k(ξ) will be carried out at leading order as ε → 0. In the following, we justify this procedure by showing that this accuracy is enough to prove the self-consistency of the LKF transformation.
The analysis of the coefficients a k (ξ) requires considering the cases of even and odd values of k separately. The complete evaluation is carried out in App. C. Here we present only the final results.
1. In the case of even k values, i.e., k = 2r, the final result for a_{2r}(ξ) can be expressed as a sum of contributions which come, in turn, from the corresponding contributions of the initial amplitudes a_{2s}(η), a_1(η) and a_{2s+1}(η), as given in Eqs. (51), where δ = √π Δ.
2. In the case of odd k values, i.e., k = 2r + 1, we should consider the cases r = 0 and r ≥ 1 separately.
In the case k = 1, we have the result (52). The final result for a_{2r+1}(ξ) (for r ≥ 1) can be expressed as a sum of contributions which come, in turn, from the corresponding contributions of the initial amplitudes a_{2s}(η), a_1(η) and a_{2s+1}(η), as given in Eqs. (54). We note that, as anticipated above, the contributions (51), (52) and (54) correspond to the first terms of the ε-expansion, which is sufficient for the self-consistency analysis given in the next subsection.
For the coefficient a_1(ξ_1) (hereafter δ_1 = √π(ξ_1 − ξ) and δ̄_1 = √π(ξ_1 − η)), we obtain an expression that coincides with the one obtained directly from the η-gauge with the help of Eq. (55b) (with the replacements ξ → ξ_1 and η → ξ). Similarly, for the coefficient a_2(ξ_1), collecting the terms in factor of a_1(η) and a_2(η) and using (58), we derive an expression that coincides with the one obtained directly from the η-gauge with the help of Eq. (55c) (with the same replacements). Similar transformations can be performed for the other coefficients a_i(ξ_1) (i ≥ 2). So, we obtain full agreement between the transformation and the results for a_i(ξ_1) obtained directly from the η-gauge with the help of Eqs. (55) (with the replacements ξ → ξ_1 and η → ξ).
With the purpose of checking the statement a_{2m+1}(ξ) = 0 (m ≥ 1) for d = 3, we plan to perform a direct calculation of the coefficient a_3 in the Feynman and/or Landau gauge. Moreover, with the help of modern methods of calculation (see [51] for a recent review), we may calculate exactly the ε-dependence of a_k(ξ) (k = 1, 2, 3) in d = 3 − 2ε (or, at least, obtain the first few coefficients of the expansion with respect to ε) and compare it with the ε-dependence coming from the LKF transformation, see Eqs. (32) and (35).
VI. SUMMARY AND CONCLUSION
In this work, we have studied the LKF transformation for the massless fermion propagator of three-dimensional QED in the quenched approximation, to all orders in the coupling α. Previous studies in the literature were limited to order α². Our investigation was performed in dimensional regularization in d = 3 − 2ε Euclidean space.
We have formulated two equivalent transformations: Eq. (32) together with (33) and (34) on the one hand, and Eq. (35) together with (36) and (37) on the other hand. Moreover, for the coefficients â_m(ξ, p) in the transformation (35), we managed to obtain the closed expression (41) in terms of the generalized Wright function 1Ψ1, whose asymptotic expansion gives Eqs. (36) and (37).
The transformation (35), which is similar to the ones used at lower orders in other papers [35,39-41], allowed us to study the gauge evolution (from the η-gauge to the ξ-gauge) of each initial magnitude a_m(η) and to reproduce all results of the previous studies [35,39].
There are arguments in favor of the ultraviolet and infrared perturbative finiteness of massless quenched QED 3 [7,50]. Hence, assuming the existence of a finite limit as ε → 0, we find that, exactly in d = 3, all odd terms a_{2t+1}(ξ) of perturbation theory, except a_1, should be exactly zero in any gauge, i.e., even in the Landau gauge.
This statement is very strong and requires further checks. At order α², analytical expressions for the fermion self-energy diagrams are well known. However, to the best of our knowledge, such results are absent at three-loop order. We plan to study the a_3 term, i.e., the three-loop diagrams, directly in the framework of perturbation theory in our future investigations.
Moreover, in our future studies, we also plan to consider the LKF transformation in unquenched QED 3 as well as in the large-N limit of QED 3 (see Refs. [28-30] and the recent review [31]). In particular, this latter study will be performed in a non-local gauge, where we plan to apply results from previous studies [14,56,57] and [49,58] in reduced QED 4,3 , which is similar (see Ref. [59]) to QED 3 in the 1/N expansion.
In this Appendix, we present the LKF transformation in a Euclidean space of dimension d. In the course of the evaluation, we also use the representation d = 4 − 2ε, which is natural in four-dimensional space. Such a representation is useful because the derived expressions are rather compact when expressed in terms of ε. Accordingly, we do not make any expansion in ε and, therefore, at every stage of the calculation in this Appendix, all results have an exact d-dependence, ε being given by ε = (4 − d)/2.
where a_m(η) are the coefficients of the loop expansion of the propagator in the ansatz (A3) and μ̃ is the scale displayed in Eq. (25), which is intermediate between the MS scale μ and the MS-bar scale μ̄.
In the following, we derive exact formulas for the transformation a_m(η) → a_m(ξ) following the LKF transformation (19), which is compact in x-space. In order to do so, it is convenient to first derive an expression for S_F(x, η) based on the ansatz (A3) for P(p, η). Using the Fourier transform (20a), we obtain Eq. (A4), where a_n(β) is defined in (21). Then, using (6b), we obtain Eq. (A5). With the help of (A5), together with an expansion of the LKF exponent and a factorization of all the x-dependence, we arrive at Eq. (A7). Hence, from the correspondence between the results for the propagators P(p, η) and S_F(x, η) in (A3) and (A5), respectively, together with the result (A7) for S_F(x, η), we obtain P(p, ξ) in the form (A8). In this way, we have derived the expression for a_m(ξ) using a simple expansion of the LKF exponent in x-space. From this representation of the LKF transformation, we see that the magnitude a_m(ξ) is determined by the a_l(η) with 0 ≤ l ≤ m. The corresponding result for the p- and Δ-dependence of â_m(ξ, p) (see the definition of â_m(ξ, p) in Sec. III B 1) can be obtained by interchanging the order of the sums on the r.h.s. of (A8). We note that all of the above results may be expressed in d = 3 − 2ε with the help of the substitutions ε → 1/2 + ε and e²_{d,μ} → e²; the latter can also be expressed as ᾱ_{d,μ} = α/(4π), with the dimensionful α = e²/(4π) defined in (11).
With the help of the ansatz (24) and Eq. (B3), together with the Mellin-Barnes representation (B4) for the function E_{ν,α}(−z), the resulting momentum integral is of the simple massless-propagator type with numerator and can be computed with the help of the standard formula (see, e.g., Ref. [51]) involving the traceless symmetric tensor k^{µ1···µn}, with the coefficient function

G^{(n,0)}(α, β) = a_n(α) a_0(β) / a_n(α + β − d/2),  a_n(α) = Γ(n + d/2 − α) / Γ(α).

Performing the integral, we obtain a series of the form (35) and, for the coefficients â_m(ξ, p), an expression which, after the change of variable s → s + d/2 − 1 − mν, can be written in terms of the generalized Wright function, Eq. (B16).
Here we used the Mellin-Barnes representation of this function, where the contour separates the poles of the gamma functions in the numerator. Eq. (B16) is Eq. (41) of the main text.
The Wright functions pΨq(z) are rather well studied in the literature, see, for example, [52,60,63]. Their series expansions can be derived from the Mellin-Barnes representation. For 1Ψ1, deforming the contour to the left, we obtain the small-z expansion

1Ψ1[(a, A); (b, B); z] = Σ_{l=0}^∞ Γ(a + Al) / Γ(b + Bl) × z^l / l!.  (B18)

On the other hand, deforming the contour to the right one, going from ∞ − iδ to ∞ + iδ and enclosing the poles of Γ(a − As), we can evaluate the residues at s = (l + a)/A, l = 0, 1, . . ., and obtain the asymptotic expansion at large z ≫ 1, Eq. (B19). Correspondingly, the relation between the coefficients â_m(ξ, p) and a_m(η) takes the form (B20) at large momenta, which corresponds to Eq. (36) in the main text. For d = 3 (ε = 0), the function 1Ψ1 in Eq. (41) takes the form (B21), where we used the series representation (B18) and the duplication formula for the gamma function. Eqs. (B21) and (41) reproduce Eq. (43) in the main text. Interestingly, Eq. (B21) is finite, and so the singularities appearing in (43) are entirely due to the first gamma function in the numerator of (B16). From the latter, we see that these singularities only affect odd coefficients with m = 2s + 1 and s ≥ 1. These singularities can be regularized with the help of a leading-order expansion in ε. To this end, it is enough to keep ε ≠ 0 in the singular gamma function and to set ε = 0 in all other terms; this yields Eq. (B22). For even m = 2s, the coefficients â_{2s}(ξ, p) are finite when ε = 0; they are obtained from Eq. (B22) and take the form (B23), in accordance with Eq. (43). The odd coefficients â_{2s+1}(ξ, p) are singular for s ≥ 1. Their leading behavior as ε → 0 is easily obtained from Eq. (B22) and gives Eq. (47) in the main text.
Appendix C: Evaluation of the coefficients a_k(ξ) at ε → 0

As was shown in subsection V A, an accurate analysis of the coefficients a_k(ξ) requires considering the cases of even and odd values of k separately.
where δ was defined after Eq. (51c). So, in the end, we arrive at Eq. (51a) of the main text.
With the help of (C19), we obtain the final expression for a^{(2)}_{2r+1}(ξ) and a
"Physics"
] |
Passengers’ Demand Characteristics Experimental Analysis of EMU Trains with Sleeping Cars in Northwest China
Passenger demand characteristics for electrical multiple unit (EMU) trains with sleeping cars will directly affect the train operation scheme in a long transportation corridor. Descriptive statistics of individual attributes and passenger choice intentions for EMU trains with sleeping cars are calculated based on revealed preference (RP) and stated preference (SP) survey data in Northwest China to illustrate the overall conditions of passengers' demands. Considering the high dimensionality and multi-collinearity of the dataset of influencing factors, the factor analysis method was first adopted to reduce the number of dimensions of the raw dataset and obtain orthogonal common factors. Then, an ordinal logistic regression model was adopted to test and perform a regression analysis based on multinomial logit theory. The analysis shows that influencing factors such as income, profession, educational background and residence have a greater impact on the choice of an EMU train with sleeping cars. It is significant that passengers' choice intentions are positively correlated with income and educational background. The results can provide a reference for decision-making regarding operating an EMU train with sleeping cars in Northwest China. In addition, the proposed method can be applied to the analysis of passengers' demand characteristics in similar situations.
Introduction
The purpose of operating railway passenger trains is to satisfy the demand of passengers. Therefore, the operation mode of passenger trains depends on the characteristics of passenger flow [1]. Trains running on high-speed railways are highly competitive in the passenger transportation market because of their advantages of high speed, short travel time, high operating density, low energy consumption and low pollution [2,3]. Because high-speed railways operate in the daytime and are idle at night, high-speed trains seldom include sleeping cars. For passengers who spend more than five hours travelling, the level of fatigue increases sharply. Therefore, the service quality for such passengers suffers, and the demand for night travel cannot be met. Electrical multiple unit (EMU) trains with sleeping cars operate with a sunset departure and a sunrise arrival, an operation mode with which China Railway attempts to improve high-speed railway operation based on market demand. To make full use of the transport capacity of high-speed rails in the evening and better meet market demand, eight pairs of sunset-departure and sunrise-arrival EMU trains with sleeping cars have been running between Beijing, Shanghai and Guangzhou, Shenzhen since 1 January 2015. From the beginning, the passenger flows of these trains have been increasing steadily, and the operational results are good [4]. Subsequently, according to actual demand, some railway bureaus in central and eastern China added a certain number of EMU trains with sleeping cars.
The sleeping berths of the new type of EMU train with sleeping cars are arranged longitudinally in two layers. Interior and exterior photos of a sleeping car are shown in Figure 1. With this kind of train, a car can accommodate up to 60 berths, and a train can carry 880 passengers, so its transport capacity is higher than that of the first-class sleeper of an ordinary passenger train. Furthermore, the travel time and ticket price of the EMU train with sleeping cars are highly competitive. A comparison among different kinds of passenger trains is shown in Figure 2.
As shown in Figure 2a, when the travel distance is between 1500 km and 2500 km, the travel time of EMU trains with sleeping cars is between 8 and 13 hours. Moreover, the average speed of EMU trains with sleeping cars is far greater than that of ordinary passenger trains, while slightly less than that of EMU trains with a speed of 300 km/h. Figure 2b shows that the average ticket price of EMU trains with sleeping cars is approximately equal to that of a first-class seat on EMU trains with a speed of 300 km/h; it is evidently lower than that of a business-class seat on such trains, while it is obviously higher than the prices of EMU trains with a speed of 250 km/h and of ordinary passenger trains. Although the degree of comfort of different passenger trains is difficult to quantify, it is generally agreed that traveling in a seat is much less comfortable than in a sleeping car when the travel time exceeds 8 hours. EMU trains with sleeping cars can run more than 2000 kilometers at night with higher long-distance service quality [5].
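For orientation, the average speeds implied by these distance and time ranges follow from simple arithmetic; the endpoint pairings below are illustrative readings of the stated ranges, not figures quoted in the paper.

```python
# Implied average speeds for EMU trains with sleeping cars
for dist_km, time_h in [(1500, 8), (2500, 13)]:
    print(f"{dist_km} km in {time_h} h -> {dist_km / time_h:.0f} km/h")
```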
Judging from their operation to date, EMU trains with sleeping cars can enhance the service quality and transport capacity of railway passenger transportation. In addition, they help to optimize the product structure of passenger transportation and promote the development of sustainable transportation.
Compared with the developed regions of East China, high-speed rail construction and operation in Northwest China lag obviously behind. The Xi'an-Baoji high-speed railway, the Lanzhou-Urumqi passenger dedicated line, and the Baoji-Lanzhou passenger dedicated line opened in December 2013, December 2014, and July 2017, respectively, so the entire high-speed railway network in the Northwest region is now connected (as shown in Figure 3). The distance of the entire line from Xi'an to Urumqi is approximately 2500 kilometers, the running time of an EMU train is approximately 13-15 hours, and the corridor objectively has the capability to support an EMU train with sleeping cars [6].
Given the nature of various considerations in Northwest China, such as economic factors, consumption behavior, culture, geography, and so forth, it is generally believed that the passenger flow is insufficient and the consumption capacity is on the low side, so the demand for high-level passenger transportation products would be low in these areas. Therefore, to predict passengers' demand characteristics, this paper is dedicated to data investigation, modeling and analysis of passengers' inclination to use EMU trains with sleeping cars in Northwest China.
Literature Review
Passenger demand is the basis of product design and organization for passenger transportation. In the past several decades, various attempts have been made to obtain the demand characteristics of passengers. These attempts primarily include the following aspects: analysis of passenger demand characteristics, investigation and study of the travel satisfaction of passengers, and design of passenger transport products based on passenger demands. In terms of research methods, there is not only statistical analysis of actual survey data but also theoretical modeling.
Research on passenger demand was already being carried out in the 1970s. Ben-Akiva systematically studied the structure of passenger travel demand using theoretical reasoning and empirical analysis [7]. This is a representative achievement of the early studies of passenger travel demand, and it provides a good reference for subsequent research on the travel behavior of passengers. Subsequently, the passenger demand characteristics of various transport modes were studied, such as railways [1,8], highways [9], public transport [10,11], and air transport [12]. For instance, Owen and Phillips analyzed the travel demand characteristics of railway passengers based on British Rail's monthly ticket data and proposed a demand function considering responses to changes in economic variables [1]. Buehler and Pucher compared the public transport demand in America with that in Germany using several indexes, concluding that public transport in Germany has developed more successfully [10].
With the rapid development of high-speed railways, an increasing number of researchers have focused on the travel demand characteristics of high-speed railway passengers. Gunn et al. analyzed potential passenger transport markets of high-speed railways in Australia based on a survey, and the results provide a reference for the government in deciding whether to construct high-speed railways [13]. Hsiao and Yang collected data from students at a university in northern Taiwan and studied their willingness to travel by high-speed railway trains [14]. Moreover, there are also some studies reporting the impact of high-speed railway services on aviation demand [15,16] and on the tourism market [17,18].
The travel satisfaction of passengers often reflects the degree of matching between passengers' demand attributes and transport supply attributes, and it is commonly measured by the comprehensive quality of passenger service. Shen et al. consider the evaluation method for passenger satisfaction for urban rail transit and establish an evaluation model based on structural equation modeling [19]. The results show that ticket price, information distribution, safety and staff service are the most crucial factors affecting passenger satisfaction. According to nearly half a million data records of travel satisfaction, Abenoza et al. analyze the satisfaction of travelers with public transport in Sweden, and they obtain the service attributes that have the greatest impact on passenger satisfaction [20]. Regarding the satisfaction of railway passengers, Aydin et al. adopt a fuzzy evaluation method to evaluate the level of passenger satisfaction based on a massive amount of survey data, and the results can provide recommendations not only for future investment but also for the improvement of rail transit operation [21,22]. Chou et al. studied relationships among customer loyalty, service quality and customer satisfaction of high-speed railways in Taiwan using a structural equation modeling method [23]. The results show that there is a positive correlation between customer loyalty and service quality, while the relationship between customer loyalty and customer satisfaction is also positive.
Passenger demand data are often obtained by applying the stated preference (SP) survey method and the revealed preference (RP) survey method [24,25]. The results of passenger demand characteristics are usually used to predict future passenger demand [26] or design better passenger transport products [27][28][29][30]. EMU trains with sleeping cars are one of the passenger transport products on high-speed railways, and this kind of train currently operates in China. Thus, related studies are mainly from Chinese scholars. Zhang analyzes the potential passenger market of EMU trains with sleeping cars and then proposes several marketing strategies for this kind of train [4]. An (2016) discusses the pricing strategy of an EMU train with sleeping cars under market-oriented conditions to improve the railway's competitiveness [5]. By analyzing the determinants that influence the operation of EMU trains with sleeping cars, Zhang et al. propose that the reasonable operation distance of EMU trains with sleeping cars should be within 2400 kilometers, while the time value of their major customers should be under 50 Yuan/h [6]. These studies provided useful references for the analysis of passenger demand characteristics and the design of passenger transport products. Most present studies are aimed at traditional passenger transport modes, such as railways, aviation, and urban rail transit. Regrettably, research on the demand characteristics for new passenger transport products is scarce, especially for high-speed railways. EMU trains with sleeping cars have operated for several years on high-speed railways in central and eastern China, and good operation effects have been achieved. Whether there are corresponding passenger demands in Northwest China is worth studying. Thus, based on a survey, this paper studies the passenger demand characteristics of EMU trains with sleeping cars in Northwest China using factor analysis and regression analysis methods.
The remainder of this paper is organized as follows. In Section 3, statistical analysis of the survey data is introduced. Section 4 states the modeling process of passenger choice intention for EMU trains with sleeping cars, and includes three parts: a factor analysis, multinomial logit modeling, and an ordinal logistic regression. Parameter calibration of the model is carried out in Section 5, and then, the analysis of results is also given in this section. Section 6 provides some conclusions and discussions.
Statistical Analysis of the Data
The purpose of this study is to reveal the preferences of different passengers regarding EMU trains with sleeping cars. In Northwest China, taking the traveling population on the survey day as the research object, there are five kinds of transportation modes that can cover a travel distance of about 2000 km: air travel, high-speed railway, ordinary railway, long-distance bus, and self-driving. We investigated the first three modes and excluded the other two, because in the northwestern part of China high-speed railway has not yet been opened in areas accessible by long-distance buses, the passenger volume of long-distance buses between cities 2000 km apart is very small, and self-driving travelers are not a potential passenger group for EMU trains with sleeping cars and their volume is also small. According to the actual situation, a sampling survey of passenger flows using a one-to-one intentional survey method was conducted by research assistants at major railway stations and airports in the study area, including Xi'an Xianyang International Airport, Xi'an Railway Station, Lanzhou Zhongchuan International Airport, Lanzhou West Railway Station and Lanzhou Railway Station, in September 2017. The research assistants randomly selected respondents from passengers who were waiting for a train or a flight in the waiting room or departure lounge to guarantee the representativeness and randomness of the samples. In the process of investigation, several railway stations and airports were selected to reflect different places; surveys were sometimes conducted on different dates to reflect different times, and the respondents came from several trains and flights to reflect different trains (flights).
In the process of the data survey, we clearly informed passengers that travel by EMU trains with sleeping cars was safer and more comfortable and could achieve sunset departure and sunrise arrival, but that the ticket price was higher. The proportion of questionnaires distributed to passengers of different transportation modes was roughly the same as the proportion of passengers carried by the various modes on the survey day. When choosing respondents within each transportation mode, we selected them randomly across different times, different places and different trains (flights) to ensure the representativeness of the sample data as far as possible.
In the survey (see Appendix A), 3500 questionnaires were distributed and 3005 were returned, 2966 of which were valid. Among them, 1738 questionnaires (58.6%) came from ordinary railway passengers, 540 (18.2%) from high-speed railway passengers, and 688 (23.2%) from aviation passengers.
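The reported rates and shares can be checked arithmetically; a minimal sketch (figures taken from the survey above):

```python
# Verify the return rate and the modal composition of the valid sample.
distributed, returned, valid = 3500, 3005, 2966
counts = {"ordinary railway": 1738, "high-speed railway": 540, "aviation": 688}

assert sum(counts.values()) == valid
print(f"return rate: {returned / distributed:.1%}")   # ~85.9%
print(f"valid rate:  {valid / returned:.1%}")         # ~98.7%
for mode, n in counts.items():
    print(f"{mode}: {n / valid:.1%}")                 # 58.6%, 18.2%, 23.2%
```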
The questionnaire contains both RP and SP survey questions. The RP questions mainly involve personal attributes of the respondents, including gender, age, educational background, profession, income, and place of residence. The SP questions are willingness surveys, mainly used to obtain data that cannot be directly observed. The potential impact of travel cost, travel distance, speed and convenience on the travel mode choice of respondents was further studied through subjective preference selection among multiple scenarios under hypothetical conditions. According to the statistical analysis of the data, the general characteristics of the sample are shown in Table 1. One notable characteristic in Table 1 is that the proportion of male to female respondents is extremely unbalanced, as male respondents number more than twice the female respondents; in fact, the survey data are approximately consistent with real statistical data. Another obvious characteristic is that respondents living in Northwest China far outnumber those living in other areas, which is also similar to the real data.
The choice intentions for EMU trains with sleeping cars were investigated among passengers of ordinary railways, high-speed railways, and aviation. The passenger choice intention levels for the EMU trains with sleeping cars are expressed by three grades: willing, perhaps willing, and unwilling. The corresponding frequency and percentage analyses are shown in Figure 4. According to the analysis, the ratio of passengers who are willing to travel by EMU trains with sleeping cars to those who are unwilling is 0.482 among ordinary railway passengers, while the corresponding ratios for high-speed railway passengers and aviation passengers are 0.995 and 1.027, respectively.
The result shows that the proportion of ordinary railway passengers who are willing to travel by EMU trains with sleeping cars is the lowest, while the proportions of high-speed railway passengers and air passengers are similar and far higher than that of ordinary railway passengers.
Problem Analysis
To analyze the quantitative relationship between passengers' willingness to travel by EMU trains with sleeping cars and their personal attributes, regression models are considered for quantitative analysis and prediction. The binary logit (BL), multinomial logit (MNL), nested logit (NL) and mixed logit models are often adopted to research the choice of transportation mode and travel satisfaction [31,32]. If the object of study is transformed from the probability of selecting an event to the ratio of the probability of selecting an event and the corresponding probability of not selecting it, also known as the odds ratio, then the corresponding logistic regression model is obtained. Furthermore, the numerical relationship between dependent and independent variables can be quantitatively analyzed and predicted.
However, if a logistic model is used for regression analysis, there must be no multicollinearity among the corresponding independent variables. Otherwise, the variances and covariances of the parameter estimates will increase, and in more severe cases the accuracy of the regression analysis will be compromised. For the survey data of passenger choice intention for EMU trains with sleeping cars, testing reveals significant multicollinearity among the passengers' multi-attribute data, so processing of the data is required.
Therefore, factor analysis can be used to reveal the inherent common factors and special factors among the diverse attributes of the passengers. In other words, the construction of the factor model decomposes each of the original observed variables into a linear combination of a few factors, and these new factors are orthogonal. That is, there is no multicollinearity among the factors, and the total number of new factors is less than the dimension of the original variable data. More importantly, the extracted factors reflect the essential characteristic attributes of passenger decision-making more simply and more directly.
Meanwhile, because the dependent variable is an ordered multiclass variable, an ordinal regression method is selected to analyze the choice willingness, as it is more accurate. Compared with ordinary logistic regression, ordered regression considers the continuous internal logic among the levels of the dependent variable, which avoids the irrationality caused by treating the choices as unordered discrete categories, so it better fits the actual situation of selecting an EMU train with sleeping cars.
Consequently, focusing on the problem of passenger choice intention for EMU trains with sleeping cars, this paper adopts a factor analysis method to reduce the data dimensionality of the possible factors and obtain orthogonal common factors. Then, an ordered regression method is used to perform parameter estimation and quantitative analysis. The technical roadmap of the methodology is shown in Figure 5.
Factor Analysis
If the number of factors that may influence the passenger choice intention for EMU trains with sleeping cars is g, and the data set of all samples is X, then X = (x_1, x_2, ..., x_g), where x_i (i = 1, 2, ..., g) denotes the i-th column of the data set, i.e. all sample data of the i-th factor, and can be written as x_i = (x_{1i}, x_{2i}, ..., x_{ni})^T. If the number of common factors is m (m < g) and the number of special factors is g, then the vectors of common and special factors can be expressed as f = (f_1, f_2, ..., f_m) and ε = (ε_1, ε_2, ..., ε_g), respectively. We can then write the factor model as

x_1 = a_{11} f_1 + a_{12} f_2 + ... + a_{1m} f_m + ε_1
x_2 = a_{21} f_1 + a_{22} f_2 + ... + a_{2m} f_m + ε_2
...
x_g = a_{g1} f_1 + a_{g2} f_2 + ... + a_{gm} f_m + ε_g

In the model, f and ε consist of independent variables, and the f_i are mutually orthogonal. a_{ij} is called the factor loading and represents the load of the i-th variable on the j-th (j = 1, 2, ..., m) common factor, reflecting the corresponding weight. The specific steps of factor analysis are as follows.

Step 1: All data in the sample set are normalized by factor column according to Equation (1):

x′_{ji} = (x_{ji} − x̄_i) / σ_i , (1)

where x̄_i and σ_i represent the mean value and standard deviation of all sample data for the i-th factor, respectively.
Step 2: Calculate the correlation matrix R of the sample; for the standardized data X′, it is given by Equation (2):

R = (1/n) X′^T X′ . (2)
Step 3: Calculate the characteristic roots and characteristic vectors of the correlation matrix R. From the equation |R − λI| = 0, the characteristic roots λ_i can be obtained, and we order them as λ_1 ≥ λ_2 ≥ ... ≥ λ_g ≥ 0. Let L denote the matrix whose columns are the characteristic vectors of R, with elements l_{ij}; then L^T L = I.
Step 4: The number of principal factors, m, is determined by the cumulative contribution rate. Generally, m is chosen as the smallest value for which the ratio of the information amount of the selected principal factors to the total information amount is not less than 85%, that is, (Σ_{i=1}^{m} λ_i) / (Σ_{i=1}^{g} λ_i) ≥ 0.85.

Step 5: Compute the factor loading matrix A. If a_{ij} is an element of A, then a_{ij} = l_{ij} √λ_j.
Step 6: Determine the factor model, whose matrix expression is X = A f + ε.
Step 7: Estimate the factor score function; in matrix form, the regression-method estimate is f̂ = A^T R^{−1} x.
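Steps 1-7 amount to a few lines of linear algebra. The following is a minimal NumPy sketch of the pipeline (principal-factor extraction with the 85% cumulative-contribution cutoff; the regression-method score in the last step is one common choice, and all names are illustrative, not from the paper):

```python
import numpy as np

def factor_analysis(X, threshold=0.85):
    """Principal-factor extraction following Steps 1-7.
    X: (n samples, g variables). Returns loadings A and factor scores F."""
    # Step 1: standardize each column (variable).
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    n, g = Z.shape
    # Step 2: sample correlation matrix of the standardized data.
    R = (Z.T @ Z) / n
    # Step 3: eigenvalues/eigenvectors of R, sorted in decreasing order.
    lam, L = np.linalg.eigh(R)
    order = np.argsort(lam)[::-1]
    lam, L = lam[order], L[:, order]
    # Step 4: smallest m with cumulative contribution >= threshold.
    m = int(np.searchsorted(np.cumsum(lam) / lam.sum(), threshold)) + 1
    # Step 5: factor loadings a_ij = l_ij * sqrt(lambda_j).
    A = L[:, :m] * np.sqrt(lam[:m])
    # Steps 6-7: model x = A f + eps; regression scores f = A^T R^{-1} x.
    F = Z @ np.linalg.solve(R, A)
    return A, F

# Example: 4 observed variables (e.g., age, education, profession, income).
rng = np.random.default_rng(0)
A, F = factor_analysis(rng.normal(size=(200, 4)))
print(A.shape, F.shape)
```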
Multinomial Logit Model
According to random utility theory, it is assumed that the choice intention of travelers to select EMU trains with sleeping cars is Y (Y = k, k = 1, 2, 3), representing willing, perhaps willing, and unwilling, while the utility value of choice intention k is U_k = V_k + ς_k, where V_k is the fixed item in the utility function and ς_k is the random term. When ς_k obeys the Gumbel distribution, the model reduces to the ordinary logit model. V_k is given by Equation (4):

V_k = M_k + Σ_{s=1}^{S} θ^k_s z^k_s . (4)
In Equation (4), M_k represents the constant term corresponding to the k-th choice in the utility function, z^k_s represents the s-th explanatory variable corresponding to the k-th selection, and θ^k_s represents the parameter value of the s-th explanatory variable corresponding to the k-th travel mode, where s = 1, 2, ..., S. Based on the multinomial logit model, the probability that a traveler chooses intention k is expressed by Equation (5):

P(Y = k) = exp(V_k) / Σ_{j=1}^{3} exp(V_j) . (5)
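Equations (4) and (5) together are a softmax over the systematic utilities; a minimal sketch with illustrative parameter values:

```python
import numpy as np

def mnl_probabilities(M, theta, z):
    """P(Y=k) = exp(V_k) / sum_j exp(V_j), with V_k = M_k + theta_k . z_k."""
    V = M + np.sum(theta * z, axis=1)   # systematic utility of each choice
    V = V - V.max()                     # stabilize the exponentials
    expV = np.exp(V)
    return expV / expV.sum()

# Three intention levels (willing / perhaps willing / unwilling), two covariates.
M = np.array([0.2, 0.0, -0.1])                             # constants M_k
theta = np.array([[0.5, -0.3], [0.1, 0.2], [-0.4, 0.1]])   # parameters theta_k_s
z = np.tile([1.0, 2.0], (3, 1))                            # explanatory variables
p = mnl_probabilities(M, theta, z)
print(p, p.sum())                                          # probabilities sum to 1
```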
Ordinal Logistic Regression
As a development of the multinomial logistic model, ordered logistic regression is a more accurate regression analysis method for ordinal responses, and it requires that the dependent variable be an ordered multiclass variable. The problem of passenger choice intention for EMU trains with sleeping cars studied in this paper can be solved by the ordered regression method. A function is defined as in Equation (6):

H_k(z) = ln[ P(Y ≤ k) / P(Y > k) ] = α_k + β_1 z_1 + β_2 z_2 + ... + β_S z_S . (6)

The function H_k(z) is a logarithmic transformation of the ratio of the cumulative probability of (Y ≤ k) to the cumulative probability of (Y > k), and it forms a linear equation with parameters (α_k, β_1, β_2, ..., β_S), where α_k is a constant term and β_l is the parameter of the l-th explanatory variable. Regardless of where the break point of the dependent variable is in the model, the coefficient β_l of each independent variable remains unchanged, while the constant term α_k changes. After determining H_k(z), the probability p_k that the dependent variable Y takes the value k can be obtained as in Equation (7):

p_k = P(Y ≤ k) − P(Y ≤ k − 1) , with P(Y ≤ k) = exp(H_k(z)) / [1 + exp(H_k(z))] . (7)
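A minimal sketch of Equations (6) and (7): the cut-point terms α_k give cumulative probabilities through the logit link, and the category probabilities are their successive differences (parameter values are illustrative):

```python
import numpy as np

def ordinal_probabilities(alphas, beta, z):
    """Cumulative-logit model: H_k = alpha_k + beta . z gives P(Y <= k);
    category probabilities are differences of the cumulative ones."""
    H = np.asarray(alphas) + beta @ z           # one linear predictor per cut point
    cum = 1.0 / (1.0 + np.exp(-H))              # P(Y <= k), k = 1..K-1
    cum = np.concatenate([[0.0], cum, [1.0]])   # add P(Y <= 0) = 0, P(Y <= K) = 1
    return np.diff(cum)                         # p_k = P(Y <= k) - P(Y <= k-1)

alphas = [-0.5, 0.8]                            # cut points for 3 ordered levels
beta = np.array([0.6, -0.2])                    # shared slopes (parallel lines)
z = np.array([1.0, 0.5])
print(ordinal_probabilities(alphas, beta, z))   # three probabilities, summing to 1
```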
Parameter Calibration of the Model
First, the explanatory variables for passengers' willingness to choose the EMU train with sleeping cars can be divided into two types: unordered multiple classified variables and ordered multiple classified variables. Unordered multiple classified variables include place of residence, gender, trip purpose, and so on, while ordered multiple classified variables refer to the results of the factor extraction. To define the model, dummy variables must be introduced for each unordered multiple classified variable.
In this paper, SPSS 20 is used to perform the factor analysis and the ordered logistic regression analysis on the survey data. Because there are many extreme values in the potential explanatory variables, the Cauchit link function is adopted for the ordered regression. A parallel line test in SPSS is needed to determine whether each independent variable has the same effect on the dependent variable in each regression equation; the test is passed if all effects are the same.
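For reference, the same calibration can be reproduced outside SPSS. The sketch below assumes that statsmodels' OrderedModel accepts a scipy distribution instance, in which case passing the Cauchy distribution yields the Cauchit link; the file and column names are hypothetical:

```python
import pandas as pd
from scipy import stats
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical columns: 'intention' (ordered response), factor scores
# FAC1/FAC3, and the categorical place of residence.
df = pd.read_csv("survey.csv")  # hypothetical file
y = df["intention"].astype(
    pd.CategoricalDtype(["unwilling", "perhaps willing", "willing"], ordered=True))
X = pd.get_dummies(df[["FAC1", "FAC3", "residence"]], drop_first=True, dtype=float)

# Passing the Cauchy distribution as `distr` gives the Cauchit link,
# which is robust to extreme values of the latent variable.
model = OrderedModel(y, X, distr=stats.cauchy)
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```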
Results of the Factor Analysis
Four variables, namely age, educational background, profession and income, are selected for the factor analysis; these four variables most prominently reflect the travel choice characteristics of passengers.
The results of the Kaiser-Meyer-Olkin (KMO) test and the Bartlett spherical test are shown in Table 2. The KMO test is used to study the partial correlation among variables; generally, the value should be greater than 0.5, and it is 0.563 for our survey data set. The Sig. value of the Bartlett spherical test is 0.000 and thus less than 0.01, so the null hypothesis that the correlation matrix is a unit matrix is rejected; that is to say, there is significant correlation between the variables. Table 3 shows the variance explained by each common factor and its cumulative sum. It can be seen from the cumulative percentage of the initial eigenvalue column that the cumulative variance explained by the first three common factors is more than 85%, so they can explain the information contained in the original variables well. The coefficient matrix of the factor scores is shown in Table 4, and the final factor expressions are given by Equations (8), (9), and (10).
Among them, the common factor f 1 is more representative of profession and income factors, f 2 mainly represents the factor of age, and f 3 primarily represents the factor of educational background.
The common factors f 1 , f 2 and f 3 obtained by factor analysis are orthogonal, and can be used as independent variables of an ordinal regression to perform regression analysis.
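The adequacy statistics reported in Table 2 can also be reproduced outside SPSS; a minimal sketch using the factor_analyzer package (column names are hypothetical):

```python
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical column names for the four variables used in the factor analysis.
df = pd.read_csv("survey.csv")[["age", "education", "profession", "income"]]

chi_square, p_value = calculate_bartlett_sphericity(df)
kmo_per_item, kmo_total = calculate_kmo(df)

# Adequacy criteria used in the text: overall KMO > 0.5 and Bartlett p < 0.01.
print(f"KMO = {kmo_total:.3f}, Bartlett chi2 = {chi_square:.1f}, p = {p_value:.4f}")
```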
Results of Ordinal Logistic Regression
An ordinal regression is carried out taking the passenger's intention to travel by the EMU train with sleeping cars as the dependent variable, and f_1, f_3 and place of residence x_6 as the independent variables.
The fitting information of the model and the result of the fitting-degree test for the data model are shown in Tables 5 and 6, respectively. Table 5 shows that the significance value (Sig.) of the chi-square test for the ordered regression model is 0.000, far less than 0.01, which means that the final model is well established. Table 6 shows that the significance values (Sig.) of the Pearson statistics and deviation statistics are both 0.000 for the fitting-degree test, which indicates that the fitting degree of the model is good. From Table 7, we can see that the significance value (Sig.) is 0.072, which is greater than 0.05, so the parallel line test is passed. This indicates that the regression equations are parallel to each other; in other words, each independent variable has the same effect on the dependent variable in each regression equation. The final results of the regression analysis are shown in Table 8, where position represents the correspondence between the location and name of the regression parameters, FAC1_2 and FAC3_2 represent the extracted factors f_1 and f_3, respectively, and A9 represents the classification variable x_6. According to Table 8, we obtain the choice intention model of passengers for EMU trains with sleeping cars, given by formulas (11) and (12), in which H_1 and H_2 respectively represent the upper and lower bounds of the regression results. They are the regression results expressed as mathematical expressions, indicating the degree of influence of the different factors on the choice willingness for EMU trains with sleeping cars.
Result Analysis
From the level of significance, we can see that f_1 (mainly representing profession and income) and f_3 (mainly representing educational background) are significant factors influencing the choice of passengers for EMU trains with sleeping cars. Meanwhile, passengers from different places of residence also have different choice intentions for EMU trains with sleeping cars. However, the factor f_2, which mainly represents age, is not a significant factor in choosing EMU trains with sleeping cars.
Each influencing factor affects the dependent variable to a different degree. The coefficient corresponding to each influencing factor reflects the degree of its influence on passengers' acceptance of EMU trains with sleeping cars, while the sign of the coefficient indicates the direction in which the probability of accepting EMU trains with sleeping cars changes with the factor. When the sign of the coefficient of an influencing factor is positive, it shows that as the value of this variable increases, the attraction of EMU trains with sleeping cars to the passenger gradually decreases, that is to say, the passenger is more reluctant to choose this kind of product. Conversely, a negative coefficient indicates that passengers are more willing to choose this kind of product.
The following results can be concluded according to formulas (11) and (12).
(1) The major factors influencing the choice intention for EMU trains with sleeping cars are, in order, income, profession, educational background, place of residence and age. (2) Passengers who have higher income tend to choose the EMU trains with sleeping cars. (3) The choice tendency for EMU trains with sleeping cars increases successively for passengers whose professions are student, migrant worker, staff of a private enterprise, public functionary, staff of a state-owned enterprise, and self-employed person. (4) Passengers from different places of residence have different understandings of and inclinations toward EMU trains with sleeping cars. Passengers from Xinjiang, Gansu and Qinghai, as well as from Beijing, Tianjin and Hebei, look forward to EMU trains with sleeping cars. Meanwhile, passengers from Shanxi and the Pearl River Delta region also show a certain willingness to use the product. Nevertheless, passengers from Jiangsu, Zhejiang and Shanghai have a lower propensity for this product and show greater interest in aviation. (5) Passengers with a higher education level are more inclined to choose EMU trains with sleeping cars, although the degree of this inclination is lower than that of the income factor.
Discussion and Conclusions
An EMU train with sleeping cars is a new type of passenger transportation product in the long-distance passenger transport market which adopts electric traction. Compared with air transport, it has the advantages of large transport capacity, low unit energy consumption, low environmental pollution, and so on. When conditions permit, the operation of EMU trains with sleeping cars can optimize the passenger transport structure and promote sustainable transport development. The data analysis shows that passengers traveling in Northwest China have a certain demand for EMU trains with sleeping cars; meanwhile, passengers with different incomes, educational backgrounds or places of residence have different levels of willingness to use this kind of train. In this paper, factor analysis and ordered logistic regression are adopted to quantitatively analyze the relationship between passenger choice intention for EMU trains with sleeping cars and the individual attributes of the passengers, and the results can be used to predict passengers' choice inclination for EMU trains with sleeping cars based on their individual attributes.
To address the problem that the survey data cannot be directly subjected to regression analysis, we propose a new solution approach. First, a factor analysis is used to reduce the dimension of the multidimensional data, and a few orthogonal common factors are obtained. Then, the ordinal regression method is used to analyze and predict the passengers' choice intention for the EMU trains with sleeping cars. The regression results for the data set in this paper show that the fitting degree and regression accuracy of this method are high, so it can address this type of problem well.
According to our research, the following conclusions can be obtained: (1) The characteristics of the potential passenger groups are obvious, and the proportion of passengers willing or perhaps willing to select EMU trains with sleeping cars is generally large. (2) The high-income group, the highly educated group, government and institutional staff, and individual and private business owners are the potential customers of EMU trains with sleeping cars in Northwest China. (3) The expectation of local passengers for EMU trains with sleeping cars is significantly higher than that of other passengers. (4) The ticket price of an EMU train with sleeping cars is the primary factor that passengers are concerned about. Furthermore, travel purpose, source of travel expenses, departure and arrival time, travel destination, and convenience in reaching the high-speed rail station are all factors that influence a passenger's choice of EMU trains with sleeping cars.
Moreover, according to the results of the demand analysis and forecast, compared with the actual passenger flow data between Xi'an and Urumqi, it is estimated that the direct passenger flow from Xi'an and Lanzhou to Urumqi will be at least 1000 persons per day. We therefore conclude that the passenger conditions can meet the requirements for operating EMU trains with sleeping cars between Xi'an and Urumqi.
The results can provide a reference for decision making regarding the operation of EMU trains with sleeping cars in Northwest China. Meanwhile, the proposed method can be applied to the analysis of passenger-flow characteristics in similar areas. How three further factors, namely the ticket price of the EMU train with sleeping cars, the degree of convenience of travel and the quality of travel service, affect whether passengers choose the EMU train with sleeping cars is research work that we plan to carry out in the future.
Author Contributions: All the authors have contributed a lot to this work. Y.Z. and J.W. contributed to the project administration, conceptualization, formal analysis, methodology, funding acquisition, original draft preparation, review, and editing. W.C. contributed to software, validation, investigation, and visualization. All authors approve the final manuscript. | 9,699.2 | 2019-09-27T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Economics"
] |
Gauge generator for bi-gravity and multi-gravity models
Following the Hamiltonian structure of bi-gravity and multi-gravity models in the full phase space, we construct the generating functional of the diffeomorphism gauge symmetry. As expected, this generator is constructed from the first class constraints of the system. We show that this gauge generator works well in giving the gauge transformations of the canonical variables.
Introduction
Modifying the Einstein-Hilbert theory of general relativity has been a goal for almost a century. One direction in this regard is introducing a consistent covariant theory of massive gravity, beginning with the famous paper of Fierz and Pauli [1] and continuing with the important works of van Dam, Veltman and Zakharov [2,3], Vainshtein [4] and Boulware-Deser [5].
After almost 70 years, in 2010, the decisive paper of de Rham, Gabadadze and Tolley presented a special interaction term which leads to a ghost-free massive gravity [6]. Then Hassan and Rosen lifted the model to one with arbitrary coordinates, where the flat metric is replaced by a background auxiliary tensor field f_μν [7,8]. Soon after, they added to the model a dynamical term for this tensor field. In this way, massive gravity was more or less joined to a theory with two tensor fields g_μν and f_μν, i.e. bi-gravity [9].
The crucial point in all modified gravity models is the absence of the Boulware-Deser ghost. It is well known that the best way to recognize the dynamical variables of a given theory at the non-linear level is the Hamiltonian analysis. Several articles have appeared on the Hamiltonian analysis of massive gravity and bi-gravity [10]-[19]. Despite some challenges, it was finally established [20,21] that HR bi-gravity possesses seven degrees of freedom, corresponding to one massive and one massless graviton (and no ghost degree of freedom).
Extensions of bi-gravity to multi-gravity models are also attractive theoretically [22] and in describing some cosmological observations [23]-[28]. An explicit Hamiltonian analysis of multi-gravity in the framework of ADM variables was also performed recently [29].
Although the main purpose of the Hamiltonian analysis of bi-gravity and multi-gravity theories in the literature has been counting the number of degrees of freedom, another reason for this investigation is establishing the relationship between the constraint structure and the symmetries of the theories. As is well known, every gauge symmetry should be generated by the first class constraints of the system [30]. In fact, the Hamiltonian analysis is not completed just by obtaining the expected number of degrees of freedom; a satisfactory analysis should also include the correct gauge transformations of the canonical variables via their Poisson brackets with a suitable generating functional [31,32]. This generating functional of gauge transformations should be constructed from the first class constraints (of different levels of the consistency process), as well as gauge parameters and their derivatives [33,34].
The problem of constructing the gauge generator is not an easy task, even for the simplest case of the Einstein-Hilbert theory. This has been the subject of a series of papers [35]-[37]. It turns out that the transformations due to diffeomorphism are not projectable, i.e. the gauge generator cannot be written directly in terms of the diffeomorphism parameters. However, it is illustrated in [38] that the arbitrary parameters in the gauge generator can be redefined in terms of the diffeomorphism parameters, where the relations depend on the dynamical variables. This algorithm can be followed for a special class of models where the canonical Hamiltonian is of the form H_c = Σ N^μ H_μ. Fortunately, after imposing the second class constraints strongly, the canonical Hamiltonians of bi-gravity and multi-gravity fall into this class.
In this paper, we use the above algorithm to construct the generating functional of gauge transformations for bi-gravity and multi-gravity. As we will see in the following sections, for both cases the variables N^μ can be chosen as the lapse and shift functions of the reference metric f_μν, so the above procedure goes through directly. The noticeable point is that both metrics should undergo similar diffeomorphism transformations with the same parameters. On the other hand, the first class constraints of the system contain mixtures of the canonical variables of both metrics. Hence, it is instructive to have a single generating functional for the gauge transformations of both metrics. We will show that our gauge generator gives the correct transformations for g_ij, i.e. the spatial components of the second metric in bi-gravity. The same is true for the spatial parts of the component fields g_(k)ij in multi-gravity. However, the gauge transformations of the dependent lapse and shift functions (components of g_μν or the g_(k)μν's) should be derived in a more involved way from the gauge transformations of the canonical variables.
In section 2 we will review the main features of the Hamiltonian structure of bi-gravity in ADM variables given in ref. [20]. Then we use this structure in section 3 to establish the gauge generator of diffeomorphisms. In section 4 we will consider a class of multi-gravity theories where a series of component metrics g (k)µν interact with a reference metric f µν . After a brief review of the Hamiltonian structure, we will do the same thing for this model. Section 5 is devoted to our concluding remarks.
Hamiltonian structure of bi-gravity
We start by introducing the HR bi-gravity model given by the following action [8]:

S = M_g² ∫ d⁴x √−g R(g) + M_f² ∫ d⁴x √−f R(f) + 2m² M_g² ∫ d⁴x √−g Σ_{n=0}^{4} β_n e_n(k) . (1)
In Eq. (1), the β_n are free parameters, m is a mass parameter, and M_g and M_f are two different Planck masses. The matrix k is defined as k ≡ √(g⁻¹f), and the e_n(k) are elementary symmetric polynomials [10]. We consider the minimal model where β_0 = 3, β_1 = −1, β_2 = β_3 = 0, β_4 = 1.
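For reference, the elementary symmetric polynomials e_n(k) of a 4 × 4 matrix k have the standard closed form (writing [k] ≡ tr k):

```latex
e_0(k) = 1, \qquad e_1(k) = [k], \qquad
e_2(k) = \tfrac{1}{2}\big([k]^2 - [k^2]\big),\qquad
e_3(k) = \tfrac{1}{6}\big([k]^3 - 3[k][k^2] + 2[k^3]\big), \qquad
e_4(k) = \det k .
```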
The ADM decompositions of g_μν and f_μν read

g_μν dx^μ dx^ν = −N² dt² + g_ij (dx^i + N^i dt)(dx^j + N^j dt) ,
f_μν dx^μ dx^ν = −M² dt² + f_ij (dx^i + M^i dt)(dx^j + M^j dt) . (2)

By applying the redefinition of the shift variables [8]

N^i − M^i = (M δ^i_j + N D^i_j) n^j , (3)

the Lagrangian density becomes linear in the lapses N and M and the shifts M^i. The momentum fields conjugate to g_ij and f_ij are expressed in terms of Z_ij and Y_ij, the extrinsic curvatures of the g and f metrics, respectively. We also have the combinations which correspond to the Einstein-Hilbert action of the metric g_μν, with similar expressions for the f sector.
The total Hamiltonian reads

H_T = H_c + u P_N + v P_M + u^i P_{N^i} + v^i P_{M^i} ,

in which u, v, u^i and v^i are 8 undetermined Lagrange multipliers. Consistency of the primary constraints P_M, P_N and P_{M^i} leads to the second level constraints C, D and R_i, while consistency of P_{n^i} gives the constraints U_i (Eqs. (14) and (15)). At this level, P_{n^i} and U_i are second class constraints, and n^k is determined by strongly imposing the constraint relation U_l = 0. Consistency of R_i is satisfied identically; hence R_i and P_{M^i} are first class constraints. Assuming {C, D} = Γ, the physically acceptable result comes out [20] on the sector Γ = 0 of the phase space. Consistency of Γ then gives a further constraint Ω, where {,}* means Dirac brackets. In the total Hamiltonian, one combination of the Lagrange multipliers u and v is obtained from the consistency of Ω, while one other combination remains undetermined. Thus, we have four undetermined gauge parameters, which should be related to the diffeomorphism transformations. One may change the lapse variables to N̄ and M̄, as in Eq. (19). In this configuration we see that D′ (given in Eq. (20)) and R_i are first class constraints. On the other hand, consistency of Γ gives Ω = N̄ F, and consistency of Ω determines the Lagrange multiplier of P_N̄.
Gauge generator for bi-gravity
Now we are able to derive the generator of diffeomorphisms for HR bi-gravity. To do this, we use the method given in ref. [38] concerning a system with the canonical Hamiltonian H = M^μ H_μ, in which the momenta P_μ conjugate to the M^μ are primary constraints. Assuming the secondary constraints H_μ to be first class, their algebra may be written as

{H_μ, H_ν} = C^σ_μν H_σ . (21)

Then the generating functional of gauge transformations is proposed as

G(t) = ξ̇^μ P_μ + ξ^μ ( H_μ + C^ν_ρμ M^ρ P_ν ) , (22)

where the ξ^μ are gauge parameters. The gauge variation of every physical variable χ turns out to be δχ = {χ, G(t)}. Note that each pair of contracted indices in Eqs. (21) and (22) and hereafter includes a spatial integration as well. Consider the special case of general relativity with the well-known Einstein-Hilbert action, i.e.

S = ∫ d⁴x √−g R . (23)
In this case the generating functional (22) leads to the standard form of general coordinate transformations, provided the gauge parameters are related to the diffeomorphism parameters as

ξ⁰ = N ε⁰ , ξ^i = ε^i + N^i ε⁰ . (24)

For the current case of bi-gravity, after assuming all the second class constraints as strongly vanishing functions, the final form of the action is again of this type: the canonical Hamiltonian is H_c = Σ N^μ H_μ, where the N^μ are the lapse and shift variables of the reference metric and H_μ = (D′, R_i) are the first class constraints. In order to find the coefficients C^σ_μν, let us consider the Poisson brackets among the first class constraints, given in Eq. (27).
Comparing Eqs. (21) and (27) shows that the coefficients C^σ_μν are the same as in general relativity, Eq. (28).
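For orientation, the structure functions can be read off from the standard hypersurface-deformation algebra of general relativity, which in distributional form reads (spatial indices raised with the spatial metric; sign conventions vary in the literature):

```latex
\begin{aligned}
\{H_0(x), H_0(y)\} &= \big(g^{ij}(x)\,H_j(x) + g^{ij}(y)\,H_j(y)\big)\,\partial^x_i\,\delta(x-y),\\
\{H_i(x), H_0(y)\} &= H_0(x)\,\partial^x_i\,\delta(x-y),\\
\{H_i(x), H_j(y)\} &= H_j(x)\,\partial^x_i\,\delta(x-y) + H_i(y)\,\partial^x_j\,\delta(x-y).
\end{aligned}
```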
Inserting these coefficients in Eq. (22), we find the explicit generating functional G(t) given in Eqs. (29) and (30). Now the problem is how to relate the gauge parameters ξ^μ to the diffeomorphism parameters ε^μ. In this case, considering the reference metric f_μν, we choose the relations (24) as introduced for general relativity. As is well known, the infinitesimal diffeomorphism transformations of the metric components are

δf_μν = ε^λ ∂_λ f_μν + f_μλ ∂_ν ε^λ + f_λν ∂_μ ε^λ . (31)

In terms of the ADM variables, this corresponds to

δM = ∂_0(M ε⁰) + ε^k ∂_k M − M M^k ∂_k ε⁰ , (32)
δM^i = ∂_0(ε^i + M^i ε⁰) + ε^k ∂_k M^i − M^k ∂_k ε^i − (M² f^{ik} + M^i M^k) ∂_k ε⁰ , (33)
δf_ij = ε⁰ ∂_0 f_ij + ε^k ∂_k f_ij + f_kj ∂_i ε^k + f_ik ∂_j ε^k + M_i ∂_j ε⁰ + M_j ∂_i ε⁰ , (34)

where M_i ≡ f_ij M^j. Similar variations should hold for N, N^i and g_ij of the metric g_μν. The gauge variations of M, M^i and f_ij result directly from the Poisson brackets of the corresponding variables with the generating functional G(t), i.e. δM = {M, G(t)}, δM^i = {M^i, G(t)} and δf_ij = {f_ij, G(t)}. Note that in obtaining the Poisson brackets of M, M^i and f_ij with G(t), we have considered the terms containing P_M, P_{M^i} and p^{ij} in the expressions (29) and (30), respectively. Then, using Eqs. (20) and (11) for D′ and R_i, and then Eqs. (9) and (10) for D and C, we finally obtain the variations (35)-(37), which are the well-known variations obtained for the Einstein-Hilbert theory [38]. It is straightforward to see that under the redefinitions (24), the variations (35)-(37) reduce to the standard variations (32)-(34). We should also check the gauge variations of the variables N, N^i and g_ij. For the components g_ij we compute δg_ij = {g_ij, G(t)}. Again using Eqs. (20), (9) and (10) for D′, and Eq. (11) for R_i, we arrive at an expression in which we have used the equality N/M = −E/F, which follows from the strong vanishing of N̄ in Eq. (19). Using Eqs. (3), (19) and (20), we then find the result of Eq. (42). Comparing Eq. (42) with Eq. (37) for δf_ij, we see that the last two terms combine with similar coefficients. Hence, the same generating functional which gives δf_ij also leads to the correct variation δg_ij, with the same relationship between the gauge parameters ξ^μ and the diffeomorphism parameters ε^μ. So, for δg_ij we find the standard result, analogous to Eq. (34) with (g_ij, N_i) in place of (f_ij, M_i). The generating functional G(t) should also give the correct result for the variations of N and N^i under diffeomorphism. However, δN and δN^i should be calculated indirectly in terms of the variations of other variables. Let us begin with the second class constraint N̄ = 0, which implies the relation of Eq. (45); as is seen, we need to calculate δE and δF in terms of the variations of the canonical variables.
In order to find the variations δN^i, let us vary Eq. (3); this gives Eq. (46). In this equation we should compute δn^i and δD^i_j, which in turn depend on the canonical variables as well as on n^i. Remember that upon imposing the constraints U_i in Eq. (15), we can express n^i in terms of the canonical variables. Adding all these points together, we face a lengthy calculation for δn^i and δD^i_j in Eq. (46), as well as for δE and δF in Eq. (45). These arguments show that the variables N and N^i are not independent variables, so their gauge transformations need not be derived from particular expressions in the generating functional. In other words, there is no room to modify the gauge generator G(t) in order to get the gauge transformations δN and δN^i. Hence, the only remaining task is to check that, under the gauge variations of the canonical variables, the dependent expressions δN and δN^i come out with the correct form. We have not done this explicitly; however, there is no reason to expect it to fail. We will give more explanation about this point in the last section.
Gauge generator for multi-gravity
Let us first review the Hamiltonian formalism of the multi-gravity model [29], with N − 1 interacting component metrics g_(k)μν and one reference metric g_(N)μν ≡ f_μν. The Lagrangian is the direct generalization of the bi-gravity action (1), in which each component metric couples to the reference metric through interaction terms Σ_n β^(k)_n e_n(K_(k)), where the matrix K_(k) is √(g_(k)⁻¹ f), m is a mass parameter and the β^(k)_n are free parameters. As before, let us consider (N − 1) redefined shift variables n^i_(k), defined in analogy with Eq. (3) in terms of the shifts N^i_(k) of the component metrics and M^i of the reference metric (Eq. (48)). The momenta P_(k), P_i(k), P and P_i, conjugate respectively to N_(k), n^i_(k), M and M^i, are primary constraints. Consistency of the primary constraints gives the secondary constraints φ, φ_(k), R_i and S_i(k), where R_i = R^(N)_i and the S_i(k) are analogous to Eq. (14). Direct calculation shows that {φ_(k), φ_(k′)} ≈ 0 for all k and k′, and the only non-vanishing Poisson brackets among the second level constraints are ψ_k = {φ, φ_(k)}. The physics of the system proceeds correctly if we restrict the dynamics to the subspace defined by the new constraints ψ_k ≈ 0; on this subspace, the consistency of the φ_(k)'s is satisfied identically.
To proceed, we should consider the consistency of the ψ_k's.
Defining the modified lapse functions N̄_(k), the canonical Hamiltonian again becomes linear in the lapse and shift variables. Hence, consistency of the ψ_k's gives the last level constraints N̄_(k) ≈ 0, which are second class with their corresponding momenta P̄_(k). In this way, we have 8 first class constraints (P, P_i, R_i, φ′) for generating the space-time diffeomorphisms. It turns out that there are (N − 1) × 6 second class constraints (S_i(k), P_i(k)) and (N − 1) × 4 second class constraints N̄_(k), ψ_(k), φ_(k) and P̄_(k). This corresponds to 2 × (5N − 3) dynamical degrees of freedom, which describes a system with N − 1 massive gravitons and one massless graviton. Now let us go through the construction of the gauge generator for the multi-gravity system. Imposing strong equalities for the vanishing of the second class constraints, the canonical Hamiltonian again takes the form H_c = Σ N^μ H_μ, with H_μ = (φ′, R_i) the first class constraints. The gauge generator is the same as in Eq. (22), where the coefficients C^σ_μν should be derived from the Poisson brackets of the constraints φ′ and R_i, given in Eq. (56). One can read off the coefficients C^σ_μν from Eq. (56); inserting them in Eq. (22), we find the generating functional of Eq. (58). In order to see whether the gauge generator (58) works well, we first calculate the gauge variations of the components of the reference metric f_μν. Using the relations (24) between the gauge parameters ξ^μ and the diffeomorphism parameters ε^μ, the result is the same as given in Eqs. (37), (40) and (42). Then we calculate the gauge transformations of the spatial parts of the component metrics, δg_(k)ij = {g_(k)ij, G(t)}. It is not difficult to check that under the redefinition (43) the variations δg_(k)ij take the standard form (similar to Eq. (34)). Hence, the same generating functional gives simultaneously the diffeomorphism transformations of all the g_(k)ij's, as well as of the reference metric f_ij. However, we may also wish to derive the gauge variations δN_(k) and δN^i_(k). As mentioned in the last paragraph of the previous section, the variables N_(k) and N^i_(k) are dependent on the other variables through Eqs. (48) and (54). One then needs to take into account the equations defining G_k′, G⁻¹_k′k and D^i_(k)j in order to find their variations in terms of the canonical variables. Therefore, similar to the case of bi-gravity, these calculations are too lengthy (see our discussion in the last section).
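The counting quoted above can be verified directly: each of the N metrics contributes 20 phase-space variables, every first class constraint removes two dimensions, and every second class constraint removes one:

```latex
20N \;-\; \underbrace{2\times 8}_{\text{first class}}
    \;-\; \underbrace{\big[\,6(N-1)+4(N-1)\,\big]}_{\text{second class}}
  \;=\; 10N-6 \;=\; 2\,(5N-3).
```

For N = 2 this gives 14 phase-space dimensions, i.e. the seven configuration-space degrees of freedom of bi-gravity quoted above.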
Conclusions
The main goal of this paper is completing the Hamiltonian analysis of a series of modified gravities which contain one or more massive gravitons together with a single massless graviton. In fact, the main focus in the literature is just on investigating the existence or absence of the Boulware-Deser ghost through counting the dynamical variables in the Hamiltonian framework.
However, the Hamiltonian analysis has more capacity than this simple task. In particular, the Hamiltonian structure of a given model may help us investigate the gauge symmetries of the system. This goal is achieved by constructing the generating functional of gauge transformations using the first class constraints of the system.
For general relativity and all its covariant extensions, the main gauge symmetry is diffeomorphism invariance, which contains four infinitesimal arbitrary fields. Hence, the constraint structure should necessarily contain a multiple of four first class constraints. Of course, this is the case only if we take into account all the components of the metric (or metrics), i.e. if we consider the full phase space of the theory, which includes the lapse and shift functions and their corresponding momenta. Hence, analyses which omit lapses and shifts in advance, or consider them from the very beginning as Lagrange multipliers, are not capable of precisely recognizing the first class constraints needed to generate the gauge symmetry. In general, the construction of the gauge generating functional is not an easy task; in ref. [39] one can find instructions for doing this. The problem is even more complicated for generally covariant theories with diffeomorphism as the gauge symmetry of the system.
Fortunately, the algorithm given in [38] was capable of solving our problem here. We found suitable forms for the gauge generators of bi-gravity and multi-gravity which give the correct gauge transformations for all spatial parts of the component metrics as well as of the reference metric. However, as a consequence of the Hamiltonian analysis, the lapses and shifts of the component metrics turn out to be dependent variables. Therefore, there exist clear and straightforward instructions for obtaining the gauge variations of these lapses and shifts, although a lot of cumbersome calculations would be required. Since the gauge symmetry is clearly known from the covariant Lagrangian viewpoint, there is no reason to doubt the result. We think it is satisfactory to have a gauge generating functional which gives the correct gauge transformations for the spatial components of the metrics, which constitute the canonical variables of the system.
"Physics"
] |
LysM protein BdLM1 of Botryosphaeria dothidea plays an important role in full virulence and inhibits plant immunity by binding chitin and protecting hyphae from hydrolysis
Botryosphaeria dothidea infects hundreds of woody plants and causes a severe economic loss to apple production. In this study, we characterized BdLM1, a protein from B. dothidea that contains one LysM domain. BdLM1 expression was dramatically induced at 6 h post-inoculation in wounded apple fruit, strongly increased at 7 d post-inoculation (dpi), and peaked at 20 dpi in intact shoots. The knockout mutants of BdLM1 had significantly reduced virulence on intact apple shoots (20%), wounded apple shoots (40%), and wounded apple fruit (40%). BdLM1 suppressed programmed cell death caused by the mouse protein BAX through Agrobacterium-mediated transient expression in Nicotiana benthamiana, reduced H2O2 accumulation and callose deposition, downregulated resistance gene expression, and promoted Phytophthora nicotianae infection in N. benthamiana. Moreover, BdLM1 inhibited the active oxygen burst induced by chitin and flg22, bound chitin, and protected fungal hyphae against degradation by hydrolytic enzymes. Taken together, our results indicate that BdLM1 is an essential LysM effector required for the full virulence of B. dothidea and that it inhibits plant immunity. Moreover, BdLM1 could inhibit chitin-triggered plant immunity through a dual role, i.e., binding chitin and protecting fungal hyphae against chitinase hydrolysis.
Introduction
Botryosphaeria dothidea is a fungal pathogen that infects hundreds of woody plants (Xiao et al., 2013; Marsberg et al., 2017). Apple ring rot caused by B. dothidea, also called white rot, is one of the most important diseases in apple production and has seriously affected the development of the apple industry in China (Guo et al., 2009). This pathogen commonly causes fruit rot, warts, rough skin, and cankers on apple stems (Li et al., 2009; Tang et al., 2012). With the release and availability of genome data (Liu et al., 2016; Marsberg et al., 2017; Wang et al., 2018; Hu et al., 2019; Liang et al., 2021; Rao et al., 2021; Yu et al., 2021) and recently improved gene disruption methods (Dong and Guo, 2020), research on gene function in B. dothidea is accelerating (Dong et al., 2021; Zhang et al., 2021).
During plant-pathogen interactions, plants have evolved a two-layered immune system. The first layer involves cell-surface-localized pattern recognition receptors (PRRs), which recognize conserved pathogen-associated molecular patterns (PAMPs) to activate pattern-triggered immunity (PTI) (Jones and Dangl, 2006). This layer of the immune system is associated with a broad range of immune responses, including the generation of reactive oxygen species (ROS), the secretion of chitinases, and the induction of defense genes (Miya et al., 2007; Shimizu et al., 2010; Bozsoki et al., 2017). In turn, pathogens secrete effectors to overcome PTI and colonize hosts successfully by perturbing host defenses (Boller and He, 2009). Plants have evolved a surveillance system to recognize these effectors and activate effector-triggered immunity, including hypersensitive cell death and defense-gene activation (Jones and Dangl, 2006; Wang et al., 2022). Over the past several decades, many typical PAMPs, such as fungal cell wall chitin and the bacterial flagellar peptide flg22, and many effectors, including LysM motif-containing, RXLR, and CFEM proteins, have been characterized (Thomma et al., 2011; Wang et al., 2022).
LysM effectors are secreted proteins that do not carry any annotated domains other than a variable number of LysM domains; these domains are carbohydrate-binding modules that appear in many prokaryotic and eukaryotic proteins (Garvey et al., 1986; de Jonge and Thomma, 2009; Kombrink et al., 2017). Ecp6 was the first LysM effector characterized to contribute to the virulence of the tomato leaf mold pathogen Cladosporium fulvum (de Jonge et al., 2010). Later, it was found that LysM effectors also contribute to the virulence of many fungal pathogens, including the wheat pathogen Zymoseptoria tritici/Mycosphaerella graminicola (Mg1LysM and Mg3LysM), the rice blast fungus Magnaporthe oryzae (Slp1), the Brassicaceae anthracnose fungus Colletotrichum higginsianum (ChELP1 and 2), the vascular wilt fungal pathogen Verticillium dahliae, and the fruit pathogen Penicillium expansum (Marshall et al., 2011; Mentlak et al., 2012; Lee et al., 2014; Takahara et al., 2016; Kombrink et al., 2017; Levin et al., 2017). According to several reports, LysM effector proteins compete for binding of fungal cell wall chitin to prevent the elicitation of chitin-triggered host immunity and/or protect hyphae from degradation by plant chitinases (Rovenich et al., 2016; Kombrink et al., 2017). For example, ChELP1 and ChELP2 of C. higginsianum and Ecp6 of C. fulvum suppress chitin-triggered defense responses by sequestering chitin fragments (de Jonge et al., 2010; Sánchez-Vallet et al., 2013). Mg1LysM and Mg3LysM of M. graminicola protect fungal hyphae against plant chitinases (Marshall et al., 2011). Mg3LysM of M. graminicola and Vd2LysM of V. dahliae can both suppress chitin-triggered defense responses and protect fungal hyphae against hydrolysis by plant chitinases (Marshall et al., 2011). LysM effectors with similar functions have been identified in fungal pathogens of insects (Cen et al., 2017) and have also been found to contribute to circumventing plant defense responses to facilitate arbuscular mycorrhizal symbiosis (Zeng et al., 2020). Although LysM effectors in many fungi have been widely characterized, little is known about their roles in fungal pathogens of woody plants, including B. dothidea.
Previously, we identified five candidate LysM effectors in B. dothidea (Zhang et al., 2021). Here, we analyzed these proteins in B. dothidea using bioinformatics tools and studied the function of BdLM1 in B. dothidea. We analyzed the expression of BdLM1 during the infection process using qRT-PCR, tested the ability of BdLM1 to suppress programmed cell death and promote pathogen infection in Nicotiana benthamiana by infiltration, and further investigated the role of BdLM1 in vegetative growth and pathogenicity through gene disruption. The results of this study illustrate that BdLM1 plays a dual role in the interaction between B. dothidea and plants.
Bioinformatic analysis of LysM proteins in Botryosphaeria dothidea
Our previous study showed that there are five putative LysM effectors in B. dothidea (Zhang et al., 2021). Here, the structural domains of the five putative LysM effectors were further analyzed using the NCBI Conserved Domain Search Tool (https://www.ncbi.nlm.nih.gov/Structure/cdd/wrpsb.cgi) (Bethesda, MD, USA). The SP was predicted using the online SignalP-5.0 tool (http://www.cbs.dtu.dk/services/SignalP/) (DTU, Copenhagen, Denmark). LysM effectors from other fungi in JGI (https://genome.jgi.doe.gov/portal/) and GenBank were compared, and the phylogenetic tree was generated with MEGA 7.0 (Sudhir Kumar, Arizona State University, Tempe, AZ, USA) using the neighbor-joining method. A Poisson model was used for amino acid substitutions, and pairwise deletion was used to treat gaps and missing data. Statistical support was assessed by bootstrapping with 1000 replicates.
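For readers who wish to reproduce the tree-building step outside the MEGA interface, the sketch below shows a minimal neighbor-joining workflow in Biopython. The input file name is hypothetical, the "identity" distance is a simple stand-in for the Poisson amino-acid model used above, and bootstrapping is omitted for brevity.

```python
# Minimal sketch of the neighbor-joining step using Biopython instead of MEGA;
# "lysm_aligned.fasta" is a hypothetical pre-aligned FASTA of LysM protein sequences.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("lysm_aligned.fasta", "fasta")

# Pairwise distances from the alignment; "identity" is a simple stand-in
# for the Poisson model used in the paper.
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)

# Neighbor-joining tree construction.
constructor = DistanceTreeConstructor()
tree = constructor.nj(distance_matrix)

Phylo.draw_ascii(tree)
```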
Functional verification of signal peptides
To confirm the secretion activity of BdLM1, a yeast secretion trap assay was used following the description by Zhang et al. (2021). Specifically, the predicted SP of BdLM1 was fused to the N-terminus of the secretion-defective invertase gene (suc2) in the vector pSUC2, and the construct was transformed into yeast strain YTK12 using a T2001 Frozen-EZ Yeast Transformation II Kit (Zymo Research, Irvine, CA, USA). YTK12 was cultured on yeast minimal tryptophan dropout medium (CMD-W medium: 0.67% yeast nitrogen base without amino acids, 0.075% tryptophan dropout supplement, 2% sucrose, 0.1% glucose, and 2% agar) and YPRAA medium (1% yeast extract, 2% peptone, 2% raffinose, and 2 µg of antimycin A per liter). The coding sequences of the SP of Avr1b and the first 25 amino acids of Mg87 were used as the positive and negative controls, respectively. The primers used for vector construction are listed in Table S1.
Agrobacterium tumefaciens-mediated infiltration assay in N. benthamiana
To determine whether BdLM1 regulates the plant immune response, an A. tumefaciens-mediated infiltration assay in N. benthamiana was performed using a previously described method (Zhang et al., 2021). Both the ORF sequence without the SP and the full-length coding sequence of BdLM1 were amplified from the cDNA of B. dothidea isolate ZY7 or HTLW03 and cloned into the plasmid pGR107 with a 3× Flag-tag fused at the N-terminus using the ClonExpress II One-Step Cloning Kit (Vazyme, Nanjing, China), according to the manufacturer's instructions. After verification using PCR with the pGR107-F/R primers (Table S1) and sequencing, the generated construct was transformed into A. tumefaciens strain GV3101 by electroporation.
The assays of A. tumefaciens-mediated transient gene expression in N. benthamiana were performed using a previously described method (Zhang et al., 2021). Specifically, A. tumefaciens cells carrying BdLM1 were cultivated overnight in Luria-Bertani medium containing 50 µg/mL kanamycin and rifampicin in a shaker at 28°C and 180 rpm. The A. tumefaciens cells were harvested, washed three times, and then resuspended in infiltration buffer to a final OD600 of 0.5. After being kept at room temperature for 3 h, the A. tumefaciens cells carrying BdLM1 were infiltrated via needleless syringes into the leaves of 4-6-week-old N. benthamiana plants. A total of 15 leaves from five tobacco plants were used. Infiltrations of buffer and of A. tumefaciens cells carrying pGR107-GFP were used as the negative controls. At 24 h after the initial infiltration, the same infiltration site was challenged with A. tumefaciens cells carrying BAX. The entire assay was repeated at least once. Cell death symptoms on infiltrated leaves were observed and photographed 6 d after the initial infiltration. Western blotting was performed as described by Zhang et al. (2021).
To further assess the plant immune response, infection by P. nicotianae was tested after BdLM1 infiltration in N. benthamiana using a previously described method (Wang et al., 2019; Yang et al., 2019). In brief, the ORF of BdLM1 without the SP was amplified from the cDNA of B. dothidea isolate HTLW03, cloned into the plasmid pSuper with a GFP-tag or the plasmid pGR107-GFP, and transformed into A. tumefaciens strain GV3101 by electroporation. In total, 12 N. benthamiana leaves were collected 36 h after agroinfiltration and kept on filter paper with sterile double-distilled H2O in Petri dishes, and the plates were kept in plastic boxes. The infiltrated region was inoculated with a P. nicotianae mycelial plug (0.5 mm in diameter). Lesions were photographed at 60 h post-inoculation (hpi) and their areas were measured. Total DNA was extracted from leaf disks (3 cm in diameter) at infection sites at 60 hpi with P. nicotianae. The biomass of P. nicotianae in inoculated leaves was determined with quantitative PCR (qPCR) using the N. benthamiana actin gene and the P. nicotianae elongation factor (EF1a) gene as internal controls (Table S1). The H2O2 content in N. benthamiana leaves was tested at 12 hpi with P. nicotianae after infiltration with GFP or BdLM1, using a previously described method (Wang et al., 2019; Yang et al., 2019). In addition, callose deposition and the expression of PR genes were assayed at 48 hpi, as previously described (Dong et al., 2021). NbPR1 and NbNPR1 expression was determined with qPCR using the elongation factor (EF1a) gene as an internal control (Table S1). The qPCR results were analyzed using the 2^(−ΔΔCt) method (Livak and Schmittgen, 2001). The experiment contained three replicates. The assay was repeated once.
Confocal microscopic analysis
To study the subcellular localization of BdLM1 in plants, BdLM1 was cloned and inserted into the pCAM35s-GFP plasmid at the Xba I and Sal I sites with the primers listed in Table S1, generating the fusion vector pCAM35s-GFP-BdLM1. The construct pCAM35s-GFP-BdLM1 and the empty vector pCAM35s-GFP were transformed into Agrobacterium strain GV3101 and then infiltrated into the leaf epidermis of N. benthamiana. At 60 hpi, N. benthamiana leaf pieces (0.2 × 0.2 cm in size) were mounted in water on glass slides for observation. The fluorescence was imaged using a Leica TCS SP8 confocal microscopy system (Leica, Wetzlar, Germany). GFP fluorescence was excited using 488- and 552-nm laser lines.
RNA extraction and qRT-PCR analysis
To detect the expression pattern of BdLM1 in apple during infection by B. dothidea, the mycelium and fruit tissues (2 × 2 cm) were collected from 36 inoculation sites at 0, 6, 12, 24, 36, 48, and 72 hpi. Similarly, the mycelium and bark tissues (0.5 × 0.5 cm) from 18 inoculation sites were collected at 0, 1, 3, 7, 20, and 30 dpi. The total RNA of each sample was extracted using an EASYspin Plus plant RNA extraction kit (Aidlab Biotech, Beijing, China). The purity and concentration of the RNA were checked using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). First-strand cDNA was synthesized using Reverse Transcriptase M-MLV (Takara, Dalian, China) following the manufacturer's instructions. The B. dothidea actin gene was used as an internal control. PCR was performed in a qTOWER 2.0 thermocycler (Analytik Jena, Jena, Germany) using TB Green Premix DimerEraser™ qPCR mix (Takara, Dalian, China), with primers listed in Table S1. Relative expression values were calculated using the 2^(−ΔΔCt) method (Livak and Schmittgen, 2001). Means from three replicates were used. The experiments were repeated once with a different set of biological samples.
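For readers unfamiliar with the 2^(−ΔΔCt) calculation (Livak and Schmittgen, 2001), the sketch below works through it on made-up Ct values; the numbers are illustrative only and are not taken from this study.

```python
# Worked example of the 2^(-ΔΔCt) relative-expression calculation
# (Livak and Schmittgen, 2001). All Ct values below are illustrative only.

def relative_expression(ct_target, ct_reference, ct_target_cal, ct_ref_cal):
    """Return 2^(-ΔΔCt) for one sample against a calibrator."""
    delta_ct_sample = ct_target - ct_reference          # ΔCt of the sample
    delta_ct_calibrator = ct_target_cal - ct_ref_cal    # ΔCt of the calibrator (mycelium)
    delta_delta_ct = delta_ct_sample - delta_ct_calibrator
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values: BdLM1 vs. actin at 20 dpi, calibrated against mycelium.
fold_change = relative_expression(ct_target=24.1, ct_reference=19.8,
                                  ct_target_cal=28.6, ct_ref_cal=19.5)
print(f"relative expression: {fold_change:.1f}-fold")  # ≈ 28-fold with these values
```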
Generation of gene deletion and complementary transformants
For gene deletion and complementation, polyethylene glycol (PEG)-mediated homologous recombination was performed as previously described by Dong and Guo (2020). Specifically, we constructed a gene homologous recombination (GHR) plasmid containing a hygromycin resistance gene (hph) flanked by sequences of BdLM1. The 1000-bp 5' and 3' flanking fragments were amplified from the genomic DNA of B. dothidea HTLW03. The two fragments were ligated to the 5' and 3' ends of the 1800-bp hph gene and introduced into the pMD19-T vector using the ClonExpress II One-Step Cloning Kit (Vazyme, Nanjing, China). The recombinant plasmid was introduced into B. dothidea HTLW03 protoplasts using PEG. The generated gene-deletion transformants were verified with PCR using the primer pairs listed in Table S1 and by Southern blotting. The complementation fragment of BdLM1, including approximately 1600 bp of promoter, the ORF, and 500 bp of terminator, was amplified from the genomic DNA of B. dothidea and inserted into the pMD19-T-NEO plasmid at the Hind III site. The generated transformants were verified by their phenotypic characteristics.
Morphological characteristics and pathogenicity assay
Mycelial plugs (5 mm in diameter) of the WT strain HTLW03 and its transformants, taken from the edge of a growing colony, were transferred to new PDA plates. Three replicate plates per strain were used and incubated at 26°C in the dark for 48 h. Colony characteristics were examined, and the colony diameter was measured. Furthermore, melanin production was observed after incubation for 5 and 10 d.
To induce conidia formation, the aerial mycelia of 3-d-old colonies on PDA medium were scraped off with a scalpel and incubated at 26°C under near-UV light for 10 d. The mature pycnidia were collected in a 1.5 mL microcentrifuge tube with 0.5 mL of sterile ddH2O and crushed with a pestle. The concentration of the conidia suspension was measured with a hemocytometer. The length and width of 50 conidia per isolate were measured under a compound microscope (Olympus model BX41TF). In addition, conidial germination was tested on water agar at 26°C in the dark for 4-5 h. The percentage of conidial germination was estimated by examining 100 conidia per replicate, with three replicates for each isolate. Each experiment was repeated once.
The pathogenicity of the WT and its transformants was tested on intact shoots, wounded shoots, and wounded fruit of apple (Malus domestica Borkh. 'Fuji') as previously described (Tang et al., 2012; Dong et al., 2021; Zhang et al., 2021). Symptoms on intact apple shoots were observed and disease severity was recorded at 30 dpi as described by Dong et al. (2021). The length of the lesion on wounded shoots was measured at 5-7 dpi, and the diameter of the lesion on wounded apple fruit was measured at 2 dpi. Each experiment included three apple fruits or five shoots. The pathogenicity test was repeated once.
Heterologous protein production in Escherichia coli
Prokaryotic expression of BdLM1 was performed as described by Tian et al. (2021), with some modifications. Specifically, the open reading frame of BdLM1, amplified with the primers listed in Table S1, was ligated into the pET-SUMO vector with a 6×His-tag and transformed into E. coli BL21 (DE3) pLysS (ZOMANBIO, Beijing, China). BdLM1 expression was induced with 0.5 mM isopropyl β-D-1-thiogalactopyranoside at 26°C for 20 h. After the E. coli cells were harvested by centrifugation at 7500 rpm for 10 min, the pellet was resuspended in 50 mL of cell lysis buffer (50 mM Tris-HCl pH 8.0, 150 mM NaCl, 20 mM imidazole), incubated at 4°C for 15 min with stirring, and centrifuged at 12,000 rpm for 25 min. The resulting cleared supernatant was immediately placed on ice for further purification.
For the purification of BdLM1, His60 Ni Superflow resin (Clontech, Mountain View, CA, USA) was used. After the column was equilibrated with wash buffer (50 mM Tris-HCl pH 8.0, 150 mM NaCl, and 20 mM imidazole), the protein preparation was loaded onto it. The target protein was eluted with elution buffer (50 mM Tris-HCl pH 8.0, 150 mM NaCl, and 500 mM imidazole), and the purity of the eluate was assessed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) followed by Coomassie brilliant blue staining. The eluted protein was then concentrated to the required concentration.
Chitin binding assay
The assay was performed as described by Tian et al. (2021), with some modifications. In brief, 500 µL of protein solution containing 30 mg/mL E. coli-produced BdLM1 protein was incubated with 5 mg of chitin, chitosan, cellulose, or xylan (Yuanye, Shanghai, China) in a shaker at 100 rpm and 4°C for 6 h. The samples were centrifuged at 13,000 g for 5 min. The supernatants were collected and concentrated to a volume of approximately 100 µL. The pellets were washed three times with incubation buffer and then resuspended in 100 µL of demineralized water. Then, 50 µL of the pellet solution or the supernatant was individually incubated with 50 µL of 2× SDS-PAGE protein loading buffer (200 mM Tris-HCl, pH 6.5, 0.4 M dithiothreitol, 8% sodium dodecyl sulfate, 6 mM bromophenol blue, and 40% glycerol) at 95°C for 10 min. Samples were analyzed by Western blot using anti-His antibodies. Photos were taken using an Azure Biosystems imaging system (Azure, Dublin, CA, USA) with custom settings.
Reactive oxygen species measurement
ROS production measurements were performed as described by Tian et al. (2021). For each treatment, four leaf disks (Ø = 0.5 cm) from 2-week-old N. benthamiana plants were placed into a 96-well microtiter plate and rinsed with 200 µL of fresh demineralized water for 24 h. The water was then replaced with 50 µL of fresh demineralized water, and the plate was incubated for 2 h at room temperature. Meanwhile, mixtures of (GlcNAc)6 (Sigma-Aldrich, St. Louis, MO, USA) and the BdLM1 protein were incubated for 2 h. Then, (GlcNAc)6 was added to a final concentration of 40 µM in the absence or presence of 50 µM BdLM1 protein in a measuring solution containing 200 µM luminol (Biotopped, Beijing, China) and 20 µg/mL horseradish peroxidase (Biotopped, Beijing, China). Similarly, flg22 was added to a final concentration of 1 µM in the absence or presence of 50 µM BdLM1. Chemiluminescence was measured every minute over 40 min in a Tecan Infinite F200 microplate reader (Tecan, Männedorf, Switzerland).
Hyphal protection against chitinase hydrolysis
The assay was performed using a previously described method (Tian et al., 2021). Specifically, Fusarium oxysporum f. sp. lycopersicum conidia were harvested from a 4-d-old culture on CMC medium (15 g of carboxymethyl cellulose, 1 g of NH4NO3, 1 g of KH2PO4, 0.5 g of MgSO4·7H2O, and 1 g of yeast extract per liter), filtered through Miracloth (Merck KGaA, Darmstadt, Germany), and adjusted to a concentration of 10^6 spores/mL with potato dextrose broth. Conidia suspensions in aliquots of 50 µL were incubated overnight at room temperature. BdLM1 protein was added to a final concentration of 20 µM. After 2 h of incubation, 2 µL of chitinase from Streptomyces griseus (Yuanye, Shanghai, China) was added to the appropriate wells. Sterile water was added as a control. After a further 4 h of incubation, hyphal growth was inspected with an Olympus BX41 microscope.
Statistical analysis
Statistical analysis of the data in this study was performed using Microsoft Office and SPSS (IBM, Armonk, NY, USA). To determine whether treatment effects were statistically significant, an analysis of variance (ANOVA) was first conducted. When treatment effects were significant, multiple mean comparisons were performed using Duncan's test at a significance level of 0.05.
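As a minimal sketch of the first stage of this pipeline in Python, the example below runs the omnibus one-way ANOVA with SciPy. Duncan's multiple range test is not available in SciPy, so only the ANOVA step is shown, and the lesion-length data are hypothetical.

```python
# Sketch of the omnibus one-way ANOVA step with SciPy; Duncan's test is not
# available in SciPy, so only the first stage is shown. Lesion lengths (mm)
# below are hypothetical, not from this study.
from scipy import stats

wt           = [21.3, 23.1, 20.8, 22.5, 21.9]   # wild type
knockout     = [12.4, 13.9, 12.8, 14.1, 13.2]   # ΔBdLM1 mutant
complemented = [20.9, 22.4, 21.6, 23.0, 22.1]   # complementation strain

f_stat, p_value = stats.f_oneway(wt, knockout, complemented)
if p_value < 0.05:
    print(f"treatment effect significant (F={f_stat:.1f}, P={p_value:.4f});"
          " proceed to multiple mean comparisons")
```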
BdLM1 in B. dothidea is a typical LysM protein with secretion activity
Our previous study showed that there were five proteins containing the LysM domain in B. dothidea (Zhang et al., 2021). Here, we first compared the LysM proteins from B. dothidea with those from other plant pathogenic fungi. Phylogenetic analysis showed that the five candidate LysM proteins from B. dothidea (Bdo_02296, Bdo_03965, Bdo_10607, Bdo_10805, and Bdo_05438) were divided into four groups (Figure 1A). Bdo_10805, which contains 189 amino acids and one LysM motif, was designated BdLM1. It clustered into a group different from that of well-known effectors including Ecp6, Slp1, ChELP1, and Vd2LysM (Figure 1A). BdLM1 had a signal peptide (SP) of 19 amino acids at the N-terminus, suggesting that it may be a secreted protein, and amino acids 61 to 95 constituted a typical LysM domain (Figures 1A, B). In addition, the amino acid sequences of BdLM1 from isolates ZY7 and HTLW03 were identical (Figure 1B), despite a difference of three nucleotides (Figure 1C).
As BdLM1 was predicted to have an SP, we tested its secretory activity using the yeast secretion trap assay described by Zhang et al. (2021). The SPs of BdLM1 from either HTLW03 or ZY7 could restore the growth of invertase-deficient yeast on YPRAA medium, similar to the positive control Avr1b (Figure 2). These results suggest that the SP of BdLM1 can guide the secretion of the truncated invertase and that BdLM1 has secretory activity.
High BdLM1 expression during B. dothidea infection of apple
To examine the expression profile of BdLM1 during infection by B. dothidea, we extracted RNA from apple shoots or fruit at various times post-inoculation and quantified its expression using qRT-PCR. In intact shoots, BdLM1 expression was low at 1 d post-inoculation (dpi), strongly increased at 7 dpi, peaked at 20 dpi, and decreased at 30 dpi (Figure 3A). In wounded apple fruit, BdLM1 expression was highest at 6 h post-inoculation (hpi) and significantly decreased to low levels at 12, 24, and 72 hpi (Figure 3B). These results suggest that BdLM1 plays a role in the infection process of B. dothidea.
BdLM1 gene is important for the vegetative growth and virulence of B. dothidea
To determine the biological function of BdLM1 in B. dothidea, we generated BdLM1 knockout transformants by homologous recombination as previously described by Dong and Guo (2020) (Figure 4A). BdLM1 knockout transformants were identified using polymerase chain reaction (PCR). As expected, PCR products of approximately 1.6, 1.9, and 1.6 kb for the upstream (with 1F and 1R), downstream (with 2F and 2R), and ORF fragments (with 3-F and 3-R), respectively, were amplified (Figures 4B-D). In Southern blotting using a hygromycin B phosphotransferase (hph) gene probe, the WT showed no hybridization signal, while the two knockout transformants each showed a unique hybridization band (Figure 4E). The two BdLM1 knockout transformants, ΔBdLM1-1 and ΔBdLM1-2, were selected for further study. We also generated two complementation transformants of BdLM1 (C-1 and C-2).
Subsequently, we assessed the growth, conidia production, and conidial germination of the wild type (WT), BdLM1 knockout mutants, and complementation transformants. The two BdLM1 knockout mutants exhibited significantly faster growth than the WT and complementation transformants (Figures 4F, G). Additionally, the two knockout mutants produced less melanin in 5-d cultures but a similar quantity in 10-d cultures compared with the WT and complementation strains (Figure S1), indicating that the loss of BdLM1 delayed melanin production in B. dothidea. However, no significant differences in the formation of pycnidia, the production of conidia, or the germination rates of conidia were observed among the knockout mutants, WT, and complementation strains on potato dextrose agar (PDA) (Figures 4F, H, I).
To examine the effect of BdLM1 on pathogenicity, we inoculated wounded and intact apple shoots and wounded fruit with the WT and its transformants. All tested isolates produced lesions on apple (Figure 5). However, the BdLM1 knockout mutants displayed a significant decrease in the disease severity index on intact apple shoots (by 20%; Figure 5A), in lesion length on wounded detached apple shoots (by 40%; Figure 5B), and in lesion size on wounded apple fruit (by 40%; Figure 5C). These results indicate that BdLM1 is required for the full virulence of B. dothidea.
BdLM1 suppresses the immunity of N. benthamiana
To investigate the function of BdLM1 in pathogen-host interactions, we first determined whether BdLM1 could induce programmed cell death (PCD) or suppress BAX-induced programmed cell death (BT-PCD) through Agrobacterium-mediated transient expression in N. benthamiana. Leaves that were challenged with the BAX protein 24 h after infiltration with BdLM1, with or without the SP, from isolate ZY7 or HTLW03 did not exhibit symptoms of PCD, while leaves infiltrated with GFP or buffer showed PCD (Figures 6A-D). Western blot analysis confirmed the expression of GFP, BAX, and BdLM1 in N. benthamiana leaves after infiltration (Figure 6E).
We further investigated the effects of BdLM1 on infection by Phytophthora nicotianae in the leaves of N. benthamiana. Leaves transiently expressing BdLM1 showed significantly larger lesions (Figures 7A, B) and a significantly higher relative P. nicotianae biomass than leaves expressing GFP (Figure 7C). Moreover, DAB staining showed a significantly lower level of H2O2 accumulation in N. benthamiana tissue inoculated with P. nicotianae after infiltration with BdLM1 than after infiltration with GFP (Figures 7D, E). Meanwhile, significantly less callose deposition was observed in N. benthamiana tissues infiltrated with BdLM1 than in those infiltrated with GFP (Figures 7F, G). In addition, the qRT-PCR assay indicated downregulated expression of the pathogenesis-related genes NbPR1 and NbNPR1 in N. benthamiana transiently expressing BdLM1 (Figure 7H). These results indicate that BdLM1 inhibits plant immunity and promotes P. nicotianae infection.
Furthermore, we investigated the localization of BdLM1 by fusing the synthetic green fluorescent protein (sGFP) to the C-terminus of BdLM1. We infiltrated Agrobacterium tumefaciens cells carrying BdLM1-sGFP into N. benthamiana leaves and observed them under a laser confocal microscope. BdLM1 from ZY7 or HTLW03, with or without the SP, localized to the nucleus and cytoplasmic membrane of N. benthamiana cells (Figure S2). These results indicate that BdLM1 probably localizes to the nucleus and the cytoplasmic membrane in N. benthamiana.
BdLM1 binds chitin, suppresses reactive oxygen species (ROS) production, and protects hyphae against chitinase hydrolysis
Previous studies have shown that effectors containing the LysM motif in fungal plant pathogens bind chitin to inhibit plant immunity (de Jonge et al., 2010; Takahara et al., 2016). To investigate how BdLM1 contributes to B. dothidea virulence during colonization, we first evaluated its substrate-binding characteristics using a polysaccharide precipitation assay following the methods described by Kombrink et al. (2017). BdLM1 protein heterologously expressed in Escherichia coli bound chitin beads, bound chitosan slightly, but did not bind the plant cell wall polymers cellulose or xylan (Figure 8A).
LysM effectors from various fungal plant pathogens have previously been shown to suppress chitin-induced ROS production in N. benthamiana leaf disks and to perturb chitin-induced host immune responses (de Jonge et al., 2010; Tian et al., 2021). To determine whether BdLM1 has this ability, the ROS burst induced by chitin or flg22 was assessed in N. benthamiana leaf disks. This was done by treating the leaf disks with 40 µM chitin or 1 µM flg22, with or without the effector protein BdLM1, as previously demonstrated (de Jonge et al., 2010). Remarkably, pre-incubation of 40 µM chitin or 1 µM flg22 with 50 µM BdLM1 prior to addition to the leaf disks led to a significant reduction of the ROS burst (Figures 8B, C), demonstrating the ability of BdLM1 to suppress plant immune responses induced by chitin and flg22. Some LysM proteins have been shown to protect fungal hyphae against chitinase hydrolysis (Marshall et al., 2011; Tian et al., 2021). To evaluate the possible role of BdLM1 in hyphal protection, we tested its ability to protect the hyphae of Fusarium oxysporum f. sp. lycopersicum. As expected, while the addition of chitinase dramatically hydrolyzed F. oxysporum hyphae, BdLM1 protected the hyphae from hydrolysis by chitinase from Streptomyces griseus (Figure 8D).
Discussion
Although LysM effectors have been extensively characterized in fungal pathogens causing herbaceous plant diseases (Buist et al., 2008; Kombrink et al., 2011), they have rarely been reported in the woody plant fungal pathogen B. dothidea, which is well known to cause significant economic losses in agriculture (Guo et al., 2009; Marsberg et al., 2017). Based on bioinformatics analysis, we identified five candidate LysM effectors in B. dothidea, four of which are closely related to known LysM effectors, such as Mg3LysM, Slp1, and LtLysm. Interestingly, BdLM1, unlike those previously studied LysM effectors, is phylogenetically clustered in one group with LysM effectors from Macrophomina phaseolina, Neofusicoccum parvum, and Lasiodiplodia theobromae, among others. The functions of the LysM proteins in this group have not yet been documented. This study focused on characterizing the LysM protein BdLM1 from B. dothidea through gene knockout and infiltration expression. Our results revealed that BdLM1 knockout mutants showed a significant decrease in virulence on apple. Additionally, BdLM1 suppressed BT-PCD, reduced H2O2 accumulation and callose deposition, promoted infection by P. nicotianae, and significantly downregulated the plant pathogenesis-related gene PR1 in N. benthamiana. Furthermore, BdLM1 bound chitin, suppressed plant immunity induced by chitin and flg22, and protected fungal hyphae against chitinase hydrolysis. Our results suggest that the LysM effector BdLM1 plays a crucial role in the full virulence of B. dothidea and in suppressing plant immunity.
Previously characterized LysM effectors are typically induced during the early stages of infection, when pathogens need to evade recognition by the host for successful tissue colonization (Fradin and Thomma, 2006). Similarly, BdLM1 expression peaked at about 6 hpi on wounded apple fruit during early infection. In contrast, its expression is reduced at 1 dpi, when the B. dothidea infection site is being established (Dong et al., 2021), and peaks around 20 dpi on intact shoots during mid to late infection, while the infecting hyphae penetrate the phellem in the second layer and expand in the phelloderm (Dong et al., 2021). The latter expression pattern is similar to that of the soil-borne vascular plant pathogen V. dahliae, in which the expression of Vd2LysM peaks at about 1 week after inoculation, during xylem colonization, before wilting and the appearance of necrosis (Kombrink et al., 2017). Thus, BdLM1 has variable expression patterns in different apple tissues, including fruit and shoots.
Several LysM proteins have been identified in single fungal species, such as M. graminicola, V. dahliae, and Penicillium expansum (Marshall et al., 2011; Kombrink et al., 2017; Levin et al., 2017). Of the three LysM effectors of M. graminicola, only Mg3LysM knockout strains were dramatically altered, with loss of pathogenicity in leaves and reduced asexual sporulation (Marshall et al., 2011). Of the four LysM effectors of V. dahliae, only the lineage-specific Vd2LysM of strain VdLs17 was required for full virulence in tomato (Kombrink et al., 2017). Additionally, the four putative PeLysM effectors do not contribute to the virulence of P. expansum, and PeLysM3 has a potential role in growth processes. Similarly, TAL6 in Trichoderma atroviride has been shown to be involved in self-signaling processes during fungal growth (Levin et al., 2017). Thus, only some of the LysM effectors of a given fungus contribute to virulence. In this study, BdLM1 was required for the full virulence of B. dothidea, affected penetration and extension, and was involved in the mycelial growth of B. dothidea.
LysM effectors have been shown to affect the chitin-induced plant immune system by either binding chitin or protecting fungal hyphae against chitinases (Marshall et al., 2011; Mentlak et al., 2012; Takahara et al., 2016; Kombrink et al., 2017). Similar to V. dahliae Vd2LysM, R. irregularis RiSLM, and M. graminicola Mg1LysM, Mgx1LysM, and Mg3LysM (Marshall et al., 2011; Mentlak et al., 2012; Takahara et al., 2016; Kombrink et al., 2017), BdLM1 can protect hyphae against chitinase hydrolysis (Figure S3), whereas C. fulvum Ecp6, M. oryzae Slp1, and C. higginsianum ChELP1 and ChELP2 do not possess such activity (de Jonge et al., 2010; Mentlak et al., 2012; Takahara et al., 2016). It has been reported that this ability is not determined by the number of LysM domains in a protein but by chitin-induced polymerization, which leads to contiguous LysM effector filaments anchored to chitin in the fungal cell wall that protect it (Sánchez-Vallet et al., 2020; Tian et al., 2021). Previous studies have shown that the expression of the apple LysM protein MdCERK1-2 is induced by B. dothidea and that MdCERK1-2 and MdCERK1 can bind chitin, suggesting that MdCERK1-2 and MdCERK1 may act as PRRs in apple immune defense responses (Zhou et al., 2018; Chen et al., 2020). Thus, we suppose that BdLM1 may compete with the LysM proteins MdCERK1-2 and MdCERK1 for chitin binding in apple, leading to the suppression of plant immunity (Figure S3). Interestingly, unlike previous findings with Ecp6 (de Jonge et al., 2010), BdLM1 inhibited flg22-induced plant immunity (Figure S3). The core effector necrosis-inducing secreted protein 1 (NIS1) of multiple pathogens can inhibit ROS triggered by both chitin and flg22 by interacting with the PRR-associated kinases BAK1 and BIK1 (Irieda et al., 2018). Meanwhile, BdLM1 also inhibited BT-PCD. Based on these results, it appears that BdLM1 may play a broad role in suppressing plant immunity, probably through interaction with BAK1 or BIK1 (Wang et al., 2022; Liu and Tang, 2023). LysM effectors are characterized by one to several LysM domains, but many have two or three LysM domains (Marshall et al., 2011; Mentlak et al., 2012; Takahara et al., 2016; Levin et al., 2017; Harishchandra et al., 2020). Similar to Mg1LysM and MgxLysM (Marshall et al., 2011), BdLM1 contains a single LysM domain. For LysM effectors containing only one LysM domain, protein interaction studies have revealed that two monomers of Mg1LysM or MgxLysM form a chitin-independent homodimer through the β-sheet at the N-terminus of Mg1LysM (Sánchez-Vallet et al., 2020; Tian et al., 2021). Furthermore, Mg1LysM homodimers have been reported to undergo ligand-induced polymerization in the presence of chitin and then develop a polymeric structure that can protect fungal cell walls (Sánchez-Vallet et al., 2020). Collectively, we suspect that this LysM effector from a woody plant fungal pathogen differs from other characterized LysM effectors. Further structural analyses of BdLM1 and of the mechanisms of its interaction with other LysM effector proteins in B. dothidea will be worthwhile.
Conclusion
In this study, BdLM1 from the woody plant pathogenic fungus B. dothidea was shown to be a LysM effector. BdLM1 showed different expression patterns on wounded apple fruit and intact shoots and was required for the full virulence of B. dothidea. BdLM1 inhibited plant immunity induced by the mouse protein BAX, chitin, and flg22.
FIGURE 2 Yeast invertase secretion assay of the predicted signal peptide of BdLM1. The signal peptide sequences of PsAvr1b and MG87 were used as the positive and negative controls, respectively. CMD-W (minus Trp) plates were used to select yeast strain YTK12 carrying the pSUC2 vector. YPRAA media were used to indicate invertase secretion.
FIGURE 1 Bioinformatic analyses of the candidate effectors containing the LysM domain and BdLM1 in Botryosphaeria dothidea. (A) Phylogenetic tree of LysM proteins. The amino acid sequences of fungal LysM proteins were obtained from the NCBI database and used to generate the phylogenetic tree with MEGA 7 using the neighbor-joining method (1000 replicates). The LysM domain was predicted using CDD/SPARCLE (https://www.ncbi.nlm.nih.gov/Structure/cdd/wrpsb.cgi). (B) Amino acid sequence comparison of BdLM1 in the HTLW03 and ZY7 strains using MultAlin (http://multalin.toulouse.inra.fr/multalin/multalin.html). Signal peptide prediction was performed with the SignalP 5.0 server (http://www.cbs.dtu.dk/services/SignalP/). The green box indicates the putative BdLM1 signal peptide, while the pink box indicates the putative LysM domain. (C) Nucleotide sequence comparison of BdLM1 in the HTLW03 and ZY7 strains.
FIGURE 3 Relative expression levels of BdLM1 in Botryosphaeria dothidea during the infection stages. (A) Expression in intact apple shoots. The shoot tissues inoculated with wild-type HTLW03 were harvested at 0, 1, 3, 7, 20, and 30 d post-inoculation (dpi) for RNA extraction. (B) Expression in wounded apple fruit. The fruit tissues inoculated with wild-type HTLW03 were harvested at 0, 6, 12, 24, 36, 48, and 72 h post-inoculation (hpi) for RNA extraction. The relative transcript levels of BdLM1 at different time points after inoculation were normalized to the actin gene and calibrated against that of mycelia. The relative transcript level of BdLM1 was calculated using the 2^(−ΔΔCt) method. The assays were performed with two independent biological repetitions and three replicates each. Error bars represent the standard error. Asterisks indicate statistical significance according to Student's t-test (*P < 0.05).
FIGURE 5 BdLM1 knockout transformants of Botryosphaeria dothidea show reduced virulence on apple shoots and fruit. (A) Symptoms and disease severity on intact shoots inoculated with the wild type (WT), knockout transformants, and complementation strains, measured 30 d after inoculation. Disease severity was recorded on a scale from 0 to 4 based on the number of warts at inoculation sites using the method described by Dong et al. (2021). The disease severity index (DSI) was calculated with the formula: [sum (class frequency × score of rating class)]/[(total number of inoculation sites) × (maximal disease index)] × 100. (B) Symptoms and lesion length on wounded shoots measured 5 d after inoculation. (C) Symptoms and lesion length on wounded apple fruit measured 2 d after inoculation. Five shoots and three apple fruit were used for each treatment, and the entire experiment was repeated once. Error bars represent standard errors calculated from six replicates. The asterisk indicates a significant difference according to Student's t-test (*P < 0.05).
FIGURE 6 BdLM1 suppresses cell death triggered by BAX in Nicotiana benthamiana. (A) Suppression of BAX-triggered programmed cell death (BT-PCD) in N. benthamiana after infiltration with BdLM1 from the ZY7 strain without the signal peptide (SP). (B) Suppression of BT-PCD in N. benthamiana after infiltration with BdLM1 from the ZY7 strain with the SP. (C) Suppression of BT-PCD in N. benthamiana after infiltration with BdLM1 from the HTLW03 strain without the SP. (D) Suppression of BT-PCD in N. benthamiana after infiltration with BdLM1 from the HTLW03 strain with the SP. The representative photo was acquired 5 d after the last infiltration. (E) Western blotting was used to confirm the expression of GFP, BAX, and BdLM1 in (A-D); equal loading is indicated by Ponceau S staining. A Flag tag was added to BdLM1 or GFP, and a GFP tag was added to BAX in this study.
FIGURE 7 Transient expression of BdLM1 in Nicotiana benthamiana inhibits plant immunity and increases Phytophthora nicotianae infection. (A) Symptoms formed on tobacco leaves inoculated with a P. nicotianae mycelial plug 36 h after transient expression of BdLM1. Photographs were taken under ultraviolet light; the lesion area was measured and biomass was assayed at 60 h post-inoculation (hpi). BdLM1 was ligated into the plasmid pSuper with a GFP-tag. (B) Lesion area. (C) Relative biomass of P. nicotianae. Phytophthora nicotianae biomass in inoculated leaves was determined with quantitative PCR (qPCR) using the N. benthamiana actin gene and the P. nicotianae elongation factor (EF1a) gene as internal controls. The data are the averages (and standard errors) of the values from two independent biological replicates. The asterisk indicates a significant difference according to Student's t-test (*P < 0.05). (D) The reactive oxygen burst 48 h after transient expression of GFP or BdLM1. Bars = 200 µm. (E) Quantification of DAB staining using ImageJ software. The data are the averages (and standard errors) of the values from two independent biological replicates. The asterisk indicates a significant difference according to Student's t-test (*P < 0.05). (F) Callose accumulation 48 h after transient expression of GFP or BdLM1. (G) Statistical results of callose formation in N. benthamiana leaves. The data are the averages (and standard errors) of the values from two independent biological replicates. The asterisk indicates a significant difference according to Student's t-test (*P < 0.05). Bars = 200 µm. (H) Expression levels of pathogenesis-related (PR) genes in N. benthamiana 48 h after transient expression of BdLM1. The data are the averages (and standard errors) of the values from two independent biological replicates. The asterisk indicates a significant difference according to Student's t-test (*P < 0.05).
BdLM1 decreased H2O2 accumulation and callose deposition, and downregulated resistance gene expression in N. benthamiana. Furthermore, BdLM1 bound chitin and protected fungal hyphae against degradation by chitinase. These findings indicate that the LysM effector BdLM1 contributes to the full virulence of B. dothidea, inhibits plant immunity induced by various factors, and has a dual function in inhibiting chitin-triggered plant immunity by binding chitin and protecting fungal hyphae against chitinase hydrolysis.
The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This work was supported by the National Key R&D Program of China (Grant No. 2016YFD0201100) and the Project of the Yazhouwan Scientific and Technological Administration of Sanya (SYND-2022-36).
FIGURE 8 BdLM1 binds chitin, suppresses chitin- and flg22-induced immune responses, and protects the hyphal growth of Fusarium oxysporum against chitinase hydrolysis. (A) BdLM1 binds chitin and chitosan. Escherichia coli-produced BdLM1 was first incubated with chitin, chitosan, cellulose, or xylan for 6 h. After centrifugation, pellets and supernatants were analyzed using polyacrylamide gel electrophoresis followed by Western blot. (B) BdLM1 inhibited the reactive oxygen species (ROS) burst induced by chitin in N. benthamiana. ROS production in leaf disks of N. benthamiana after the addition of 40 µM chitin with or without pre-incubation with 50 µM BdLM1 for 2 h. The figure is representative of two independent experiments with similar results. Error bars represent standard errors from four replicates. (C) BdLM1 inhibited the ROS burst induced by flg22 in N. benthamiana. Production of ROS in leaf disks of N. benthamiana after the addition of 1 µM flg22 with or without pre-incubation with 50 µM BdLM1 for 2 h. The figure is representative of two independent experiments with similar results. Error bars represent standard errors from four biological replicates. (D) BdLM1 protects the hyphal growth of Fusarium oxysporum f. sp. lycopersicum against chitinase hydrolysis. Micrographs of F. oxysporum f. sp. lycopersicum grown in vitro with or without 2 h of pre-incubation with B. dothidea BdLM1, followed by the addition of chitinase or water. Microscopic pictures were taken approximately 4 h after chitinase addition. Bars = 50 µm.
"Biology",
"Environmental Science"
] |
Open-Ended Learning: A Conceptual Framework Based on Representational Redescription
Reinforcement learning (RL) aims at building a policy that maximizes a task-related reward within a given domain. When the domain is known, i.e., when its states, actions and reward are defined, Markov Decision Processes (MDPs) provide a convenient theoretical framework to formalize RL. But in an open-ended learning process, an agent or robot must solve an unbounded sequence of tasks that are not known in advance, and the corresponding MDPs cannot be built at design time. This defines the main challenge of open-ended learning: how can the agent learn to behave appropriately when adequate state, action and reward representations are not given? In this paper, we propose a conceptual framework to address this question. We assume an agent endowed with low-level perception and action capabilities. This agent receives an external reward when it faces a task. It must discover the state and action representations that will let it cast the tasks as MDPs in order to solve them by RL. The relevance of the action or state representation is critical for the agent to learn efficiently. Considering that the agent starts with low-level, task-agnostic state and action spaces based on its low-level perception and action capabilities, we describe open-ended learning as the challenge of building the adequate representation of states and actions, i.e., of redescribing available representations. We suggest an iterative approach to this problem based on several successive Representational Redescription processes, and highlight the corresponding challenges in which intrinsic motivations play a key role.
INTRODUCTION
Robots need world representations in terms of objects, actions, plans, etc. Currently, such representations are carefully designed and adapted to the robot's task (Kober et al., 2013). But a general-purpose robot capable of solving an unbounded number of tasks cannot rely on representations hardwired at design time, because each task may require a different representation. To achieve the vision of a robot that can solve an open-ended series of tasks in an increasingly efficient way, we consider an alternative paradigm: the robot should discover the appropriate representations required to autonomously learn each task.
Representational redescription is the ability to discover new representations based on existing ones. It is a key ability of human intelligence (Karmiloff-Smith, 1995) that remains a challenge in robotics (Oudeyer, 2015). In this paper, we propose a unifying conceptual framework for addressing it. We assume an agent endowed with low-level perception and action capabilities which receives external rewards when it faces a task. We also assume the agent is endowed with reinforcement learning (RL) capabilities efficient enough to let it learn to solve a task when cast as a Markov Decision Process (MDP). From these assumptions, the main challenge in our framework is determining how an agent can discover the state and action representations that let it cast tasks as MDPs, before solving them by RL (Zimmer and Doncieux, 2018).
In MDPs, states and actions are primitive components considered as given, and they are generally defined by a human designer with a particular task and domain in mind (see Figure 1). To take a step toward open-ended learning, we propose a conceptual framework for representational redescription processes based on a formal definition of states and actions. Then we highlight the challenges it raises, notably in terms of intrinsic motivations.
THE REPRESENTATIONAL REDESCRIPTION APPROACH
Our Representational Redescription approach is depicted in Figure 2. We consider an agent endowed with low-level perception and action capabilities, which faces an open-ended sequence of tasks. The agent receives some external rewards from these tasks. The problem for this agent is to determine how to use this reward to learn the corresponding task. In an MDP, an RL algorithm explores the possible outcomes of an action when executed in a particular state. As pointed out by Kober et al. (2013), there is a need to appropriately define the state and action spaces for an efficient learning process. To do so, the possible alternatives are either to rely on a single generic state and action space or to build them on-the-fly when required. In this work, we do the latter and make the following assumptions: ASSUMPTION 1. A single state and action space cannot efficiently represent all the decision processes required to solve the tasks an open-ended learning system will be confronted with. To solve task k, defined through a reward value r_k(t), the agent needs to build an MDP M_k.
ASSUMPTION 2. An open-ended learning process needs to build these MDPs on-the-fly.
ASSUMPTION 3. The agent is endowed with some RL algorithms to allow it to learn to solve the task, once the underlying MDP has been fully defined.
Markov Decision Processes
Decisions in robotics can be modeled with MDPs using ⟨S_k, A_k, p_k, R_k⟩, where k is a task identifier, S_k is the state space, A_k is the action space, p_k : S_k × A_k × S_k → ℝ is a transition function, where p_k(s_t, a_t, s_{t+1}) gives the probability of reaching s_{t+1} from s_t after having applied action a_t, and R_k : S_k → ℝ is the reward function. A policy π_k : S_k → A_k is a process that determines which action to apply in any state.
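The task-indexed tuple above maps naturally onto a small data structure. The sketch below is one possible (assumed) encoding in Python; all class and field names are illustrative and are not prescribed by the framework.

```python
# Minimal sketch of the task-indexed MDP tuple <S_k, A_k, p_k, R_k>;
# all class and field names are illustrative, not from the paper.
from dataclasses import dataclass
from typing import Callable, Sequence, TypeVar

State = TypeVar("State")
Action = TypeVar("Action")

@dataclass
class MDP:
    states: Sequence[State]                              # S_k
    actions: Sequence[Action]                            # A_k
    transition: Callable[[State, Action, State], float]  # p_k(s, a, s')
    reward: Callable[[State], float]                     # R_k(s)

# A deterministic policy pi_k : S_k -> A_k is then just a mapping:
Policy = Callable[[State], Action]
```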
In the proposed framework, the observed reward r_k(t) is distinguished from the reward function R_k of the MDP. The agent may not know with which state the observed reward r_k(t) should be associated. It is actually part of the proposed open-ended learning framework to interpret observed rewards and associate them with states in order to build the reward function R_k required to learn how to maximize them.
The notations used here have been intentionally kept as simple as possible. This framework can be easily extended to more complex cases, including semi-MDPs, stochastic policies or other definitions of the reward function.
States
DEFINITION 1. A state is a description of a robot context that respects the constraints of its decision process.
Following Lesort et al. (2018), a good state representation should be (1) Markovian (i.e., the current state summarizes all the information necessary to choose an action), (2) able to represent the robot context well enough for policy improvement, (3) able to generalize the learned value function to unseen states with similar features, and (4) low-dimensional for efficient estimation (Böhmer et al., 2015). State representation learning approaches learn low-dimensional representations without direct supervision, i.e., by exploiting sequences of observations, actions, rewards and generic learning objectives (Lesort et al., 2018).
To bootstrap the open-ended learning process, we define S_0 as the state space containing the set of possible sensorimotor values. This first state space may not be low-dimensional, Markovian, or structured enough for efficient exploration, thus motivating the search for better adapted state spaces.
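As one concrete, assumed instantiation of such state representation learning, the sketch below compresses raw sensorimotor observations from S_0 into a low-dimensional candidate state space with an autoencoder. This is an illustrative choice among the objectives surveyed by Lesort et al. (2018), not the framework's prescription, and all dimensions and data are placeholders.

```python
# Illustrative state-representation-learning step: an autoencoder compressing
# raw sensorimotor observations (S_0) into a low-dimensional candidate state
# space. One choice among the objectives surveyed by Lesort et al. (2018).
import torch
import torch.nn as nn

class StateAutoencoder(nn.Module):
    def __init__(self, obs_dim, state_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                     nn.Linear(64, state_dim))
        self.decoder = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                     nn.Linear(64, obs_dim))

    def forward(self, obs):
        z = self.encoder(obs)          # candidate low-dimensional state
        return self.decoder(z), z

model = StateAutoencoder(obs_dim=100, state_dim=8)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
observations = torch.randn(256, 100)   # stand-in for recorded sensor data

for _ in range(100):                    # minimal training loop
    reconstruction, _ = model(observations)
    loss = nn.functional.mse_loss(reconstruction, observations)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```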
Reward Functions and Goals
A reward function may contain different kinds of information: an indication of success in fulfilling a task defined by a human user, or in reaching an autonomously defined action goal (see next section). It may also contain guidance to help reach the goal (reward shaping).
Besides reward functions defined in ℝ, the proposed framework requires, for the description of actions, the definition of boolean reward functions that will be called goal functions: DEFINITION 2. A goal function, denoted R̂, does not contain any shaping term and tells whether the goal associated with this reward function is achieved or not.
A goal function is a specific reward function aimed at defining the notion of success or failure required for action definition. The task to solve does not need to be described with such a boolean function.
DEFINITION 3. Goal states are states s for which R̂(s) = True.
Actions
In the proposed framework, actions are not systematically predefined, but can be built on-the-fly. The design of the corresponding algorithms requires defining what an action actually is. The proposed definition relies on the notion of a goal function to add a purpose to a policy. Actions are framed within different abstraction levels depending on the granularity of the policy, as in the options framework (Sutton et al., 1999). Actions are one of the main components of an MDP. An MDP M_k needs an action space A_k, defined at abstraction level k. It relies on policies of level k−1, defined in an MDP of level k−1. These can be used to build policies at level k that can, in turn, be used to build new actions for another MDP at level k+1.
DEFINITION 4. Actions a ∈ A_k are the primitives of MDP M_k. An action a is a policy π relying on actions available at a lower level and built to reach a goal state associated with a goal function R̂. The action succeeds if the trajectory of the robot controlled by this policy converges to a goal state of R̂; otherwise, it fails. An action is then fully defined by the triplet {π, R̂, t_max}, where t_max is the maximum amount of time after which the action is considered failed if no goal state is reached.
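The triplet {π, R̂, t_max} of Definition 4 translates directly into code. The sketch below is a minimal, assumed encoding; `env_step` (applying a lower-level action and returning the next state) is a hypothetical interface, not part of the framework.

```python
# Sketch of Definition 4: an action as the triplet {pi, R_hat, t_max}.
# `env_step` (applying a lower-level action and returning the next state)
# is an assumed interface, not part of the paper.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ActionK:
    policy: Callable[[object], object]   # pi, built on level k-1 actions
    goal: Callable[[object], bool]       # R_hat(s): the goal function
    t_max: int                           # time budget before failure

    def execute(self, state, env_step):
        """Run pi until a goal state of R_hat is reached or t_max elapses."""
        for _ in range(self.t_max):
            if self.goal(state):
                return state, True       # action succeeded
            state = env_step(state, self.policy(state))
        return state, self.goal(state)   # failed if no goal state reached
```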
If the level on top of which an MDP M_k is built is itself an MDP, actions a ∈ A_k can be considered macro-actions or options.
The goal state of an action is frequently defined relative to a particular initial state s_init, where s_init is the state of the agent when the action is triggered, e.g., "turning 90°" or "moving forward 1 m." The definition of an action is hierarchically recurrent: an action a_k relies on a policy π that itself relies on a set of lower-level actions {a_l, l < k}. To stop the recurrence, a specific set of actions A_0 is defined, corresponding to the lowest level accessible to the robot, i.e., motor commands. These are also actions, as motor commands always have a goal (a particular velocity or position, for instance) that a low-level control process aims to reach and then maintain. As suggested by Harutyunyan (2018, Chapter 5), we assert that it may not be necessary, or even desirable, to have the same time-scale and discounting for lower- and higher-level actions.
FIGURE 2 | Overview of an open-ended learning process. The agent designer does not know the different tasks the agent will be facing, but designs the agent to let it build the MDP needed to interpret a reward in its environment and find out how to maximize it.
Representational Redescription
In the proposed framework, open-ended learning needs to build MDPs on-the-fly, including the state and action spaces. Considering that the process starts from initial state and action spaces (S_0, A_0), this particular feature is captured by the concept of representational redescription.
DEFINITION 5. A representational redescription process is a process that builds the state and action spaces enabling the definition of an MDP able to (1) solve a given task defined by observed reward values, (2) in a particular domain, and (3) with a particular decision or learning process. To this end, it relies on the representations of states and actions that have been previously acquired, and can thus be described as a process transforming existing representations into new ones that are better fitted to the context.
Motor Skills: Controlling Transitions Between States
In an MDP, the set of provided actions is built to allow the robot to move in the state space. If a state space is built on-the-fly, the agent should be able to control it and move from one state to another. With the proposed definitions, the open-ended learning process needs to build actions to reach each part of the state space. The notion of motor skill is defined to capture this process.
DEFINITION 6. A motor skill is an action generator ξ_k : S^(i) × S^(g) → A_k, where S^(i), S^(g) ⊂ S_k. It is an inverse model defined in a particular action space A_k to reach a target state from an initial state.
ξ_k(s_i, s_g) is an action that, starting from s_i ∈ S^(i), reaches s_g ∈ S^(g) with the highest possible probability. The state s_g is the goal state of the corresponding action, and the corresponding reward function is intrinsic (see Section 3.8).
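Definition 6 can likewise be sketched as an action generator wrapping an inverse model, reusing the Action container and imports from the sketch above; `policy_for` is a hypothetical goal-conditioned policy learner in the action space A_k:

```python
def make_motor_skill(policy_for: Callable[[State, State], Callable], t_max: int = 100):
    """Return xi_k: (s_init, s_goal) -> Action, built from an inverse model."""
    def xi(s_init: State, s_goal: State) -> Action:
        pi = policy_for(s_init, s_goal)               # policy in A_k reaching s_goal
        return Action(policy=pi,
                      is_goal=lambda s: s == s_goal,  # intrinsic goal: reach s_goal
                      t_max=t_max)
    return xi
```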
Open-Ended Learning
On the basis of the proposed definitions, we can define an open-ended learning process as follows.
DEFINITION 7. An open-ended learning process builds the MDPs required to solve the tasks that the agent faces. Task k is defined through an observed reward value r_k(t). Starting from an initial state space S_0, an initial action space A_0, and a decision or learning process P, the open-ended learning process aims at building (1) state spaces, (2) action spaces, and (3) motor skills to control the state with appropriate actions. State and action spaces need to fulfil the following requirements (a pseudo-loop sketching this process is given below):
1. The state space should help interpret the reward occurrences, i.e., learn R_k to model the observed r_k;
2. The action space should allow control of the state space through dedicated motor skills;
3. The state and action spaces should make the agent's state trajectory as predictable as possible;
4. From the state and action spaces, P should be able to converge to a policy maximizing r_k.
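The loop below is a high-level sketch of this process; every builder passed in is a placeholder for a representational redescription component (none of these names come from the paper):

```python
def open_ended_learning(S0, A0, P, tasks, build_state_space, build_actions):
    """Sketch of the open-ended learning loop over tasks defined by rewards r_k."""
    S, A, mdps = S0, A0, []
    for r_k in tasks:                        # task k is an observed reward signal
        S_k = build_state_space(S, A, r_k)   # req. 1: interpret reward occurrences
        A_k = build_actions(S_k, A)          # req. 2: motor skills controlling S_k
        pi_k = P(S_k, A_k, r_k)              # req. 4: policy maximizing r_k
        mdps.append((S_k, A_k, pi_k))        # req. 3 (predictability) is a criterion
        S, A = S_k, A_k                      # inside the builders, not a separate step
    return mdps
```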
Intrinsic Motivations
Task-based rewards are not enough to drive a representational redescription process. There is a need for other drives that push the agent to explore and create new knowledge. This is the role of intrinsic motivations (Oudeyer and Kaplan, 2009; Baldassarre and Mirolli, 2013). In the context of open-ended learning through representational redescription, we propose the following definition.
DEFINITION 8. An intrinsic motivation is a drive that complements drives associated with external task-based rewards to build appropriate state and action spaces as well as motor skills.
Intrinsic motivations play a critical role at different stages of the proposed representational redescription process, for instance:
• To organize learning and select in what order to learn skills and build state spaces;
• To acquire relevant data for state representation learning (before building an appropriate MDP);
• To build the skills required to control the state space (focusing learning on areas that are within reach and ignoring the rest).
CHALLENGES
This section recasts the challenges of open-ended learning with the proposed conceptual framework.
CHALLENGE 1. Interpreting observed reward: Building an appropriate state space to interpret an externally provided reward, i.e., building a state space S_k that allows easy modeling of the observed reward value r_k.
CHALLENGE 2. Skill acquisition: Controlling the displacements in an acquired state space S_k by building the appropriate action space A_k and skill ξ_k : S^(i) × S^(g) → A_k, where S^(i) × S^(g) ⊂ S_k², to give the agent the ability to move from one state to another as accurately as possible.
To address Challenge 1, state representations can be learned from known actions (Jonschkowski and Brock, 2015) and, likewise, to address Challenge 2, actions can be learned when the state space is known (Rolf et al., 2010; Forestier et al., 2017).
CHALLENGE 4. Dealing with sparse reward: The available state and action spaces may not make it easy to obtain the reward r_k(t) associated with Task k. This is particularly true at the beginning of the process, when starting from (S_0, A_0): this is the bootstrap problem. The challenge is to design an exploration process that converges toward reward observations in a limited time.
A possibility to address the bootstrap problem is to rely on a motor babbling approach (Baranes and Oudeyer, 2010; Rolf et al., 2010). Another possibility would be to rely on a direct policy search including an intrinsic motivation toward behavior diversity, followed by a process to extract adapted representations from it (Zimmer and Doncieux, 2018).
The next challenges are related to the unsupervised acquisition of a hierarchy of adapted representations.
CHALLENGE 5. Detecting task change: In the case where tasks are not explicitly indicated to the robot, detecting the task change from k to k + 1 on the basis of the observed rewards r.
The efficiency of a learning system is influenced by the order of the tasks it is facing (Bengio et al., 2009).
CHALLENGE 6. Ordering knowledge acquisition and task resolution: An open-ended learning system needs to be able to select what to focus on and when. Does it keep learning representations for task k (even if r_k has momentarily disappeared), or does it focus on a new task k + 1?
CHALLENGE 7. Identifying the available knowledge relevant to build the new representations MDP_k: as the set of available MDPs grows, it becomes a challenge to figure out what knowledge can help to build a new and adapted representation among {MDP_l, π_l}_{l≤k} = {⟨S_l, A_l, p_l, R_l⟩, π_l}_{l≤k}.
CHALLENGE 8. Using transfer learning to speed up state and action space learning over time: as the number of tasks and domains the agent can deal with grows, it becomes interesting, when facing a task k + 1, to reuse the available knowledge to avoid learning MDP_{k+1} and π_{k+1} from scratch.
DISCUSSION
In contrast to many works in multitask learning (Zhao et al., 2017; Riedmiller et al., 2018), we assume here that each task should be solved with its own state and action representation, and learning these representations is a central challenge. We adopt a hierarchical perspective based on representational redescription which differs from the hierarchical RL perspective (Barto and Mahadevan, 2003) in that we do not assume that the lowest level is necessarily described as an MDP, and we assume that each intermediate level may come with its own representation.
The proposed framework is related to end-to-end approaches to reinforcement learning (Lillicrap et al., 2015;Levine et al., 2016), but instead of black box approaches, it emphasizes knowledge reuse through the explicit extraction of relevant representations.
Open-ended learning is expected to occur in a lifelong learning scenario in which the agent will be confronted with multiple challenges to build the knowledge required to solve the tasks it will face. It will not be systematically engaged in a task resolution problem and will thus have to make choices that cannot be guided by a reward. Intrinsic motivations are thus a critical component of the proposed open-ended learning system. They fulfill multiple needs of such a system: (1) a drive for action and state space acquisition (Péré et al., 2018), (2) a selection of what to focus on (Oudeyer et al., 2007), and (3) a bootstrap of the process in the case of sparse reward (Mouret and Doncieux, 2012).
AUTHOR CONTRIBUTIONS
This article is the result of a joint work within the DREAM project. Each author has participated in the discussions that have led to the proposed formalism. SD coordinated the discussions and the writing.
"Computer Science"
] |
Coherent Optical DFT-Spread OFDM
We consider the application of the discrete Fourier transform-spread orthogonal frequency-division multiplexing (DFT-spread OFDM) technique to high-speed fiber optic communications. DFT-spread OFDM is a form of single-carrier technique that possesses almost all advantages of the multicarrier OFDM technique (such as high spectral efficiency, flexible bandwidth allocation, low sampling rate, and low-complexity equalization). In particular, we consider the optical DFT-spread OFDM system with polarization division multiplexing (PDM) that employs a tone-by-tone linear minimum mean square error (MMSE) equalizer. We show that such a system offers a much lower peak-to-average power ratio (PAPR) as well as better bit error rate (BER) performance compared with an optical OFDM system that employs amplitude clipping.
Introduction
The high-throughput data transmission over long-haul fiber optic systems is of considerable current interest. To maximize the spectral efficiency, polarization multiplexing and coherent detection have become the key enabling technologies for high-speed fiber optic communication systems [1]. However, physical impairments such as the chromatic dispersion (CD), the polarization mode dispersion (PMD), and the polarization dependent loss (PDL), become more severe as the bandwidth and data rate increase. The orthogonal frequency-division multiplexing (OFDM) technique has been widely adopted to cope with the frequency-selective fading of multipath channels in wireless communications; and it has been recently introduced to fiber optic systems for high-speed data transmission [2]. In OFDM systems, the frequency-domain equalization is employed with a single-tap equalizer at each tone, which significantly reduces the computational complexity compared with the time-domain equalization in single-carrier systems. However, one major disadvantage of the OFDM system is the high peak-to-average power ratio (PAPR). To address this issue, the DFT-spread OFDM has been developed as an alternative wireless access technique and it has been adopted as the uplink air interface of 3rd generation partnership project long term evolution (3GPP LTE) [3].
In [4] it is observed that the impact of nonlinearity on a link with periodical dispersion compensation is significantly larger than that on a link without inline dispersion compensation, which makes the application of OFDM to the existing infrastructure questionable. In addition, it is also noted that in a periodic dispersion map, reducing PAPR at the transmitter might significantly improve the nonlinear tolerance of the transmission link. In order to avoid the high cost associated with mitigating the nonlinear impairments caused by the high PAPR in optical OFDM systems, in this paper, we consider the coherent optical DFT-spread OFDM to lower PAPR at the transmitter. In Section 2 we describe the optical DFT-spread OFDM system that employs polarization division multiplexing (PDM) and coherent detection.
The receiver demodulation is discussed in Section 3. In Section 4 we present simulation results. Section 5 concludes the paper. The system block diagram is shown in Fig. 1. We transmit and receive signals on both polarizations, which effectively results in a 2 × 2 multiple-input multiple-output (MIMO) system. Consider one of the two data streams at the transmitter. The bit sequence is first mapped to quadrature amplitude modulation (QAM) symbols. In the traditional OFDM system, an inverse discrete Fourier transform (IDFT) is directly applied to the QAM symbols. In the DFT-spread OFDM system, on the other hand, a DFT is first applied to the QAM symbols, followed by the IDFT operation. After inserting the cyclic prefix (CP) symbols, the electrical signal is passed through the digital-to-analog converter and the low-pass filter, and then upconverted to the optical signal. The optical signal traverses the long-haul fiber link comprising multiple spans to reach the destination. At the receiver the optical signal is downconverted to the electrical signal, which is then low-pass filtered and passed through the analog-to-digital converter. After removing the CP and performing the DFT on the signals at each polarization, the two coupled frequency-domain received signals over M subcarriers are obtained. We then perform a tone-by-tone MIMO equalization to decouple the two signal streams. Finally, the IDFT is performed on each decoupled signal stream to recover the corresponding transmitted QAM symbol stream.
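As a concrete illustration of this transmitter chain, the following numpy sketch implements DFT spreading, subcarrier mapping, the IDFT, and CP insertion for one polarization. The localized mapping onto the first M of n_fft subcarriers and the unitary FFT normalizations are our assumptions, not details from the paper:

```python
import numpy as np

def dft_spread_ofdm_tx(syms, n_fft, n_cp):
    """One DFT-spread OFDM block for one polarization: DFT spreading,
    localized mapping onto the first M of n_fft subcarriers, IDFT, CP."""
    M = len(syms)
    spread = np.fft.fft(syms) / np.sqrt(M)        # M-point DFT spreading
    grid = np.zeros(n_fft, dtype=complex)
    grid[:M] = spread                             # localized subcarrier mapping
    t = np.fft.ifft(grid) * np.sqrt(n_fft)        # unitary n_fft-point IDFT
    return np.concatenate([t[-n_cp:], t])         # cyclic prefix insertion

def ofdm_tx(syms, n_fft, n_cp):
    """Plain OFDM block: QAM symbols go straight onto the subcarriers."""
    grid = np.zeros(n_fft, dtype=complex)
    grid[:len(syms)] = syms
    t = np.fft.ifft(grid) * np.sqrt(n_fft)
    return np.concatenate([t[-n_cp:], t])
```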
The optical OFDM system has a similar block diagram to Fig. 1, but without the DFT and IDFT modules marked as dark blocks. Intuitively, the OFDM system has a high PAPR since, after the IDFT at the transmitter, multiple input QAM symbols can add phase-synchronously and therefore cause a high signal amplitude (peak power). The high PAPR of the signal decreases the system power efficiency and makes the transmission more susceptible to nonlinear impairments of the fiber link. On the other hand, the DFT and IDFT cancel each other, and thus the DFT-spread OFDM system is essentially a single-carrier technique, which in general has a much lower PAPR. However, note that the optical DFT-spread OFDM considered here is fundamentally different from the traditional single-carrier scheme [5], in that the former exhibits the advantages of both the single-carrier system (i.e., low PAPR) and the OFDM system (i.e., a tone-by-tone single-tap equalizer).
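The PAPR contrast can be checked numerically with the sketch below, which compares a plain OFDM block with a DFT-spread block built by the functions above; when M equals the IDFT size, the DFT and IDFT cancel and the DFT-spread block is constant-modulus QPSK. The seed and block size are arbitrary choices for illustration:

```python
def papr_db(x):
    """Peak-to-average power ratio of a complex baseband block, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(256, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
print(papr_db(ofdm_tx(qpsk, 256, 16)))             # roughly 10 dB for OFDM
print(papr_db(dft_spread_ofdm_tx(qpsk, 256, 16)))  # ~0 dB: constant-modulus QPSK
```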
MIMO Channel Model for PDM
For long-haul optical fiber transmission, the fiber link comprises n_E fiber spans. We consider a periodic dispersion map on the existing 10 Gb/s infrastructure. In this case, the PAPR growth caused by dispersion is negligible and we therefore focus on reducing the PAPR at the transmitter. We consider three typical distortions in the fiber channel: CD, first-order PMD, and PDL.
The 2 × 2 transfer function at subcarrier m corresponding to a DFT-spread OFDM symbol is given by [5]

H_m = e^{jφ} e^{jΦ_D(f_m)} T_m,   (1)

where φ is a common phase error (CPE) owing to the laser phase noise, which varies from one OFDM symbol to another. In (1), Φ_D(f_m) is the phase dispersion owing to CD, given by Φ_D(f_m) = (π c D L / f_c²) f_m² with f_m = m/t_s, where t_s is the actual time duration of one DFT-spread OFDM symbol not including the cyclic prefix; c is the speed of light; D is the CD parameter in units of ps/nm/km; L is the total length of the multi-span fiber link; and f_c is the carrier frequency of the laser. Here we assume that the periodic dispersion map is employed and the CD is completely compensated for at each span. Hence T_m in (1) is the Jones matrix with the dispersion compensation fiber (DCF) inserted at each stage; it is a product of per-span matrices parameterized by k_l, the attenuation factor of the PDL; τ_l, the differential group delay (DGD); and θ_l, a uniformly random rotation angle [6]. For each span, the DGD is a random variable following the Maxwellian distribution. However, within the same span, the DGDs of different subcarriers are the same.
The Linear MMSE Coherent Receiver
Let F be the M × M DFT matrix with its (m, n)th element given by [F]_{m,n} = (1/√M) e^{−j2πmn/M}. Denote the transmitted QAM symbols on all subcarriers along the kth polarization by s^(k) = [s_1^(k), …, s_M^(k)]^T. As shown in Fig. 1, a DFT operation is first applied to s^(k) to obtain x^(k) = F s^(k), which is effectively the input to a traditional OFDM system. At the receiver, we assume the frequency and time offsets can be perfectly estimated and compensated before detection. Then the received signal on the mth subcarrier is given by

y_m = H_m x_m + v_m,   (6)

where x_m = [x_m^(1), x_m^(2)]^T and v_m ∼ N_c(0, σ²I).
To demodulate the symbol vector s^(k), we first estimate the DFT-spread symbol x^(k) and then recover the data symbol vector s^(k) by an IDFT. In particular, a linear MMSE estimate of x_m based on y_m in (6) is given by [7]

x̂_m = H_m† (H_m H_m† + σ²I)^{−1} y_m,   (7)

where (·)† denotes conjugate transpose, and the resulting per-tone weights can be collected in a diagonal matrix Γ. Note that the tone-by-tone linear MMSE equalization in (7) involves inverting only 2 × 2 matrices and hence the computational complexity is not significant.
Stacking the per-tone estimates along the kth polarization gives x̂^(k) = [x̂_1^(k), …, x̂_M^(k)]^T. We finally apply the IDFT to x̂^(k) to obtain the estimated data symbols ŝ^(k) = F† x̂^(k).
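The receiver processing of (6)-(7) can be sketched as follows, assuming the per-tone channel matrices H_m and the noise variance σ² are known; this is our illustration, not the authors' code, and it uses the same FFT normalization as the transmitter sketch above:

```python
def mmse_receive(Y, H, sigma2):
    """Tone-by-tone 2x2 linear MMSE equalization (7) plus IDFT de-spreading.
    Y: (M, 2) received tones; H: (M, 2, 2) per-tone channel; returns (M, 2) QAM."""
    M = Y.shape[0]
    Xhat = np.empty_like(Y)
    I2 = np.eye(2)
    for m in range(M):
        Hm = H[m]
        G = Hm.conj().T @ np.linalg.inv(Hm @ Hm.conj().T + sigma2 * I2)
        Xhat[m] = G @ Y[m]                         # MMSE estimate of x_m
    return np.fft.ifft(Xhat, axis=0) * np.sqrt(M)  # de-spread both polarizations
```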
Simulation Results
In this section we provide simulation results to compare the optical DFT-spread OFDM system with the optical OFDM system in terms of both the PAPR performance and the bit error rate (BER) performance. The number of subcarriers is M = 256. A long-haul fiber optic system is considered with n_E = 12 cascaded spans (each of length L = 80 km). The laser wavelength is λ = 1.55 µm. The CD parameter is D = 17 ps/nm/km. We assume that the periodic dispersion map is employed and the CD is compensated for by DCF after each span. The DGD parameter is D_p = 0.15 ps/√km. The mean value of the DGD is √(8/(3π)) D_p √(n_E L). A typical PDL is 0.1 dB, where PDL[dB] = −20 log(k_l). We consider both QPSK and 16-QAM modulations.
The data rate is 25 Gsymbols/s, i.e., 100 Gb/s for QPSK and 200 Gb/s for 16-QAM. Figure 2 illustrates the PAPR performance of the two systems for QPSK and 16-QAM. It is seen that the OFDM system exhibits a much higher PAPR than the DFT-spread OFDM system. In general, 16-QAM has a higher PAPR than QPSK; in the OFDM system, however, their PAPRs are similar.
Amplitude clipping is a typical method to lower the PAPR in OFDM systems.
We used a clipping ratio (CR) of 3 dB, defined as CR = 20 log₁₀(A/P), where P is the root-mean-square amplitude of the transmitted signal and A is the maximum transmitted signal magnitude after clipping. It is seen that clipping can indeed significantly reduce the PAPR in the OFDM system; however, the PAPR of the clipped OFDM system is still much larger than that of the DFT-spread OFDM system.
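A sketch of this clipping rule, reading P as the rms amplitude of the transmitted signal (our interpretation of the definition above); samples above the level A keep their phase:

```python
def clip(x, cr_db):
    """Amplitude clipping with CR = 20*log10(A/P), P taken as the rms of x."""
    a = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (cr_db / 20)  # clipping level A
    mag = np.abs(x)
    scale = np.where(mag > a, a / np.maximum(mag, 1e-12), 1.0)
    return x * scale
```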
On the other hand, clipping substantially degrades the bit error rate (BER) performance. As shown in Fig. 3, the DFT-spread OFDM and OFDM systems have the same BER performance. But after the 3 dB clipping of the OFDM signal, a 0.8 dB loss is incurred at a BER of 10⁻³ for QPSK, and for 16-QAM the loss due to clipping is about 8 dB. In summary, compared with the traditional optical OFDM system that employs amplitude clipping, the DFT-spread OFDM system offers both a lower PAPR and better BER performance.
Discussion: Note that the effect of nonlinearity depends on the instantaneous power of the signal according to the nonlinear Schrödinger equation [8]. It is shown in Fig. 2 that the PAPR performance of the DFT-spread OFDM signal is better than that of both the clipped and unclipped OFDM signals. This means that, for a given average transmit power, the peak power of both the clipped and unclipped OFDM signals will be larger than that of the DFT-spread OFDM signal; hence they are more susceptible to nonlinear distortion than the DFT-spread OFDM signal. On the other hand, various effective nonlinear impairment compensation methods have been proposed [9]. Hence one can envision that if these techniques are applied to the system, only linear channel distortion needs to be considered, under which we have shown that DFT-spread OFDM offers better BER performance than clipped OFDM.
Conclusions
We have considered the optical DFT-spread OFDM system with polarization division multiplexing and coherent detection. Compared with conventional single-carrier systems, the DFT-spread OFDM system has the advantages of flexible bandwidth allocation, high spectral efficiency, low sampling rate, and low-complexity equalization. Compared with the optical OFDM system with amplitude clipping, the DFT-spread OFDM system offers both better BER performance and a much lower PAPR, with little attendant increase in transceiver complexity.
"Business",
"Physics"
] |
Homotopy theory of monoids and derived localization
We use derived localization of the bar and nerve constructions to provide simple proofs of a number of results in algebraic topology. This includes a recent generalization of Adams' cobar-construction to the non-simply connected case, and a new model for the homotopy theory of connected topological spaces using an infinity category of discrete monoids.
the dg coalgebras B L_W C(M) and C L_W N(M) are weakly equivalent, i.e., there is a zig-zag of filtered quasi-isomorphisms between them.
We can then deduce the following results with minimal computation:
(1) For any reduced grouplike (in particular Kan or 1-reduced) simplicial set K there is an equivalence between CG(K), the chain algebra of the loop group of K, and ΩC(K), the cobar construction on the chain coalgebra of K; see Corollary 4.2. This generalizes a classical result of Adams [1].
(2) For an arbitrary reduced simplicial set K there is an equivalence between CG(K) and a localization of ΩC(K); see Corollary 4.4.
Some of these, or similar, results have appeared in the literature before: (1) was shown when K is the simplicial singular set of a topological space by Rivera-Zeinalian in [23], (2) is equivalent to the extended cobar construction of Hess-Tonks [14], and (4) is originally due to Rivera-Zeinalian [22]. However, we believe this paper significantly simplifies the existing proofs and adds conceptual clarity.
In particular, we show that the extended cobar-construction of Hess and Tonks [14] of the chain coalgebra of a simplicial set is a derived localization of the ordinary cobar-construction and clarify its dependence on the choices made.
The main theorem of this paper is a new result, which provides an entirely algebraic model for the homotopy category of connected spaces. By inverting those maps of discrete monoids which induce quasi-isomorphisms of derived localized monoid algebras, one obtains an ∞-category of discrete monoids. More precisely, this ∞-category is realized as a relative category in the sense of Barwick and Kan [4]. We prove in Theorem 5.2 that this ∞-category of discrete monoids is equivalent to the ∞-category of reduced simplicial sets (also viewed as a relative category, with the ordinary weak equivalences of simplicial sets). This is potentially of great computational utility since derived localizations of associative rings are effectively computable in a number of situations, both of algebraic and topological origin, cf. [5].
As far as we know, this is the first result providing an algebraization of the homotopy category of spaces without any restrictions apart from connectivity (such as simple connectivity, rationality or being of finite type). It is ideologically similar to the well-known result of Thomason [24] constructing a closed model category structure on small categories that also models the ∞-category of spaces as well as its refinement due to Raptis [21]. However Thomason's and Raptis's constructions (while providing more structured equivalences of closed model categories) cannot be viewed as genuine algebraization results since weak equivalences of small categories are defined by appealing to the category of spaces.
1.1. Notation. We work over a commutative ground ring k that is a principal ideal domain. All tensor products are understood over k.
We denote the category of simplicial sets by sSet and its subcategory of reduced simplicial sets, i.e., simplicial sets with exactly one 0-simplex, by sSet_0. We write qCat for the category of simplicial sets with the Joyal model structure as a model for ∞-categories; the subcategory of simplicial sets with one object is denoted by qCat_0. To distinguish the classical weak equivalences in sSet from the categorical equivalences in qCat we will denote them by ≃_Q (for Quillen) and ≃_J (for Joyal), respectively. The geometric realization of a simplicial set K will be denoted by |K|.
We denote the category of monoids by Mon and that of simplicial monoids by sMon.
The category of unital dg algebras, free as k-modules, is denoted by dgA and the category of augmented dg algebras by dgA_{/k}. We denote by dgCoa_conil the dg category of counital conilpotent dg coalgebras, also free as k-modules. By weak equivalences of dg coalgebras we always mean morphisms in the class generated by filtered quasi-isomorphisms; the definition is recalled in Section 2.1. All our gradings are homological.
We will denote by C the normalized chain coalgebra functor with coefficients in k on sSet, cf. Chapter 10 of [19]. We also denote by C the functor that sends any monoid to its monoid algebra over k; it will be viewed as an object of dgA [16].
For the reader's convenience we repeat some definitions. For an augmented dg algebra ǫ : A → k, set A_+ = ker(ǫ). Then define B(A) = ⊕_{n=0}^∞ (sA_+)^{⊗n}, with comultiplication defined by deconcatenation and counit given by the projection to (sA_+)^{⊗0} ≅ k, where s denotes the suspension. We define the differential on B(A) to be the unique coderivation whose projection B(A) → sA_+ restricts to d_{sA} on sA_+, to sµ_A(s^{−1} ⊗ s^{−1}) on sA_+ ⊗ sA_+, and to 0 on higher tensors. The cobar construction Ω of a coalgebra is defined analogously. Now assume that k is a field. Then the bar-cobar adjunction is a Quillen equivalence [20]. We consider the usual model structure on augmented dg algebras (so that weak equivalences are multiplicative quasi-isomorphisms). For the model structure on dgCoa_conil see [20, Theorem 9.3(b)]. The key definition is that f : C → D is a filtered quasi-isomorphism if there are admissible filtrations on C and D such that the associated graded map Gr(f) is a graded quasi-isomorphism. A filtration F on a conilpotent coalgebra C is admissible if it is increasing, compatible with the comultiplication and the differential, and F_0 equals the image of the coaugmentation k → C. An admissible filtration always exists. Then f : C → D is a weak equivalence in dgCoa_conil if it is contained in the smallest class of morphisms containing filtered quasi-isomorphisms and closed under the 2-out-of-3 property. If k is not a field we will, somewhat abusing terminology, still refer to filtered quasi-isomorphisms as weak equivalences, even though there may not be an underlying closed model category. Cofibrations in dgCoa_conil are just monomorphisms.
2.2. Localization of dg algebras. Given a dg algebra A with a collection of cycles S, its derived localization L_S A is the homotopy initial dg algebra under A such that the images of all s ∈ S are invertible in homology [5, Definition 3.3]. By [5, Theorem 3.10], L_S(A) is a homotopy pushout of the form A ∗^h_{k⟨S⟩} k⟨S, S^{−1}⟩.
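For readers who prefer a picture, this pushout can be drawn as the following square (a LaTeX sketch of the presentation in [5, Theorem 3.10]; it requires \usepackage{amscd}):

```latex
% The homotopy pushout presenting the derived localization L_S(A).
\[
\begin{CD}
k\langle S \rangle @>>> k\langle S, S^{-1}\rangle \\
@VVV                    @VVV \\
A @>>> L_S(A)
\end{CD}
\]
```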
2.3. Localization of ∞-categories. We will use Joyal's theory of ∞-categories as quasi-categories, see [17, 18] for further background. Given any simplicial set K with a subsimplicial set W we may consider it as an object of qCat and define its localization L_W K, see [7, Proposition 7.1.3]. It has the universal property that for any quasi-category C the functor category Fun(L_W K, C) is equivalent to the subcategory of Fun(K, C) consisting of functors sending any map in W to an invertible map in C. See also the section on homotopy localization in [17].
We restrict attention to reduced simplicial sets. We are particularly interested in the case where W is given by a collection of 1-simplices S and will write L_S K in this case. Let I be the nerve of N, the free monoid on one generator, and J the nerve of Z, the free group on one generator. There are natural maps I → J and ∐_S I → K, and L_S K is equivalent to the homotopy pushout in qCat_0 of ∐_S J ← ∐_S I → K. This follows from the proof of [7, Proposition 7.1.3]: the map ∐_S I → ∐_S J is an anodyne extension, i.e., a trivial cofibration in the Quillen model structure, thus it may play the role of W → W′, and the rest of the proof applies without changes.
2.4. Grouplike simplicial sets. Any simplicial set K may be interpreted as an object in qCat, and its fundamental category π(K) is defined via the left adjoint of the nerve functor from categories to simplicial sets. If K is weakly Kan, there is an explicit construction of π(K) as the category with objects given by 0-simplices and morphisms given by 1-simplices modulo 2-simplices.
2.5. Relative categories. We will also use the theory of relative categories as introduced in [4] as a model for ∞-categories. A relative category (C, W) is just a pair consisting of a category C and a class of weak equivalences W ⊂ Mor(C).
Associated to any relative category (C, W) is a simplicial category L_W C obtained by simplicial localization of C (viewed as a simplicial category) at W. There is a model structure on relative categories whose weak equivalences (C, W) → (C′, W′) are those maps that induce weak equivalences of simplicial localizations. The model category of relative categories is Quillen equivalent to the model categories of simplicial categories and quasi-categories. In particular the relative category (sSet, W_Q), where W_Q denotes weak homotopy equivalences, is a model for the ∞-category of spaces.
We are not aware of a good exposition of homotopy limits and colimits in relative categories. To avoid technicalities we define the homotopy limit of a diagram in a relative category by taking the ∞-categorical limit of the corresponding diagram in the associated ∞-category. A comparison result ensures that if the relative category happens to be a model category then this recovers the usual homotopy limits and homotopy colimits. This is explained in Remark 7.9.10 of [7] or Remark 2.5.8 in [2]. In particular it follows from this that any weak equivalence of relative categories preserves homotopy limits, which we will need below.
Bar and nerve construction
We begin by considering the following commuting square of functors:

  Mon ---N---> sSet_0
   |C            |C
   v             v
  dgA_{/k} --B--> dgCoa_conil

Here N is the usual nerve of a monoid, considered as a reduced simplicial set. The vertical arrows are given, respectively, by the monoid algebra and the normalized chain coalgebra over k. For any monoid M the augmentation ǫ on C(M) is induced by M → *. Finally, B is the bar construction on an augmented dg algebra, as recalled in Section 2.1.
It is a straightforward but fundamental observation that this diagram commutes. We will refine this result by considering localizations of dg algebras and simplicial sets.
There is a natural model structure on qCat_0 such that weak equivalences are categorical equivalences and cofibrations are monomorphisms.
Proof. (1) We recall the Quillen equivalence C ⊣ N : qCat ⇆ sCat, see e.g. [18], and observe that it restricts to an adjunction sMon ⇆ qCat_0. Then the proof of the lemma is the same as for the non-reduced case. (2) Cofibrations in qCat_0 are also cofibrations in qCat, so this follows from the non-reduced case (or directly by the same argument). (3) Finally we need to check that a map f : K → L of reduced simplicial sets which has the right lifting property with respect to all cofibrations is a categorical equivalence. It suffices to show that if f has the right lifting property with respect to all cofibrations between reduced simplicial sets then it has the right lifting property with respect to all cofibrations; this reduces the problem to the non-reduced case. So let A → B be a cofibration. Any maps A → K and B → L factor through the reduced simplicial sets Ā = A/A_0 and B̄ = B/B_0, and Ā → B̄ is a cofibration. Thus the right lifting property with respect to Ā → B̄ provides a right lift with respect to A → B.
The following lemma is essentially [23, Proposition 7.3]; we provide a direct proof.
Lemma 3.3. The functor C sends categorical equivalences of reduced simplicial sets to weak equivalences of dg coalgebras.
Proof. We reduce this lemma to three claims.
(1) C sends categorical equivalences between weak Kan complexes to weak equivalences.
(2) C sends pushouts along disjoint unions of inner horn inclusions to trivial cofibrations.
(3) There is a functor Gx^∞ sending each reduced simplicial set A to a reduced weak Kan complex Gx^∞A. For each reduced simplicial set A there is a natural map A → Gx^∞A which is a colimit of pushouts along disjoint unions of inner horn inclusions.
If we have these claims we may take any categorical equivalence A → B and, using (3), replace it by a zig-zag A → Gx^∞A → Gx^∞B ← B. C sends the middle map to a weak equivalence by (1). The outer maps are sent to direct limits of trivial cofibrations, thus they are trivial cofibrations themselves, and C(A) ≃ C(B).
To prove (1) it suffices to show that homotopy equivalences in qCat_0 are sent to filtered quasi-isomorphisms. In fact we will show that homotopies of maps in qCat_0 are sent to homotopies between maps of dg coalgebras.
Let I be a Kan complex such that the functor X ↦ X × I gives good cylinder objects in qCat. For example, we can take for I the nerve of the category with two objects and two mutually inverse morphisms between them. We denote by I_+ the simplicial set obtained by adding a disjoint base point.
Then a cylinder object in qCat_0 is given by the smash product K ∧ I_+, i.e., K × I_+ / K ∨ I_+. Thus any homotopy between two maps from K to K′ in qCat_0 may be represented by a map F : K ∧ I_+ → K′. This gives a map of coalgebras C(F) : C(K ∧ I_+) → C(K′), and it suffices to show that C(K ∧ I_+) is a cylinder object in dgCoa_conil k/. For any coaugmented coalgebra (C, w) we write C̄ for C/w(k). Then C(K) ∐ C(K) ≅ k ⊕ C̄(K) ⊕ C̄(K) injects into C(K ∧ I_+) = k ⊕ C̄(K) ⊗ C(I), thus it is a cofibration of dg coalgebras. It remains to show that C sends the projection to a filtered quasi-isomorphism. Let F_0(C(K ∧ I_+)) = w(k) and F_i(C(K ∧ I_+)) = F_i C̄(K) ⊗ C(I) for i > 0. This is an admissible filtration, and on graded pieces we have quasi-isomorphisms Gr_i C(K) ⊗ C(I) ≃ Gr_i C(K).
To establish (2) we consider a simplicial set K and let K′ be defined by attaching a collection of n-simplices B_i along inner horns. We need to show that C(f) : C(K) → C(K′) is a filtered quasi-isomorphism. Filter C(K) by F_i C(K) = ⊕_{j≤i} C(K)_j. This is clearly an admissible filtration. To define the filtration on C(K′), we denote by b_i the face of B_i that is not in K; i.e., the b_i are the (n − 1)-simplices which are in K′ but not in K.
Thus every n-simplex appears in the n-th graded piece of K′, with the exception of the b_i, which are in the n-th piece despite being (n − 1)-simplices. This is clearly compatible with differentials; we need to check the comultiplication. We check this on a basis. By definition ∆B_i = Σ_k ∂_0^k B_i ⊗ ∂_max^{n−k} B_i. Applying ∂_0 or ∂_max k times to B_i gives an (n − k)-simplex which lives in F′_{n−k} unless one of those terms is of the form b_j. For degree reasons this could only happen for ∂_0 B_i and ∂_max B_i, but as we attached along inner horns both of these are in K, and thus in F′_{n−1} C(K′). Thus F′ gives an admissible filtration on C(K′) which is clearly compatible with C(f).
The induced map of associated graded complexes is an isomorphism everywhere except in degree n. In degree n the cokernel has a basis given by all B_i and b_i, and dB_i = b_i mod K, so the cokernel is acyclic.
Thus C(K) and C(K′) are filtered quasi-isomorphic. Since C(f) is a monomorphism, it is a trivial cofibration. In this argument we fixed n for ease of notation, but the same argument goes through if we are attaching n-simplices for different values of n simultaneously.
Claim (3) follows directly from the discussion after Definition 3.2.10 in [25]. The only change is that one defines Gx by filling all inner horns, rather than filling all horns.
Lemma 3.4. Let k be a field. Then C : sSet_0 → dgCoa_conil is a left Quillen functor.
Proof. First we note that C has a right adjoint. It is provided by C ↦ Hom_{dgCoa}(C(∆^•), C), where ∆^• is the cosimplicial simplicial set given by the n-simplex in degree n.
The fact that the adjunction is Quillen follows from Lemma 3.3 together with the observation that C preserves cofibrations, which are just monomorphisms in both categories.
Remark 3.5. The reason for assuming that k be a field in 3.3 and 3.4 is that the category of dg coalgebras is only known to have a closed model category structure (with filtered quasi-isomorphisms as weak equivalences) under this assumption. Consequently, it is also needed for establishing dg Koszul duality as a Quillen equivalence between dgA_{/k} and dgCoa_conil in [20]. This result should generalize to more general commutative rings, but there are technical difficulties in implementing it. We will establish Koszul duality as an equivalence of relative categories; this suffices for our purposes.
Lemma 3.6. Let X be a complex of free k-modules such that for any field F and a map k → F the complex X ⊗_k F is acyclic. Then X is acyclic to begin with.
Proof. It is well-known that a k-module is zero if and only if its localization at every maximal ideal of k is zero; together with the exactness of the localization functor for modules over a commutative ring, this implies that it suffices to assume that k is local. Let its unique maximal ideal be generated by x ∈ k. Then we have a homotopy pullback square, cf. [10, Proposition 4.13], relating X, X̂_(x), X ⊗ k[x^{−1}], and X̂_(x) ⊗ k[x^{−1}]. Here X → X̂_(x) is the Bousfield localization of X with respect to the functor − ⊗ k/(x) (it agrees with the completion of X at the ideal (x) ⊂ k). Since k/(x) and k[x^{−1}] are both fields, we have that X ⊗ k[x^{−1}] and X̂_(x), and thus also X̂_(x) ⊗ k[x^{−1}], are acyclic, and then so is X.
Proposition 3.7. The relative categories (dgA_{/k}, W_A) and (dgCoa_conil, W_C) are weakly equivalent; here W_A denotes quasi-isomorphisms and W_C weak equivalences of dg coalgebras.
Proof. We will prove that for any augmented dg algebra A there is a quasi-isomorphism ΩB(A) → A, and for any conilpotent dg coalgebra C the natural map C → BΩ(C) is a weak equivalence. If k is a field this follows immediately from the results recalled in Section 2.1.
Let F be a field supplied with a map k → F. Then by construction ΩB(A) ⊗_k F ≅ ΩB(A ⊗_k F), and the latter maps to A ⊗_k F by a quasi-isomorphism since F is a field. But it follows from Lemma 3.6 that two complexes of free k-modules are quasi-isomorphic if they are quasi-isomorphic after tensoring with any field; thus ΩB(A) → A is a quasi-isomorphism.
The statement for dg coalgebras follows by applying the same argument to the graded pieces of the natural filtrations on C and BΩ(C), see the proof of Theorem 6.10 in [20].
This shows that ΩB and BΩ are strictly homotopic to the identity functor on dgA /k and dgCoa conil respectively in the sense of [4]. So the two relative categories are strictly homotopy equivalent, and thus weakly equivalent by Proposition 7.5 (iii) in [4].
In the following formulation we denote by W, slightly abusing notation, a submonoid of M, the corresponding subset of 1-simplices in N(M), and the corresponding subset of the canonical basis of C(M). Proof. By definition the localization constructions in dg algebras and simplicial sets are given by homotopy colimits, see Sections 2.2 and 2.3. As B is an equivalence of relative categories by Proposition 3.7, it preserves homotopy colimits, and we deduce that B L_W C(M) is computed by the corresponding homotopy pushout of dg coalgebras.
There is also a natural map η comparing C applied to the localization with the localization of C, where I and J are as in Section 2.3. We note first that η is a weak equivalence if k is a field, since in that case C is a left Quillen functor by Lemma 3.4 and so it commutes with homotopy colimits. As the tensor product commutes with the homotopy colimit, it follows that η becomes a quasi-isomorphism after tensoring with an arbitrary field. Thus by Lemma 3.6 it is a weak equivalence in general.
It remains to identify the two different coalgebra localizations. We apply the isomorphic functors CN and BC to the map of discrete monoids N → Z to show that C(I) → C(J) is weakly equivalent to B(k⟨t⟩) → B(k⟨t, t^{−1}⟩).
Proposition 4.1. Every reduced simplicial set K is weakly equivalent to the nerve of a discrete monoid M(K).
Proof. By [12, Theorem 3.5] there is a functor D from based path-connected topological spaces to discrete monoids such that X is weakly equivalent to the classifying space of D(X). Then M(K) := D(|K|).
Applying Theorem 3.8 in the case that W = M allows us to prove the following theorem that was proved for topological spaces in [23]. It is a generalization of a classical result by Adams [1].
To state the result we recall that the simplicial loop group G and the simplicial classifying space W̄ (constructed e.g. in [13, Chapter V]) give a Quillen equivalence between reduced simplicial sets and simplicial groups.
Corollary 4.2. For any reduced grouplike simplicial set K there is an equivalence CG(K) ≃ ΩC(K).
Proof. We denote a functorial fibrant replacement in the classical model structure by R_Q and in the Joyal model structure by R_J. Then we note that R_J K is weakly Kan and grouplike, thus it is a Kan fibrant replacement for K. By Proposition 4.1 we have K ≃_Q NM(K), and R_Q K ≃_J R_Q NM(K) as sSet is a Bousfield localization of qCat. As L_{K_1} K ≃_J K by assumption and R_J L_{K_1} is a Kan replacement (see Section 2.4), we obtain the claimed equivalence.
To go beyond grouplike simplicial sets we need to refine the loop group construction. The following almost trivial example is instructive.
Example 4.3. Consider the simplicial set K with one 0-simplex and one non-degenerate 1-simplex. Topologically, K is the circle, and so its loop space is the infinite cyclic group and the dg algebra CG(K) is (quasi-isomorphic to) the ring of Laurent polynomials k[t, t^{−1}] with |t| = 0.
On the other hand, ΩC(K) is the tensor algebra on a single degree-0 generator, i.e., the polynomial algebra k⟨t⟩ = k[t], with no inverse for t. The reason for this discrepancy is that K is not grouplike.
This example suggests that, even in the case when a simplicial set K is not grouplike, the chains on its loop space can still be recovered as a localization of ΩC(K). This is indeed true:
Corollary 4.4. For any reduced simplicial set K there is an equivalence CG(K) ≃ L_{1+K_1} ΩC(K).
Proof. First we will show that L_{1+K_1} ΩC(K) ≃ ΩC L_{K_1}(K) by commuting localization past Ω and C.
Since Ω is an equivalence of relative categories, it commutes with colimits. As in the proof of Theorem 3.8 we may express the localization of a coalgebra as a homotopy pushout along ∐ C(I) → ∐ C(J), or equivalently along ∐ B(k⟨t⟩) → ∐ B(k⟨t, t^{−1}⟩). Again from the proof of Theorem 3.8 we know that this localization commutes with C. Thus we have ΩC L_{K_1} K ≃ Ω L_{K_1} C(K) ≃ L_{1+K_1} ΩC(K). Here for the last step we use that ΩB(k⟨t, t^{−1}⟩) ≃ k⟨t, t^{−1}⟩. The equivalence from C(I) to B(k⟨t⟩) sends an element x ∈ K_1 to s^{−1}x − 1 in ΩC(K), cf. the correspondence in Lemma 3.1. Then s^{−1}x − 1 is sent to x by the natural transformation from ΩB to the identity. Thus localizing K at K_1 corresponds to localizing ΩC(K) at 1 + K_1.
For the left-hand side we note that CG(K) ≃ CG L_{K_1}(K), since G preserves the (classical) weak equivalence between K and L_{K_1}(K), and thus we deduce the result from Corollary 4.2 applied to L_{K_1}(K).
Remark 4.5. This result throws some light on a construction of Hess and Tonks [14]. For a simplicial set K that is not necessarily grouplike they consider an extended cobar construction Ω̃C(K), see [14, Section 1.2], and then show that CG(K) ≃ Ω̃C(K) (in fact, they construct an explicit chain equivalence between these dg algebras).
Unravelling the extended cobar construction in the special case of a chain coalgebra, we see that Ω̃C(K) may be constructed as the dg algebra obtained from ΩC(K) by adding inverses for all the cycles 1 + s^{−1}x for x ∈ K_1. As ΩC(K) is cofibrant over its subalgebra generated by these cycles, this is a derived localization, see [5, Remark 3.11]. Therefore we obtain that CG(K) ≃ L_{1+K_1} ΩC(K) ≃ Ω̃C(K) by Corollary 4.4, recovering the result of [14].
The construction of Ω̃C for a dg coalgebra C depends on the choice of a basis for C_1, and [14] does not address the question of whether different choices lead to quasi-isomorphic dg algebras. For C = C(K) there is a natural basis of C_1(K) given by the 1-simplices, and with this basis the quasi-isomorphism CG(K) ≃ Ω̃C(K) does hold.
The following example shows that it will not hold for a wrong choice of basis: one obtains an algebra of the form k[1/2] × k, and this is not isomorphic to k unless k has characteristic 2.
4.2. Chain coalgebras detect weak homotopy equivalences. Next, we deduce the main result of [22]: a map f : K → K′ of reduced simplicial sets is a weak homotopy equivalence if and only if C(f) : C(K) → C(K′) is a weak equivalence of dg coalgebras.
Proof. The "only if" direction follows from Lemma 3.3 and Lemma 3.6.
To show the converse we assume that f_* : C(K) ≃ C(K′). By Corollary 4.2 this implies that we have a quasi-isomorphism CG(f) : CG(K) ≃ CG(K′). Thus H_0(CG(f)) is bijective. By construction it is a morphism of Hopf algebras, compatible with both the composition of loops and the coproduct. Together this shows that H_0(CG(f)) induces an isomorphism between the grouplike elements in H_0(GK) and H_0(GK′), i.e., between the fundamental groups of |K| and |K′|.
We finish the proof by applying Whitehead's theorem. The identity components of GK and GK′ are connected nilpotent spaces, thus by [9] they are weakly equivalent. As all components are equivalent and f identifies π_0(GK) and π_0(GK′), we obtain a weak homotopy equivalence and thus a weak equivalence of simplicial monoids GK → GK′. This implies K ≃_Q K′.
Derived categories.
For the last part of this section we assume that k is a field. We recall the derived categories of the second kind constructed in [20]. Specifically, for the coalgebra C(K) we consider the coderived category D^co(C(K)), which is a triangulated category obtained as the localization of the homotopy category of dg comodules over C(K) at morphisms with coacyclic cone. A dg comodule is coacyclic if it is contained in the minimal triangulated subcategory that contains the total complexes of short exact sequences and is closed under infinite direct sums.
A fundamental result says that for any conilpotent coalgebra C there is an equivalence D^co(C) ≃ D(ΩC), cf. [20, Theorem 6.5(a)]. Thus weakly equivalent dg coalgebras have equivalent coderived categories.
It follows directly from Lemma 3.3 that the coderived category of the chain coalgebra of a simplicial set is an invariant with respect to Joyal weak equivalences. On the other hand, there is another homotopy invariant, this time with respect to classical (Quillen) weak equivalences of simplicial sets. It is the triangulated category of ∞-local systems on a simplicial set K. This could be defined e.g. as the derived category of cohomologically locally constant sheaves on |K|, cf. [6,15].
Corollary 4.8. The derived category of ∞-local systems on K is a full subcategory of D^co(C(K)). If K is grouplike the two categories are equivalent.
If K is grouplike, the two categories agree by Corollary 4.2. Otherwise we have D(CG(K)) ≃ D(L_{1+K_1} ΩC(K)) by Corollary 4.4, so ∞-local systems are modules over a localization of ΩC(K). But by Corollary 4.29 in [5] the derived category of modules over a localized dg algebra is a full subcategory of the derived category of modules over the original dg algebra. Explicitly, ∞-local systems are equivalent to the full subcategory of K_1-local objects in D(ΩC(K)).
5. An algebraic model for the homotopy category of spaces. Finally, our results give us an algebraic model for the homotopy theory of connected topological spaces (equivalently, reduced simplicial sets). In this section we fix k = Z. The definition of the relative category (Mon, W) is completely algebraic in the sense that a monoid is an algebraic structure, i.e., a set with a collection of finitary operations subject to finitely many identities [8], and the notion of a weak equivalence in Mon is also described algebraically. The definition is meaningful because of the following: (Mon, W) and (sSet_0, W_Q) are homotopy equivalent and thus weakly equivalent, cf. the proof of Proposition 3.7.
"Mathematics"
] |
Temperature‐Dependent Excitonic Band Gap in Lead‐Free Bismuth Halide Low‐Dimensional Perovskite Single Crystals
In this study, the optical behavior of lead‐free Bi‐based low‐dimensional perovskite single crystals (Cs3Bi2Cl9, Cs3Bi2Br9, Cs3Bi2I9, and MA3Bi2I9) is investigated by spectroscopic ellipsometry, supported by X‐ray diffraction and density functional theory calculations. All materials exhibit a strong excitonic peak resulting from photogenerated electron–hole Coulomb interactions, whereas the threshold of continuous absorption is found at higher energies. The resonances of the excitonic and continuous bands, along with exciton binding energies, are extracted through Critical Point Analysis of the ellipsometric data over a wide temperature range (from −90 °C to 90 °C), revealing subtle variations in the optical characteristics for each single crystal. These materials can be applied in optoelectronics as photodetectors because of their high stability and lower toxicity compared to their Pb‐based perovskites.
The non-toxic bismuth cation (Bi3+) has emerged as one of the most suitable lead counterparts for heterovalent substitution. This is primarily because Bi3+ shares the same 6s²6p⁰ electronic structure as the Pb2+ cation and has a similar effective ionic radius (1.03 Å) [31]. Additionally, Bi-based perovskites have shown superior moisture and thermal stability [32]. One of the promising Bi-based perovskite structures is A3Bi2X9, formed by hexagonal or cubic packing of A and X ions, in which the trivalent Bi3+ cations occupy only two-thirds of the octahedral cavities, forming BiX6 octahedra [33]. Unlike the 3D framework made up of corner-sharing PbX6 octahedra present in APbX3, the crystalline lattice of the A3Bi2X9 Bi-based alternatives is determined by various stackings of trigonal AB3 layers.
In principle, there are three types of stacking present in these structures: h, hcc, and c [34]. The hexagonal (h)6 stacking leads to the formation of rhombohedral structures with a 0D framework of face-sharing Bi-X octahedra, i.e., a framework of isolated bi-octahedral B2X9^3− anions. When stacked in the cubic (c) mode, they form trigonal structures, where BiX6 octahedra share cis-vertices with three other octahedra and thus create 2D corrugated layers. The (hcc)2 stacking leads to the formation of hexagonal and orthorhombic structures, where both kinds of octahedron bonding, i.e., both 0D and 2D structures, are possible [33,34]. Previous reports show that Cs3Bi2I9 and MA3Bi2I9 crystallize in a hexagonal structure (P63/mmc) [35-37] with the 0D motif of bi-octahedra, while Cs3Bi2Br9 crystallizes in a trigonal (P-3m1) [38,39] and Cs3Bi2Cl9 in either an orthorhombic (Pnma) [40,41] or trigonal (P-31c) [42] crystal system (space group), with the 2D motif of BiBr6 and BiCl6 octahedra, respectively. Due to the reduced dimensionality, A3Bi2X9 perovskites have a relatively large band gap (1.94-3.02 eV) [36,41,42] and extremely low ionic migration. Thin films of Cs3Bi2I9 and MA3Bi2I9 have been used as photoactive layers in solar cells, while Cs3Bi2Br9 and Cs3Bi2Cl9 have not been used in solar devices because of their too-wide electronic band gaps [43]. However, these Bi-based perovskites exhibit low dark-current noise (at the pA level) and high resistivity (up to 10^12 Ω cm) [36], both very desirable properties for the construction of highly sensitive photodetectors. Additionally, these non-toxic materials have shown outstanding thermal stability, fast response speeds (in milliseconds), and high signal-to-noise (on-off) ratios. Li et al. [44] presented vertical ITO/Cs3Bi2I9/Au photodetectors with an exceptional on-off ratio of 11 000, measured under −2 V bias and a white LED of 100 mW cm^-2 light intensity. Moreover, these devices showed great long-term stability, preserving more than 90% of their initial response after 1000 h of exposure to humid air (50% RH). MA3Bi2I9 has also proved to be a suitable material for efficient photodetection. Hussain et al. [45] fabricated an Ag/MA3Bi2I9/FTO photodetector with high detectivity (1.3 × 10^12 Jones) and a fast response speed (26.81/41.98 ms) under 0 V bias and a low white-light intensity of 10 μW cm^-2. Furthermore, Cs3Bi2I9 has demonstrated great potential for X-ray detection [46]. Zhang et al. [47] reported X-ray detectors based on centimeter-sized Cs3Bi2I9 single crystals, which exhibited a high sensitivity of 1652.3 μC Gy_air^-1 cm^-2 (≈4 times higher than that of a-Se detectors) and a very low detection limit of 130 nGy_air s^-1 (≈40 times lower than required for medical diagnostics). Additionally, these single crystals showed outstanding operational stability, even at a higher temperature of 100 °C.
Unlike their iodide counterparts Cs3Bi2I9 and MA3Bi2I9, research on the detection potential of Cs3Bi2Br9 and Cs3Bi2Cl9 is still in its infancy. Liu et al. [48] demonstrated the detecting capability of a self-powered FTO/NiOx/Cs3Bi2Br9/Au UV photodetector, with a fast response speed of 3.04/4.65 ms, a responsivity of 4.33 mA W^-1, and a detectivity of 1.3 × 10^11 Jones (measured under weak UV light of 15 mW cm^-2, 405 nm). Tailor et al. [41] reported the first Ag/Cs3Bi2Cl9/Ag detectors with a responsivity of 17 mA W^-1 and a detectivity as high as 6.63 × 10^11 Jones. Although these materials show a significant response to incident photons, the relationship between their optical properties and device performance is not fully explained.
Before contemplating their potential application in optoelectronics (photodetectors, scintillators, solar cells, etc.), it is crucial to comprehend how the optical properties of these materials evolve with temperature, chemical composition, and dimensionality, and how such changes could impact the performance of future A3Bi2X9-based devices. In this work, we report an extensive multi-material study of the band gap change for inorganic and hybrid A3Bi2X9 perovskite single crystals (Cs3Bi2Cl9, Cs3Bi2Br9, Cs3Bi2I9, and MA3Bi2I9) using spectroscopic ellipsometry (SE), supported by X-ray diffraction and density functional theory (DFT) calculations. We identified the interband transition energies by applying a Critical Point (CP) analysis to the absorption spectra, to gain a comprehensive understanding of the excitonic processes in the 1-5 eV energy range, considering temperature variations spanning from −90 °C to 90 °C. Within this framework, we discuss exciton and continuous-band absorption, phase changes, and fine differences in the optical properties arising from the chemical and structural differences of the studied materials.
We point out that, although the Cs3Bi2Br9 sample presents more than one set of planes, only the most intense set was selected for the SE measurements. On the other hand, we attempted to use a more refined optical model that takes the anisotropy into account without obtaining any fitting improvement, very likely due to the very low contribution (i.e., intensity) of the other planes to the optical response.
To investigate the optical properties of Bi-based crystals, we performed SE measurements at room temperature (RT) and in air on all the as-prepared samples.The experimental data fit and the extracted dielectric function, as discussed in the experimental section, are shown in Figure S7 (Supporting Information).
Figure 1i,l shows the absorption coefficient calculated from the real (ε1) and imaginary (ε2) parts of the dielectric function using the relation [49]

α(E) = (4πE)/(hc) · √[(√(ε1² + ε2²) − ε1)/2],

where E is the energy, c is the speed of light in vacuum, and h is the Planck constant. The absorption coefficient increases abruptly at a certain energy value that, when excitons are absent or too weakly bound, corresponds to the onset of the continuous band and characterizes the band gap (E_gap) of the system. In materials where bound excitons are present, the difference between the exciton energy (E_ex) and E_gap defines the exciton binding energy E_B = E_gap − E_ex [50]. In all Pb-based perovskite materials, like MAPbI3 [51], MAPbBr3 [11], CsPbI3 [52], and CsPbBr3 [11], E_ex is hardly distinguishable from E_gap due to the low exciton binding energies (25-50 meV [53]). In these materials (bulk or thin layers), the exciton is easily separated into free charges, making them good candidates for solar cell devices [50]. On the contrary, in all the studied Bi-based materials we observe an isolated and narrow excitonic peak, indicated as E_ex in Figure 1i,l. This peak is more prominent for the Cs3Bi2Br9 and Cs3Bi2Cl9 systems with respect to the two iodides (Cs3Bi2I9 and MA3Bi2I9). We argue that this peak arises from electron-hole interactions and reflects a bound exciton owing to a large binding energy at RT [54-56], while E_gap is located at higher energy (Figure 1i,l). For this reason, if the material's band gap is set to the excitonic absorption onset, its value is by far underestimated. This feature was previously reported in the literature for Cs3Bi2I9 [57] and MA3Bi2I9 [58] single crystals, but only for Cs3Bi2Cl9 [41,42,59] and Cs3Bi2Br9 [60-63] nanosystems. For the two latter materials, the optical characterizations available in the literature refer to nanocrystals or nanoplatelets, where quantum confinement effects and surface-related phenomena might affect the data with respect to the large single crystals used here [64]. The absorption coefficient also provides information about defects. In particular, Cs3Bi2Cl9 (Figure 1a) and Cs3Bi2Br9 (Figure 1b), and to a much lesser extent also MA3Bi2I9 (Figure 1d), have defects inside the gap, attested by the broad band at energies just below the excitonic peak. We also calculated the Urbach tail at the band edge according to the equation α = α0 exp(E/E_u), where α0 is a constant, E is the energy, and E_u is the Urbach energy. The values of E_u are 33 meV for Cs3Bi2Cl9, 20 meV for Cs3Bi2Br9, 96 meV for Cs3Bi2I9, and 34 meV for MA3Bi2I9 [67-72]. The CPs, which depend on the densities of states in the electronic bands, and their parameters (energy position E, amplitude A, broadening Γ, and phase Φ) are extracted from the simultaneous fit of the real and imaginary parts of the dielectric function ε(E) through the standard critical-point line shape

ε(E) = C − A e^{iΦ} (E − E_CP + iΓ)^n,

where n is −1/2 for 1D, 0 for 2D, or 1/2 for 3D critical points, and −1 when describing excitonic transitions. The E_ex and E_gap values calculated for Cs3Bi2Cl9 and Cs3Bi2Br9 have not yet been reported in the literature and are, respectively, 3.27 and 4.42 eV for Cs3Bi2Cl9, and 2.84 and 4.26 eV for Cs3Bi2Br9. On the other hand, the E_ex and E_gap values for Cs3Bi2I9 and MA3Bi2I9 confirm the values reported by Machulin et al. [57] (Cs3Bi2I9, calculated from reflection spectra at RT: 2.578 and 2.857 eV) and by Kawai et al. [58] (MA3Bi2I9, calculated from absorption spectra at RT: 2.49 and 2.9 eV).
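A numerical sketch of this absorption-coefficient relation is given below; the extinction-coefficient form α = 4πkE/(hc) with k = √((√(ε1² + ε2²) − ε1)/2) is the usual SE convention, stated here as our assumption:

```python
import numpy as np

def absorption_coefficient(E_eV, eps1, eps2):
    """alpha(E) in cm^-1 from the complex dielectric function:
    alpha = 4*pi*E*k/(h*c), with extinction k = sqrt((|eps| - eps1)/2)."""
    hc_eV_cm = 1.23984193e-4                      # h*c in eV*cm
    k = np.sqrt((np.sqrt(eps1**2 + eps2**2) - eps1) / 2.0)
    return 4.0 * np.pi * E_eV * k / hc_eV_cm
```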
Cs3Bi2Cl9 and Cs3Bi2Br9 exhibit large E_B values of 1143 and 1424 meV, respectively. The value obtained for Cs3Bi2Br9 is higher than those reported by Bass et al. [54] from UV-vis absorption measurements performed on powder (940 meV) and by Wu et al. from similar measurements on nanoplatelets (148 meV) [73], indicating a strong dependence of the optical properties of the material on the characteristics of the crystal structure.
The exciton binding energies for Cs 3 Bi 2 I 9 (334 meV) and MA 3 Bi 2 I 9 (303 meV) are close to those already reported. [57,58] These high values of E B (inefficient charge separation, resulting in low J SC [74]), along with the wide electronic band gap (low optical absorption) and high Urbach energy (≈50 meV, resulting in high nonradiative recombination losses [75]), are the main reasons for the low photo-conversion efficiencies reached using Cs 3 Bi 2 I 9 and MA 3 Bi 2 I 9 as active materials in solar cells (records of 3.6% [76] and 3.17%, [77] respectively). These results can be compared with those calculated with the Elliott model, [78-81] used to simulate the absorption near the band edge with an equation of the standard Elliott form

α(E) = A_c Θ(E − E_gap) · 2π√(E_B/(E − E_gap)) / [1 − e^(−2π√(E_B/(E − E_gap)))] + Σ_{n=1..3} (A_x/n³) g_γ(E − E_gap + E_B/n²),

where E_gap is the electronic band gap, E_B is the exciton binding energy (i.e., E_B = E_gap − E_ex, where E_ex is the energy of the exciton), A_c and A_x are scaling factors and g_γ is a broadened line shape of width γ. The first term describes the continuum-state absorption and the second term accounts for multiple excitonic states. We neglect excitonic transitions with n > 3 because the excitonic peak is well separated from the continuum and the oscillator strength decreases as 1/n³. The cumulative fit is in good agreement with the experimental data, as shown in Figure S8 (Supporting Information). The values obtained from the fitting are reported in Table 2.
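A hedged sketch of such an Elliott-type fit is shown below (one common parameterization with Gaussian-broadened exciton lines; the authors' exact lineshape and scaling factors are not fully recoverable from the text, so this is an illustrative reconstruction):

```python
import numpy as np
from scipy.optimize import curve_fit

def elliott_alpha(E, E_gap, E_B, A_c, A_x, gamma):
    """Continuum (Sommerfeld-enhanced step) plus an n<=3 excitonic series weighted by 1/n^3."""
    x = np.maximum(E - E_gap, 1e-9)                      # guard against E <= E_gap
    s = 2.0 * np.pi * np.sqrt(E_B / x)
    continuum = np.where(E > E_gap, A_c * s / (1.0 - np.exp(-s)), 0.0)
    excitons = sum((A_x / n**3) *
                   np.exp(-(E - (E_gap - E_B / n**2))**2 / (2.0 * gamma**2))
                   for n in (1, 2, 3))
    return continuum + excitons

# usage sketch: popt, _ = curve_fit(elliott_alpha, E, alpha, p0=(2.9, 0.3, 1.0, 1.0, 0.05))
```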
To further investigate the optical behavior of the Bi-based crystals as a function of temperature, and being aware that other perovskite materials are very sensitive to external factors such as humidity, we performed SE measurements in a pure N 2 environment to avoid possible sample degradation. [82,83] Repeated measurements demonstrated that all samples are stable in N 2 for several weeks. The measurements were performed in the range of −90 °C to 90 °C to cover all the possible application fields, from X-ray and gamma detectors, which can work at low temperatures to minimize the dark current, to LEDs and solar cells, which can reach high working temperatures under sunlight. [84] The variation of the absorption coefficient as a function of temperature is reported in Figure S9 (Supporting Information).
The temperature dependences of E ex and E gap for Cs 3 Bi 2 I 9 and MA 3 Bi 2 I 9, which have a smaller E B, were monitored with a step of 3 °C, while for Cs 3 Bi 2 Cl 9 and Cs 3 Bi 2 Br 9 the step was 15 °C. This type of study was previously conducted with a similar approach for the band gap of lead bromide perovskite single crystals. [11,85] In those works, we demonstrated that a lattice phase transition is detectable through SE measurements when a change of slope appears in the energy versus temperature plot of the critical point at the lowest energy.
Figure 2 shows the variation of the second derivatives of the real (ε1) and imaginary (ε2) parts of the dielectric function, from which the CPs can be extracted, in the range −90 °C to 90 °C (with a 30 °C temperature step) for Cs 3 Bi 2 Br 9 (Figure 2a,b) and Cs 3 Bi 2 I 9 (Figure 2c,d).
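The second derivatives themselves are typically obtained numerically from the measured spectra; a minimal sketch (Savitzky-Golay filtering via SciPy; the window length and polynomial order are illustrative settings, not the authors') is:

```python
import numpy as np
from scipy.signal import savgol_filter

def second_derivative(E, eps, window=31, poly=5):
    """Smoothed d2(eps)/dE2 on an (assumed) uniform energy grid, for CP analysis."""
    dE = float(np.mean(np.diff(E)))
    return savgol_filter(eps, window_length=window, polyorder=poly, deriv=2, delta=dE)
```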
By lowering the temperature, it is possible to identify a general trend for all Bi-based materials: the E ex peak amplitude (A) increases (y-axis) and the E ex broadening (Γ) decreases (x-axis) (see also Figures S11-S13, Supporting Information). These findings are consistent with expectations, as exciton-phonon interactions are weaker at lower temperatures, leading to a stronger energetic localization of the excitons. Furthermore, the E ex position does not vary significantly with temperature, in contrast with the E gap position, which undergoes significant shifts. All of these observations can be quantified through the CP analysis.
Figure 3 shows, from the CP analysis, the temperature dependence of the energy positions of E ex and E gap, together with E B, for Cs 3 Bi 2 Cl 9 (a-c), Cs 3 Bi 2 Br 9 (d-f), Cs 3 Bi 2 I 9 (g-i), and MA 3 Bi 2 I 9 (j-l). The trends of the other CP parameters (A, Γ, Φ) versus temperature are reported in the Supporting Information (Figures S11-S13). We observed that the Cs 3 Bi 2 Cl 9 and Cs 3 Bi 2 Br 9 samples exhibit similar behavior, notwithstanding the differences in their crystal structure. Specifically, E gap and E ex have a linear dependence over the whole temperature range (Figure 3a,d). In agreement with our previous work, [11] we argue that a linear dependence of the energy on temperature indicates that no phase transitions occur within the studied temperature interval. [88] The temperature dependence of E ex and E gap in both I-based crystals (Cs 3 Bi 2 I 9 and MA 3 Bi 2 I 9) differs from the above and is more complex. For Cs 3 Bi 2 I 9, the slope of the E ex curve starts to decrease slowly below 45 °C and, at ≈−50 °C, there is a net change in slope that we associate with a reversible phase transition (Figure 3g) from the hexagonal to the monoclinic lattice at −53 °C. [57,89] In addition, a second excitonic peak (E ex2 in Figure 2e,f) emerges at ≈2.68 eV at the same temperature. E gap decreases linearly with temperature (Figure 3h), in good agreement with the literature. [57,82] Consequently, E B decreases linearly, unlike in the two previous crystals (Figure 3i). This finding suggests that better performance could be achieved for Cs 3 Bi 2 I 9-based solar cells at typical operating temperatures of 45-85 °C [90] due to the narrowing of the electronic band gap, resulting in improved optical absorption, and the decrease of E B, leading to improved charge extraction.
The behavior of E ex in MA 3 Bi 2 I 9 exhibits an even higher degree of complexity. The E ex value increases linearly in the temperature range 90-25 °C (Figure 3j). However, as the sample is further cooled from 25 °C down to −90 °C, the slope gradually decreases, and, similarly to the Cs 3 Bi 2 I 9 sample, we could identify a slope change at ≈−50 °C. This is in good agreement with Jakubas et al., [91] who report a second-order phase transition occurring at −50 °C. We note that Kamminga et al. [92] report that the MA 3 Bi 2 I 9 crystal structure gradually evolves from a hexagonal to a monoclinic phase when decreasing the temperature from 27 °C to −113 °C, through the alignment of the methylammonium cations along the b lattice direction. The trend of E gap with temperature is similar to that of E ex (Figure 3k), and therefore E B increases with temperature (Figure 3l), as for Cs 3 Bi 2 Br 9 and Cs 3 Bi 2 Cl 9. We note that, although MA 3 Bi 2 I 9 shares similar structural characteristics with Cs 3 Bi 2 I 9, their optical properties present some fine differences: an E ex2 peak is absent from MA 3 Bi 2 I 9 below the phase-transition temperature, and the exciton binding energies have different temperature trends. We hypothesize that weakly bound excitons at the conduction band edge may be responsible for these differences. To better interpret the experimental measurements of the optical constants, we performed density functional theory (DFT) calculations for all the experimentally studied systems in their RT phases and post-processed the obtained electronic structure within the Bethe-Salpeter equation (BSE) [93,94] to calculate the real and imaginary parts of the dielectric function. We note that the BSE allows the absorption properties to be calculated considering the interaction between electrons and holes in the excited electronic spectrum, which is important when strong excitonic features are present. A specific requirement for the studied Bi-based materials was the inclusion of spin-orbit coupling effects in the calculation scheme, due to their impact on the description of the conduction band of these systems (Figure S14, Supporting Information).
[95,60] Specifically, in the case of Cs 3 Bi 2 I 9, spin-orbit interactions split the lower conduction band into two sub-bands (see Figure S14a, Supporting Information), whereas for Cs 3 Bi 2 Br 9 they "mix" low-dispersion sub-bands into a single band (see Figure S14b, Supporting Information). In both cases, the qualitative differences introduced by spin-orbit interactions in the conduction band cannot be neglected when evaluating the optical transitions of these materials. Moreover, it is interesting to note that all systems share some common characteristics regarding the main orbital projections in their electronic structure (Figure S15, Supporting Information): the conduction bands are strongly characterized by Bi p orbitals (with total angular momentum J = 1/2 at lower energies and J = 3/2 at higher energies), whereas the valence band maxima are mainly shaped by halide p-orbital contributions. The two iodide systems (Cs 3 Bi 2 I 9 and MA 3 Bi 2 I 9) present very similar electronic properties (Figure S16, Supporting Information), arising from the similarity of their structural configurations over a wide range of temperatures. Hereafter, only the optical properties of the inorganic halides will be discussed. Figure 4 shows the imaginary part of the dielectric function (ε2) calculated within the BSE theory for the Cs 3 Bi 2 I 9, Cs 3 Bi 2 Br 9 and Cs 3 Bi 2 Cl 9 systems and compares the results with experimental measurements at RT.
All curves show strong excitonic characteristics when electron-hole interactions are considered (red lines) with respect to optical calculations that do not account for excitonic effects (blue lines). The Cs 3 Bi 2 I 9 system is characterized by a main excitonic peak with a slightly higher binding energy compared to the experiment; additionally, the curve is red-shifted with respect to the one calculated without electron-hole interactions. The case of Cs 3 Bi 2 Br 9 is more peculiar: between the dominant excitonic peak and the second optical peak of the experimental curve, intermediate peaks (the most prominent at ≈3.15 eV) appear only in the calculated spectrum. Similar features have been experimentally observed only in Cs 3 Bi 2 Br 9 nanocrystals along certain crystallographic directions, [96] but are absent from crystals of larger dimensions. This indicates that the optical characteristics of this system are rather strongly influenced either by structural features that are not captured in the simple trigonal model used for our calculations, or by local features that are only present at the nanoscale. Nevertheless, our calculations indicate that these intermediate peaks should be intrinsic to the bulk material and independent of surface-related phenomena. It is also interesting to note that the second peak of the experimental optical spectrum of Cs 3 Bi 2 Br 9 practically coincides with the first peak of the ε2 curve in the independent-particle approximation (i.e., without considering electron-hole interactions in the calculation scheme). Finally, excellent agreement between theoretical and experimental data is obtained in the case of Cs 3 Bi 2 Cl 9, showing a strongly red-shifted spectrum with respect to the independent-particle approximation and a main excitonic peak at ∼3.32 eV. Some divergences between the experimental and theoretical data appear only at higher energies, reflecting the limited number of bands considered for the calculation of the static dielectric matrices (see the Experimental Section) with respect to the extremely dense electronic states present in the valence band of the material (Figure S12, Supporting Information). Overall, the BSE level of theory appears necessary for a proper estimation of the optical properties of Bi-based halide perovskites.
Conclusion
Our multiparameter analysis provides a comprehensive outlook on the temperature-dependent behavior of the excitonic band gap and the continuous absorption onset of A 3 Bi 2 X 9 single crystals, which is crucial for various optoelectronic applications. In particular, we investigated the structural and optical properties of four bismuth halide single crystals, namely Cs 3 Bi 2 Cl 9, Cs 3 Bi 2 Br 9, Cs 3 Bi 2 I 9, and MA 3 Bi 2 I 9. XRD measurements unveiled their crystalline structure, revealing a quasi-1D orthorhombic structure for Cs 3 Bi 2 Cl 9, a quasi-2D trigonal structure for Cs 3 Bi 2 Br 9 and a quasi-0D hexagonal structure for Cs 3 Bi 2 I 9 and MA 3 Bi 2 I 9. Strong excitonic features were observed for all materials, with distinct characteristics depending on the chemical composition of both anions and cations. The E B values for Cs 3 Bi 2 Cl 9, Cs 3 Bi 2 Br 9 and MA 3 Bi 2 I 9 increased with temperature, while for Cs 3 Bi 2 I 9 the trend was the opposite. We identified a phase transition from the hexagonal to the monoclinic lattice at −53 °C for Cs 3 Bi 2 I 9 and at −50 °C for MA 3 Bi 2 I 9. The wide electronic band gaps of MA 3 Bi 2 I 9 (2.81 eV) and Cs 3 Bi 2 I 9 (2.87 eV) and the high exciton binding energies (≈300 meV) are the main reasons for the low efficiency values of the corresponding solar cells, suggesting that focused strategies are required to improve the performance. [74] On the other hand, all bismuth halide single crystals have great potential for application as highly efficient photodetectors.
Preparation of A 3 Bi 2 X 9 Single Crystals: A 3 Bi 2 X 9 single crystals were prepared using the hydrothermal method. The 0.05 m perovskite solutions were prepared by dissolving the precursors CsX and BiX 3 (molar ratio 3:2) in 20 mL of the corresponding hydrohalic acid (CsCl and BiCl 3 in HCl, etc.) in a hydrothermal autoclave reactor. For detailed precursor masses, see Table S1 and Figure S1 (Supporting Information). The solutions were then heated to 200 °C and kept at constant temperature for 2 h to ensure the complete dissolution of the precursors. In the next step, the solutions were cooled from 200 °C to 25 °C (cooling rate 1 °C h−1), after which millimetre-sized single crystals were obtained. The obtained crystals were then extracted from the solution and separated based on their size and geometry. Samples with the most suitable geometry were used as seeds for further growth. The seeds were placed in the previously filtered perovskite solutions (PTFE, 0.45 μm) and heated to 50 °C, after which they were slowly cooled (1 °C h−1) to 25 °C. The obtained Bi-based perovskite single crystals had exceptionally flat surfaces, which were crucial for the optical characterization they underwent. Microscope photographs, including SEM images, and a detailed scheme of the synthetic procedure are presented in Figures S2-S5 (Supporting Information). It is important to observe that excitonic bands function as chromophores, ex-
Figure 2. a,b) Second derivatives of the real (ε1) and imaginary (ε2) parts of the dielectric function across the temperature range from −90 °C to 90 °C (ΔT = 30 °C) for Cs 3 Bi 2 Br 9 and c,d) Cs 3 Bi 2 I 9.
Figure 3. a-c) Energy positions of E ex and E gap with E B = E gap − E ex vs temperature for Cs 3 Bi 2 Cl 9, d-f) Cs 3 Bi 2 Br 9, g-i) Cs 3 Bi 2 I 9, and j-l) MA 3 Bi 2 I 9.
Figure 4. The imaginary part of the dielectric function (ε2) calculated within the BSE theory (red lines) and the independent-particle approximation (blue lines) for a) Cs 3 Bi 2 I 9, b) Cs 3 Bi 2 Br 9, and c) Cs 3 Bi 2 Cl 9. Experimental data (green dots) correspond to measurements using SE at RT.
Table 1. Energy values of E ex, E gap and E 1 at RT extracted through the critical-point analysis for Cs 3 Bi 2 Cl 9, Cs 3 Bi 2 Br 9, Cs 3 Bi 2 I 9, and MA 3 Bi 2 I 9. E B (the exciton binding energy) is calculated as E gap − E ex.
Table 2. Energy values of E gap and E B at RT extracted through the Elliott analysis for Cs 3 Bi 2 Cl 9, Cs 3 Bi 2 Br 9, Cs 3 Bi 2 I 9, and MA 3 Bi 2 I 9. E ex (the exciton energy) is calculated as E gap − E B.
"Materials Science",
"Physics"
] |
Compression of FASTQ and SAM Format Sequencing Data
Storage and transmission of the data produced by modern DNA sequencing instruments have become a major concern, which prompted the Pistoia Alliance to pose the SequenceSqueeze contest for compression of FASTQ files. We present several compression entries from the competition, Fastqz and Samcomp/Fqzcomp, including the winning entry. These are compared against existing algorithms for both reference-based compression (CRAM, Goby) and non-reference-based compression (DSRC, BAM), as well as other recently published competition entries (Quip, SCALCE). The tools are shown to form the new Pareto frontier for FASTQ compression, offering state-of-the-art ratios at affordable CPU costs. All programs are freely available on SourceForge. Fastqz: https://sourceforge.net/projects/fastqz/, fqzcomp: https://sourceforge.net/projects/fqzcomp/, and samcomp: https://sourceforge.net/projects/samcomp/.
SequenceSqueeze results
The evaluation machine used by the competition was an Amazon m2.xlarge instance with a separate 300 GB mounted file-system for contest data and temporary storage. Amazon define this instance type as having 6.5 EC2 compute units (2 virtual cores) on a 64-bit platform. The plots below have been generated from the table of results at www.sequencesqueeze.org. All entries from all authors are shown, not just each author's best. Entries that failed to uncompress without mismatches are omitted, except where an entrant had no programs that were 100% lossless; these are marked appropriately.
The cluster on the far left comprises the two reference-based encoders, Fastqz and Samcomp. These include the time taken for the entire fastq → compress → decompress → fastq process, so this includes the bowtie2 alignment time. Fastqz is demonstrably faster at performing alignments.
Figure S2
Despite many varied techniques, it is clear from the compression times that there is a limit on compressibility, requiring exponentially more CPU to achieve only a small, linear improvement in ratio. The asymmetry of gzip (the competition baseline) is clear; most others are symmetric.
In the above plot fqzcomp appears to be the only program matching gzip on decompression speed. We believe this is likely because both are I/O bound on the AWS test system; our own tests show gzip to be faster at decompression.
Zooming in on ratios between 0.17 and 0.19 more clearly shows the trade-off between time and ratio for the non-reference-based compressors. Of these, the Pareto frontier consists of A.J. Pinho's IEETA entry, D. Jones' Quip program and J. Bonfield's fqzcomp. Programs may have been modified since entries closed.
(For example, Fqzcomp is 10-40% faster depending on the options used.)
Figure S4
A similar picture is seen with compression ratio vs memory usage. Compression and decompression memory usage is largely symmetric, so we show only compression memory usage. Note that these memory figures are as quoted by the SequenceSqueeze web site, which erroneously listed them as the number of 1KB blocks; they are instead the number of 256-byte blocks.
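For reference, converting the site's raw figures to megabytes is a one-liner (an illustrative helper, not part of the contest tooling):

```python
def reported_blocks_to_mb(blocks):
    """The web site's figures count 256-byte blocks, not 1KB blocks."""
    return blocks * 256 / 1024**2
```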
Once again we rapidly reach a cliff, requiring exponential growth in memory for a linear decrease in size. The two reference-based compression programs have the additional requirement of loading the reference genome into memory.
Bowtie2 alignment usage
Alignments for Samcomp and the other SAM-based compressors were produced by running bowtie2 over the input FASTQ files.
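The competition script itself is not reproduced in this copy; a minimal sketch of an equivalent invocation (the index prefix, file names and thread count are placeholders) is:

```python
import subprocess

def align_with_bowtie2(index_prefix, fastq_in, sam_out, threads=4):
    """Run bowtie2 on unpaired FASTQ reads and write SAM output."""
    subprocess.run(
        ["bowtie2", "-x", index_prefix, "-U", fastq_in, "-S", sam_out,
         "-p", str(threads)],
        check=True,
    )
```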
Bowtie2 was considerably slower than the built-in aligner used by fastqz, but aligns more data.
Fastqz alignment benchmarks
The following results were obtained on the complete SRR062634 file. Producing the alignment adds significant time to the preprocessing stages. However, in full slow-compression mode this reduces the overall time spent, because the data volume presented to the ZPAQ stage is smaller.
Fqzcomp parameter space
Fqzcomp has separate parameters controlling the compression level for sequence names (identifiers), base calls and quality values. Additionally, for base-call compression it may use a single- or double-stranded model, and it may optionally encode using a single model or a pair of low- plus high-order models. This gives a considerable search space to explore.
To choose appropriate low, mid and high compression-ratio parameters, we produced charts with consistent name ("n") and quality ("q") parameters, along with consistent choices of single vs double strand ("b") and single or paired ("+") model, while varying the sequence ("s") order to chart lines of compression ratio vs time.
We tested this using two Illumina data sets (shallow and deep) and a 454 data set.
"s*" refers to -s1 to -s8 parameters except on slower compression modes where -s6 to -s8 was used (visible in the lines that contain just 3 data points). The model used for predicting base-calls is order 7 + x where x is the value after -s. E.g. -s1 uses an order-8 model and -s8 uses an order-15 model.
"+" refers to -s1+ to -s8+ parameters, indicates the use of an additional shorter order-7 model. No context mixing is used. Instead the program encodes using either the order-7 model or the order-8 to order-15 model (as indicated by the -snum), depending on which appears to have the most extreme probability bias (for any base type, not just the one being encoded).
"b" refers to the -b parameter, specifying that updates to the sequence model should take place on both strands. | 1,176 | 2013-03-22T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Polyols from Microwave Liquefied Bagasse and Its Application to Rigid Polyurethane Foam
Bagasse flour (BF) was liquefied using a bi-component polyhydric alcohol (PA) as the solvent and phosphoric acid as the catalyst in a microwave reactor. The effects of the BF-to-solvent ratio and reaction temperature on the liquefaction extent and the characteristics of the liquefied products were evaluated. The results revealed that almost 75% of the raw bagasse was converted into liquid products within 9 min at 150 °C with a BF-to-solvent ratio of 1/4. The hydroxyl and acid values of the liquefied bagasse (LB) varied with the liquefaction conditions. High reaction temperature combined with a low BF-to-solvent ratio resulted in a low hydroxyl number for the LB. The molecular weight and polydispersity of the LB from reactions at 150 °C were lower than those from 125 °C. Rigid polyurethane (PU) foams were prepared from LB and methylene diphenyl diisocyanate (MDI), and the structural, mechanical and thermal properties of the PU foams were evaluated. The PU foams prepared using LB from the higher reaction temperature showed better physical and mechanical performance than those from the lower reaction temperature. Increasing the amount of PA in the LB increases the thermal stability of LB-PU foams. The results of this study may provide fundamental information on the integrated utilization of sugarcane bagasse via a microwave liquefaction process.
Introduction
In recent years, there has been an increased desire for more effective utilization of lignocellulosic biomass waste due to its significant potential to enhance environmental stewardship and aid economic development. Lignocellulosic biomass composed of cellulose, hemicellulose and lignin is a valuable and worldwide-accessible bioresource which can provide alternative chemicals via proper conversion processes. Recent achievements in biomass thermochemical conversion techniques have stimulated great interest in the integrated utilizations of lignocellulosic biomass for the production of hydroxyl-rich biopolyols. Various biomass conversion technologies have received increasing attention. Thermochemical methods such as pyrolysis and liquefaction have great potential to produce biofuels and valuable bio-chemicals [1,2]. Studies have shown that liquefaction provides an efficient pathway to convert solid biomass into liquid products [3,4].
Sugarcane is an important crop cultivated around the world and plays a vital role in rural-based agricultural economies. The major by-product of the sugarcane industry is sugarcane bagasse (SB) [5]. Usually, SB is used as a source of heat and electricity in sugar-producing mills. With the development of lignocellulosic biomass utilization technologies, SB has also been applied in the preparation of oriented strand board as well as a feedstock for the isolation of cellulosic fibers [6-8]. Recently, many studies have explored the potential of liquefying sugarcane bagasse using traditional heating sources, with long liquefaction times and low energy efficiency [9-13]. Sugarcane bagasse is mainly composed of approximately 50% cellulose, 25% hemicellulose and 22% lignin [14]. The components of sugarcane bagasse are known to carry abundant functional hydroxyl groups because of the high cellulose and hemicellulose contents. Thus, liquefied bagasse (LB) is considered an excellent raw material for bio-based polyols, which have been used to manufacture epoxy resins [15]. These studies have shown the potential of developing bio-based products from underutilized bagasse. They have also indicated the need for further improvement of the liquefaction technology, such as lower acid catalyst content, shorter reaction time, lower reaction temperature and lower organic solvent ratio, before an economically viable conversion technology can be realized.
The application of microwave heating to lignocellulosic biomass liquefaction was first demonstrated for wood [16]. Microwave irradiation directly couples microwave energy with the molecules present in the reaction mixture via dipolar polarization and ionic conduction. The main advantage of microwaves over conventional heating sources is that the irradiation penetrates and simultaneously heats the material at the molecular level, thereby reducing reaction times from hours to minutes [17-20]. Microwave energy has been applied to the liquefaction of various lignocellulosic biomasses such as pine [21], bamboo [22] and agricultural crop residues [23]. However, no publications have reported on the microwave liquefaction of sugarcane bagasse.
Moreover, ethylene glycol, the monomer of polyethylene glycol (PEG), has a loss tangent (tan δ) value of 1.35, which is among the highest tan δ values of common solvents [21]. The higher the tan δ value of a solvent, the better its absorption and the more efficient its heating under microwave irradiation. Therefore, liquefaction using microwave energy as the heating source and PEG as a solvent has great potential to reduce reaction time and advance the commercialization of this process. The application of microwave energy to biomass liquefaction offers not only faster heating and energy efficiency but also space savings and precise process control [23].
Polyurethane foams are versatile engineering materials and have been successfully used in a variety of applications (e.g., the automotive industry, refrigerators, insulating panels, construction) [24]. Commercially, polyurethane (PU) foam is synthesized by the reaction of a polyol and a diisocyanate with a combination of blowing agent, catalyst, and surfactant. Currently, the polyols and isocyanate used for PU foam production are mainly petroleum-derived. With increasing concern over the depletion of fossil fuels, global warming and other environmental impacts of petroleum-based products, the replacement of petroleum-based polyols with sustainable bio-polyols from renewable biomass for the production of PU foams has been receiving increasing attention.
Biopolyols obtained by liquefaction have high hydroxyl functionalities and great potential in the production of PU foams [25]. A large variety of lignocellulosic biomass such as pine wood [21], bamboo [22,26], wheat straw [27] and soybean straw [28] has been liquefied into liquid polyols for the preparation of PU foams. The incorporation of biomass components in the polymeric composition of PU foams can reduce manufacturing costs and provide a certain degree of biodegradability. Thus, the goal of this study was to determine the potential of microwave-assisted liquefaction of sugarcane bagasse as a source of raw material for rigid polyurethane foam. The specific objectives were to determine the effects of the bagasse flour (BF)/polyhydric alcohol (PA) ratio (w/w) on the properties of LB polyols and PU foams. Figure 1 shows the experimental scheme. Sugarcane bagasse was liquefied at different liquefaction conditions using microwave energy; the liquefied products were characterized and applied in the preparation of polyurethane foams. The proposed reaction mechanism is presented in Figure 2. According to our previous results [3,29], the liquefaction of sugarcane bagasse is proposed to proceed via the cleavage of several specific linkages in the bagasse, such as the glycosidic bonds that are the dominant linkages between the sugar units of cellulose and hemicellulose, and the dominant linkages in lignin. The cleavage of glycosidic bonds in cellulose and ether bonds in hemicellulose results in the production of carbon-six and carbon-five sugar derivatives, and the cleavage of the β-O-4, 4-O-5, and dibenzodioxocin linkages in lignin results in aromatic products. Since PEG and glycerol were used as co-solvents in the liquefaction system, glycosides may also be produced by glycolysis. The content of the liquefied residue (unreacted sugarcane bagasse) was defined as the percentage of the weight of the dry residue relative to that of the dry raw material charged. The residue content was used as an indicator of the liquefaction extent. Figure 3 shows the changes of the residue content as a function of the BF/PA ratio at different reaction temperatures. As expected, the residue content decreased as the BF/PA ratio decreased, and the liquefaction efficiency was enhanced with an increase of liquefaction temperature. However, liquefaction temperature interacted with the BF/PA ratio to affect the residue content. At a high BF/PA ratio (i.e., 1/2), the temperature had little effect on residue content; as the BF/PA ratio decreased to 1/3, the residue content decreased significantly at both liquefaction temperatures. The decrease in residue content leveled off with further decreases in the BF/PA ratio at 125 °C, while at 150 °C the residue content gradually decreased to its lowest value. It is evident that higher temperature resulted in lower residue content for all BF/PA ratios. As the amount of PA in the mixture increased, the difference between the residue contents at the same BF/PA ratio under the two temperatures gradually became more notable. Thus, it can be concluded that the effect of temperature on the liquefaction rate of LB is greatly dependent on the BF/PA ratio. The hydroxyl value of a polyol is an important parameter that needs to be monitored during polyol and polyurethane foam production. The hydroxyl values of the polyols were determined to be 426-505 mg KOH/g.
The polyol synthesized from SB in this study had a higher hydroxyl value than that reported in the literature for liquefied bamboo, indicating that the biopolyols synthesized in this study had good reactivity with isocyanate [26].
Liquefaction and Characteristics of LB
The effects of the BF/PA ratio on the acid and hydroxyl values of the liquefied products (polyols) obtained at 125 and 150 °C are shown in Figure 4. It was observed that, with an increase in the PA amount in the mixture, the hydroxyl value of the polyols gradually increased. Moreover, higher temperature yielded smaller hydroxyl values; i.e., the hydroxyl value of the polyols prepared at 125 °C was higher than that of the polyols prepared at 150 °C. This indicates that, although the PA in the mixture provides the hydroxyl groups of the polyols, a loss of hydroxyl groups occurred during the liquefaction reaction, which could be largely attributed to the alcoholysis of bagasse in PA and to the formation of ethers. Oxidation and recondensation reactions among the liquefaction solvents and the decomposed bagasse components could also take place during liquefaction, consuming hydroxyl groups and influencing the hydroxyl value. With decreasing BF/PA ratio, the hydroxyl value slightly increased. This result is in accordance with the findings of work on the liquefaction of soybean straw [28]. The higher hydroxyl value of the polyols from reactions with high solvent loading may be due to the higher biomass conversion and to the extra solvent in the reaction mixture preventing recondensation reactions. The LB was acidic due to the acid catalyst used in the liquefaction system as well as the acidic substances produced from the decomposition of the biomass components, mainly cellulose and hemicellulose [21]. The acid value of the LB was in the range of 11.8-18.9 mg KOH/g. The increase of the acid value could be attributed to the increase of the PA, which contained 3.5 wt.% H 3 PO 4, and also to the increase of acidic substances produced by the decomposition of bagasse components and the oxidation of alcohols as the liquefaction proceeded [30]. This provides further evidence that both the BF/PA ratio and the temperature can enhance the liquefaction rate.
The molecular weight and polydispersity of the liquefied products are presented in Table 1. It is observed that, with an increase of the PA amount in the mixture, the weight-average molecular weight (Mw) and polydispersity (Mw/Mn) leveled off. There was a slight decrease of Mw and Mw/Mn when the BF/PA ratio decreased from 1/2 to 1/3; the statistical analysis did not reveal any significant differences. It was also seen that the Mw of LB polyols obtained at the higher temperature (150 °C) was generally smaller than that of LB polyols obtained at the lower temperature (125 °C). This may be because microwave radiation at higher temperature has a stronger ability to break chemical bonds such as the dominant β-O-4 linkages in lignin and the glycosidic bonds in cellulose. Figure 5 shows the FTIR spectra of the acetone-soluble fraction of LB. A broad peak around 3400 cm−1 represents OH groups either from cellulose or from unreacted PA. The peak around 2870 cm−1 represents C-H symmetric stretching in aliphatic methyl groups. A shoulder at 1735 cm−1 is primarily due to the carbonyl stretch of unconjugated ketone, ester or carboxylic groups in hemicelluloses [31,32]. From the growing shoulder of the 1735 cm−1 peak, it can be inferred that hemicellulose is peeled off from adjacent lignin or cellulose into solution [33,34]. As the BF/PA ratio decreased, the absorbance band of hemicellulose merged into the prominent peak at 1645 cm−1, which is associated with adsorbed water [35]. The increase in the intensity of this peak in the acetone-soluble fraction of LB with increasing PA loading is due both to the removal of hemicellulose from BF [36] and to the increase of the PA content in LB. The spectra also showed that the acetone-soluble fraction of LB had characteristic bands of benzene rings (1515 cm−1, 1453 cm−1, 1242 cm−1), especially the bands of syringyl rings (1347 cm−1, 1242 cm−1), which indicates that the acetone-soluble fraction contained derivatives of the lignin components. These bands showed no significant differences among the spectra of the polyols from different BF/PA ratio reactions, revealing that the BF/PA ratio had no significant influence on the structure of the phenolic components in the polyols. This is mainly because the lignin in SB easily undergoes decomposition at the initial reaction stage in the microwave liquefaction system [37], and further increasing the PA proportion would not affect the lignin depolymerization mechanism.
There were also weak bands of in-plane deformational vibration of O-H in carboxylic groups at 1409 cm−1 and out-of-plane deformational vibration of O-H in carboxylic groups at 882 cm−1. The carboxylic acids could be degradation products of cellulose or hemicelluloses. The intense band at 1040 cm−1 arises from the aromatic C-H in-plane deformation of the guaiacyl type in lignin. This peak became weaker with an increase in PA because the concentration of guaiacyl units decreases in the fraction. Aromatic C-H out-of-plane bending occurs at 843 cm−1 [38]. The spectra of the acetone-soluble fraction of LBs prepared at 125 °C were similar to those prepared at 150 °C.
Characteristics of PU Foam
Polyurethane (PU) foams were prepared from LB polyols and isocyanate with catalysts and additives. The LB could be used as the polyol component because it comprises the skeleton and active functional groups of polyols, as indicated by its high hydroxyl number. Since LB is derived from a by-product of the sugar industry, foam developed from LB is considered sustainable and its cost is expected to be lower than that of petro-polyols. The LBs from different BF/PA ratios and reaction temperatures were used for PU foam synthesis in order to verify their suitability for foam production. All the PU foams synthesized in this study were of the rigid type, and the foam became darker in color with the addition of LB polyols (Figure 6). The hydroxyl values of the LB polyols were generally lower than that of PA; therefore, less methylene diphenyl diisocyanate (MDI) was needed to prepare PU foams based on LB polyols than foams based on PA. The density and physical properties of the PU foam samples prepared from liquefied bagasse polyols are shown in Table 2. The density increased as the PA content increased at both temperature levels. In addition, the higher liquefaction temperature (150 °C) resulted in higher PU foam densities (in the range of 0.032-0.043 g/cm3), while the density of the PU foams prepared from the lower-temperature polyols (125 °C) was lower (in the range of 0.031-0.037 g/cm3). These results indicate that the density of PU foams can be adjusted by controlling the liquefaction conditions. As shown in Table 2, at 125 °C the compressive strength (CS) and modulus of elasticity (MOE) of the foams increased from 0.19 to 0.34 MPa and from 1.1 to 3.02 MPa, respectively, as the BF/PA ratio decreased from 1/2 to 1/4. When the reaction temperature was increased to 150 °C with the other conditions kept the same, the CS and MOE of the foams increased from 0.32 to 0.48 MPa and from 2.0 to 5.1 MPa, respectively, with decreasing BF/PA ratio. Clearly, higher reaction temperature favors more robust foams. These changes may be due to the interaction between the hydroxyl values of the LB polyols and the liquefied bagasse residue in the polyols. As the PA content in the liquefied mixture increased, the hydroxyl value of the LB polyols also increased (Figure 4). The CS of the foam was influenced by the amount of PA in the liquefied mixture via the change of the hydroxyl value of the polyols: higher hydroxyl values of the polyols led to a higher degree of cross-linking in the foam. This is why the CS and MOE of the PU foams increased as the PA content of the liquefied polyols increased. However, the liquefaction temperature had an effect on the CS opposite to its effect on the hydroxyl values of the polyols; in this respect, the change of the CS followed that of the densities, indicating that the impact of density on the CS of the PU foam was greater than that of the liquefaction temperature.
Furthermore, due to the increased extent of bagasse liquefaction, the morphology of the residue was more homogeneous with a higher surface area, which allows better adhesion between the bagasse residue and the PU matrix; hence, a better CS can be obtained. In addition, it should be noted that at a fixed isocyanate index with different weight percentages of bagasse residue (Figure 3), the components left unreacted during liquefaction could not provide effective strength to the foams. The higher the bagasse residue content, the lower the economic cost of the PU foams, but the poorer the mechanical properties; therefore, a balance between economic cost and mechanical properties should be struck based on the practical application. Based on the results of the present study, to obtain PU foams with high physical and mechanical properties the preferred liquefaction conditions are 150 °C for 9 min with a BF/PA ratio of 1/4. The resilience rates of the PU foams based on LB polyols from 1/2, 1/3 and 1/4 BF/PA ratios at 150 °C were 82.73, 83.27 and 82.49%, respectively; no significant difference was found. Figure 7 shows the differential scanning calorimetry (DSC) curves of PU foams based on LB polyols prepared at 150 °C; only a slight change was observed. It was expected that PU foams with greater cross-linking densities would need more thermal energy to initiate chain movements. As discussed previously, the hydroxyl values of the polyols increased with the PA content in the liquefied mixture, and higher hydroxyl values of the polyols are conducive to a higher degree of cross-linking in the molecular chains of PU [39]. Hence, the change of T g follows the change of the PA amount in the liquefied mixture. The thermal decomposition behavior of PU foams from bagasse liquefied at 150 °C is presented in Figure 8. From the thermogravimetry (TG) curves, it was observed that the foam from a BF/PA ratio of 1/4 was the most thermally stable, followed by 1/3; the PU foam from a BF/PA ratio of 1/2 had the lowest thermal stability. This indicates that the crosslinking density of the foams increased with increasing PA amount. With an increase in PA amount, the extent of liquefaction increased, which can be one reason for the increase in crosslinking density (Figure 3). In the liquefaction process using H 3 PO 4 as a catalyst, whose amount increased correspondingly with the PA (as discussed above), it is possible that the cellulose in the insoluble residues was converted into cellulose phosphate, which is thermally durable. Moreover, due to liquefaction, more active OH groups on the BF were likely exposed in the polyols prepared with more PA; thus, the insoluble residues created in the higher-PA liquefaction process could act as a more efficient cross-linking agent [40]. These two factors could explain the trend in thermal stability of the LB-PU foams. From the derivative thermogravimetry (DTG) curves, it is evident that decomposition mainly occurred in three successive stages above 100 °C. The loss below 100 °C was attributed to the evaporation of moisture and the release of volatile components. The initial decomposition started at 177.32 °C, and the rate of weight loss gradually increased to a maximum at about 302 °C, suggesting that decomposition started at the urethane bond.
Urethanes are known to be relatively thermally unstable materials, primarily due to urethane bond decomposition, which occurs somewhere between 150 and 220 °C depending on the type of substituents on the isocyanate and polyol sides [41]. Moreover, because of the liquefied residue present in the samples, the degradation of bagasse constituents, i.e., hemicellulose and cellulose, also occurs in this temperature range [39,40]. The second stage, a shoulder in the DTG curve around 361.51 °C, could result from the degradation of isocyanate that did not react with polyol or water (see Figure 5, peak at 2270 cm−1). The third stage (415.53-565.18 °C) is largely attributed to the degradation of lignin and of the char residue from the second stage [38]. In conclusion, increasing the amount of PA in the LB increases the thermal stability of the LB-PU foams.
Materials and Chemicals
Bagasse was obtained from a local sugarcane processing mill near Baton Rouge, Louisiana, USA. Particles in the size range from 0.90 mm to 1.1 mm were used as the raw material for the liquefaction experiments. The bagasse particles were dried at 105 °C for 24 h before use. The polyhydric alcohol (PA) used in the reactions was a mixture of polyethylene glycol (PEG, average molecular weight 400) and glycerol at a fixed weight ratio of 70/30. Phosphoric acid was used as the acidic catalyst. These chemicals were purchased from a commercial source. Methylene diphenyl diisocyanate (MDI; MR-100, Huntsman Industries Ltd., Alvin, TX, USA; NCO group content 30.03%) and a silicone surfactant (SH193; Toray Dow Corning Silicone Ltd., Tokyo, Japan) were used for the preparation of the polyurethane foam. Deionized water was used as the blowing agent. All chemicals were of reagent grade.
Liquefaction of Bagasse
The reactions were carried out in a Milestone MEGA 1200 laboratory microwave oven equipped with a temperature sensor that could be directly inserted into a sealed 100 mL PTFE (microwave transparent) reaction vessel. Oven-dried BF was pre-mixed with the solvent at mass ratios of 1:2, 1:3, and 1:4. Phosphoric acid was added in the amount of 3.5 wt.% of the solvent. Each vessel was filled with 15 ± 0.04 g of the mixture. Eight vessels of mixture were pulse-irradiated at a microwave frequency of 2.45 GHz and a maximum power of 1000 W for 9 min at two temperature levels (125 and 150 °C). The liquefaction temperature was monitored by a fiber-optic probe immersed at the center of a vessel. After irradiation, the samples were cooled to room temperature. The resultant LB, comprising the contents of all eight vessels, was collected for subsequent application and analysis.
Ten grams of LB was dissolved in 200 mL of dioxane/water binary solvent (4/1, v/v) and stirred for more than 4 h, and then the solution was vacuum-filtered through glass filter paper. The solid residues were dried in an oven at 105 °C to a constant weight, and the residue content was calculated from the following equation:

Residue content (%) = (W2 − W3) / W × 100,

where W is the weight of the LB polyols (g), and W2 and W3 are the dry weights of the filter paper with the residues and of the filter paper alone (g), respectively.
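A one-line implementation of the mass balance above (variable names follow the equation; an illustrative helper, not from the paper):

```python
def residue_content(W, W2, W3):
    """Residue content (%): W = LB sample mass [g], W2 = filter paper plus
    dried residue [g], W3 = filter paper alone [g]."""
    return (W2 - W3) / W * 100.0
```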
Preparation of LB-PU Foam
The foams were prepared by a one-step method. A mixture of 25 g LB, 0.5 g deionized water, 0.25 g silicone surfactant and 0.25 g dibutyltin dilaurate was thoroughly premixed in a 150 mL paper cup with a mechanical stirrer for 1 min. Afterwards, a calculated amount of MDI (based on an isocyanate index of 110) was added to the pre-mixture. The combination was then stirred at room temperature with a high-speed agitator at 3600 rpm for 3 min. The resultant mixture was immediately poured into an open cylindrical mold with a diameter of 20 cm and a height of 30 cm and allowed to rise freely at room conditions. The resulting foams were allowed to cure at room conditions for 1 h before being removed from the mold. The properties of the foams were measured after they were further conditioned at room conditions for two days. Three foams were prepared and tested for each liquefaction condition. The samples were prepared at an isocyanate index of 110, determined as follows:

Isocyanate index = 100 × (M_MDI × W_MDI) / (M_LB × W_LB + (2/18) × W_WATER),

where M_MDI is the number of moles of isocyanate groups per gram of MDI, M_LB is the number of moles of hydroxyl groups per gram of liquefied bagasse polyols, and W_MDI, W_LB and W_WATER are the weights (g) of MDI, LB polyols and water, respectively.
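A minimal sketch of this calculation (the 2/18 mol per gram hydroxyl-equivalent contribution of water, i.e., two reactive hydrogens per 18 g mole of water, is a standard assumption made explicit here):

```python
def isocyanate_index(W_MDI, W_LB, W_water, M_MDI, M_LB):
    """Isocyanate index = 100 * mol NCO / mol OH (polyol hydroxyls + water)."""
    mol_NCO = M_MDI * W_MDI                      # M_MDI: mol NCO per gram of MDI
    mol_OH = M_LB * W_LB + (2.0 / 18.0) * W_water
    return 100.0 * mol_NCO / mol_OH
```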
Acid Number
A method described by Kurimoto et al. was employed to determine the acid value [42]. A mixture of 10 g of LB and 200 mL of acetone/water (4:1, v/v) was titrated with 0.1 N potassium hydroxide standard solution to the equivalence point using a pH meter. The acid value (mg KOH/g) of the sample was calculated using the following equation:

Acid value = 56.1 × C × B / W,

where C is the titration volume of the potassium hydroxide solution at the equivalence point (mL), B is the normality of the potassium hydroxide solution, and W is the weight of the LB sample (g).
Hydroxyl Number
The method detailed by Kurimoto et al. (1999) was also used to determine the hydroxyl value [42]. LB (1 g) was esterified with phthalate reagent (25 mL) for 1 h at 110 °C, followed by the addition of 20 mL pyridine/deionized water (1:1, v/v), and the mixture was then titrated with 0.1 N potassium hydroxide standard solution to the equivalence point. The phthalate reagent was a mixture containing 150 g phthalic anhydride. The hydroxyl value (mg KOH/g) was calculated by the following equation:

Hydroxyl value = (B − A) × N × 56.1 / W + acid value,

where A is the volume of the potassium hydroxide solution required for titration of the LB sample (mL), B is the volume for the blank solution (mL), N is the normality of the potassium hydroxide solution, and W is the weight of the LB sample (g).
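A minimal sketch implementing the two titration formulas above (56.1 g/mol is the molar mass of KOH; the expressions follow the standard Kurimoto-type forms assumed in the reconstructions):

```python
def acid_value(C, B, W):
    """Acid value [mg KOH/g]: C = titrant volume [mL], B = titrant normality [N],
    W = LB sample mass [g]."""
    return 56.1 * C * B / W

def hydroxyl_value(A, B, N, W, acid_val):
    """Hydroxyl value [mg KOH/g]: A = sample titration volume [mL], B = blank
    volume [mL], N = titrant normality, W = sample mass [g]; the acid value is
    added back to account for acidic groups that consume titrant."""
    return (B - A) * N * 56.1 / W + acid_val
```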
Gel Permeation Chromatography (GPC) Analysis
Molecular weights were measured by GPC on a Waters-Wyatt system equipped with multi-angle laser light scattering and differential refractive index detectors. Two Jordi Flash Gel Mixed Bed columns (250 × 10 mm) were used in series. Tests were conducted at ambient temperature using tetrahydrofuran (THF) as the mobile phase at a flow rate of 1.0 mL/min.
Differential Scanning Calorimeter (DSC)
The glass transition temperature (T g) was measured on a Q20 differential scanning calorimeter using sealed aluminum capsules. Foam samples of approximately 6 mg were investigated in a nitrogen atmosphere from −40 to 120 °C at a heating rate of 10 °C/min and a nitrogen flow rate of 50 mL/min. After programmed cooling at 20 °C/min, the samples were reheated at the same heating rate. The T g values were determined by analyzing the DSC curves from the second runs.
Thermal Gravimetric Analysis (TGA)
TGA of the LB-PU foam samples was carried out using a TA Instruments Q50. Samples of approximately 5 mg were tested in each experiment. Nitrogen was used as the carrier gas at a flow rate of 40 mL/min. The heating rate was 10 °C/min from room temperature to 700 °C. Curves of the foam samples were generated by TA Instruments Universal Analysis 2000 (version 4.7A, build 4.7.0.2).
Mechanical Properties of Foams
Measurements of the compressibility of the foams were based on the Japanese Industrial Standard [43]. The foams were cut into 50 mm³ specimens. The specimens were conditioned for two days at 23 °C and 50% relative humidity and then measured and weighed to determine sample density. The compressive properties of the foams were measured with a universal testing machine (INSTRON 4411). The measurements were made in the direction perpendicular to the foam rise direction at a constant crosshead speed of 5 mm/min. The compressive strength (CS) and modulus of elasticity (MOE) of the foams were determined at 10% stress and 25% strain. For each compression run, three pieces of each sample were used.
Resilience rate refers to the ability of a material to recover its original shape after it has been deformed; it is a measure of elastic behavior and was determined using the same INSTRON universal testing machine described above. A resilience rate of 100% indicates that the object behaves like a perfectly elastic body: when the compressive or tensile force is released, the object recovers all of the deformation and returns to its original dimensions.
A 60-mm diameter cylindrical probe was used to compress the samples to achieve a deformation of 80% of their original dimension at a loading rate of 30 mm/min. For each compression run, three pieces of each sample were used. Resilience rate was calculated by dividing the thickness after withdrawal of the compressive force by the initial thickness.
Data Analysis
The effects of liquefaction temperature and BF/PA ratio on LB polyol properties and foam properties were evaluated by analysis of variance at the 0.05 level of probability. Significant effects were further characterized by the Duncan test.
Conclusions
Bagasse was subjected to a microwave-assisted liquefaction system. The influence of the BF/PA ratio as well as the reaction temperature on the liquefaction yield and the characteristics of the liquefied products was evaluated. With a decrease in BF/PA ratio, the residue content decreased; the BF/PA ratio and the temperature had a combined effect on residue content. Both the hydroxyl and acid values increased with decreasing BF/PA ratio at the two temperature levels. The Mw and Mw/Mn first increased and then leveled off with a decrease in BF/PA ratio. Polyurethane foams were prepared from the obtained liquefied polyols. The results revealed that, with increasing PA in the liquefaction mixture, the density, mechanical properties (compressive strength and MOE), glass transition temperature, and thermal stability of the PU foams increased. The results of this study indicate that microwave-liquefied bagasse products have potential for the fabrication of polyurethane foams.
"Materials Science"
] |
Dual Inference for Improving Language Understanding and Generation
Natural language understanding (NLU) and natural language generation (NLG) tasks hold a strong dual relationship, where NLU aims at predicting semantic labels based on natural language utterances and NLG does the opposite. Prior work mainly focused on exploiting the duality in model training in order to obtain models with better performance. However, regarding the fast-growing scale of models in the current NLP area, we sometimes may have difficulty retraining whole NLU and NLG models. To better address this issue, this paper proposes to leverage the duality in the inference stage without the need for retraining. Experiments on three benchmark datasets demonstrate the effectiveness of the proposed method for both NLU and NLG, showing great potential for practical usage.
Introduction
Various tasks, though different in their goals and formulations, are usually not independent and exhibit diverse relationships with each other within each domain. It has been found that many tasks come in dual forms, where we can directly swap the input and the target of one task to formulate another task. Such structural duality emerges as one of the important relationships for further investigation and has been utilized in many tasks, including machine translation (Wu et al., 2016), speech recognition and synthesis (Tjandra et al., 2017), and so on. Previous work first exploited the duality of task pairs and proposed supervised (Xia et al., 2017) and unsupervised (reinforcement learning) (He et al., 2016) learning frameworks in machine translation. Recent studies magnified the importance of the duality by revealing that exploiting it can boost the learning of both tasks.
Natural language understanding (NLU) (Tur and De Mori, 2011; Hakkani-Tür et al., 2016) and natural language generation (NLG) (Wen et al., 2015; Su et al., 2018) are two major components in modular conversational systems, where NLU extracts core semantic concepts from given utterances and NLG constructs the associated sentences based on given semantic representations. Su et al. (2019) made the first attempt to leverage the duality in dialogue modeling and employed a dual supervised learning framework for training NLU and NLG. Furthermore, Su et al. (2020) proposed a joint learning framework that can train the two modules seamlessly, moving toward the potential of unsupervised NLU and NLG. Recently, Zhu et al. (2020) proposed a semi-supervised framework that learns NLU with an auxiliary generation model for pseudo-labeling in order to make use of unlabeled data.
Despite the effectiveness shown by the prior work, these studies all focused on leveraging the duality in the training process to obtain powerful NLU and NLG models; there has been little investigation into how to leverage the dual relationship in the inference stage. Considering the fast-growing scale of models in the current NLP area, such as BERT (Devlin et al., 2018) and GPT-3 (Brown et al., 2020), retraining whole models may be difficult. Due to this constraint, this paper introduces a dual inference framework, which takes advantage of existing models from two dual tasks without re-training (Xia et al., 2017), to perform inference for each individual task by exploiting the duality between NLU and NLG. The contributions can be summarized as three-fold: • The paper is the first work that proposes a dual inference framework for NLU and NLG to utilize their duality without model re-training.
• The presented framework is flexible for diverse trained models, showing the potential of practical applications and broader usage.
• The experiments on diverse benchmark datasets consistently validate the effectiveness of the proposed method.
Proposed Dual Inference Framework
With the semantics space X and the natural language space Y, given n data pairs sampled from the joint space X × Y, the goal of NLG is to generate corresponding utterances based on given semantics. In other words, the task is to learn a mapping function f(x; θ_x→y) that transforms semantic representations into natural language. In contrast, the goal of NLU is to capture the core meaning of utterances, finding a function g(y; θ_y→x) that predicts semantic representations given natural language utterances. Note that in this paper the NLU task has two parts: (1) intent prediction and (2) slot filling. Hence, natural language y is defined as a sequence of words (y = {y_i}), while semantics x can be divided into an intent x^I and a sequence of slot tags x^S = {x^S_i} (x = (x^I, x^S)). Considering that this paper focuses on the inference stage, diverse strategies can be applied to train these modules. Here we adopt a typical strategy based on maximum likelihood estimation (MLE) of the parameterized conditional distributions given by the trainable parameters θ_x→y and θ_y→x.
Dual Inference
After obtaining the parameters θ x→y and θ y→x in the training stage, a normal inference process works as follows: where P (.) represents the probability distribution, and x and y stand for model prediction. We can leverage the duality between f (x) and g(y) into the inference processes (Xia et al., 2017). By taking NLG as an example, the core concept of dual inference is to dissemble the normal inference function into two parts: (1) inference based on the forward model θ x→y and (2) inference based on the backward model θ y→x . The inference process can now be rewritten into the following: where α is the adjustable weight for balancing two inference components. Based on Bayes theorem, the second term in (1) can be expended as follows: where θ x and θ y are parameters for the marginal distribution of x and y. Finally, the inference process considers not only the forward pass but also the backward model of the dual task. Formally, the dual inference process of NLU and NLG can be written as: where we introduce an additional weight β to adjust the influence of marginals. The idea behind this inference method is intuitive: the prediction from a model is reliable when the original input can be reconstructed based on it. Note that this framework is flexible for any trained models (θ x→y and θ y→x ), and leveraging the duality does not need any model re-training but inference.
Marginal Distribution Estimation
As derived in the previous section, marginal distributions of semantics P (x) and language P (y) are required in our dual inference method. We follow the prior work for estimating marginals (Su et al., 2019).
Language Model
We train an RNN-based language model (Mikolov et al., 2010;Sundermeyer et al., 2012) to estimate the distribution of natural language sentences P (y) by the cross entropy objective.
Masked Prediction of Semantic Labels A semantic frames x contains an intent label and some slot-value pairs; for example, {Intent: "atis flight", fromloc.city name: "kansas city", toloc.city name: "los angeles", depart date.month name: "april ninth"}. A semantic frame is a parallel set of discrete labels which is not suitable to model by autoregressiveness like language modeling. Prior work (Su et al., 2019 simplified the NLU task and treated semantics as a finite number of labels, and they utilized masked autoencoders (MADE) (Germain et al., 2015) to estimate the joint distribution. However, the slot values can be arbitrary word sequences in the regular NLU setting, so MADE is no longer applicable for benchmark NLU datasets.
Considering the issue about scalability and the parallel nature, we use non-autoregressive masked models (Devlin et al., 2018) to predict the semantic labels instead of MADE. The masked model is a two-layer Transformer (Vaswani et al., 2017) illustrated in Figure 1. We first encode the slot-value pairs using a bidirectional LSTM, where an intent or each slot-value pair has a corresponding encoded feature vector. Subsequently, in each iteration, we mask out some encoded features from the input and use the masked slots or intent as the targets. When estimating the density of a given semantic frame, we mask out a random input semantic feature three times and use the cumulative product of probability as the marginal distribution to predict the masked slot.
Experiments
To evaluate the proposed methods on a fair basis, we take two simple GRU-based models for both NLU and NLG, and the details can be found in Appendix D. For NLU, accuracy and F1 measure are reported for intent prediction and slot filling respectively, while for NLG, the evaluation metrics include BLEU and ROUGE-(1, 2, L) scores with multiple references. The hyperparameters and other training settings are reported in Appendix A.
Datasets
The benchmark datasets conducted in our experiments are listed as follows: • ATIS (Hemphill et al., 1990): an NLU dataset containing audio recordings of people making flight reservations. It has sentence-level intents and word-level slot tags.
• SNIPS (Coucke et al., 2018): an NLU dataset focusing on evaluating voice assistants for multiple domains, which has sentence-level intents and word-level slot tags.
• E2E NLG (Novikova et al., 2017): an NLG dataset in the restaurant domain, where each meaning representation has up to 5 references in natural language and no intent labels.
We use the open-sourced Tokenizers 2 package for preprocessing with byte-pair-encoding (BPE) (Sennrich et al., 2016). The details of datasets are shown in Table 1, where the vocabulary size is based on BPE subwords. We augment NLU data for NLG usage and NLG data for NLU usage, and the augmentation strategy are detailed in Appendix C.
Results and Analysis
Three baselines are performed for each dataset: (1) Iterative Baseline: simply training NLU and NLG iteratively, (2) Dual Supervised Learning (Su et al., 2019), and (3) Joint Baseline: the output from one model is sent to another as in Su et al. (2020) 3 . In joint baselines, the outputs of NLU are intent and IOB-slot tags, whose modalities are different from the NLG input, so a simple matching method is performed (see Appendix C).
For each trained baseline, the proposed dual inference technique is applied. The inference details are reported in Appendix B. We try two different approaches of searching inference parameters (α and β): Table 2: For NLU, accuracy and F1 measure are reported for intent prediction and slot filling respectively. The NLG performance is reported by BLEU, ROUGE-1, ROUGE-2, and ROUGE-L of models (%). All reported numbers are averaged over three different runs.
The results are shown in Table 2. For ATIS, all NLU models achieve the best performance by selecting the parameters for intent prediction and slot filling individually. For NLG, the models with (α=0.5, β=0.5) outperform the baselines and the ones with (α * , β * ), probably because of the discrepancy between the validation set and the test set. In the results of SNIPS, for the models mainly trained by standard supervised learning (iterative baseline and dual supervised learning), the proposed method with (α=0.5, β=0.5) outperform the others in both NLU and NLG. However, the model trained with the connection between NLU and NLG behaves different, which performs best on slot F-1 and ROUGE with (α * , β * ) and performs best on intent accuracy and ROUGE with (α=0.5, β=0.5).
In summary, the proposed dual inference technique can consistently improve the performance of NLU and NLG models trained by different learning algorithms, showing its generalization to multiple datasets/domains and flexibility of diverse training baselines. Furthermore, for the models learned by standard supervised learning, simply picking the inference parameters (α=0.5, β=0.5) would possibly provide improvement on performance.
Conclusion
This paper introduces a dual inference framework for NLU and NLG, enabling us to leverage the duality between the tasks without re-training the large-scale models. The benchmark experiments demonstrate the effectiveness of the proposed dual inference approach for both NLU and NLG trained by different learning algorithms even without sophisticated parameter search on different datasets, showing the great potential of future usage.
A Training Details
In all experiments, we use mini-batch Adam as the optimizer with each batch of 48 examples on Nvidia Tesla V100. 10 training epochs were performed without early stop, the hidden size of network layers is 200, and word embedding is of size 50. The ratio of teacher forcing is set to 0.9.
B Inference Details
During inference, we use beam search with beam size equal to 20. When applying dual inference, we use beam search to decode 20 possible hypotheses with the primal model (e.g. NLG). Then, we take the dual model (e.g. NLU) and the marginal models to compute the probabilities of these hypotheses in the opposite direction. Finally, we re-rank the hypotheses using the probabilities in both directions (e.g. NLG and NLU) and select the top-1 ranked hypothesis.
To make the NLU model be able to decode different hypotheses, we need to use the auto-regressive architecture for slot filling, as described in Appendix D.
C Data Augmentation NLU → NLG As described in 3.2, the modality of the NLU outputs (an intent and a sequence of IOB-slot tags) are different from the modality of the NLG inputs (semantic frame containing intent (if applicable) and slot-value pairs). Therefore, we propose a matching method: for each word, the NLU model will predict an IOB tag ∈ {O, B-slot, I-slot}, we simply drop the I-and B-and aggregate all the words with the same slot then combine it with the predicted intent.
For example, if given the word sequence:
D Model Structure
For NLU, the model is a simple GRU (Cho et al., 2014) with a word and last output as input at each timestep i and a linear layer at the end for intent prediction based on the final hidden state: The model for NLG is almost the same but with an additional encoder for encoding semantic frames, where slot-value pairs are encoded into semantic vectors for basic attention, the mean-pooled semantic vector is used as initial state. We borrow the encoder structure in Zhu et al. (2020) for our experiments. At each timestep i, the last predicted word and the aggregated semantic vector from attention are used as the input: | 3,246.6 | 2020-10-08T00:00:00.000 | [
"Computer Science"
] |
Comparison of whole body retention of I-131 in case humans thyroid cancer between model prediction and measurement
This paper undertakes the attempt to evaluate the conformity of the iodine biokinetic model with the experimental data. Experimental data come from several publications. Most of them describe the 131I retention among patients subjected to the therapy of the thyroid cancer. In these experimental studies 286 patients participated and administrated activity range was from 350 kBq to 7400 MBq. Computer simulations have been conducted based on four models. Two of them were designed for nuclear medicine (model developed by Committee on Medical Internal Radiation Dose and model developed by Johansson in 2003) and two of them for the radiological protection (current ICRP model and the model proposed by Legget in 2010). Simulations were conducted using the SAAM II software. The best correlation with experimental results was obtained using the Johansonns model, while the worst using MIRD model. Leggets and ICRP models were ranked in the middle. For iodine uptake at 10%, the model proposed by Johansson provides the reality of almost entire duration of the experiment (120 hours) very well. Correspondence to: Brudecki Kamil, Institute of Nuclear Physics, Polish Academy of Sciences, Radzikowskiego, Poland; Tel: (+48 12) 662-8390; E-mail<EMAIL_ADDRESS>
Introduction
Iodine has about 36 unstable isotopes. One of them is the radioactive 131 I, which origin is often purely anthropogenic. It has a half-life of 8.0252 (6) days, emitting βradiation and the immediately following γ radiation. Because of 131 I decay there is stable 131 Xe created. 131 I is obtained in the process of the nuclear fission of the uranium or plutonium in nuclear reactors, or by thermal neutron activation of 130 Te in 130 Te(n,γ) 131 Te 131 I reaction.
The first treatment using 131 I was conducted in 1942 for the hyperthyroidism affliction [1]. It is currently estimated that 90% of all treatments in nuclear medicine take place using this isotope [2,3]. It was used primarily for the diagnosis and treatment of the thyroid gland diseases, where natural affinity of this organ to iodine is used. Radiotherapy is most often used for the treatment of hyperthyroidism and thyroid cancer. 131 I is also used in tests of kidney and the bladder functions. Taking into consideration the wide use of 131 I it is very important for the theoretical models to predict the iodine behaviour in the human body to describe reality as precisely as possible. This is of importance for the widely-understood safety in the nuclear medicine. The goal of this paper is to evaluate the conformity of several models of iodine biokinetic with the experimental data obtained from the patients treated with 131 I.
Material and methods
The process of obtaining our results consisted of three steps. The first one was to find the experimental data. Next one, using biokinetic models and SAMM II software we simulated 131 I retention in human body directly, in the same conditions as in the case of obtaining experimental data. Finally, we compared the results from previous steps and drew conclusions.
Experimental data
The experimental data were taken from five selected publications. In most cases, they describe the retention of 131 I in patients' bodies subjected to the therapy of the thyroid cancer. In two of them retention has been marked based on the measurements of 131 I in patients' urine [4,5]. Patients were divided into three groups depending on the activity given (3700 MBq, 5550 MBq and 7400 MBq). In the first study 83 patients participated (22 males and 61 females with age ranging from 22 to 79). Urine samples were collected 6, 12, 18, 24, 48, 72, 96 and 120 hours after the oral administration of iodine. Then activity have been measured in a well counter Capintec CRC25. In the second study 59 patients participated. Urine samples were collected 24, 48, 72, 96 and 120 hours after the oral administration of iodine. Activities have been measured in a beta -gamma probe model 491-40.
Another two publications describe whole body retention of 131 I on the basis of external dose rate and effective half-life. In the first study 27 patients participated and in the second -69 [6,7]. The effective half-life was determined as 24.7 h and 18 h in the first and the second publication, respectively.
In the last publication [8] 131 I effective half-life was estimated based on thyroid in vivo measurements. In this study 48 patients participated and the measurements were performed in a whole-body counter equipped with 6 NaI(Tl) detectors. The estimated iodine-131 half-life was 21 hours. Detailed results from the publications mentioned above are presented in Table 1.
131
I whole body retention at time t was simulated on the basis of four biokinetic models. Two of them were designed for nuclear medicine and two of them for the radiological protection.
First biokinetic model used in our study was the model developed by Committee on Medical Internal Radiation Dose (MIRD) [9]. It is simple, monoexponential model. It includes specific uptakes in intestilne, liver, stomach and thyroid but does not include recycling of iodine due to the metabolism of secreted thyroid hormones. Thyroid uptake (u) was assumed on three levels: 5%, 15% and 25%. Parameters of this model are presented in Table 2.
Second model used in presented study was the model developed by Johansson in 2003 [10]. This model also was designed for nuclear medicine purposes but it is more detailed than previous one. It includes recirculation of organically bound iodine and uptakes in GI track, kidneys, urinary bladder, salivary glands and thyroid. This model is also age-dependent and assumes any thyroid uptake (u). The structure and parameters of the Johanssons model are presented in Figure 1.
In the case of models dedicated for radiological protection we decided to use current ICRP model and the model proposed by Legget in 2010 [11]. Figure 2 shows three compartment model presented by Riggs in 1952 and still being used by ICRP as its primary biokinetic model [12]. The structure of the model proposed by Legged is presented in Figure 3. and a baseline coefficients for adults are listed in Table 3. Both models describe the three most important subsystems of the iodine cycle in a human body: circulation of inorganic iodine, circulation of extrathyroid organic iodine and thyroidal iodine (synthesis, storage and secretion of thyroid hormones). The model proposed by Legget was evaluated against experimental data. The models predictions were compared with experimental data after intravenously injection in blood, urine, salivary glands plus gastric secretion and thyroid. Experimental data were very precise, but unfortunately, they describe iodine retention in a very short period-3 hours [13][14][15][16].
Results and discussion
Results of the comparison of the experimental data with computer simulations for each model are presented in Figure 4-6. The best consistency with the experimental results was obtained using the Johansonns model, while the worst using MIRD model. Leggets and ICRP models were ranked in the middle. The time, in which each models prediction lie in-between empirical range (average + 2 sigma), related to total simulation time (120 h) is approximately equal: 27%, 45%, 65%, 24%, 33%, 43%, 23% and 29% respectively for Johansonns model u=30%, u=20%, u=10%, MIRD model u=25%, u=15%, u=5%, Riggs model and Leggett model. For the models tested also correlation analysis was performed, relating experimental data and the results of simulations (Figure 7). The highest Pearson product-moment Table 2. Biological parameters of the fractional distribution functions a(t) = a 1 e −λ1t + a 2 e −λ2t + a 3 e −λ3t + a 4 e −λ4t for the Medical Internal Radiation Dose Committee model [9]. For iodine uptake at 10% the model proposed by Johansson provides the reality of almost the entire duration of the experiment (120 hours) very well. The greatest variation of the measured quantities reaches less than 6%. The situation looks a little worse for iodine uptake at 20%, there are very good predictions we get through the first 48 to 72 hours. The maximum deviation of measurements amounted to approx. 13% after 120 hours after administration of the radioiodine.
The model proposed by MIRD and models for radiological protection, are much worse in the correct prediction of measured results. In general, these three models have the tendency to underestimating of the retention of iodine in the first 24 hours and overestimating it after 48 hours. In the first 24 hours, the biggest differences were 25%, 18% and 15% for the models proposed by MIRD (u = 15%), Rigs and Legget. After 48 h, the largest differences were 29%, 19% and 15%, each for the models proposed by the Rigs, Legget and MIRD (u = 15%). It is very important that models designed for radiological protection are applied for a fully healthy people, but are not used, as in the presented studies, for patients with thyroid cancer. This may explain discrepancies between modelling and experimentally measured values. Even though these models were developed for radiation protection, they may be used in nuclear medicine under certain conditions, especially the model proposed by Legget which describes the retention of iodine in all the key organs and therefore it is more specific than the Johansson's model.
From the presented results, it is also apparent in the case of tumour diseases of the thyroid iodine uptake and on average has a range from 10% to 20%, while the level of uptake of iodine is recommended by the ICRP is 30%. | 2,200.8 | 2017-01-01T00:00:00.000 | [
"Medicine",
"Physics"
] |
Generating Classification Rules from Training Samples
In this paper, we describe an algorithm to extract classification rules from training samples using fuzzy membership functions. The algorithm includes steps for generating classification rules, eliminating duplicate and conflicting rules, and ranking extracted rules. We have developed software to implement the algorithm using MATLAB scripts. As an illustration, we have used the algorithm to classify pixels in two multispectral images representing areas in New Orleans and Alaska. For each scene, we randomly selected 10 per cent of the samples from our training set data for generating an optimized rule set and used the remaining 90 per cent of samples to validate the extracted rules. To validate extracted rules, we built a fuzzy inference system (FIS) using the extracted rules as a rule base and classified samples from the training set data. The results in terms of confusion matrices are presented in the paper. Keywords—Fuzzy membership functions; classification; rule extraction; multispectral images
I. INTRODUCTION
Many methods have been used to classify pixels in multispectral images using training samples.These include parametric methods such as the maximum likelihood, support vector machines, decision trees, neural networks, fuzzy-neural systems, and fuzzy inference systems.In supervised classification methods during the learning phase, a model is built to map an input feature vector to output classes, and during the classification phase the model is used to classify an unknown sample.The maximum likelihood classification algorithm assumes normal distribution and uses the mean vector and covariance matrix of each class to find the posterior probability.It then assigns a pixel to the class with the higher posterior probability.The Support Vector Machine (SVM) partitions the feature space by using hyper-planes that maximize the distance between the two classes in the feature space [1].It has been shown that the SVM algorithm yields higher classification accuracy for small datasets compared to conventional classifiers [2].Neural networks provide a nonparametric method for classification.Neural network models learn from training samples.During the learning process weights are updated using a gradient descent method such that the mean squared error between the desired and actual outputs is minimized [3].During the decision-making phase the model is used to classify pixels based on their spectral signatures.
Fuzzy-neural systems have been used to classify pixels in Landsat images [4].Fuzzy logic provides a tool to process information using linguistic rules.Fuzzy logic in the form of approximate reasoning provides decision support and expert systems with powerful reasoning capabilities.In fuzzy logic class memberships based on a degree of compatibility with the concepts presented are used [5].A fuzzy inference system (FIS) provides a method to classify pixels in Landsat images.However, the potential of fuzzy inference systems has not been fully explored by the remote sensing community as of yet.The main task in implementing a FIS is to develop a rule base.Classification rules can be generated from training samples or can be obtained from expert's knowledge.These classification rules then can be used to build the FIS.Several methods to generate classification rules from training samples have been reported in the literature.They include extracting classification rules using fuzzy membership functions, decision trees, neural networks, and black-box models.Wang and Mendel [6] suggested a method to extract fuzzy rules from data samples using fuzzy membership functions.They have used the method for a time-series prediction problem, where the output function is a continuous function.Chiu [7] developed a method called subtractive clustering to efficiently extract rules from a high dimensional feature space.The method was able to produce a much simpler fuzzy classifier and could be used to extract rules for function approximation as well as pattern classification.Kulkarni and McCaslin [8] have generated classification rules from neural network models and have built a FIS to classify pixels in Landsat images.Fung et al. [9] developed a costefficient method to quickly extract rules from SVMs trained with thousands of samples.Their algorithm forms rule sets that can be easily understood by humans, and only needs simple multivariable optimization problems to be solved.Sicat et al. [10] developed the FIS using farmer's knowledge for agricultural land sustainability classification using fuzzy models.Reshmidevi et al. [11] have developed a fuzzy rule base system for land suitability in agricultural watersheds.They have considered two types of attributes: continuously measured attributes and thematic attributes, and the crop suitability index as the output of the fuzzy rule-based system.They have used heuristic information and farmer's knowledge aggregated through field surveys as the basis for the fuzzy rulebase.Cay and Iscan [12] have developed a fuzzy expert system for land reallocation in land consolidation.They developed a rule base system using farmer's knowledge obtained from survey questions.Meng and Pei [13] have suggested a method to extract linguistic rules from data sets using fuzzy logic and genetic algorithms.They have formalized linguistics based on complex data summaries and used a genetic algorithm to www.ijacsa.thesai.orgoptimize the number of parameters of membership function of linguistic values.Kulkarni and Khan [14] generated rules to classify Likert-scale survey data by using a multi-layered feed forward neural network.Kulkarni and Shrestha [15] have generated rules using induction trees and built a FIS using the extracted rules.
In this paper we have used the method similar to that suggested by Wang and Mendel [6] for classification of pixels in a Landsat images.In rule extraction the main concerns are the number of extracted rules and the quality of those rules.Technically, each training sample generates a rule, and we get a large number of rules.It is important to note that the generated rules often contain redundant and conflicting rules.Also, a rule set with a large number of rules results in a model that often over-fits the data samples.Generally, rule generation is a two-step process.During the first step all possible rules are generated.In the second step, the rule set is optimized.The suggested algorithm for rule generation is as follows: First, the training data is fuzzified.From the fuzzified data, rules are generated.The generated rules may contain redundant and conflicting rules which are then eliminated.The remaining rules are ranked.
As an illustration, we have considered Landsat scenes from areas in New Orleans and Alaska.We selected training set areas interactively by displaying the scenes.We extracted classification rules from training samples.We built a FIS for each scene using the extracted rule as the rule base and classified all pixels.The outline of the paper is as follows.Section II describes a method for generating classification rules from training samples and optimizing the rule set.Section III provides implementation and results of Landsat data analysis.Section IV provides discussions and results.
II. RULE GENERATION AND OPTIMIZATION
The proposed method for extracting classification rules from data samples and finding the optimized rule set by eliminating conflicting and redundant rules is shown in Fig. 1.The process includes five steps.The first two steps are concerned with rule generation and the last three steps deal with optimization.To illustrate the method, we have chosen a classification problem with two features and three classes, and the training set contains fifty samples from each class.The method can be extended to multiple features and multiple classes.The steps are explained below.
Step-1 Fuzzify Data: We assume a set of desired inputoutput data pairs as shown in (1).
,
xx represents features, and y represents the corresponding class.For each feature the domain interval is 0 through 10.We divided the domain interval with three fuzzy sets {low, medium, high}.We used trapezoidal membership functions as shown in Fig. 2.
Step-2 Rule Generation: We fuzzified the input values and generated classification rules.Let the input vector 2.3, 3.5 represent class C 1 .From the membership functions shown in Fig. 2, membership values are given by ( 2), and the corresponding rule can be stated as If 1 x is low and 2 x is medium then the class is We generate a rule using the highest membership values.The firing strength of a rule is given by (3).
Each sample pair generates a rule, and the total number of generated rules is equal to the number of samples.The extracted rules contain duplicate and conflicting rules.Step-3 Eliminate duplicate rules: To eliminate repeated rules, extracted rules are mapped onto the Fuzzy Associative Memory (FAM) banks as shown in Fig. 3.In this example www.ijacsa.thesai.orgthere are three classes and there are 50 samples in each class.There are 150 rules generated as each sample generates a rule.We used three FAM banks, one for each class.Each cell in a FAM bank represents a rule, and the value in the cell represents the count of that rule.It can be seen from Fig. 3 that a rule is as follows: If 1 x is low, and 2 x is low, then class is C 1 .The count for the rule is 32.That means 32 samples satisfied this rule.Looking at the FAM bank in Fig. 3, we can see that by eliminating repeated rules, we get a rule set of only 10 rules.The extracted rules are shown in Table I.Step-4 Remove Conflicting Rules: To optimize the generated rules, it is necessary to remove conflicting rules if there are any.Two rules are considered to be conflicting when their antecedent parts are identical while the consequent parts are not the same.The conflicting rule with the highest count is retained, and the other rule is discarded.It can be seen from Table I that Rules 4 and 5 are conflicting rules.For Rule 4 the count is 1, while for Rule 5, the count is 45.Therefore Rule 4 is eliminated.This process is repeated until there are no more conflicting rules.
Step-5 Rank Rules and Select a Subset: After eliminating repeated rules, the remaining rules are organized in descending order from the highest to lowest based on their count.A subset from the ranked rules is then selected using the count as the criterion.Rules with a low count can be excluded.In our example, we removed the rules that represent less than three percent of samples.The final rule set is shown in Table II.
III. IMPLEMENTATION AND RESULTS
In this research work we developed software to generate classification rules from training samples using MATLAB scripts.We also evaluated the extracted rules by classifying pixels in two Landsat scenes.We built FISs with extracted rules as the rule base and classified training set data.The results are provided in this section.
A. Example-1 Landsat Scene from New Orleans
As an example, we considered a Landsat-8 scene from operational Land Imager (OLI) obtained on February 26, 2016; path # 22 and row # 39.We selected an area of the size 512x512 pixels from the full scene.The raw image is shown in Fig. 4. To extract classification rules, we selected six training set areas representing three classes: water, vegetation, and land.www.ijacsa.thesai.org The training set data contained a total of 7000 samples consists of 3400, 1800, and 1800 samples from three classes: water, vegetation, and land, respectively.We used band-2, band-3, band-5, and band-6 as features for classification.We selected these bands because they showed the maximum variance.We used randomly selected ten percent of training samples for generating classification rules.Spectral signatures for the classes are shown in Fig. 5.We used five term sets for each feature: very-low, low, medium, high, and very-high.We used trapezoidal membership functions and generated the optimized rule set using the method outlined in Section II.The extracted optimized rule set contained sixteen rules.The first ten rules of the optimized rule set are shown in Table III.We implemented a FIS with the optimized rule set as a rule base.The process of implementing the FIS is described by Kulkarni & Shrestha [15].The validation samples were classified using the FIS.The confusion matrix is shown in Table IV.We obtained classification accuracy of 96.73 percent with the FIS system that was built using extracted rules.The classified output is shown in Fig. 6.
B. Example-2 Landsat Scene from Alaska
In this example, we considered Landsat-8 OLI scene from Alaska obtained on June 6, 2016, path # 58 and row # 19.We considered a sub-scene of the size 512 x 512 pixels.The unclassified data for the Alaska scene is shown in Fig. 7. Spectral signatures for four classes are shown in Fig. 8.To extract classification rules, we selected five training set areas representing four classes: water, vegetation, ice-land, and glaciers.Each selected training area was of the size 100x100 pixels.Our training set data consisted of 50,000 training samples.We used band-2, band-3, band-5, and band-6 as features for classification as these bands showed the maximum variance.We used randomly selected ten percent training samples for generating classification rules.
To define fuzzy membership functions, we used five term sets for each feature: very-low, low, medium, high, and veryhigh.We extracted fuzzy classification rules using the method described in Section II.The optimized rule set contained twenty rules.The first ten rules are shown in Table V.We implemented a FIS with the optimized rule set as a rule base, and validation samples were classified using the FIS.The confusion matrix is shown in Table VI.The obtained classification accuracy was 91.58 percent.The classified output is shown in Fig. 9.
IV. DISCUSSIONS AND CONCLUSIONS
In this paper we have suggested an algorithm for generating and optimizing classification rules from training samples using fuzzy membership functions.Furthermore, we developed software using MATLAB scripts to implement the algorithm.As an illustration, we classified pixels from Landsat scenes for two areas in New Orleans and Alaska.We extracted classification rules from training samples for these two scenes.To validate extracted rules, we developed a FIS for each scene using extracted rules a rule base and classified samples from the training sets.The classification accuracy for New Orleans scene was 96.73 percent, and for Alaska, the accuracy was 91.58 percent.This clearly shows that extracting rules using fuzzy membership functions is a valid approach to generate a rule set that can be used develop a FIS for classifying pixels in Landsat images.In our examples we have used five term sets to define fuzzy membership functions.It is possible to use more terms sets to increase granularity, which may lead to an increase in the number of rules in the optimized rule set.It may be noted that as the number of rules in the optimized rule set increases the classification accuracy increases; however, there is a danger of overfitting training data.
The future work includes generating rules using fuzzy membership functions with seven or nine term sets for each membership function.This may increase the number of rules in the optimized rule set and may yield better classification accuracy.Furthermore there is no well-known criterion for evaluating quality of generated rules.That needs to be developed.We also plan a bench mark study to compare accuracy of the suggested algorithm with other existing rule extraction algorithms.
The author is thankful to anonymous reviewers for their valuable comments.
TABLE III .
OPTIMIZED RULE SET FOR NEW ORLEANS SCENE
TABLE IV .
CONFUSION MATRIX FOR NEW ORLEANS SCENE Fig. 6.Classified output new orleans scene. | 3,411.8 | 2018-01-01T00:00:00.000 | [
"Computer Science",
"Environmental Science"
] |
Micelle-Assisted Strategy for the Direct Synthesis of Large-Sized Mesoporous Platinum Catalysts by Vapor Infiltration of a Reducing Agent
Stable polymeric micelles have been demonstrated to serve as suitable templates for creating mesoporous metals. Herein, we report the utilization of a core-shell-corona type triblock copolymer of poly(styrene-b-2-vinylpyridine-b-ethylene oxide) and H2PtCl6·H2O to synthesize large-sized mesoporous Pt particles. After formation of micelles with metal ions, the reduction process has been carried out by vapor infiltration of a reducing agent, 4-(Dimethylamino)benzaldehyde. Following the removal of the pore-directing agent under the optimized temperature, mesoporous Pt particles with an average pore size of 15 nm and surface area of 12.6 m2·g−1 are achieved. More importantly, the resulting mesoporous Pt particles exhibit superior electrocatalytic activity compared to commercially available Pt black.
Introduction
Currently, platinum (Pt) is widely used as industrial catalysts in the automobile, chemical, pharmaceutical and electronic industries because of its high catalytic activity toward various catalytic THF. Finally, 5 mL volumetric flask was used to set the micelle capacity to a fixed concentration of 5 g·L −1 .
Preparation of large-sized mesoporous Pt powders. 1 mL of the polymeric micelle solution (5 g·L −1 ) was mixed with 1.73 mL of 20 mM H 2 PtCl 6 solution and stirred for 30 min at room temperature. The mixed solution was then transferred onto glass substrates. After the full evaporation of the solvent, the glass substrates were placed in a closed vessel with a small amount of DMAB powder at 28 • C. The color of the mixture on the glass substrates gradually changed from orange to black after 3 days. After that, the solid mixtures were collected and rinsed 3 to 5 times with deionized water and centrifuged. After the deionized water was evaporated, the dried black powder was calcined for 1 h at different temperatures (250 • C, 350 • C and 450 • C). The obtained products of mesoporous Pt-250, Pt-350, Pt-450 (the number represents the calcination temperature) were collected and stored for further characterization.
Characterization. The morphology of the mesoporous Pt particles was observed with a field emission scanning electron microscope (SEM, HITACHI S-4800, Tokyo, Japan) at 10 kV. The interior structure was investigated with a transmission electron microscope (TEM, JEOL JEM-1200EX, Tokyo, Japan) operated at 120 kV. The phase composition of the product was determined using wide-angle X-ray diffraction (XRD) (RIGAKU, Japan) with a Smart lab X-ray diffractometer. The hydrodynamic diameters (D h ) and zeta potential values of the polymeric micelles and the composite polymeric micelles were measured by Malvern Zetasizer Nano ZS90 (Malvern, UK). The morphology of the micelles was performed using atomic force microscope (AFM, Bruker, Billerica, MA, USA) with the non-contact mode. The thermal stability of the triblock copolymer was tested using thermogravimetric analysis (TG, TA instruments Q600 SDT, New Castle, DE, USA). The specific surface area of the mesoporous Pt particles was measured by the Brunauer-Emmett-Teller (BET, Quantachrome QuadraSorb, Boynton Beach, FL, USA) analysis method.
Electrochemical test. The electrochemical measurements investigations were performed with a CHI 600E electrochemical analyzer (CHI Instrument, Austin, TX, USA) to perform cyclic voltammograms (CVs) and chronoamperometric curves (CA) of mesoporous Pt catalysts and commercially available Pt black. A three-electrode system consisting of reference electrode (Ag/AgCl electrode), counter electrode (Pt wire), and working electrode (glassy carbon electrode, GCE). To prepare the working electrode, the sample was dispersed into a solution containing 5 wt% Nafion and deionized water, and placed into an ultrasonicator to make it into a well-mixed suspension (5 g·L −1 ). Then, 3 µL of the suspension was loaded onto the GCE and dried at room temperature. Methanol electro-oxidation measurements were carried out in 0.5 M H 2 SO 4 containing 0.5 M methanol. The electrochemical surface area (ECSA) was determined from the charge associated with the hydrogen desorption (0.21 mC·cm −2 ) between −0.2 V to 0.2 V, and it was calculated from the CVs using the equation: where, S H (A·V) is the desorption peak area, V is the sweep rate (V·s −1 ), the conversion value used for the desorption of a hydrogen monolayer is 0.21 (mC·cm −2 ) and M Pt is the mass of Pt (g).
Polymeric Micelle Solution
A stable micelle solution in methanol was prepared through a dialysis process. The triblock copolymer of PS 192 -b-P2VP 143 -b-PEO 613 was completely dissolved as unimers in THF. Then, HCl solution was added to stimulate micellization. Three-layer micelles were formed, including a PS core, a P2VP shell, and a PEO corona and this was accompanied by the change in color of the solvent from clear to turbid. This is because the hydrophobic PS unit prefers to self-assemble as PS core to reduce the interfacial energy between the PS block and the solvent. After stirring, the mixed solution was transferred into a dialysis membrane tube, which was dipped into the methanol solution. The dialysis membrane was porous; thus, the polymeric micelles of PS 192 -b-P2VP 143 -b-PEO 613 were preserved inside, while the THF was gradually replaced by methanol. The Tyndall effect is observed as a clear optical path in the solution, which confirms the presence of stable micelles in solution (Figure 1a,b). membrane was porous; thus, the polymeric micelles of PS192-b-P2VP143-b-PEO613 were preserved inside, while the THF was gradually replaced by methanol. The Tyndall effect is observed as a clear optical path in the solution, which confirms the presence of stable micelles in solution (Figure 1a,b). The hydrodynamic diameter (Dh) of the micelles was determined using dynamic light scattering measurements. In neutral solution, the Dh value of PS192-b-P2VP143-b-PEO613 micelles was approximately 56.4 nm with a size polydispersity (PDI) of 0.288. In acidic solution, Dh and PDI were measured to be 61.8 nm and 0.195, respectively. The Dh value was increased because of intra-and intersegmental electrostatic repulsive force between adjacent protonated P2VP + blocks. The shape of the micelles was changed from shrunken to swollen. The low value of PDI indicates the formation of nearly monodispersed micelles. Figure 1c-f gives the particle size distribution histograms based on the AFM measurement, which are in good agreement with those results detected from Dh. The dominant size of the micelles in acidic solution is relatively larger, which gives further evidence of pH-sensitive morphological change of the micelles. In the same concentration, the micelle density in neutral environment is obviously higher, and the micelle shows irregular contours. This might be due to the ease of aggregation of micelles under neutral conditions. On the other hand, highly regular and stable spherical micelles are observed in acidic micelle solution. Furthermore, a smaller value of approximately 30 nm was observed for micelles under acidic condition from the SEM image, because the "dried" micelles were shrunken. The hydrodynamic diameter (D h ) of the micelles was determined using dynamic light scattering measurements. In neutral solution, the D h value of PS 192 -b-P2VP 143 -b-PEO 613 micelles was approximately 56.4 nm with a size polydispersity (PDI) of 0.288. In acidic solution, D h and PDI were measured to be 61.8 nm and 0.195, respectively. The D h value was increased because of intra-and intersegmental electrostatic repulsive force between adjacent protonated P2VP + blocks. The shape of the micelles was changed from shrunken to swollen. The low value of PDI indicates the formation of nearly monodispersed micelles. Figure 1c-f gives the particle size distribution histograms based on the AFM measurement, which are in good agreement with those results detected from D h . 
The dominant size of the micelles in acidic solution is relatively larger, which gives further evidence of pH-sensitive morphological change of the micelles. In the same concentration, the micelle density in neutral environment is obviously higher, and the micelle shows irregular contours. This might be due to the ease of aggregation of micelles under neutral conditions. On the other hand, highly regular and stable spherical micelles are observed in acidic micelle solution. Furthermore, a smaller value of approximately 30 nm was observed for micelles under acidic condition from the SEM image, because the "dried" micelles were shrunken.
Synthesis of Mesoporous Pt Particles
Mesoporous Pt particles were prepared via several steps, as shown in Figure 2. Initially, polymeric micelle of PS 192 -b-P2VP 143 -b-PEO 613 reacted with negatively charged PtCl 6 2− to form composite micelles. Strong acidic media promoted the protonation of P2VP shells. The protonated P2VP + blocks in acidic environment provide accommodation sites for anionic PtCl 6 2− . After addition of Pt solution, the zeta potential value was changed from positive to almost zero, indicating that the absence of positive charge on the surface of the micelles. This suggests the occurrence of neutralization reaction between P2VP + and PtCl 6 2− . After stirring at room temperature for 30 min, a small volume of the PS 192 -b-P2VP 143 -b-PEO 613 /PtCl 6 2− composite micelle solution was dropped onto the glass substrate to induce rapid evaporation of the solvent. This process of solvent evaporation promoted the micelle assembly into spherical close-packing micelles. The as-prepared sample was completely dried and appeared as yellow-colored species on the glass substrate. Several pieces of the as-prepared samples were placed in a closed vessel with a little amount of DMAB powders at 28 • C. DMAB vapor acts as a reducing agent to drive Pt deposition, as suggested by the change in color of the as-prepared samples from yellow to dark, thus indicating successful Pt deposition. After reaction, the Pt samples were scratched from the glass substrate and washed 3-5 times with water to remove unreacted H 2 PtCl 6 . Different temperatures (250 • C, 350 • C and 450 • C) were chosen to investigate the effect of calcination temperature on the degradation of triblock copolymers and the morphology of the resulting mesoporous Pt product.
Synthesis of Mesoporous Pt Particles
Mesoporous Pt particles were prepared via several steps, as shown in Figure 2. Initially, polymeric micelle of PS192-b-P2VP143-b-PEO613 reacted with negatively charged PtCl6 2− to form composite micelles. Strong acidic media promoted the protonation of P2VP shells. The protonated P2VP + blocks in acidic environment provide accommodation sites for anionic PtCl6 2− . After addition of Pt solution, the zeta potential value was changed from positive to almost zero, indicating that the absence of positive charge on the surface of the micelles. This suggests the occurrence of neutralization reaction between P2VP + and PtCl6 2− . After stirring at room temperature for 30 min, a small volume of the PS192-b-P2VP143-b-PEO613/PtCl6 2− composite micelle solution was dropped onto the glass substrate to induce rapid evaporation of the solvent. This process of solvent evaporation promoted the micelle assembly into spherical close-packing micelles. The as-prepared sample was completely dried and appeared as yellow-colored species on the glass substrate. Several pieces of the as-prepared samples were placed in a closed vessel with a little amount of DMAB powders at 28 °C. DMAB vapor acts as a reducing agent to drive Pt deposition, as suggested by the change in color of the as-prepared samples from yellow to dark, thus indicating successful Pt deposition. After reaction, the Pt samples were scratched from the glass substrate and washed 3-5 times with water to remove unreacted H2PtCl6. Different temperatures (250 °C, 350 °C and 450 °C) were chosen to investigate the effect of calcination temperature on the degradation of triblock copolymers and the morphology of the resulting mesoporous Pt product. The beauty of the triblock copolymer is the distinct contribution of each block in core-shellcorona type PS192-b-P2VP143-b-PEO613. The hydrophobic PS block forms the core of the micelles to control the pore size. The pH-sensitive P2VP block is the key binding site of inorganic species. In acidic media, anionic ions preferably interact with P2VP + . The outer free PEO block acts as a micelle stabilizer through steric repulsion, leading to well-dispersed micelles in precursor solutions [26]. From the TEM image, the highlighted PS core by 0.1 wt% phosphotungstic acid has a diameter of approximately 15 nm (Figure 1b). We examined the effect of the inorganic precursor concentration on the structure of the mesopores. The molar ratio of PtCl6 2− /P2VP was changed from 1.5:1 and 3:1 to 5:1 while keeping the concentration of micelles constant. When the molar ratio of PtCl6 2− /P2VP was 1.5:1, small-sized Pt particles with incomplete mesoporous structures were obtained ( Figure S1a). The mesoporous structure can be obtained when the molar ratio of PtCl6 2− /P2VP is increased to 3:1 ( Figure 3). However, with a further increase of the molar ratio of PtCl6 2− /P2VP to 5:1, heavily aggregated large-sized Pt particles are observed ( Figure S1b). The extra amount of PtCl6 2− appears to bind several composite micelles to form merged particles. The optimized molar ratio of PtCl6 2− /P2VP is 3:1 in this study. The beauty of the triblock copolymer is the distinct contribution of each block in core-shell-corona type PS 192 -b-P2VP 143 -b-PEO 613 . The hydrophobic PS block forms the core of the micelles to control the pore size. The pH-sensitive P2VP block is the key binding site of inorganic species. In acidic media, anionic ions preferably interact with P2VP + . 
The outer free PEO block acts as a micelle stabilizer through steric repulsion, leading to well-dispersed micelles in precursor solutions [26]. From the TEM image, the highlighted PS core by 0.1 wt% phosphotungstic acid has a diameter of approximately 15 nm (Figure 1b). We examined the effect of the inorganic precursor concentration on the structure of the mesopores. The molar ratio of PtCl 6 2− /P2VP was changed from 1.5:1 and 3:1 to 5:1 while keeping the concentration of micelles constant. When the molar ratio of PtCl 6 2− /P2VP was 1.5:1, small-sized Pt particles with incomplete mesoporous structures were obtained ( Figure S1a). The mesoporous structure can be obtained when the molar ratio of PtCl 6 2− /P2VP is increased to 3:1 ( Figure 3). However, with a further increase of the molar ratio of PtCl 6 2− /P2VP to 5:1, heavily aggregated large-sized Pt particles are observed ( Figure S1b). The extra amount of PtCl 6 2− appears to bind several composite micelles to form merged particles. The optimized molar ratio of PtCl 6 2− /P2VP is 3:1 in this study. Since large-sized mesoporous noble-metal particles have good thermal stability [27], we applied a simple calcination to remove the organic template. According to the thermogravimetric (TG) analysis, the thermal degradation temperature of the used triblock copolymer is around 400 °C ( Figure S2). Three samples of Pt-250, Pt-350 and Pt-450 were prepared at different calcination temperatures (Note: Pt-0 is the as-prepared sample before the removal of the template). From SEM images ( Figure S3a,b), both Pt-0 and Pt-250 have organic residues on their surface. The presence of organic residues devalues the electrocatalytic activity of Pt catalysts. Well-designed mesoporous structures were observed on the surface of Pt-350 with pore sizes ranging from 15-20 nm (Figure 3a). However, a higher calcination temperature of 450 °C can facilitate rapid removal of the template and collapse of the mesoporous structure due to significant rearrangement of Pt atoms and rapid growth to aggregated Pt crystals ( Figure S3c). Hence, it is necessary to carefully investigate the thermal treatment process to synthesize mesoporous Pt particles with desirable structure and morphology. Since large-sized mesoporous noble-metal particles have good thermal stability [27], we applied a simple calcination to remove the organic template. According to the thermogravimetric (TG) analysis, the thermal degradation temperature of the used triblock copolymer is around 400 • C ( Figure S2). Three samples of Pt-250, Pt-350 and Pt-450 were prepared at different calcination temperatures (Note: Pt-0 is the as-prepared sample before the removal of the template). From SEM images ( Figure S3a,b), both Pt-0 and Pt-250 have organic residues on their surface. The presence of organic residues devalues the electrocatalytic activity of Pt catalysts. Well-designed mesoporous structures were observed on the surface of Pt-350 with pore sizes ranging from 15-20 nm (Figure 3a). However, a higher calcination temperature of 450 • C can facilitate rapid removal of the template and collapse of the mesoporous structure due to significant rearrangement of Pt atoms and rapid growth to aggregated Pt crystals ( Figure S3c). Hence, it is necessary to carefully investigate the thermal treatment process to synthesize mesoporous Pt particles with desirable structure and morphology.
In this study, Pt-350 calcined at 350 • C was selected as a representative sample for further characterization. TEM and high-resolution TEM images (Figure 3b,c) indicate that the observed fringe spacing is around 0.23 nm, which can be assigned to the (111) plane of a fcc Pt crystal [28]. The black powder scratched from the glass substrate was used for the wide-angle X-ray diffraction (XRD) analysis (Figure 3e). The observed diffraction peaks of (111), (200), (220), (311), and (222) match well with the Pt fcc structure (JCPDS Card No. 65-2868) and these results are consistent with the selected-area electron diffraction (SAED) pattern (Figure 3d), suggesting that this mesoporous Pt sample has a fcc atomic arrangement. By analyzing the (111) diffraction peak of Pt-350 using the Scherrer equation, the average crystallite size of the Pt nanoparticles was calculated to be 8.6 nm. This value is slightly larger than the value measured from the high-resolution TEM image (Figure 3c), because the volume-weighted measurements of XRD sometimes tend to overestimate the geometric particle size [29]. From the N 2 adsorption-desorption isotherm, the surface area of Pt-350 is measured to be approximately 12.6 m 2 ·g −1 .
Methanol Electro-Oxidation
Mesoporous Pt particles have demonstrated good electrocatalytic activity toward methanol electro-oxidation owing to their high surface area and easy access of the interior area. Three samples (Pt-250, Pt-350 and Pt-450) and the commercially available Pt black ( Figure S4) were investigated in a three-electrode system. Figure 4a shows the typical cyclic voltammetry (CV) curves detected in 0.5 M H 2 SO 4 at a scan rate of 50 mV·s −1 . The ECSA of each sample was obtained by calculating the charge passed during hydrogen desorption in the potential range from -0.2 V to 0.2 V. Pt-350 has the largest specific ECSA of 14.6 m 2 ·g −1 due to the presence of a high density of accessible active sites. It is 5.5, 1.7, and 3.5 times higher than that of Pt-250 (2.66 m 2 ·g −1 ), Pt-450 (8.36 m 2 ·g −1 ), and Pt black (4.18 m 2 ·g −1 ), respectively. Both less-conductive organic layers coated on the surface (in the case of Pt-250) and significant thermal aggregation of the Pt crystals (in the case of Pt-450) hinder electrolyte contact with the catalysts, and lower the utilization of active sites. Furthermore, the representative methanol electro-oxidation test was detected in 0.5 M H 2 SO 4 containing 0.5 M CH 3 OH solution, as shown in Figure S5. Two typical anodic peaks are observed during the forward and backward sweeps. Pt-350 still exhibits the best catalytic performance. Normalized by ECSA, the peak current densities of the forward sweep are 10.04, 5.17, 3.84, and 7.15 A·m −2 for Pt-350, Pt-450, Pt-250, and Pt black, respectively. The mass-specific current density of Pt-350 is 146.6 mA·mg −1 , which is comparable with the data published in the literature [14,30]. However, there is still a lot of potential for further improvement in catalytic performance. The excellent performance of Pt-350 can be ascribed to the formation of mesoporous structure with more accessible active sites. Typical chronoamperometric measurements were performed at 0.6 V to investigate their stability (Figure 4b). All samples show a downward trend. Among these samples, Pt-350 has the highest initial current density and the slowest decay rate over a period of 2000 s due to the contribution of well-defined mesoporous structure. In this study, Pt-350 calcined at 350 °C was selected as a representative sample for further characterization. TEM and high-resolution TEM images (Figure 3b,c) indicate that the observed fringe spacing is around 0.23 nm, which can be assigned to the (111) plane of a fcc Pt crystal [28]. The black powder scratched from the glass substrate was used for the wide-angle X-ray diffraction (XRD) analysis (Figure 3e). The observed diffraction peaks of (111), (200), (220), (311), and (222) match well with the Pt fcc structure (JCPDS Card No. 65-2868) and these results are consistent with the selectedarea electron diffraction (SAED) pattern (Figure 3d), suggesting that this mesoporous Pt sample has a fcc atomic arrangement. By analyzing the (111) diffraction peak of Pt-350 using the Scherrer equation, the average crystallite size of the Pt nanoparticles was calculated to be 8.6 nm. This value is slightly larger than the value measured from the high-resolution TEM image (Figure 3c), because the volume-weighted measurements of XRD sometimes tend to overestimate the geometric particle size [29]. From the N2 adsorption-desorption isotherm, the surface area of Pt-350 is measured to be approximately 12.6 m 2 ·g −1 .
Conclusions
We have proposed a practical approach for synthesizing mesoporous Pt particles with accessible pores, using the core-shell-corona PS-b-P2VP-b-PEO triblock copolymer as a soft template. The triblock copolymer is critical for directing the formation of large mesopores, and each block makes a distinct contribution: the hydrophobic PS cores determine the size of the mesopores, the protonated P2VP⁺ units provide selective binding sites for anionic PtCl₆²⁻, and the hydrophilic PEO coronas stabilize the micelles. The molar ratio of PtCl₆²⁻/P2VP plays an important role in determining the mesoporous structure: an excessively large proportion of PtCl₆²⁻ leads to aggregation of the Pt particles, while an insufficient amount results in an incomplete mesoporous structure. Here, the optimum molar ratio of PtCl₆²⁻/P2VP was identified as 3:1.
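As a rough illustration of how a target PtCl₆²⁻/P2VP molar ratio translates into reagent amounts, one can work backward from the molar mass of the 2-vinylpyridine repeat unit. The block weight fraction and the hexahydrate precursor form in the sketch below are assumptions for illustration, not quantities reported here.

```python
MW_2VP_UNIT = 105.14         # g/mol, 2-vinylpyridine repeat unit
MW_H2PTCL6_6H2O = 517.9      # g/mol, H2PtCl6.6H2O (a common precursor form)

def precursor_mass_mg(polymer_mg, p2vp_weight_fraction, ratio=3.0):
    """Mass of H2PtCl6.6H2O giving `ratio` mol PtCl6(2-) per mol P2VP unit."""
    mmol_2vp = polymer_mg * p2vp_weight_fraction / MW_2VP_UNIT  # mmol pyridine units
    return ratio * mmol_2vp * MW_H2PTCL6_6H2O                   # mg of precursor

# e.g. 10 mg of copolymer with a hypothetical ~30 wt% P2VP block:
print(f"{precursor_mass_mg(10.0, 0.30):.1f} mg H2PtCl6.6H2O")  # ~44.3 mg
```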
Furthermore, it is demonstrated that 350 °C is the optimum calcination temperature: organic residues were not completely removed at 250 °C, while the mesoporous structure was destroyed at 450 °C. The resulting mesoporous Pt particles were shown to be highly active electrocatalysts for methanol electro-oxidation, outperforming commercially available Pt black. The Pt deposition and template removal steps are simple to implement, and the same methodology may readily be extended to the preparation of other mesoporous Pt-based alloys. These results provide useful guidance for boosting the catalytic performance of Pt, especially in fuel cells.
"Chemistry",
"Materials Science"
] |